Gemma model support

Started by sybersitizen, August 01, 2025, 01:43:43 AM


sybersitizen

I see there is a Gemma 2b model that does not appear to be supported in IMatch:

https://ollama.com/library/gemma3n

Is there a reason why it isn't? My GPU has only 4 GB of VRAM, so I'm looking for the smallest possible model.

Mario

Neither Ollama nor LM Studio currently offers support for the "vision" version of Gemma3n:4b. That's all still in the works.

Both LM Studio and Ollama offer a wide range of models. For IMatch, only vision-capable models are of interest.
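For anyone who wants to check this on their own machine, here is a minimal sketch that asks a local Ollama server whether an installed model reports vision support. It assumes Ollama is running on its default port 11434 and that your Ollama version is recent enough to return a "capabilities" list from /api/show (older versions may not include that field). The model_capabilities helper and the model tags in the loop are just illustrative placeholders; substitute whatever models you have pulled.

import requests

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local port

def model_capabilities(model: str) -> list[str]:
    # Ask the local Ollama server for details about an installed model.
    resp = requests.post(f"{OLLAMA_URL}/api/show", json={"model": model}, timeout=30)
    resp.raise_for_status()
    # Recent Ollama versions return a "capabilities" list such as
    # ["completion", "vision"]; older versions may omit the field entirely.
    return resp.json().get("capabilities", [])

if __name__ == "__main__":
    for tag in ("gemma3:4b", "gemma3n:e2b"):  # placeholder tags; use models you have pulled
        caps = model_capabilities(tag)
        print(tag, "->", "vision-capable" if "vision" in caps else "no vision capability reported")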

sybersitizen

I see. So, once LM Studio and Ollama offer vision support for the new models, IMatch can as well.

sybersitizen

Oh ... I've just been reading that there are QAT (quantization-aware trained) versions of the Gemma 3 models that are smaller in size:

https://developers.googleblog.com/en/gemma-3-quantized-aware-trained-state-of-the-art-ai-to-consumer-gpus/

https://ollama.com/library/gemma3

The 4b QAT model should fit easily in 4 GB of VRAM, but I suppose IMatch would have to be modified in order to support it?
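For what it's worth, once a vision-capable QAT build is available in Ollama, sending an image to it works the same as for any other vision model, via the standard /api/generate endpoint with base64-encoded images. A rough sketch, assuming a local Ollama server on port 11434; the model tag shown is my guess at the QAT tag from the library page (check the page for the exact name), and describe_image and sample.jpg are just illustrative.

import base64
import requests

OLLAMA_URL = "http://localhost:11434"   # Ollama's default local port
MODEL = "gemma3:4b-it-qat"              # assumed QAT tag; verify against the Ollama library page

def describe_image(path: str, prompt: str = "Describe this photo in one sentence.") -> str:
    # Send a single image to the model via Ollama's /api/generate endpoint.
    # Ollama expects images as base64-encoded strings in the "images" array.
    with open(path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    resp = requests.post(
        f"{OLLAMA_URL}/api/generate",
        json={"model": MODEL, "prompt": prompt, "images": [image_b64], "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(describe_image("sample.jpg"))  # hypothetical example file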