Drop down Menu of available Models - AutoTagger

Started by Lincoln, May 29, 2025, 12:49:50 AM

Previous topic - Next topic

Lincoln

I'm using AutoTagger locally with LM Studio (latest 3.16, version 8). When I go to the AutoTagger drop-down menu to select a model, it shows a list of models that I can't find in the default LM Studio model download folder
(C:\Users\linco\.cache\lm-studio\models\lmstudio-community)?
I am guessing that AutoTagger is showing/looking at another directory.
 
If I select the Gemma 3 12B model, AutoTagger works fine. The models listed in AutoTagger don't show up in LM Studio, but since I haven't downloaded the others through LM Studio, that is expected.

Mario

AutoTagger only supports a list of models I have tested and configured. It does not look for models you might have installed in LM Studio or Ollama. You can use only the models AutoTagger has configurations for.

See AI Service Providers for a list of supported AIs and models.

You posted your question in a board that is meant to provide FAQs, tips & tricks, and tutorials for other users.
If in doubt, post in General Questions.
I will move this thread there.

Lincoln

Thanks for pointing me in that direction. I was starting to think that was the case.
I have a graphics card with 24 GB of VRAM; will there be any options for using these larger LLMs in the future?

Thanks for moving the post to the correct section.

Mario

Which vision-enabled model do you want to run locally? You cannot use any other model with AutoTagger.
The largest vision-enabled model LM Studio offers is Gemma 3 12B, and that's supported by IMatch.

Lincoln

LM Studio version 3.16 has the Google/Gemma 27B (Q4) model.

Very happy with the version I am using at present but if it is possible :)

Mario

I doubt you will notice any difference in AutoTagger.
And I cannot test such large models here.

Lincoln

We'll have to agree to disagree. Every model has different powers; otherwise there would be no point in creating them.
No problem though, I can manually run an image through LM Studio if I need to. AutoTagger is such a pleasure to use that you have spoiled us and now we want more :)

Mario

Quote: "Every model has different powers"
Yes. But Gemma 3 is mostly for reasoning, math, and coding. I'm not sure if a larger model will have better image analysis capabilities. Jumping from 12B to 500B (cloud), definitely. But from 12B to 27B with only 4-bit precision left after quantization... not sure. Which GPU do you use?

You can try this, if you want.

Close IMatch.
Make a backup copy of the file "C:\ProgramData\photools.com\IMatch6\config\ai-services-202501.json" and then open the original in a capable text editor that supports JSON (Visual Studio Code or similar).

Scroll down until you see the entry for gemma3:12b.
Copy the entire section and rename it as shown in this screenshot. I don't think we need to change any of the control parameters for this model.

Image1.jpg
Let me know if you notice any difference, get better results (and for which kind of prompt and subject), etc.
Note: Unless I add this officially, this change will be undone when the next release of IMatch is installed. Then you have to add the section again.
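For readers following along, the copy-and-rename step might look roughly like this. This is a hedged sketch only: the actual key names and structure inside ai-services-202501.json are assumptions (the real format is only shown in the screenshot above), so treat the field names as placeholders.

```json
{
  "models": [
    {
      "id": "gemma3:12b",
      "name": "Gemma 3 12B"
    },
    {
      "id": "gemma3:27b",
      "name": "Gemma 3 27B (user-added)"
    }
  ]
}
```

The second entry is the duplicated section with only the model id and display name changed to point at the 27B download; any control parameters (omitted here) would be copied over unchanged, as suggested above.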

Lincoln

I'm using an Intel NUC (10th generation) with 64 GB RAM and an eGPU dock with an AMD Hellhound RX 7900 XTX 24 GB card.
The graphics card was on special for AUD 1200. I use it for Topaz AI Photo (Dust & Scratches), but that is well and truly in beta.

I've only done a couple of tests, but it looks very promising!
The 27B model got 100% on the first try with only the City, State, Country metadata provided and the prompt "[[-c-]] Short description in magazine style".

Mario

How do the results differ between the 12B and 27B models for the same set of, say, 10 sample images?
How is the performance (time per image)? Usually AI models are designed to run on CUDA (NVIDIA), and support for AMD GPUs is not that widespread.
That would be a good topic for the AI Tips & Tricks board.