how to run Auto tagger in 2025?

Started by kirk, August 19, 2025, 12:55:21 AM


kirk

There was a sort of self-explanatory app called "AutoTagger" in the 2022 version, where I saw a suggestion to use third-party accounts. Now I expected there to be a built-in one, right? How can I run it? I see no AutoTagger app in the App Manager now. When I hit F7 as shown in some video, I see a cryptic panel with some default "ollama" settings, and when I hit "Run" nothing happens. The help seems to only say what it is supposed to do, not how I am supposed to run it. ChatGPT 5 tells me it's an app in the app panel, which I don't see here. Zero help from it.


I see the IMatch AI Chat app. Is that it? It shows me:

It appears the Ollama software is not running or not accessible.
Start it or close and start it again to reset.

Afterwards, click into this app and press <F5> to reload it.

F5 does nothing.

sybersitizen

The built-in autotagging of previous IMatch versions is gone. You now must either subscribe to a cloud-based AI tagging service or install a local AI model on your system (if you have a GPU that can support one).

After that, you must configure AutoTagger under Preferences > AutoTagger to use whatever options you chose.

After that, you start AutoTagger by selecting some files and using the Commands menu > Image > AutoTagger command or pressing the corresponding keyboard shortcut F7. You might also need to submit a prompt that describes in detail what you want the AI to produce.

https://www.photools.com/help/imatch/auto-tagger.htm

https://www.photools.com/help/imatch/ai-services.htm

https://www.photools.com/help/imatch/auto-tagger-prompting.htm

Mario

See the IMatch 2025 - What's new? release overview and video: https://www.photools.com/imatch-2025-whats-new/ which explains AutoTagger and all the other exciting new features introduced in IMatch 2025 six months ago.

Technology has moved on, and IMatch now supports the very powerful LLM AI technology available cheaply from companies like OpenAI, Google and Mistral. IMatch also supports running local AI via the easy-to-use Ollama software or LM Studio, for users who prefer privacy and have suitable graphics cards.

In addition to the help links @sybersitizen included above, which explain how AutoTagger works and how to use and configure it, I have also made several videos showing how to install Ollama and use AutoTagger. You can find them in the IMatch Learning Center: https://www.photools.com/imatch-learning-center/

Don't expect everything to be so intuitive that it requires no documentation. Sometimes some configuration is required, nothing I can do about it.

Start with The IMatch AutoTagger help topic, which links to AI Service Providers; that topic explains how to download, install and configure Ollama (Local AI) if you have a suitable graphics card. If not, and you want AI to create descriptions and keywords for your images, open an account with OpenAI, Google or Mistral and configure AutoTagger to use it. Basically, you just add your API key.
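If F7 seems to do nothing with the default Ollama settings, it is worth confirming that Ollama is actually running and has at least one model installed before blaming IMatch. A minimal sketch (this assumes Ollama's default local endpoint, http://localhost:11434, and its standard /api/tags endpoint; it is a diagnostic aid, not part of IMatch):

```python
import json
import urllib.request
import urllib.error

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local endpoint


def ollama_status(base_url: str = OLLAMA_URL):
    """Return the list of installed model names, or None if Ollama is unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return None  # Ollama not running or not accessible


if __name__ == "__main__":
    models = ollama_status()
    if models is None:
        print("Ollama is not running - start it, then press <F5> in the app.")
    elif not models:
        print("Ollama is up but has no models - run 'ollama pull <model>' first.")
    else:
        print("Ollama is up. Installed models:", models)
```

If this reports no models, pulling a vision-capable model in a terminal (for example `ollama pull gemma3`) and restarting the check is the usual next step.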

AutoTagger is a very powerful tool. Since I released it six months ago, many IMatch users now consider writing descriptions and keywords for images a "solved problem" that AI can do for us.

Depending on your personal preferences, e.g. the style and length of the descriptions you prefer, the language, and how you like your keywords, you will have to spend a bit of time on the prompt you send to the AI. If you have used AI before, you know how this works. I explain this, and provide many pre-made prompts and tips, in the Prompting for AutoTagger topic in the help system. Well worth a read.

kirk

Thanks a lot Mario and sybersitizen. 

Has anyone tried auto-tagging tileable material images, aka textures? And photogrammetry series of closely taken, texture-like photos? I wonder which of the AIs can do it.

Mario

What kind of description would you expect the AI to return for textures? Which attributes are you interested in (aka traits)?
Not sure if this is a good use case.

If you attach a sample image, we can run it with Gemma 3 (local) or the cloud AIs and post the results.

What prompt would you use to describe to the AI what you expect it to return in the description?