Auto Tagger - The AI failed to process one or more files...

Started by ilgenealogist, October 25, 2025, 12:50:31 AM


ilgenealogist

I have just started poking around using AI for image metadata creation, and would love to see if Auto Tagger can ease my workflow. I followed the online tutorials and help prompts to set up the tagger, including downloading and enabling the Ollama engine.

When I select an image file and press F7, the Auto Tagger dialog appears as expected. However, selecting "Run" just generates an error: "The AI failed to process one or more files or returned invalid responses."

This is repeatable regardless of image file size, type (JPG, TIFF), or settings in the Auto Tagger dialog.

I am running iMatch 2025.6.2 on Win11 Pro 24H2, 32GB RAM. Files have been tested on external HDD and internal SSD with same results.

My goal is to create metadata text for several fields in each file, based on Dublin Core/XMP. Based on the prompts provided, the text should be modeled after NARA archival image metadata tags. I have used ChatGPT to do this manually, and it works quite well.

I just can't seem to get the Auto Tagger function off the ground.

Any ideas/help would be appreciated!


[Screenshots of the Auto Tagger settings attached]

Tveloso

If the connection to the local AI ends in an error, the IMatch log file will contain details on the cause of the failure. Just search for "E>" or "W>" in the log file.
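
If digging through the whole log is tedious, a small script can pull out just those lines. This is only a sketch - the log file path is a placeholder you would replace with wherever your IMatch log actually lives:

# Sketch: print every error ("E>") and warning ("W>") line from a copy of the IMatch log.
# The path below is a placeholder - point it at your actual log file.
from pathlib import Path

LOG_PATH = Path(r"C:\path\to\imatch_log.txt")

for number, line in enumerate(LOG_PATH.read_text(errors="replace").splitlines(), start=1):
    if "E>" in line or "W>" in line:
        print(f"{number}: {line.rstrip()}")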

You might simply be running into a timeout condition, which you can solve by increasing the timeout value (shown in the middle-right of your first ScreenShot).  Maybe set it to 60 seconds to see if that allows the AI to return successfully...

Also, that ScreenShot shows that you're using one of the LLaVA models.  Mario now recommends one of the Gemma 3 Models for local AI, as discussed here:

https://www.photools.com/help/imatch/ai-services.htm
--Tony

Mario

Attach the log file so we can learn more about the problem.
I assume it's a timeout.

You also have a whole bunch of traits, including description (which is one of the three standard AI tags IMatch already supports) and creator, which is where you usually put your name or the name of the photographer.

Which type of graphics card do you have? For this amount of work (10 traits?) Ollama will take a long time to produce a result, even on a 4080 or better NVIDIA card.

If your graphics card has only 4GB of VRAM (or less) and Ollama must fall back to the normal processor instead, response time will go up by a factor of 10 or 20. The default timeout is 30 seconds, which is usually enough when the graphics card is fast and the prompt is not overly complex.
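
One quick way to check whether Ollama actually runs the model on the GPU is to ask it while a model is loaded. Just a sketch (it only shells out to the ollama command line tool, which lists loaded models and where they run):

# Sketch: "ollama ps" lists the currently loaded models; the PROCESSOR column
# shows whether a model runs on the GPU ("100% GPU") or fell back to the CPU.
import subprocess

result = subprocess.run(["ollama", "ps"], capture_output=True, text=True, check=False)
print(result.stdout or result.stderr)

Anything with "CPU" in that column explains long response times.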

As Tveloso mentioned, the Gemma 3 model produces much better results.

Try this:

Create a new setting which only sets the XMP description. 
Use a simple prompt for the description, like the examples shown in the help.
Then run it on a few images.

Attach the log file (see the help on the log file) from that session to your reply.

If this works, it's a timeout condition. Your graphics card is not fast enough to produce responses for the many trait tags you use within 30 seconds. Increase the timeout or split your setting into multiple settings and run them one after the other.
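
If you want to see how fast your hardware really is, independent of IMatch, you can time a single minimal request against Ollama directly. The following is only a sketch - the model name, the default port 11434 and the image path are assumptions you need to adjust to your setup:

# Sketch: send one image with a minimal "describe this photo" prompt straight to
# the local Ollama server and time how long the answer takes.
import base64
import json
import time
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint
MODEL = "gemma3:12b"                                # assumes this model has been pulled
IMAGE_PATH = r"C:\temp\test.jpg"                    # placeholder test image

with open(IMAGE_PATH, "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("ascii")

payload = json.dumps({
    "model": MODEL,
    "prompt": "Describe this photo in one or two sentences.",
    "images": [image_b64],
    "stream": False,
}).encode("utf-8")

request = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})

start = time.time()
with urllib.request.urlopen(request, timeout=300) as response:
    answer = json.loads(response.read())

print(f"Took {time.time() - start:.1f} seconds")
print(answer.get("response", ""))

If that single, simple request already needs 20 or 30 seconds, a setting with 10 traits has no chance of finishing within the default timeout.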

Quote: "I have used ChatGPT to do this manually, and it works quite well."

Then consider using AutoTagger with OpenAI to get the same results. This is also the way to go if you only have a "normal" graphics card that is not suitable for running large local models. LLaVA definitely cannot do what you want, but Gemma 3 12b might - although it also requires a graphics card with 16GB of VRAM.