AutoTagger fails with problems

Started by CJMACKE, September 04, 2025, 05:17:39 PM


CJMACKE

Hoping to get some guidance on the new AI. I watched the videos and got Ollama installed, including the models. I run AutoTagger but keep getting the message that it has failed to process the files. I see API calls, but nothing seems to be written. I have installed IMatch on my D drive, so I am wondering if this is messing with finding the right location for the model to analyze. Any help would be appreciated. Thanks.

Mario

Please always include the ZIPped IMatch log file (see log file) when you report things like this. The log file will contain detailed error information.

Things to check: Have you downloaded the LLaVA 13B model you want to run, using the run command in Ollama?
A 13B model requires a graphic card with 16GB VRAM for good performance. Which graphic card do you use?

In 2025, Gemma 3 4B or Gemma 3 12B is the preferred model, much better than LLaVA. To install the version suitable for your graphic card (4B or 12B), use one of these commands at a command prompt:

ollama run gemma3:4b
ollama run gemma3:12b
After Ollama has downloaded the model, you can use it in AutoTagger.
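To verify the download succeeded, you can list the models Ollama has stored locally (a quick sketch, assuming `ollama` is on your PATH):

```shell
# List all locally available models; the model configured in AutoTagger
# (e.g. gemma3:4b) must appear here. Prints a note if Ollama is not
# installed or its server is not running.
ollama list 2>/dev/null || echo "ollama not found or server not running"
```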

See also Ollama (Local AI) in the IMatch help for additional details.

CJMACKE

Thanks for the help! I downloaded the Gemma 3 4B model. I have 8 GB VRAM. I still get the same error; I have attached the latest log file. I went to the link in the email to review the procedures. Not sure what to do next.

Mario

This is a Windows 10 system. I see warnings about high memory utilization (80%+), but nothing serious.
When IMatch tries to connect to Ollama, it runs into a timeout:

PTCAIConnectorOllama: 2 HTTP Status Code: 408 'Timeout.'

Ollama is not responding within 30 seconds, which is quite unusual. Did you start Ollama before running IMatch? When it runs, you see a little icon in the Windows taskbar:

[Screenshot: Ollama icon in the Windows taskbar]

Usually the Ollama installer installs Ollama in a way that makes the Ollama server start automatically with Windows. You can manually start Ollama from the START menu.
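One way to confirm the server is actually listening (a sketch, assuming Ollama's default port 11434) is a quick request from a command prompt; a running server answers with the plain-text message "Ollama is running":

```shell
# Probe the default Ollama endpoint; a running server replies
# "Ollama is running". Falls back to a note if nothing is listening.
if curl -s --max-time 5 http://localhost:11434/ ; then
  echo " -> server reachable"
else
  echo "server not reachable on port 11434"
fi
```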

Or maybe you run some sort of "security" software / virus checker / firewall that blocks IMatch from accessing Ollama, or Ollama from accepting connections?

CJMACKE

I reinstalled Ollama and redownloaded the gemma3:4b model. Still the same. I checked my firewall: both IMatch and Ollama are allowed through. Ollama shows as running in the taskbar. Perhaps running IMatch from the D drive while Ollama is on the C drive is a conflict?

Tveloso

You might try increasing the timeout a bit, since you run a local model.  As described in the AutoTagger help:

Quote: If you run a local AI (Ollama) on a system without a powerful graphic card or dedicated NPU, response times will be much longer than those of cloud-based AIs. If the local AI takes longer than 30s to respond to an AutoTagger request, increase this timeout to prevent AutoTagger from aborting the request before the AI had time to answer.
--Tony

Mario

Same error: PTCAIConnectorOllama: 2 HTTP Status Code: 408 'Timeout.'

Ollama did not respond within 30 seconds.
Open the AutoTagger settings and increase the timeout to, say, 120 seconds. Then retry. When Ollama actually responds, IMatch logs the response time in the log file.
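To see how long your machine actually needs, you can time one minimal request against the Ollama API yourself (a sketch, assuming the default port 11434 and the gemma3:4b model pulled earlier in this thread):

```shell
# Send one small generation request and let the shell report the elapsed
# time; if this takes anywhere near 30 seconds, AutoTagger's default
# timeout is too short for your hardware.
time curl -s --max-time 120 http://localhost:11434/api/generate \
  -d '{"model": "gemma3:4b", "prompt": "Describe a sunset in one sentence.", "stream": false}'
```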

Which graphic card do you use?
The oldest card I've tested AutoTagger on was an NVIDIA 2070 Ti, about 5 years (?) old. The Ollama response times, even for larger prompts, were less than 10 seconds with small 4B models (LLaVA at the time).

If you use an unsupported card, or the model does not fit into the graphic card memory, Ollama will offload to the CPU (normal processors), which can be 10 times slower than running the model on the graphic card. 
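You can check where Ollama actually placed the model with its process listing (assuming a recent Ollama version; run it while, or shortly after, a model is loaded):

```shell
# Show loaded models; the PROCESSOR column reports e.g. "100% GPU",
# "100% CPU", or a split such as "43%/57% CPU/GPU" when the model was
# partially offloaded to system RAM.
ollama ps 2>/dev/null || echo "ollama not found or server not running"
```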

CJMACKE

Thank you! Setting the timeout to 120 s did the trick!

Mario

Very good. But if your graphic card takes more than 30 seconds for one image, you won't have much fun with this.
Which graphic card do you have?

sybersitizen

Quote from: Mario on September 05, 2025, 08:28:25 AM: If you use an unsupported card, or the model does not fit into the graphic card memory, Ollama will offload to the CPU (normal processors), which can be 10 times slower than running the model on the graphic card.
Just to clarify, Ollama doesn't actually need a supported GPU to work with IMatch if speed of operation is not a priority?

thrinn

Quote from: sybersitizen on September 06, 2025, 07:11:45 PM:
Quote from: Mario on September 05, 2025, 08:28:25 AM: If you use an unsupported card, or the model does not fit into the graphic card memory, Ollama will offload to the CPU (normal processors), which can be 10 times slower than running the model on the graphic card.
Just to clarify, Ollama doesn't actually need a supported GPU to work with IMatch if speed of operation is not a priority?
Correct. For example, I have an AMD graphic card which is not supported. Technically, it works by using only the CPU, but it is no fun to use, in my opinion. I also have run times of more than 30 seconds per image. Especially when you are experimenting with different prompts it gets cumbersome.
Therefore, I decided to subscribe to one of the cloud services in addition. Fortunately, IMatch gives us all these options.
Thorsten
Win 10 / 64, IMatch 2018, IMA

Mario

Ollama (and LM Studio) can use the CPU if no supported graphic card is present.
They will also "offload" data to the normal RAM if the model loaded is too big to fit into the graphic card VRAM.

The performance penalty is quite severe, though: CPU execution is 10 times slower than a mid-range graphic card, and 20 times or more slower than a top-range (and very expensive) graphic card.

On the other hand, adding descriptions and keywords to 10,000 to 20,000 files with OpenAI or Gemini will cost maybe US$10. About the same for Mistral AI, with added European privacy.

If privacy is not an issue, a cloud-based solution is usually preferable. If you already have a "gamer" graphic card, fine; else, cloud. AutoTagger processes up to 8 images in parallel with OpenAI or Gemini, which means several hundred or even a thousand images in a few minutes.

As @thrinn said, AutoTagger will always work the same (except for some variations in the results), whether you use local AI or cloud AI.