Recent posts

#1
AutoTagger and AI / Re: AI Descriptions with face ...
Last post by Mario - Today at 10:14:33 AM
"static description prompt"?
Which kind of conditions?

Keep in mind that IMatch combines the description/keywords/landmarks/trait prompts into one AI query. You cannot e.g. reference the description in your keywords prompt via a variable - because it does not yet exist.
If you need to do such things, create individual presets and run them in the sequence you need, e.g. first run AutoTagger for descriptions, then run AutoTagger with a preset for keywords. Or vice-versa.
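Why sequencing matters can be shown with a toy sketch (plain Python, not IMatch code; `run_preset` is a hypothetical stand-in for one AutoTagger run): in a single combined query, the keywords prompt is built before any description exists, while two sequential runs let the first result feed the second prompt.

```python
# Toy illustration of why prompt sequencing matters.
# run_preset() is a hypothetical stand-in for one AutoTagger run:
# it receives a prompt and returns some AI output.

def run_preset(prompt: str) -> str:
    # Placeholder "AI": just echoes what it was asked for.
    return f"result-for({prompt})"

# One combined query: description and keywords are generated together,
# so the keywords prompt cannot reference the description -- it does
# not exist yet when the prompt is built.
description = None
combined_keywords_prompt = f"keywords based on: {description}"

# Two sequential runs: the first result is available to the second prompt.
description = run_preset("describe the image")
sequential_keywords_prompt = f"keywords based on: {description}"

print(combined_keywords_prompt)   # the description slot is still empty
print(sequential_keywords_prompt) # now the description is available
```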
#2
AutoTagger and AI / Re: AI Descriptions with face ...
Last post by Stenis - Today at 10:08:47 AM
Yes, I did that when I didn't succeed with pasting into the input window. :-) ...but when testing I started with it as an ad hoc case, that's why.

When adding conditions and instructions to the static description prompt, does sequence play any role? I mean, if one condition is dependent on the result of another.



#3
Thanks, Tony
I have fixed the glitch (same reason) in the Copy/Move dialog.
Since you move the file, you get one column less.
Not sure why this does work for you with the \\, but it now works here in all cases, and that's good enough for me right now. Busy times. Thanks for checking.
#4
AutoTagger and AI / Re: AI Descriptions with face ...
Last post by Mario - Today at 09:18:34 AM

Quote: Mario, do you have an idea about the costs of using Ollama and the Google Gemma3 model instead of OpenAI?

It's free for private use. Same for LM Studio and Ollama.


Quote: One thing I experienced when trying to paste the example string you wrote above into the "Prompt" form (F7) was that I could not. So, in order to get it to work, I had to paste it into the prompt for the "Description" element. Can you look at that and try to replicate that "error"?
Prompts go into the prompt input fields for keywords, descriptions and traits.
The input field in the AutoTagger dialog is where you, optionally, provide text for the Context Placeholder [[-c-]], which is a neat feature to quickly "extend" existing prompts with ad-hoc information.

See
Using AutoTagger
Prompting for AutoTagger

for details about prompts, where to put them, the context placeholder and prompting tips.
I cannot repeat all that information in posts.
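As a rough illustration of the context-placeholder idea (plain string substitution; this is my own sketch, not IMatch's actual implementation, and the preset text is made up): a stored prompt contains the [[-c-]] marker, and the ad-hoc text typed into the AutoTagger dialog is substituted in before the prompt is sent to the AI.

```python
# Sketch of the context-placeholder idea: a stored prompt contains the
# [[-c-]] marker, and the ad-hoc text entered in the AutoTagger dialog
# is substituted in before the prompt is sent to the AI service.

PLACEHOLDER = "[[-c-]]"

def expand_prompt(stored_prompt: str, context_text: str) -> str:
    """Replace the context placeholder with the ad-hoc context text."""
    return stored_prompt.replace(PLACEHOLDER, context_text)

stored = "Describe this photo. [[-c-]] Keep it under 30 words."
adhoc = "The photos were taken at a family birthday party."

print(expand_prompt(stored, adhoc))
# The placeholder is gone and the ad-hoc context is embedded in the prompt.
```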
#5
When you have downloaded the model, it should show in the list of models:

Image1.jpg

and when you open the properties, it should look like this if there is enough VRAM:


Image2.jpg

The CPU Offload value tells you whether LM Studio can fit the model in memory. If the value is less than 48, LM Studio needs normal RAM and has to swap, which reduces performance considerably.
The 4B model will of course fit nicely into 12GB VRAM.
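A back-of-the-envelope check shows why (the numbers here are my own rough assumptions, not LM Studio's: about 0.5 bytes per parameter for a 4-bit quantization, plus roughly 20% overhead for KV cache and buffers):

```python
# Rough VRAM estimate for a quantized model.
# Assumptions (mine, not LM Studio's): ~0.5 bytes/parameter for a
# 4-bit quantization, plus ~20% overhead for KV cache and buffers.

def fits_in_vram(params_billions: float, vram_gb: float,
                 bytes_per_param: float = 0.5, overhead: float = 1.2) -> bool:
    """Return True if the estimated model footprint fits into VRAM."""
    needed_gb = params_billions * bytes_per_param * overhead
    return needed_gb <= vram_gb

print(fits_in_vram(4, 12))    # 4B model in 12 GB: ~2.4 GB needed -> True
print(fits_in_vram(27, 12))   # 27B model: ~16.2 GB needed -> False
```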
#6
Is there something written on the funiculars? Or somewhere else (the thumbnail is too small to see it here)?
Or did Gemma just write something ghostly?
#7
AutoTagger and AI / Re: AI Descriptions with face ...
Last post by Stenis - Today at 07:27:18 AM
Mario, do you have an idea about the costs of using Ollama and the Google Gemma3 model instead of OpenAI?

For me it would be highly interesting if it is better with language expressions, and better at landmarks and at identifying animals of all sorts, than OpenAI seems to be by default.

.... but maybe OpenAI would be better if I were more experienced in prompt engineering than I am at the moment.

Isn't this very much about prompting no matter the model you use?
#8
AutoTagger and AI / Re: AI Descriptions with face ...
Last post by Stenis - Today at 07:17:06 AM
Interesting, thanks for that input!

I think it is also a matter of how much patience we have as individuals. When we start building a picture library, it will take some time before it gains momentum and becomes really useful. For that reason, a lot of people will never get there.

When I first tried in Lightroom many years ago, I gave up since it was, and still is, very ineffective. For me it became doable only when I started to use Photo Mechanic, once the PM Plus 6 variant with a database was released.
#9
Quote from: sybersitizen on Today at 12:37:17 AM
Assign copies (visually identical, same format)

I don't know if this is the default or if I chose it at some point. I don't recall changing it.

I have actually only found ONE example so far where I'm seeing more than one non-red-lined file in the Result Window for Duplicates, and in this case one of them is a PSD file and one is a JPEG. A third one that will always be red-lined is also a JPEG. All three are legitimate duplicates of the same scene (with slight differences in editing), but if I understand the documentation correctly, I shouldn't be seeing the PSD because it would not be considered.

Is there any condition where I should expect to see more than one non-red-lined file?

The 'Assign copies (visually identical, same format)' option is the logic applied when the file is first indexed into IMatch. That is why a PSD and a JPEG will not be considered duplicates, and neither will be assigned to the Duplicates category (hence no red line): they have different formats.

Your third file is a JPEG and is red-lined because at the time of indexing there was a visually identical JPEG already in the database.

When you perform a Search for Duplicates from within IMatch, it does not use the indexing rules; that is why it found your two files that were not red-lined (not assigned to the Duplicates category).
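The indexing rule described above can be sketched as a toy grouping (my own illustration, not IMatch's actual algorithm): files count as copies only when both the visual fingerprint and the file format match, so a PSD never groups with a pixel-identical JPEG.

```python
# Toy illustration of the "visually identical, same format" rule:
# two files count as copies only if both the visual hash AND the
# file format match. A PSD and a JPEG with identical pixels are
# therefore never grouped together.

from collections import defaultdict

files = [
    {"name": "scene.psd",  "format": "PSD",  "vhash": "abc123"},
    {"name": "scene1.jpg", "format": "JPEG", "vhash": "abc123"},
    {"name": "scene2.jpg", "format": "JPEG", "vhash": "abc123"},
]

groups = defaultdict(list)
for f in files:
    # Key on (visual hash, format): identical pixels in a different
    # format land in a different group.
    groups[(f["vhash"], f["format"])].append(f["name"])

duplicates = {k: v for k, v in groups.items() if len(v) > 1}
print(duplicates)
# Only the two JPEGs are grouped; the PSD stands alone.
```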


I hope that is helpful.

Michael
#10
AutoTagger and AI / Re: AI Descriptions with face ...
Last post by jch2103 - Today at 12:55:09 AM
Yes, indeed. I think the problem is that people have limited attention spans, don't know what they don't know, and are unsure about new things.

There's a fascinating book, "Thinking, Fast and Slow" by the psychologist Daniel Kahneman. To quote Wikipedia:

Quote: The book's main thesis is a differentiation between two modes of thought: "System 1" is fast, instinctive and emotional; "System 2" is slower, more deliberative, and more logical.
To start using a DAM requires some "System 2" thinking, which requires more energy from the brain, even though the results will be something that can be used with "System 1" thinking once the initial learning curve is overcome. I.e., the rewards are more than worth the effort, at least for some people's use cases.