Hi guys

What would be really nice is a tool that lets the user choose, since from
what I read, even with an AI it could never be reliable.

I remember when I was using AfterShot (I think that was the name; before
that it was called Bibble something), it had a REALLY great comparison tool:
you select up to 6 photos in your list, it displays them all side by side,
and when you zoom on one, it instantly zooms on all the others at the same
location. For a burst it's great: you can very quickly tell which shots are
fine and which are not.

Doing that in darktable is of course not impossible, but it's significantly
slower. A similar tool would be great, maybe without applying all the
modules: we don't need a fully edited photo for a sharpness comparison.


    François

On Sun, Oct 6, 2019 at 16:41, Aurélien Pierre <rese...@aurelienpierre.com>
wrote:

> argh. Tales of over-engineering…
>
> Just overlay the Euclidean norm of the 2D Laplacian on top of the pictures
> (some cameras call that focus peaking), and let the photographer eyeball
> them. That will do for subjects shot at a large aperture, when the subject
> is supposed to pop out of the background. For small apertures, the L2 norm
> will do a fair job. And it's a Saturday-afternoon job, hence a very
> realistic project given our current resources.
>
> What you ask for is AI; it's a big project for a specialist, and it is
> almost certain we will never make it work reliably. The drawback of AIs,
> even when they work, is that they fail inconsistently and need to be
> double-checked anyway.
>
> So, better give users meaningful scopes and let them take their
> responsibility, rather than rely on witchcraft that works only in Nvidia's
> papers on carefully curated samples.
> On 06/10/2019 at 16:18, Robert Krawitz wrote:
>
> On Sun, 6 Oct 2019 15:02:39 +0200, Aurélien Pierre wrote:
>
> That can be easily done by computing the L2 norm of the Laplacian of the
> pictures, or the L2 norm of the first level of the wavelet decomposition
> (which is used in the focus preview), and taking the maximum.
>
> As usual, wiring the UI to the functionality will be more work than
> writing the core image processing.
>
>
> Consider the case where the AF locks onto the background.  This will
> likely result in a very large fraction of the image being in focus,
> but this will be exactly the wrong photo to select.
>
> Perhaps center-weighting, luminosity-weighting (if an assumption is
> made that the desired subject is usually brighter than the background,
> but not extremely light), skin tone recognition (with all of the
> attendant problems of what constitutes "skin tone"), and face
> recognition would have to feed into it.
>
>
> On 06/10/2019 at 14:14, Germano Massullo wrote:
>
> On Sun, Oct 6, 2019 at 13:32 Moritz Mœller <virtualr...@gmail.com> wrote:
>
> Define 'most focused'.
> I give you an example to understand this request better. [...]
>
>
> Yes, you are right, but in your case the couple is the main thing that
> is moving in the picture. For my use case, imagine I am taking photos
> of people who are giving a talk. Some photos of the burst may be
> blurred because I moved the camera while shooting, while other shots
> of the same burst could have less blur because my hands were not
> moving during the exposure time.
> It would be great if an algorithm could detect the best shots.
>
>

___________________________________________________________________________
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org
