Hi,

last week I took a couple of images at a concert, and it turned out
that only a small part of each image was interesting. I was too far
away and shooting with a wide-angle lens, so the band I wanted to
photograph ended up in a small area in the center of the frame,
surrounded by lots of other stuff: stage, audience etc.

Now this could be solved by taking better pictures in the first place:
getting closer, being prepared with a telephoto lens etc. - but there
also seems to be a software-side solution that could find its way into
darktable.

There have been a number of media reports about machine-learning
experiments by Google and others that add missing detail to images
during upscaling (so-called super-resolution), and the results often
seem quite convincing. Now I stumbled upon a GitHub project for this
that seems to offer a hands-on solution and might be a basis for an
implementation in darktable:

https://github.com/lucasdupin/ml-image-scaling
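
Just to illustrate what I mean (outside darktable), here is a rough
Python sketch of that kind of ML upscaling, using OpenCV's dnn_superres
module with a pretrained EDSR model. The model file and image names are
only placeholders - the weights have to be downloaded separately:

    # Upscale a crop 4x with a pretrained super-resolution model
    # (needs opencv-contrib-python; EDSR_x4.pb is the downloaded weights file)
    import cv2

    sr = cv2.dnn_superres.DnnSuperResImpl_create()
    sr.readModel("EDSR_x4.pb")          # load pretrained weights
    sr.setModel("edsr", 4)              # algorithm name and scale factor
    crop = cv2.imread("band_crop.jpg")  # the interesting part of the frame
    upscaled = sr.upsample(crop)        # 4x upscale with learned detail
    cv2.imwrite("band_crop_4x.jpg", upscaled)

Obviously darktable itself is written in C, so this is only meant to
show the general workflow, not a proposal for the actual implementation.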

What do you think? I imagine this would be useful...

Cheers
Michael
