I've been comparing darktable with different raw developers recently, hoping I 
could get it to give me the "best" results. I just posted a comparison of my 
preliminary results on a forum for SfM photogrammetry that I frequent. My main 
finding is that JPGs/TIFFs generated with darktable and OpenImageIO (and also 
native DNG) didn't work as well for our purposes as JPGs/TIFFs generated with 
Affinity Photo (!) or Capture One. I would love to know what about the raw 
processing made these applications work "best", but I wanted to bring it up in 
case any darktable users or devs have insights. I suspect I might get better 
results with auto-levelling and the right colorspaces, but I'm not sure what 
darktable is doing differently from Capture One or Affinity Photo (though I 
know from a variety of posts that, at least in Capture One's case, it's partly 
related to input profiles and camera curves). Based on one of Affinity's blog 
posts about improvements to their demosaicking algorithm, I wonder if that's a 
significant factor.

Link to my post below, but the quick summary: with Affinity or Capture One 
post-processing, I can get ~5-10% "better" results than with "raw" (DNG is the 
only supported "raw" input for the photogrammetry software). With darktable 
and oiio I can't quite match the raw. I started but didn't finish evaluating 
other software because it wasn't satisfactory from a color-processing or 
performance standpoint. Affinity is barely satisfactory performance-wise, but 
I think that's a bug with Threadripper support.

Post link (with pretty chart): 

https://www.agisoft.com/forum/index.php?topic=11952.msg54532#msg54532

I know mine is a non-standard use of darktable, but I really like a lot of 
things about the program, so I spent a lot of time trying to get it to work 
well with my raws. In the end I *think* the limitations of darktable's 
auto-applied processing are what made the other software work better, and I'm 
disappointed that my "best" results weren't with darktable and/or OpenImageIO. 
There's a chance it's just because I'm not an imaging expert and some simple 
concept or setting is eluding me, but I wanted to report my results in case 
there are other issues at play, like demosaicing or some auto-applied step or 
... I don't even know. I think darktable is great, I'm going to keep following 
it, and I'd love any ideas folks have for getting better results. A good 
programmer could probably even automate a test pipeline with hugin's 
image-matching back-end (sorry, I forget what it's called), since I imagine a 
lot of the SIFT algorithms are the same. There are also automated 
photogrammetry pipelines (like Alice) that darktable could be tested against, 
but that's beyond the scope of my work for now.

/Andy

____________________________________________________________________________
darktable user mailing list
to unsubscribe send a mail to [email protected]
