Hello! Daniel J Blueman writes:
> In my thesis a while back [1], I developed a strong algorithm and
> tool [2] to automatically generate LCA correction data from an
> image. These correction values hold for the given set of lens
> parameters usually encoded in the image's EXIF fields.

I appreciate that people are working on TCA correction (I avoid the acronym "LCA" because it may be misinterpreted as "longitudinal CA"). It is still sub-optimal in Darktable, so any improvement is welcome.

Let me first review the current situation. Darktable has two means to reduce TCA. The automatic approach often works poorly, according to many user reports. The profile-based approach using LensFun is reliable, but LensFun's database of TCA profiles is extremely small.

LensFun profiles are generated by running tca_correct over a sample photograph. This works very well. tca_correct's method is described at <http://hugin.sourceforge.net/tutorials/tca/en.shtml>. Basically, it finds control points and fits distortion polynomials for the red and blue channels respectively. You can use the resulting coefficients in LensFun.

Alternatively, you can use fulla to TCA-correct your photograph. With fulla, the polynomial is allowed to be more complex, and decentering of the lens is taken into account. You can then re-import the picture into Darktable. However, this is awkward.

Adobe Lightroom used to use profile-based TCA correction. Its database is much larger than LensFun's. But for whatever reason, Adobe decided to give up this method. Instead, Lightroom now prevents most TCA already during demosaicing, without any profile. I have no idea how it achieves this. After that, you can enable CA correction, which is simply a desaturation of edges. It works well, and allows for removal of axial CA as well.
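To make the polynomial model concrete: as far as I can tell, a LensFun "poly3" TCA entry has the form <tca model="poly3" focal="..." br="..." cr="..." vr="..." bb="..." cb="..." vb="..."/>, i.e. the red and blue channels each get a cubic radial rescaling relative to green. A minimal Python sketch of that mapping follows; the coefficient values are made up for illustration and do not come from any real profile:

```python
import numpy as np

def tca_source_radius(r, b, c, v):
    """Map a destination radius r (normalized to the image half-diagonal)
    to the source radius in the uncorrected channel, using the cubic
    radial model that LensFun "poly3" TCA profiles describe:

        r_src = r * (b * r**2 + c * r + v)

    For a perfectly aligned channel, b = c = 0 and v = 1."""
    return r * (b * r**2 + c * r + v)

# Hypothetical red-channel coefficients (illustration only):
br, cr, vr = 1.0e-4, -2.0e-4, 1.0002

r = np.linspace(0.0, 1.0, 5)            # sample radii from center to corner
r_src = tca_source_radius(r, br, cr, vr)
shift = r_src - r                       # radial displacement of the red channel
```

The correction itself then just resamples the red (and blue) channel at r_src instead of r; the displacement grows toward the corners, which is why TCA is invisible in the image center.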
> I wanted feedback on some ideas to bring this tool to a wider audience:
> - reduce runtime/memory usage
> - generating lib lensfun-compatible output
> - ability to work on a directory of images, averaging across related settings
> - moving from python to C/C++

If I understood your text correctly, you calculate difference images between the channels and sum these differences up. Then you calculate the distortion coefficients that minimize this integrated difference. This sounds costly. In particular, I doubt that it can be used for real-time correction of images in Darktable.

I suggest the following: write a C program that reads a TIFF and writes a LensFun <tca> tag line. This would be very helpful. While tca_correct works well, your method really seems to be very robust and accurate. Its slowness doesn't matter when it comes to generating profiles. I would use your tool in my own calibration tutorial/program, and the LensFun people would probably link to it, too.

> Finally, I find that LCA correction is dependent on aperture and
> potentially focus settings (as lens elements are moved); this isn't
> encoded in the existing lensfun correction model [3].

TCA is weakly dependent on the focal length (of a zoom lens). See <http://en.wikipedia.org/wiki/Breathing_(lens)> for the cause. It is very weakly dependent on aperture. In fact, if I browse through the reviews at <http://www.photozone.de/Reviews/overview>, I doubt that there is a measurable dependence beyond systematic errors. (I'd really like to know why there is no dependence, by the way.) Be that as it may, I don't think it is sensible to extend LensFun this way.

Bye,
Torsten.

-- 
Torsten Bronger           Jabber ID: [email protected]
                          or http://bronger-jmp.appspot.com
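PS: Here is a toy sketch, in Python, of the cost I understand you to be minimizing. This is my assumption about the method, reduced to one dimension and a single magnification parameter; your actual tool surely does more:

```python
import numpy as np

# Toy 1-D illustration: treat the red channel as a radially magnified copy of
# the green channel, and search for the magnification k that minimizes the
# summed absolute difference between the channels.

x = np.linspace(-1.0, 1.0, 2001)            # signed "radius" across a scanline
green = 1.0 / (1.0 + np.exp(-40.0 * x))     # a soft edge in the green channel
k_true = 1.002                              # hidden red-channel magnification
red = np.interp(x / k_true, x, green)       # red(r) = green(r / k_true)

def cost(k):
    """Summed absolute channel difference after undoing a magnification k."""
    red_corrected = np.interp(x * k, x, red)
    return np.abs(red_corrected - green).sum()

# Brute-force scan over candidate magnifications:
candidates = np.linspace(0.995, 1.005, 201)
k_best = candidates[np.argmin([cost(k) for k in candidates])]
```

A brute-force scan like this is obviously far too slow for per-image use inside Darktable, which is why I would restrict the method to one-off profile generation.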
_______________________________________________
darktable-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/darktable-devel
