On Monday, August 1, 2022 at 6:54:46 AM UTC-4 davi...@gmail.com wrote:
> Where the camera has an advantage is that it knows precisely how the
> sensor behaves, when and where it will create the most noise and what kind
> of noise, maybe even detect different kinds of surfaces and apply different
> noise reduction settings on different parts of a photo in order to preserve
> more details when possible. But of course this all relies on the camera's
> "AI" (I am not sure the word "AI" really applies here), the AI could guess
> incorrectly, and the user could achieve a better result with enough
> experience and time.

I wish I understood this stuff better. What part (if any) of the jpg looking better is due to the lossy jpg compression actually improving quality? Areas that should be smooth in color are smooth in the jpg, but grainy-looking when converting arw to tiff. Edges that are sharp in the jpg are fuzzy when converting arw to tiff.

The arw format as produced by my alpha 7 III is a fairly crude lossy compression, averaging 8 bits per pixel. It carries much better information than a plain 8-bit-per-pixel image, but much less than uncompressed raw would. Clearly the sensor data had at least 11 bits per pixel (maybe more) before that compression. I'd far rather have pictures take twice the space and not lose that, but I'm pretty sure the alpha 7 III has no such option.

Likely the interpolation and noise reduction that produce the image feeding the jpg compressor work from the original (at least 11-bit-per-pixel) data, rather than from a reconstruction of that data after lossy compression. Maybe there is nothing one can do to the arw file that gets the interpolation and noise elimination as good as in the jpg, because you don't have the original data. So choosing jpg vs. arw is a trade-off of which information to lose, not an option to keep the maximum.

Sony's website tells you to use the ImagingEdge software they distribute to convert arw to tiff. It does a slightly worse job than RawTherapee.
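For what it's worth, public reverse-engineering write-ups of Sony's lossy raw format (the "ARW2" analyses from the LibRaw/RawDigger people) describe roughly this scheme: samples are first squeezed through a tone curve down to 11 bits, then each run of 16 same-color pixels is packed into 16 bytes, i.e. 8 bits per pixel on average. Below is a toy Python sketch of that kind of block codec; the exact layout and bit widths here are my assumptions for illustration, not Sony's spec:

```python
# Toy sketch of an ARW2-style lossy block codec (assumed layout, not
# Sony's actual spec): 16 samples of 11 bits each packed into 128 bits,
# i.e. 8 bits/pixel on average:
#   11-bit max + 11-bit min + two 4-bit indices + 14 deltas of 7 bits.

def fit_shift(span):
    """Smallest right-shift making every delta in [0, span] fit in 7 bits."""
    shift = 0
    while (span >> shift) > 127:
        shift += 1
    return shift

def compress_block(pixels):
    assert len(pixels) == 16 and all(0 <= p < 2048 for p in pixels)
    imax = max(range(16), key=lambda i: pixels[i])
    imin = min(range(16), key=lambda i: pixels[i])
    if imax == imin:            # all samples equal: any two slots will do
        imax, imin = 0, 1
    pmax, pmin = pixels[imax], pixels[imin]
    shift = fit_shift(pmax - pmin)   # not stored; the decoder re-derives it
    deltas = [(pixels[i] - pmin) >> shift
              for i in range(16) if i not in (imax, imin)]
    return (pmax, pmin, imax, imin, deltas)

def decompress_block(blk):
    pmax, pmin, imax, imin, deltas = blk
    shift = fit_shift(pmax - pmin)   # same derivation as the encoder
    out = [None] * 16
    out[imax], out[imin] = pmax, pmin
    it = iter(deltas)
    for i in range(16):
        if out[i] is None:
            out[i] = pmin + (next(it) << shift)
    return out
```

If the real format works anything like this, it predicts the behavior I'd expect: a flat block (small max-min span) round-trips losslessly, while a block straddling a sharp, high-contrast edge has its deltas quantized by up to 15 of the 2048 tone-curve levels, concentrating the damage exactly where there is fine detail.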
If knowledge of the characteristics of the sensor were the major difference, then ImagingEdge ought to do a better job.

For a tripod shot of a stationary target, the extra detail in dark areas of the arw is such an advantage over the jpg that the arw is clearly better. For faster exposures, especially with a narrow aperture to increase depth of field (forcing higher ISO), the garbage in the darker areas of the tiff outweighs the extra detail, and the jpg is much better. The exposure bracketing feature of this camera is garbage, so it is generally not the answer. If I were only shooting from a tripod with decent lighting, even my own calls to libraw, which generally do a worse job than ImagingEdge, would be usable and better than just using the jpg.

I also don't know what is lost (or gained) by delaying some of that processing until after stitching. Since I don't know the concepts behind either better interpolation or noise reduction, I really have no clue what information is lost by delaying. The gains might be more obvious: where stitching makes automatic decisions about which image's version of an overlap area is "better", noise reduction done first is more likely to misguide that choice, versus choosing the better source before correcting. Where the user must intervene to guide the process (such as applying a different tone mapping by mask to different parts of the image), doing so after stitching may be much less work and avoid wasted effort.

--
A list of frequently asked questions is available at: http://wiki.panotools.org/Hugin_FAQ
---
You received this message because you are subscribed to the Google Groups "hugin and other free panoramic software" group.
To unsubscribe from this group and stop receiving emails from it, send an email to hugin-ptx+unsubscr...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/hugin-ptx/f988ccae-9846-416a-8356-488db5d32561n%40googlegroups.com.