>> A few ways around that data loss:
>> - Use 16-bit gamma RGB. There's still *some* data manipulation in
>> terms of quantizing (rounding up/down), but 12-bit linear has fewer
>> gradations in all areas than 16-bit gamma.
>>
>> - Keep all images 16-bit linear and use color-management during any
>> processing to gamma-correct for viewing. Only convert to
>> gamma-corrected images for "prints"... web, etc. The master RGB
>> image stays in 16 bits.
>
> I'm not entirely certain what you are suggesting, Cory. What is "16-
> bit gamma RGB"? I've never heard of that. And besides, what the
> sensor captured is 12-bit linear data ... you can never have more
> gradations than were there, all transformations will have losses,
> mathematically speaking. There may be more values but they are
> synthesized in interpolation.
>
Typically, RGB data represents gamma-corrected RGB data, but it
doesn't have to. It just happens to be a logical extension of 8-bit RGB
images which *HAVE* to be gamma-corrected in order to have enough
gradations within the dynamic range. If one takes the 12-bit Bayer data
from the sensor and interpolates into 3 channels (RGB), quantizing to the
closest level of 16-bits/channel, the result is a 16-bit LINEAR RGB file.
If one further applies the gamma encoding (a power-law curve, often
loosely called logarithmic) to each of the channels, it's a "typical"
16-bit RGB image file. If one then quantizes it to 8 bits, it's a
typical RGB file.
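To make that chain concrete, here's a minimal sketch (assuming a plain 2.2 power curve rather than an exact encoding like sRGB, which adds a linear toe segment):

```python
# Sketch of the conversion chain described above. Assumes a plain 2.2
# power-law gamma; real encodings like sRGB differ slightly.

def linear12_to_linear16(v12):
    """Scale a 12-bit linear sample (0..4095) up to 16 bits (0..65535)."""
    return round(v12 * 65535 / 4095)

def gamma_encode(v16, gamma=2.2):
    """Gamma-encode a 16-bit linear sample into a 'typical' 16-bit value."""
    return round((v16 / 65535) ** (1 / gamma) * 65535)

def quantize_to_8bit(v16):
    """Final quantization down to an ordinary 8-bit channel value."""
    return round(v16 / 65535 * 255)

# A deep-shadow sensor value shows why 8-bit data *HAS* to be gamma-encoded:
v = linear12_to_linear16(40)              # a dark 12-bit sample
print(quantize_to_8bit(v))                # linear path -> code 2 of 255
print(quantize_to_8bit(gamma_encode(v)))  # gamma path  -> code 31 of 255
```

The gamma path spreads that shadow sample across far more of the 8-bit code range, which is exactly the "enough gradations within the dynamic range" point.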
I often do the RAW conversion of my -DS pictures into a 16-bit
linear TIFF. I have ICC-profiled my camera, so that if I enable the
"color-manage display" option in cinepaint, it will gamma-correct the
image I see in real-time. Thus, the data and all processing done on it
(levels, curves, WB, sharpening, etc) are all done on the *LINEAR* data...
not the log data. Here's a link to some pages where some guy has gone
into depth on some of this stuff:
http://www.aim-dtp.net/aim/evaluation/gamma_error/processing_space.htm
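The core error those pages document -- averaging pixels in gamma space when the math assumes linear light -- fits in a few lines (a toy blend, again assuming a plain 2.2 power curve):

```python
# Why processing should happen on *LINEAR* data: operations that average
# pixels (blur, resize, blend) are physically meaningful only in linear
# light. Toy 50/50 blend of black and white, assuming a 2.2 power gamma.

def to_linear(v, gamma=2.2):
    """Decode an 8-bit gamma-encoded value to a linear fraction 0..1."""
    return (v / 255) ** gamma

def to_gamma(f, gamma=2.2):
    """Encode a linear fraction 0..1 back to an 8-bit gamma value."""
    return f ** (1 / gamma) * 255

black, white = 0, 255

naive = (black + white) / 2    # blending the encoded values directly
correct = to_gamma((to_linear(black) + to_linear(white)) / 2)

print(round(naive))    # 128
print(round(correct))  # 186 -- the physically correct mix is brighter
```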
> You have to do gamma correction in RAW conversion to have a properly
> rendered image, and data loss in RAW conversion is unavoidable: it is
> mathematically impossible to do the conversion without it. The
> process of interpolation (compression of high values and expansion of
> low values to suit the curve normal to vision) will change original
> values at the photosites into something else, and some data will be
> lost in that transformation. Data loss isn't always bad, it is
> actually necessary to the process; the goal is to lose as little
> *significant* data as possible.
>
You're confusing Bayer interpolation with gamma-encoding images.
They're not the same thing.... Bayer interpolation takes the monochrome
"image" taken by the sensor with alternating RGBG color filter masks and
tries to recreate an actual 3-colors-per-pixel image from it.
Gamma-encoding images is simply applying a nonlinear (power-law) function
to the data before quantizing.
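To illustrate how separate the two steps are, here's a toy demosaic of a single pixel (naive neighbor-averaging on an RGGB pattern, purely illustrative -- real converters use much smarter, edge-aware interpolation). Note that gamma never enters into it:

```python
# Toy demosaic of one interior pixel of an RGGB Bayer mosaic. The sensor
# records ONE monochrome sample per photosite; interpolation synthesizes
# the two missing channels from neighbors. Values are arbitrary 12-bit-ish.

mosaic = [
    [100, 200, 110, 210],   # R  G  R  G
    [300, 400, 310, 410],   # G  B  G  B
    [120, 220, 130, 230],   # R  G  R  G
    [320, 420, 330, 430],   # G  B  G  B
]

def demosaic_at(y, x):
    """Recover (R, G, B) at an interior green site in the B rows above."""
    g = mosaic[y][x]                                # green measured here
    r = (mosaic[y - 1][x] + mosaic[y + 1][x]) / 2   # reds above/below
    b = (mosaic[y][x - 1] + mosaic[y][x + 1]) / 2   # blues left/right
    return (r, g, b)

print(demosaic_at(1, 2))   # (120.0, 310, 405.0)
```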
> Of course, moving to as large a data space and gamut as possible will
> maximize what you keep and present the greatest number of options for
> further editing. All my RAW conversion is done into ProPhoto RGB now,
> the largest possible color gamut, represented as full [EMAIL PROTECTED]
> RGB images. I only do downsample conversion to [EMAIL PROTECTED] and sRGB
> gamut for web display, and all printing is color managed through the
> appropriate profiles at time of printing.
>
    Likely done in a gamma-corrected 16-bit colorspace, but it doesn't
*have* to be. With 16 bits/channel, gamma correction is extra processing
of the original RAW data beyond what's actually necessary.
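A quick round-trip check backs up keeping the master linear (assuming a plain 2.2 power curve for the 8-bit case): every 12-bit sensor value survives 16-bit linear storage exactly, while an 8-bit intermediate -- even gamma-encoded -- collapses many values together:

```python
# Round-trip test: does a 12-bit linear sensor value survive storage in a
# given working format? Assumes a plain 2.2 power gamma for the 8-bit case.

def rt_16bit_linear(v12):
    v16 = round(v12 / 4095 * 65535)       # store as 16-bit linear
    return round(v16 / 65535 * 4095)      # read back

def rt_8bit_gamma(v12, gamma=2.2):
    v8 = round((v12 / 4095) ** (1 / gamma) * 255)   # store as 8-bit gamma
    return round((v8 / 255) ** gamma * 4095)        # decode back

print(all(rt_16bit_linear(v) == v for v in range(4096)))  # True
print(all(rt_8bit_gamma(v) == v for v in range(4096)))    # False
```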
-Cory
--
*************************************************************************
* Cory Papenfuss, Ph.D., PPSEL-IA *
* Electrical Engineering *
* Virginia Polytechnic Institute and State University *
*************************************************************************
--
PDML Pentax-Discuss Mail List
[email protected]
http://pdml.net/mailman/listinfo/pdml_pdml.net