Regarding linearizing:

Core Image using Linear RGB is "correct", as I understand it, from a strict color-workflow standpoint. Certain color operations one takes for granted are designed to work in linear color spaces, so things like contrast/hue and color-correction filters, creating proper gradients, and so on, when run on images that are assumed to be linear but not actually linearized, are technically incorrect in their output and will artifact. This gets especially complicated when (as Chris hinted at) dealing with images that carry different color assumptions (frames from "YUV"-format SD, HD variants, AdobeRGB/device-native-colorspace RGB images, etc.). Mixing and matching the same filters would produce different color outputs.
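
To make the gradient point concrete, here's a small Python sketch (not Core Image code — just the standard sRGB transfer functions standing in for whatever encoding your image actually has) showing that the midpoint of a black-to-white gradient comes out differently depending on whether you blend the encoded values or the linearized ones:

```python
def srgb_to_linear(c):
    """Standard sRGB decode (IEC 61966-2-1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Standard sRGB encode."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

black, white = 0.0, 1.0

# Naive midpoint computed directly on the encoded (gamma) values:
gamma_mid = (black + white) / 2  # 0.5

# Correct midpoint: decode to linear, blend, re-encode for display:
linear_mid = linear_to_srgb((srgb_to_linear(black) + srgb_to_linear(white)) / 2)

print(gamma_mid, linear_mid)  # 0.5 vs ~0.735 -- visibly different greys
```

Same blend, noticeably different result — which is exactly the kind of artifact a not-actually-linear "linear" operation produces.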

My understanding (and I could be wrong here, this stuff is nuanced and kind of 
horrible, so feel free to correct me) is that the standard way of handling 
these sorts of color paths in most apps (and indeed, to my knowledge most 
movie production studios enforce this model as well) is to input your imagery, 
linearize it, and keep it linearized the whole way through your pipeline right 
up until you need to display to your device. This means no intermediate stages 
in other colorspaces (which reduces errors), as well as algorithmically correct 
color functions* during any corrections you do along the way to display (or 
any other 'output' device).
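
A minimal sketch of that pipeline shape — hypothetical helper names, with the standard sRGB transfer functions standing in for whatever your real input/output spaces are:

```python
def decode_srgb(p):
    """Standard sRGB decode (input linearization)."""
    return p / 12.92 if p <= 0.04045 else ((p + 0.055) / 1.055) ** 2.4

def encode_srgb(p):
    """Standard sRGB encode (output for display)."""
    return p * 12.92 if p <= 0.0031308 else 1.055 * p ** (1 / 2.4) - 0.055

def process(pixels, *ops):
    """Linearize once at input, run every op in linear light,
    encode once at output -- no intermediate colorspace hops."""
    linear = [decode_srgb(p) for p in pixels]      # input: linearize
    for op in ops:
        linear = [op(p) for p in linear]           # all work stays linear
    return [encode_srgb(p) for p in linear]        # output: encode for display

# e.g. a simple exposure boost applied in linear light:
out = process([0.2, 0.5, 0.8], lambda p: min(p * 1.5, 1.0))
print(out)
```

The point is structural: decode happens exactly once and encode exactly once, so every operation in between sees linear values.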

I 100% notice this when working in Core Image compared to GLSL in the QC 
editor. The same functional code will produce different output, because Core 
Image linearizes the input image before it hits the kernel function. Most 
image types in Mac OS X carry their gamma information, which is accessible 
somehow, and I believe most QC patches are aware of this, so Core Image has 
access to the gamma info. Try it yourself (granted, you need to do color 
rather than spatial operations, so things like displacement won't make a 
difference, because you are simply moving an input pixel, not re-calculating 
its color value).
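
You can simulate the "same code, different output" effect outside of QC. This is a hypothetical Python stand-in (standard sRGB math, not actual CI or GLSL), where the identical kernel logic — multiply by 0.5 — lands on a raw encoded value in one path and a pre-linearized value in the other:

```python
def srgb_decode(c):
    """Standard sRGB decode."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_encode(c):
    """Standard sRGB encode."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def half_brightness(p):
    """The 'same functional code' in both paths: multiply by 0.5."""
    return p * 0.5

pixel = 0.735  # a mid-grey, sRGB-encoded

# GLSL-style path: the shader sees the raw encoded value.
glsl_like = half_brightness(pixel)

# Core-Image-style path: input is linearized first, then re-encoded.
ci_like = srgb_encode(half_brightness(srgb_decode(pixel)))

print(glsl_like, ci_like)  # ~0.37 vs ~0.54
```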

Hopefully I got those right, and if not, I'd love to know more. It's definitely 
something one has to take care of when working in Cocoa and handling 
QCRenderers, QTVisualContexts, and Core Image contexts (you can init all of 
them with some form of working/input and output colorspaces). It's easy to get 
drastically different results than what QC outputs.

*Even things like downsampling, when done in a non-linear color space, can 
give you the wrong results and darken/brighten the image. This is why Core 
Image converts to linear for you, to help relieve you of the horrible burden 
of dealing with this stuff.
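
A toy 2:1 box-filter downsample of an alternating black/white row shows the footnote in action — again just standard sRGB math as a stand-in, not Core Image code:

```python
def to_linear(c):
    """Standard sRGB decode."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def to_srgb(c):
    """Standard sRGB encode."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

row = [0.0, 1.0] * 4  # alternating black/white pixels, sRGB-encoded

# Box-filter 2:1 downsample done directly on encoded values:
naive = [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]

# Same filter done in linear light, re-encoded for display:
linear = [to_srgb((to_linear(row[i]) + to_linear(row[i + 1])) / 2)
          for i in range(0, len(row), 2)]

print(naive[0], linear[0])  # 0.5 vs ~0.735: the naive result is too dark
```

A checkerboard viewed from far enough away should average to the linear result; the naive downsample visibly darkens it.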

If you are interested, check this out for the horrible details :)

http://http.developer.nvidia.com/GPUGems3/gpugems3_ch24.html



On Sep 16, 2010, at 12:24 AM, Christopher Wright wrote:

>> When I create a video patch and connect to a Billboard in QC, the output of 
>> the video patch is a CVPixelBuffer, and colorspace and native pixel format 
>> seem to be device dependent.
> 
> Typically it's 709 (new Apple-shipping cameras all do 709).  Older ones will 
> usually be 601.  (note that those are technically colorspaces, not pixel 
> formats.  QC doesn't handle that distinction particularly well)
> 
>> If I run this through a simple CI Kernel, like :
>> kernel vec4 image(sampler image)
>> {
>>      return sample( image, samplerCoord( image ) );
>> }
>> The image turns to a CIImage (makes sense), the colorspace turns 
>> LinearRGB(!), and there is no native pixel format. This makes sense, and a 
>> small question I have is if it ever proves beneficial to convert to 
>> LinearRGB to get any kind of gain or difference in the way image processing 
>> happens after that step.
> 
> It provides one great benefit:  the image can be used in CI.  CI cannot 
> handle YCbCr images, so they must be converted to RGB (LinearRGB, or one of 
> several other similar variations).  Colorspace conversions like this are 
> pricey, so there's no sense in RGB->YUV'ing it again at the end.  The image 
> is RGB-ified when you stick it on a billboard anyway, that step's just hidden 
> inside the Billboard so you can't see it on any port.
> 
> The downside is that a colorspace conversion introduces some precision loss, 
> and costs time.  These are unfortunately unavoidable if you intend to filter 
> video, or display it on an RGB display (e.g. all of them).
> 
> (Fun fact:  The OpenGL YUV texture format, made available via 
> GL_APPLE_ycbcr_422, only uses 601 matrices.  Thus, 709 input _also_ requires 
> some massaging, otherwise colors shift.  QC doesn't use YCbCr textures, so 
> this isn't a problem in QC).
> 
> Regarding the image buffer question, an example composition would be nice, 
> just to see what's going on.
> 
> --
> Christopher Wright
> [email protected]
> 
> 
> 
> _______________________________________________
> Do not post admin requests to the list. They will be ignored.
> Quartzcomposer-dev mailing list      ([email protected])
> Help/Unsubscribe/Update your Subscription:
> http://lists.apple.com/mailman/options/quartzcomposer-dev/doktorp%40mac.com
> 
> This email sent to [email protected]
