I think here it is. From [http://www.opengl.org/registry/specs/ARB/color_buffer_float.txt]:
The standard OpenGL pipeline is based on a fixed-point pipeline.
While color components are nominally floating-point values in the
pipeline, components are frequently clamped to the range [0,1] to
accommodate the fixed-point color buffer representation and allow for
fixed-point computational hardware.
I was naive, thinking I was calculating with floating-point numbers. No, it is just floating-point syntax for what are actually fixed-point numbers between 0 and 1! (Which is indeed plausible for a high-performance graphics engine, but I did not know the depths of OpenGL.)
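Just to make the consequence concrete, here is a minimal sketch in plain Python (not actual OpenGL) of what such a fixed-point colour buffer does to out-of-range components; the 8-bit depth is my assumption for illustration, the same clamping happens at any fixed-point depth:

```python
def store_fixed8(c):
    # simulate storing one colour component in a fixed-point 8-bit buffer:
    # clamp to [0,1], then quantize to 256 levels
    c = min(max(c, 0.0), 1.0)
    return round(c * 255) / 255

print(store_fixed8(-0.3))   # 0.0 -- negative values are gone
print(store_fixed8(1.7))    # 1.0 -- so is anything above 1
print(store_fixed8(0.5))    # ~0.502, the nearest of the 256 levels
```

Anything a kernel computes outside [0,1] is silently lost the moment it is written to such a buffer, which is exactly what we both observed.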
And they say "frequently"! That means it is implementation-dependent. Probably what I experienced is really the difference between the software renderer and the GPU, and possibly the reason is contained somewhere in this table:
[http://developer.apple.com/graphicsimaging/opengl/capabilities/]
Well, I will never again assume I have more than the interval [0,1], even if I sometimes do.
Unfortunately, I am afraid I have to put my project on hold until I hopefully find a way to port my algorithm to a fixed-point domain.
Maybe this helps you too, vade.
Jens
I wrote:
Even stranger is what I experienced, and without 'Render To Image'. I wanted to use a range larger than [0,1], too, because intermediate processing in my application must be able to use arbitrary magnitudes in the range ]-inf,inf[, which cannot be avoided. At the input, I shift and scale the values from [0,1] to [-1,1]. At the output, I do the reverse. The scaling is somewhat special.
If I omit all intermediate processing and put those two input/output 'Core Image Filter' patches in series, I should get the unaltered original:
kernel vec4 mapRGBColorSpaceToUnitSphere(sampler src) {
    vec4 t = sample(src, samplerCoord(src));
    t.rgb = t.rgb * 2. - 1.;
    float sphereToCubeRatio = length(t.rgb) / max(max(abs(t.r), abs(t.g)), abs(t.b));
    t.rgb = t.rgb * sphereToCubeRatio;
    return t;
}
kernel vec4 mapUnitSphereToRGBColorSpace(sampler src) {
    vec4 t = sample(src, samplerCoord(src));
    float cubeToSphereRatio = max(max(abs(t.r), abs(t.g)), abs(t.b)) / length(t.rgb);
    t.rgb = t.rgb * cubeToSphereRatio;
    t.rgb = (t.rgb + 1.) * .5;
    return t;
}
, but it does not work! Values must be getting clipped; at least I can see that nothing at the output becomes darker than (probably 0.5) grey.
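This is in fact exactly the symptom a clamped intermediate buffer would produce. A minimal sketch in plain Python (not Core Image) of the two kernels above, assuming the buffer between them clamps to [0,1]:

```python
import math

def to_sphere(rgb):
    # first kernel: map [0,1] to [-1,1], then the radial scaling
    t = [c * 2.0 - 1.0 for c in rgb]
    ratio = math.sqrt(sum(c * c for c in t)) / max(abs(c) for c in t)
    return [c * ratio for c in t]

def from_sphere(t):
    # second kernel: inverse radial scaling, then map [-1,1] back to [0,1]
    ratio = max(abs(c) for c in t) / math.sqrt(sum(c * c for c in t))
    t = [c * ratio for c in t]
    return [(c + 1.0) * 0.5 for c in t]

def clamp01(t):
    # what a fixed-point intermediate buffer does to out-of-range values
    return [min(max(c, 0.0), 1.0) for c in t]

rgb = [0.1, 0.4, 0.9]

# float intermediate: the round trip is the identity
print(from_sphere(to_sphere(rgb)))           # ~[0.1, 0.4, 0.9]

# clamped intermediate: the negative components are lost, and with
# t >= 0 the final (t + 1) * 0.5 can never fall below 0.5
print(from_sphere(clamp01(to_sphere(rgb))))  # every channel >= 0.5
```

After clamping, every component entering the second kernel is non-negative, so the final shift-and-scale maps everything to [0.5, 1]: nothing darker than 0.5 grey, which matches the observation.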
Just to prove that my special scaling is not wrong, I instead put them both together in a single filter unit
kernel vec4 mapAndUnmapRGBColorSpace(sampler src) {
    vec4 t = sample(src, samplerCoord(src));
    t.rgb = t.rgb * 2. - 1.;
    float sphereToCubeRatio = length(t.rgb) / max(max(abs(t.r), abs(t.g)), abs(t.b));
    t.rgb = t.rgb * sphereToCubeRatio;
    float cubeToSphereRatio = max(max(abs(t.r), abs(t.g)), abs(t.b)) / length(t.rgb);
    t.rgb = t.rgb * cubeToSphereRatio;
    t.rgb = (t.rgb + 1.) * .5;
    return t;
}
, and this works.
But if I omit the special scaling and just do a simple one:
kernel vec4 mapRGBColorSpaceToUnitCube(sampler src) {
    vec4 t = sample(src, samplerCoord(src));
    t.rgb = t.rgb * 2. - 1.;
    return t;
}
kernel vec4 mapUnitCubeToRGBColorSpace(sampler src) {
    vec4 t = sample(src, samplerCoord(src));
    t.rgb = (t.rgb + 1.) * .5;
    return t;
}
, there is no more value clipping! An intermediate image with negative values between the two filter units is obviously accepted here. I am completely stuck. Where and when does clipping occur implicitly? How can I avoid it?
The only difference I see is the use of the max() and abs() functions and the division operator. Can this lead to a different data path if these operations are not supported on the GPU and the filter is therefore executed on the CPU? (My GPU is an ATI Radeon X1600.) But why then such different behaviour? I thought the GPU/CPU split was meant to be completely transparent? I don't understand it.
Regards,
Jens Groh
On 17.09.2008, at 05:57, vade wrote:
Hello
I think I've answered my own question, but I'm not 100% sure what the proper behavior should be, so I'm posting to the list.
The attached composition renders a GLSL shader in a Render in Image macro patch.
The GLSL shader specifies a fragment color output of -0.5 (yes,
below black).
The image is passed to another GLSL shader, which reads the image and adds +0.75 to the input color.
My result is that the rendered output color is 0.75 luma, not the 0.25 I had hoped for and would expect from a floating-point texture pipeline.
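The arithmetic of the two outcomes, sketched in plain Python (assuming the intermediate texture either preserves floats or clamps to [0,1]):

```python
def clamp01(c):
    # behaviour of a fixed-point (non-float) intermediate render target
    return min(max(c, 0.0), 1.0)

first_pass = -0.5  # fragment colour written by the first shader

# true float pipeline: the negative value survives the intermediate texture
print(first_pass + 0.75)             # 0.25

# clamped pipeline: -0.5 is stored as 0.0, so the second pass sees 0.0
print(clamp01(first_pass) + 0.75)    # 0.75 -- the observed result
```

Getting 0.75 rather than 0.25 is the signature of the intermediate image being clamped before the second shader reads it.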
Does 32-bit per channel differ from what one would expect a float texture in GL to be able to handle? (My understanding is that it should handle fragments below 0 and above 1.0.)
Does colorspace come into play here? My understanding is that Core Image supports floating-point/128-bit images, but are they clipped? How does this work? Are there any docs that clearly spell out what is to be expected?
Curious, thanks.
<32BitFloat is clipped 0-1.qtz>
BTW, some sort of HDR processing for sub-blacks and super-whites (or rather, for my purposes, more GPGPU-style processing) would be highly enjoyed and used/abused!
Thanks.
On Sep 16, 2008, at 11:07 PM, vade wrote:
Hello
Does anyone know if using Render in Image clips float output images to the 0 -> 1.0 range? I am beginning to think it does, based on some GLSL code I'm using.
Has anyone seen this?