On Jan 20, 2011, at 3:42 PM, Christopher Wright wrote:

>> I have a coupla questions... First, why does it only display a center-cut 
>> rectangle of the full image? And why, when I put it in a Render In Image 
>> node, does it sort of work but then come off its wheels and start displaying 
>> random noise? (It does show the full image when I feed the Render In Image 
>> node into a Billboard.)
> 
> It only displays the center because the GLSL Grid's size is too big -- it 
> should be 2 wide at most, and it looks to be 3 and a fraction.  The Width and 
> Height inputs control the size, so trace them back to see which patches drive 
> them (Image Dimensions, I think).

But isn't that exactly what Image Dimensions is supposed to be for? Even when I 
set the size to 1 or 2 manually, the center-cut behavior does not change.

> 
> Render In Image probably needs a clear patch.  If that doesn't help, please 
> post a sample composition that shows the problem you're encountering.

Well, when I add a Clear patch and enable it, the output goes black; if I then 
disable it, things are fine. That holds until I change something like the input 
image, at which point I have to manually clear to black again and then disable 
it again.

>> Last but not least... How does it work? I've looked at the code and it seems 
>> so simple. The Fragment code just feeds all pixels out to the Vertex code 
>> and it does some tiny magic, and then boom, there it is.
> 
> GPUs have been built to interpolate values between corners (Normals, Colors, 
> texture coordinates) -- they do this extraordinarily well.  This shader takes 
> advantage of that -- it's basically "free" (no math required on your part) 
> because of the way GPUs work.

I at least knew that much, thanks -- it was the reason for my question in the 
first place. I figured the GPU was all about interpolating values across 
triangle vertices and should be able to do this in its sleep, as opposed to the 
math I was having to perform in my CI Kernel prototype.

> Also, the fragment doesn't go to the vertex - it's the other way around:  
> Vertex Shaders operate on the vertices supplied (possibly changing them, and 
> setting up data for the fragment shader), and then the fragment shader runs 
> for each pixel defined by the (possibly changed) vertices.  There are 
> typically many, many more fragments ("pixels", but not always) than there are 
> vertices, so vertex-shader-supplied data is interpolated across the fragments 
> by the GPU.  You happened to want data interpolated in that manner, which 
> made it pretty simple. :)
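
As a rough illustration of that fragment-to-vertex ratio, here is a small 
CPU-side sketch (plain Python, nothing QC- or GLSL-specific; the 64x64 raster 
and the triangle are invented for the example). It tests pixel-center coverage 
the way a rasterizer conceptually does and counts how many fragments get shaded 
for just 3 vertices:

```python
# Illustrative sketch only -- a toy rasterizer coverage count, not QC code.
# Counts fragments (pixel-center samples inside the triangle) for one
# triangle whose 3 vertices the vertex shader would have processed.

def inside(p, a, b, c):
    # Signed edge functions: p is covered when all three agree in sign
    # (zeros count as covered, i.e. samples lying exactly on an edge).
    def edge(p0, p1, q):
        return (p1[0] - p0[0]) * (q[1] - p0[1]) - (p1[1] - p0[1]) * (q[0] - p0[0])
    d0, d1, d2 = edge(a, b, p), edge(b, c, p), edge(c, a, p)
    has_neg = d0 < 0 or d1 < 0 or d2 < 0
    has_pos = d0 > 0 or d1 > 0 or d2 > 0
    return not (has_neg and has_pos)

a, b, c = (0, 0), (63, 0), (0, 63)   # a triangle covering half a 64x64 grid
fragments = sum(inside((x + 0.5, y + 0.5), a, b, c)
                for y in range(64) for x in range(64))
print(f"vertex shader ran 3 times, fragment shader ran {fragments} times")
```

Every one of those fragments receives vertex-supplied data (colors, texture 
coordinates, any varying) interpolated for free by the hardware.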

Okay, but how does this Vertex:

varying vec4 pixelColor;
uniform sampler2DRect texture;

void main()
{
        //Transform vertex by modelview and projection matrices
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        
        //Forward current color and texture coordinates after applying texture matrix
        gl_FrontColor = gl_Color;
        gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
        
        pixelColor = texture2DRect(texture, gl_TexCoord[0].xy);
}

and this Fragment:

varying vec4 pixelColor;

void main()
{
        //Multiply color by texture
        gl_FragColor = gl_Color * pixelColor;
}

specify that the pixel values found at the triangles' vertices should be 
interpolated? I understand that GLSL Grid determines the size of the triangles. 
I just don't get how it samples only the corner points and provides an 
interpolation between them as the values for the intermediate pixels.
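
(As far as I can tell, nothing in either shader explicitly requests the 
interpolation -- declaring pixelColor as varying is the whole mechanism. The 
rasterizer computes barycentric weights for every fragment it generates and 
blends the three per-vertex values with them. A sketch of that per-fragment 
math, in plain Python; the triangle, colors, and function names are invented 
for illustration:)

```python
# Illustrative sketch of what the GPU does implicitly for a `varying`:
# barycentric-weighted blending of the three per-vertex values.

def barycentric(p, a, b, c):
    # Barycentric coordinates (w0, w1, w2) of point p in triangle (a, b, c).
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    d = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w0 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / d
    w1 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / d
    return w0, w1, 1.0 - w0 - w1

def interpolate_varying(p, verts, per_vertex):
    # The implicit step for every varying: a weighted average of the
    # three values the vertex shader wrote at the triangle's corners.
    w0, w1, w2 = barycentric(p, *verts)
    return tuple(w0 * v0 + w1 * v1 + w2 * v2
                 for v0, v1, v2 in zip(*per_vertex))

# Triangle with red, green, and blue corners; sampling at the centroid
# blends the three colors equally.
verts = [(0.0, 0.0), (6.0, 0.0), (0.0, 6.0)]
colors = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
print(interpolate_varying((2.0, 2.0), verts, colors))
```

(Strictly speaking the GPU interpolates in a perspective-correct way, dividing 
by w, but for a flat 2D grid the plain weighted average above is the whole 
story.)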

I really appreciate everyone's time on this. Thank you...

Patrick



 _______________________________________________
Do not post admin requests to the list. They will be ignored.
Quartzcomposer-dev mailing list      (Quartzcomposer-dev@lists.apple.com)
Help/Unsubscribe/Update your Subscription:
http://lists.apple.com/mailman/options/quartzcomposer-dev/archive%40mail-archive.com
