Some thoughts:
1. When using images to store data, you can widen the per-pixel storage by
splitting the work across 3 or more images: pipe the input image into n
shader patches and do 1/n of the work in each one. For example, you could
represent a 4x4 matrix per pixel by storing one column in the r, g, b, a
channels of each of 4 images, then writing a patch that takes those 4
images as inputs and does the math you need.
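For what it's worth, the column-splitting idea might look something like
this in Core Image kernel language (the kernel and parameter names are
invented; it assumes each input image carries one matrix column in its
r/g/b/a channels, and a per-pixel vector lives in a fifth image):

```glsl
/* Hypothetical sketch: multiply a per-pixel 4x4 matrix, stored one
 * column per input image, against a per-pixel vector stored in vecImg. */
kernel vec4 applyPerPixelMatrix(sampler col0, sampler col1,
                                sampler col2, sampler col3,
                                sampler vecImg)
{
    vec4 c0 = sample(col0, samplerCoord(col0));  /* matrix column 0 */
    vec4 c1 = sample(col1, samplerCoord(col1));  /* matrix column 1 */
    vec4 c2 = sample(col2, samplerCoord(col2));  /* matrix column 2 */
    vec4 c3 = sample(col3, samplerCoord(col3));  /* matrix column 3 */
    vec4 v  = sample(vecImg, samplerCoord(vecImg));

    /* M*v expressed as a linear combination of the columns */
    return v.x * c0 + v.y * c1 + v.z * c2 + v.w * c3;
}
```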
2. It's not possible to store per-pixel data except in the pixels
themselves. However, using the multiple-image method above you can
sometimes approximate it. Even so, you can't really do "accumulation"
style computations, since you're operating over the output domain, not
the input pixels (i.e., the shader is invoked once per output pixel, not
once per input pixel).
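To make the output-domain point concrete: a kernel can gather (read
several input pixels while computing one output pixel) but can't scatter
(add one input's contribution into several outputs), which is what
accumulation would need. A gather looks like this (sketch; names
invented):

```glsl
/* Each invocation computes ONE output pixel; it may read many inputs,
 * but it has no way to write into any other output pixel. */
kernel vec4 gatherNeighbors(sampler img)
{
    vec2 dc = destCoord();  /* coordinate of the output pixel */
    vec4 here  = sample(img, samplerTransform(img, dc));
    vec4 left  = sample(img, samplerTransform(img, dc + vec2(-1.0, 0.0)));
    vec4 right = sample(img, samplerTransform(img, dc + vec2( 1.0, 0.0)));
    return (here + left + right) / 3.0;
}
```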
3. The only way to "see" the values while debugging is to represent them
as an image and look at it. Map the values you're looking for to a
particular color and look for that color, perhaps?
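One way to do that color-mapping trick (sketch; the target/tolerance
parameters are invented, and it only inspects the red channel):

```glsl
/* Paint pixels whose red channel is within `tolerance` of `target`
 * magenta; pass everything else through unchanged. */
kernel vec4 debugHighlight(sampler img, float target, float tolerance)
{
    vec4 p = sample(img, samplerCoord(img));
    float hit = 1.0 - step(tolerance, abs(p.r - target));
    return mix(p, vec4(1.0, 0.0, 1.0, 1.0), hit);
}
```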
4. I tried writing a generic convolver and it turned out to be a pain in the
butt. I recommend just writing a separate patch for each convolution, but
your mileage may vary.
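A single hard-coded convolution is straightforward by comparison — e.g. a
3x3 box blur with the taps written out by hand (sketch; making this
generic is where the pain starts, since the weights and tap offsets want
to be compile-time constants):

```glsl
/* Fixed 3x3 box blur: nine explicit taps, averaged. */
kernel vec4 boxBlur3x3(sampler img)
{
    vec2 dc = destCoord();
    vec4 acc = vec4(0.0);
    acc += sample(img, samplerTransform(img, dc + vec2(-1.0, -1.0)));
    acc += sample(img, samplerTransform(img, dc + vec2( 0.0, -1.0)));
    acc += sample(img, samplerTransform(img, dc + vec2( 1.0, -1.0)));
    acc += sample(img, samplerTransform(img, dc + vec2(-1.0,  0.0)));
    acc += sample(img, samplerTransform(img, dc + vec2( 0.0,  0.0)));
    acc += sample(img, samplerTransform(img, dc + vec2( 1.0,  0.0)));
    acc += sample(img, samplerTransform(img, dc + vec2(-1.0,  1.0)));
    acc += sample(img, samplerTransform(img, dc + vec2( 0.0,  1.0)));
    acc += sample(img, samplerTransform(img, dc + vec2( 1.0,  1.0)));
    return acc / 9.0;
}
```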
--Sam
_______________________________________________
Quartzcomposer-dev mailing list ([email protected])