Hello. I have a simple GLSL alpha-blending shader that I am attempting to
port to a CIKernel for kicks / as a learning experience. The following is the
original GLSL (fragment shader only, since that is the only interesting part):
// define our rectangular texture samplers
uniform sampler2D tex0;
uniform sampler2D tex1;
uniform sampler2D tex2;
// define our varying texture coordinates
varying vec2 texcoord0;
varying vec2 texcoord1;
varying vec2 texcoord2;
void main(void)
{
    vec4 input0 = texture2D(tex0, texcoord0);
    vec4 input1 = texture2D(tex1, texcoord1);
    vec4 mask   = texture2D(tex2, texcoord2);
    gl_FragColor = mix(input0, input1, mask.a);
}
Simple enough. The CIKernel code I have is equally simple:
kernel vec4 v001AlphaBlend(sampler image1, sampler image2, sampler mask)
{
    vec4 c1 = sample(image1, samplerCoord(image1));
    vec4 c2 = sample(image2, samplerCoord(image2));
    vec4 m  = sample(mask, samplerCoord(mask));
    return mix(c1, c2, m.a);
}
Of course, this works when all the input images are the same size, but it
does not mirror the GLSL output when they differ. I understand that Core
Image does lazy evaluation and uses the DOD/ROI to limit the pixels it
actually computes. After reading some posts, my attempt at editing the
filter function results in this non-functional code (it renders the same
as before, nesting the smaller images/masks within the larger image):
function v001ROIHandler(samplerIndex, dstRect, info)
{
    return info;
}

// set ROI handler
v001AlphaBlend.ROIHandler = v001ROIHandler;

function __image main(__image image1, __image image2, __image mask)
{
    // input image sizes - get the largest width/height
    var maxwidth = (image1.extent.width >= image2.extent.width) ? image1.extent.width : image2.extent.width;
    maxwidth = (maxwidth > mask.extent.width) ? maxwidth : mask.extent.width;
    var maxheight = (image1.extent.height >= image2.extent.height) ? image1.extent.height : image2.extent.height;
    maxheight = (maxheight > mask.extent.height) ? maxheight : mask.extent.height;

    // output image dimensions
    var dodRect = new Vec(0.0, 0.0, maxwidth, maxheight);
    return v001AlphaBlend.apply(dodRect, dodRect, image1, image2, mask);
}
I fail. Any pointers?
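For what it's worth, here is the direction I have been guessing at. In the GLSL version the varying texcoords run 0-1 across each texture, so every input gets stretched over the same quad regardless of its pixel dimensions. My hunch is that the kernel has to do that normalization itself, something like the sketch below (the extra vec4 'dod' argument and the name v001AlphaBlendStretch are mine; I am also assuming samplerExtent() returns each input's extent in working-space coordinates and samplerTransform() maps a working-space point into that sampler's space):

kernel vec4 v001AlphaBlendStretch(sampler image1, sampler image2, sampler mask, vec4 dod)
{
    // normalized 0-1 position of the current output pixel within the DOD
    vec2 t = (destCoord() - dod.xy) / dod.zw;

    // map that normalized position into each input's own extent,
    // then convert it into that sampler's coordinate space
    vec4 e1 = samplerExtent(image1);
    vec4 e2 = samplerExtent(image2);
    vec4 em = samplerExtent(mask);

    vec4 c1 = sample(image1, samplerTransform(image1, e1.xy + t * e1.zw));
    vec4 c2 = sample(image2, samplerTransform(image2, e2.xy + t * e2.zw));
    vec4 m  = sample(mask,   samplerTransform(mask,   em.xy + t * em.zw));

    return mix(c1, c2, m.a);
}

On the filter-function side, I am guessing the ROI callback should be the second argument to apply() (rather than a rect), and that it should hand back the full extent of each sampler, since the kernel above can read anywhere in each input. Is something like this closer? (I am assuming the callback can be a closure and that a Vec can be passed through apply() as the kernel's vec4 argument.)

function __image main(__image image1, __image image2, __image mask)
{
    var maxwidth  = Math.max(image1.extent.width,  image2.extent.width,  mask.extent.width);
    var maxheight = Math.max(image1.extent.height, image2.extent.height, mask.extent.height);
    var dodRect   = new Vec(0.0, 0.0, maxwidth, maxheight);

    // each input's ROI is its full extent, because the kernel samples
    // the inputs independently of the destination rect
    var extents = [image1.extent, image2.extent, mask.extent];
    var roi = function(samplerIndex, dstRect, info) { return extents[samplerIndex]; };

    return v001AlphaBlendStretch.apply(dodRect, roi, image1, image2, mask, dodRect);
}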
Also, is there an example of using an AffineTransform in lieu of mat4
rotation/translation matrices within a CIKernel? Can I use an affine
transform inside the kernel itself, or must I break the operations out
into the 'filter function' and apply transforms 'between' sampling
operations? I have some other, more interesting shaders that use mat4
matrices which I cannot port straightforwardly.
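For example, where the GLSL builds a mat4 from an angle and rotates the texture coordinates with it, the closest I have come in the kernel is doing the 2D math by hand (the kernel name and the 'angle'/'pivot' parameters below are just mine); is that the intended approach, or does the transform belong outside the kernel?

kernel vec4 v001RotateSample(sampler image, float angle, vec2 pivot)
{
    // rotate the current working-space coordinate about 'pivot' by 'angle',
    // then sample the input at the rotated position
    vec2 d = destCoord() - pivot;
    vec2 r = vec2(d.x * cos(angle) - d.y * sin(angle),
                  d.x * sin(angle) + d.y * cos(angle));
    return sample(image, samplerTransform(image, pivot + r));
}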
Curious, and eager to get my GLSL shaders working in CI.