I am trying to create a GLSL shader that uses a 16-bit image as input. I can see that it gets imported as a 16-bit image (Native Pixel Format: Internal_RGBA16), but when it reaches the GLSL Shader patch the texture is created as 8-bit (Texture Backing: Internal_BGRA8 GL_TEXTURE_2D).

Using mostly the same shader logic with the same image in a Core Image patch works as expected; see this test composition with repro images:
http://files.fieldofview.com/temp/qc_uvshadertest.zip
The published port "Show" lets you switch between the GLSL Shader patch and Core Image kernel implementations.

The "effect" I am going for is using the 16 bit image as a uv/coordinate lookup map on the source image, in this case correcting the fisheye distortion of a gopro camera image. The GLSL implementation ends up looking like a low resolution version, because it lacks the precision of the 16bit uv map.

TL;DR: How do I stop the GLSL Shader patch from creating an 8-bit texture from my 16-bit image?

Aldo
