Hi,

I am also trying to get a grasp on this subject.  Basically, I have n 
shader programs, and shaders 1 to n-1 need to use floating-point textures.

As I understand it, the texture that the first shader renders into needs 
an internal format of GL_RGBA16F_ARB, a source format of GL_RGBA, and a 
source type of GL_UNSIGNED_BYTE.
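
In OSG terms, here is a minimal sketch of how I'm setting up that first 
texture (stage0Tex, width and height are just my placeholder names):

    #include <osg/Texture2D>

    int width = 1024, height = 1024;  // size of my RTT buffers

    osg::ref_ptr<osg::Texture2D> stage0Tex = new osg::Texture2D;
    stage0Tex->setTextureSize(width, height);
    stage0Tex->setInternalFormat(GL_RGBA16F_ARB);  // half-float storage
    stage0Tex->setSourceFormat(GL_RGBA);           // source format, as above
    stage0Tex->setSourceType(GL_UNSIGNED_BYTE);    // source type, as above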

The textures for the intermediate floating-point shaders need internal 
formats of GL_RGBA16F_ARB, with source formats and types of GL_FLOAT.

The last shader's texture would need an internal format of GL_RGBA, with a 
source format and type of GL_FLOAT.
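
For the later stages I'm doing essentially the following (again just a 
sketch; floatTex, finalTex and rttCamera are placeholder names, and the 
GL_FLOAT source format lines simply mirror my description above, so they 
may well be where I've gone wrong):

    #include <osg/CameraNode>

    // intermediate stages: rendered and read back as float
    osg::ref_ptr<osg::Texture2D> floatTex = new osg::Texture2D;
    floatTex->setTextureSize(width, height);
    floatTex->setInternalFormat(GL_RGBA16F_ARB);
    floatTex->setSourceFormat(GL_FLOAT);  // as described above -- suspect?
    floatTex->setSourceType(GL_FLOAT);

    // last stage: back to fixed-point RGBA
    osg::ref_ptr<osg::Texture2D> finalTex = new osg::Texture2D;
    finalTex->setTextureSize(width, height);
    finalTex->setInternalFormat(GL_RGBA);
    finalTex->setSourceFormat(GL_FLOAT);  // ditto
    finalTex->setSourceType(GL_FLOAT);

    // each stage's camera renders into its stage texture via an FBO
    rttCamera->setRenderTargetImplementation(
        osg::CameraNode::FRAME_BUFFER_OBJECT);
    rttCamera->attach(osg::CameraNode::COLOR_BUFFER, floatTex.get());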

Does this seem correct?  When I try this approach, I get an error (0x502 is 
GL_INVALID_OPERATION):
RenderStage::drawInner(,) OpenGL errorNo= 0x502

Obviously I'm doing something wrong, but I'm not sure what the problem is.

Thanks,
Brian

On Wed, 17 Jan 2007 12:19:50 +0000, "David Spilling" 
<[EMAIL PROTECTED]> wrote:

> 
> Robert,
> 
> Thanks for that. I can see that this makes sense when (for example) you
> have an image on disk in RGBA8 and you load it into a texture with a
> different internal format (e.g. RGBA16F); a conversion then takes place
> along the way.
> 
> But if you have an RTT, does the source format actually matter at all? By
> setting a texture's internal format to RGBA16F and attaching that to a
> cameraNode, you are saying that the render target is RGBA16F, and shaders
> and the like operating on the graph underneath the camera can output float
> values; there's no RGBA8-to-16F conversion going on, is there?
> 
> David
> 
> 
_______________________________________________
osg-users mailing list
[email protected]
http://openscenegraph.net/mailman/listinfo/osg-users
http://www.openscenegraph.org/