Hi,

On Tuesday, February 19, 2013 19:07:59 Christian Buchner wrote:
> After some futile attempts to make the nVidia dual depth peeling compatible
> with non-nVidia cards by dropping the vendor-specific extensions, I've
> decided to go forward with the DepthPeeling code from osgoit.
> 
> I noticed that as soon as there are GLSL shaders attached to the object
> being rendered, the shadow depth testing seems to fail (is this fixed
> function stuff?). Or in other words, these statements don't have an effect
> anymore.
> 
>         _depthTextures[i]->setShadowComparison(true);
>         _depthTextures[i]->setShadowAmbient(0); // The r value if the test
> fails
>         _depthTextures[i]->setShadowCompareFunc(osg::Texture::GREATER);
>         _depthTextures[i]->setShadowTextureMode(osg::Texture::INTENSITY);
> 
> It seems that I will have to roll my own shadow comparison in GLSL. This
> would be in the fragment shader, I presume?

Without looking in depth into this, just off the top of my head:
Yes, in the fragment shader, have a shadow sampler uniform bound to that texture 
and compare against the .r component. Then discard the fragment if that test 
fails. You need to rescale the projection-space fragment position appropriately 
for the input of the shadow sampler.
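A minimal sketch of what that fragment shader could look like. The uniform names (depthTexture, viewportSize) are illustrative, not from osgoit, and this assumes the previous peel's depth layer is bound as a plain (non-shadow) sampler2D so the comparison is done by hand:

```glsl
// Depth layer from the previous peel pass (assumed binding and name).
uniform sampler2D depthTexture;
// Window dimensions, needed to map gl_FragCoord.xy into [0,1].
uniform vec2 viewportSize;

void main()
{
    // Rescale the window-space fragment position to texture coordinates.
    vec2 texCoord = gl_FragCoord.xy / viewportSize;
    float peeledDepth = texture2D(depthTexture, texCoord).r;

    // Manual equivalent of the GL_GREATER shadow compare: keep only
    // fragments strictly behind the previously peeled layer.
    if (gl_FragCoord.z <= peeledDepth)
        discard;

    gl_FragColor = gl_Color;
}
```

The discard replaces the fixed-function shadow test that stops working once a GLSL program is attached.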

If I remember right, there was also a problem with the demo when models are not 
close to the origin. I believe the near/far computation in interaction with 
osgoit is not entirely correct.

Greetings

Mathias
_______________________________________________
osg-users mailing list
[email protected]
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
