Hi,

Ulrich Hertlein wrote:
> Hi Maxime,
> 
> On 7/08/09 12:02 PM, Maxime BOUCHER wrote:
> 
> > Ulrich Hertlein wrote:
> > 
> > > Hi Maxime, I'm extremely irritated by what you write and maybe I'm not 
> > > the only one.
> > > 
> > 
> > I am very sorry, please accept my apologies. Maybe you could tell me why 
> > so that I won't do it again.
> > 
> 
> No need to apologize.  (I meant 'irritated' as 'confused', not annoyed or 
> angry, just  in 
> case that's how you understood it; English can be fuzzy... :-}
> 

Well, thank you, I misunderstood you. I also know I'm quite a noob compared 
to most of you, and I understand my noob questions can be irritating.


Ulrich Hertlein wrote:
> 
> You have to keep in mind that the people on the mailing list have absolutely 
> no idea what 
> you're trying to achieve and what the problem is.  You alone have that 
> information so it 
> helps to give as much detail as possible when posting a question.
> 
> What are you trying to accomplish and how do you approach it?  What is your 
> shader doing?
> 

What I do:
It is projective texture mapping.
I place a pre-render viewer in the scene with a given orientation and position.
I attach a depth image to its DEPTH_BUFFER and call frame() on the pre-render 
viewer to fill the depth image.
A color image is also attached so that I can check the pre-render orientation if needed.

Code:

_root_stateset = _root->getOrCreateStateSet();

// Images that will receive the pre-render camera's depth and color buffers.
_depth = new Image();
_color = new Image();

_depth->allocateImage(_imWidthHeight->x(), _imWidthHeight->y(), 1, GL_DEPTH_COMPONENT, GL_FLOAT);
_color->allocateImage(_imWidthHeight->x(), _imWidthHeight->y(), 1, GL_RGBA, GL_UNSIGNED_BYTE);

// Attach the images to the pre-render camera so that frame() fills them.
_prerender->getCamera()->attach(Camera::DEPTH_BUFFER, _depth.get(), 0, 0);
_prerender->getCamera()->attach(Camera::COLOR_BUFFER, _color.get(), 0, 0);

// Expose the depth image, the color image and the image to project as
// textures on the root state set, so the shader can sample them.
_texdepth = new Texture2D(_depth.get());
_root_stateset->setTextureAttribute(_depth_unit, _texdepth.get());

_texcolor = new Texture2D(_color.get());
_root_stateset->setTextureAttribute(_color_unit, _texcolor.get());

_tex = new Texture2D(_img.get());
_root_stateset->setTextureAttribute(_tex_unit, _tex.get());
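
For reference, this is the kind of setup I mean for the pre-render camera itself. It is only a rough sketch, not a paste of my code: fovy, aspect, eye, center and up stand in for my projector's calibration values, and my render-target implementation may not be exactly this.

Code:

// Rough sketch (placeholder values, not my actual ones) of the pre-render
// camera configuration used as the "projector".
osg::Camera* cam = _prerender->getCamera();
cam->setViewport(0, 0, _imWidthHeight->x(), _imWidthHeight->y());
cam->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
cam->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
cam->setProjectionMatrixAsPerspective(fovy, aspect, znear, zfar); // projector intrinsics
cam->setViewMatrixAsLookAt(eye, center, up);                      // projector pose
cam->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
cam->setClearColor(osg::Vec4(0.1f, 0.1f, 0.3f, 1.0f));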




Then I attach a shader to the root node and run the rendering viewer.
The shader computes the projection of each fragment into the image space of the 
projective camera (i.e. the pre-render camera).
If the fragment is out of bounds, I texture it with its original texture.
If the fragment's distance to the projective camera differs from the one stored 
in the Z-buffer, I also texture it with its original texture.
Otherwise, I texture the fragment with the corresponding pixels (interpolation 
done by the hardware) of the image to project.

This should be quite clear in the code (which I post not to have it debugged, 
but just to be clearer).

Vertex shader:

Code:

varying vec4 coord;

void main()
{
// untransformed vertex position, reprojected in the fragment shader
coord = gl_Vertex;
gl_TexCoord[0] = gl_MultiTexCoord0;
gl_Position = ftransform();
}



Fragment shader:

Code:

uniform sampler2D tex;
//uniform sampler2D depthtex;
uniform sampler2DShadow depthtex;
uniform sampler2D originaltexture;
uniform sampler2D colortex;

uniform mat4 WorldToCam;
uniform mat4 CamToImage;
uniform mat4 correction;
uniform vec3 camposition;

uniform float znear;
uniform float zfar;
uniform float uptozbits;

uniform float imagewidth;
uniform float imageheight;

varying vec4 coord;
varying float intensity;

void normalizeW( inout vec4 vec )
{
vec = vec / vec.w;
return;
}

void normalizeZ( inout vec4 vec )
{
vec = vec / vec.z;
return;
}

float zbuffer_precision(float Z) //*** Used to compute the error of Z buffer
{
float b = znear * zfar / (znear-zfar);
float res = ( b / ( (b/Z) - 1.0/uptozbits ) ) - Z;
return abs(res);
}

void main()
{

vec4 camspace = WorldToCam * coord;
normalizeW(camspace);

if (camspace.z > 0.0)
{
gl_FragColor = texture2D(originaltexture, gl_TexCoord[0].st);
return;
}

float Z = -camspace.z;

vec4 imspace = CamToImage * correction * camspace;
normalizeZ(imspace);

if ( imspace.x < 0.0 || imspace.x > imagewidth || imspace.y < 0.0 || imspace.y > imageheight )
{ 
gl_FragColor = texture2D(originaltexture, gl_TexCoord[0].st);
return;
}

//float uncorrected_z = texture2D( depthtex, vec2(imspace.x/imagewidth, 1.0-imspace.y/imageheight) ).r;
float uncorrected_z = shadow2D( depthtex, vec3(imspace.x/imagewidth, 1.0-imspace.y/imageheight, 1.0) ).r;

float corrected_z = znear*zfar / (zfar - uncorrected_z*(zfar-znear)); //*** depth-buffer value -> eye-space distance

if (uncorrected_z < 0.0 || uncorrected_z > 1.0) //*** Error case
{
gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
return;
}

if(false) //*** Used to display a kind of real distance
{
gl_FragColor = vec4(Z/15.0, Z/15.0, Z/15.0, 1.0);
return;
}

if(true) //*** Used to display distance stored in the Z-buffer
{
vec4 color = vec4(uncorrected_z, uncorrected_z, uncorrected_z, 1.0);
gl_FragColor = color;
return;
}

if (false) //*** Used to display the color image attached to the color_buffer
{
gl_FragColor = texture2D( colortex, vec2(imspace.x/imagewidth, 1.0-imspace.y/imageheight) );
return;
}


float delta = zbuffer_precision(Z);
float corrected_delta = znear*zfar / (zfar - delta*(zfar-znear));

if ( corrected_z+ 0.25*corrected_delta < Z )
{
gl_FragColor = texture2D(originaltexture, gl_TexCoord[0].st);
return;
}

gl_FragColor = texture2D( tex, vec2(imspace.x/imagewidth, 1.0 - imspace.y/imageheight) );
return;

}
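
For completeness, the program and the uniforms are bound to the root state set roughly as follows. Again this is a sketch rather than a paste of my code: the sampler units match the setup above, but the real WorldToCam / CamToImage / correction matrices come from my calibration, so the values here are placeholders, and I only show a subset of the uniforms (the others are set the same way).

Code:

// Sketch (placeholders): bind the shader program and the uniforms it needs.
osg::ref_ptr<osg::Program> program = new osg::Program;
program->addShader(new osg::Shader(osg::Shader::VERTEX, vertex_source));
program->addShader(new osg::Shader(osg::Shader::FRAGMENT, fragment_source));
_root_stateset->setAttributeAndModes(program.get());

// Sampler uniforms point at the texture units used above.
_root_stateset->addUniform(new osg::Uniform("depthtex", (int)_depth_unit));
_root_stateset->addUniform(new osg::Uniform("colortex", (int)_color_unit));
_root_stateset->addUniform(new osg::Uniform("tex", (int)_tex_unit));
_root_stateset->addUniform(new osg::Uniform("originaltexture", 0)); // placeholder unit

// Matrices and depth range of the projective camera (placeholder values).
_root_stateset->addUniform(new osg::Uniform("WorldToCam",
    osg::Matrixf(_prerender->getCamera()->getViewMatrix())));
_root_stateset->addUniform(new osg::Uniform("znear", 1.0f));
_root_stateset->addUniform(new osg::Uniform("zfar", 100.0f));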




As you can see, I've just tried using sampler2DShadow, but it doesn't change 
anything.
By the way, I looked for more explanation than the reference manual gives about 
the vec3 coordinate used to look up the depth map through a samplerShadow, but I 
couldn't find what its third component is for. It is said to be used for a 
comparison, but I couldn't find more than that.
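
From what I could gather (untested, so I may be wrong), the third component is the reference depth that the comparison is made against, and the comparison also has to be enabled on the texture itself; on the OSG side that would apparently look like this:

Code:

// What I understood (not tested): shadow2D() compares the third texture
// coordinate against the stored depth and returns 0.0 or 1.0 (possibly
// filtered), not the raw depth value.  The comparison must be enabled
// on the depth texture:
_texdepth->setShadowComparison(true);
_texdepth->setShadowCompareFunc(osg::Texture::LEQUAL);
_texdepth->setShadowTextureMode(osg::Texture::LUMINANCE);


But I'm not sure this comparison is even what I need here, since I want the stored depth itself rather than a 0/1 result.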

I've also been told about GL_DEPTH_COMPONENT32 (or 24).
I took a look at gl.h but I didn't find them. They seem to be something more 
specific than the GL_DEPTH_COMPONENT used in the depth image allocation. Do you 
know anything about this?
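
If I understood correctly, these are sized internal formats (which is probably why they are in glext.h rather than in the plain gl.h on some platforms), and requesting one would look roughly like this (just a guess on my part, not something I have tried yet):

Code:

// Guess (untested): ask explicitly for a 24-bit depth internal format
// instead of letting the driver pick one for GL_DEPTH_COMPONENT.
_depth->setInternalTextureFormat(GL_DEPTH_COMPONENT24);
_texdepth->setInternalFormat(GL_DEPTH_COMPONENT24);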


Ulrich Hertlein wrote:
> 
> 
> > Here is the camera view (with a green mask): [Image:
> > http://img7.hostingpics.net/pics/144972Z_buffer_colorimage.png ]
> > 
> > here the distance of fragments to the camera computed in shader: [Image:
> > http://img7.hostingpics.net/pics/850811Z_buffer_shader.png ]
> > 
> > and the Z-buffer image: [Image:
> > http://img7.hostingpics.net/pics/690702Z_buffer_pourri.png ]
> > 
> 
> Okay, the issue is with the last image, the parts that look like they're from 
> a previous 
> frame?  And these are areas that are covered by transparent geometry in the 
> original image?
> 

Exactly!

I tried to clear the depth buffer before the frame() this way:

Code:

_prerender->getCamera()->setClearColor(osg::Vec4(0.1f,0.1f,0.3f,1.0f));
_prerender->getCamera()->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
_prerender->frame();



But it doesn't fix anything... the rendered depth image still shows the same 
artefacts.
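
One more thing I plan to try (untested, so take it as a guess): forcing the depth test and depth writes for everything drawn by the pre-render camera, in case the transparent parts are not writing into the depth buffer at all:

Code:

// Guess (not tried yet): override the depth state for the whole pre-render
// pass so that even the transparent geometry writes into the depth buffer.
osg::StateSet* pre_ss = _prerender->getCamera()->getOrCreateStateSet();
pre_ss->setAttributeAndModes(new osg::Depth(osg::Depth::LESS, 0.0, 1.0, true),
    osg::StateAttribute::ON | osg::StateAttribute::OVERRIDE);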


Ulrich Hertlein wrote:
> 
> Cheers,
> /ulrich

Thank you very much for your help and time.
Cheers,


Max
