Hi Adrien,
Adrien wrote:
> Hi,
> thanks, I'll try ColorMaskChunk
>
> I'm using an experimental platform, in Grenoble (France),
> called Grimage ( http://www.inrialpes.fr/sed/grimage/index.php ,
> http://www-id.imag.fr/~raffin/papers/ID/ieeevr06.pdf )
> I'm doing an internship in the team that is developing it.
Greetings to Bruno! ;)
> If anyone has other ideas for updating the Z-buffer,
> so that I can compare the framerate of the different methods ...
If you have it as an image you can draw that instead of the geometry.
You can use a manual glDrawPixels() and a DepthClearBackground, or you
can put it in a texture and use the PolygonBackground. You will need a
shader to change the depth of the fragments you're drawing; writing
directly into the depth buffer doesn't work.
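For the shader route, something along these lines should work. This is
an untested sketch in OpenSG 1.x style; the header and setter names
(SHLChunk::setFragmentProgram/setUniformParameter, the ColorMaskChunk
mask fields) are from memory and the "depthTex" uniform name is just an
example, so check them against your version:

#include <OpenSG/OSGSHLChunk.h>
#include <OpenSG/OSGColorMaskChunk.h>
#include <OpenSG/OSGChunkMaterial.h>

// Fragment program: copy a depth value stored in a texture into the
// depth buffer via gl_FragDepth. The color output is irrelevant if
// color writes are masked with a ColorMaskChunk.
static const char *depthFrag =
    "uniform sampler2D depthTex;\n"
    "void main()\n"
    "{\n"
    "    gl_FragDepth = texture2D(depthTex, gl_TexCoord[0].st).r;\n"
    "    gl_FragColor = vec4(0.0);\n"
    "}\n";

osg::ChunkMaterialPtr makeDepthWriteMaterial(void)
{
    osg::SHLChunkPtr shl = osg::SHLChunk::create();
    osg::beginEditCP(shl);
        shl->setFragmentProgram(depthFrag);
        shl->setUniformParameter("depthTex", 0);   // texture unit 0
    osg::endEditCP(shl);

    // Only the depth buffer should be touched, so mask all color writes.
    osg::ColorMaskChunkPtr cm = osg::ColorMaskChunk::create();
    osg::beginEditCP(cm);
        cm->setMaskR(false);
        cm->setMaskG(false);
        cm->setMaskB(false);
        cm->setMaskA(false);
    osg::endEditCP(cm);

    osg::ChunkMaterialPtr mat = osg::ChunkMaterial::create();
    osg::beginEditCP(mat);
        mat->addChunk(shl);
        mat->addChunk(cm);   // add the TextureChunk with your depth image too
    osg::endEditCP(mat);

    return mat;
}

Put that material on the PolygonBackground (setMaterial()) together
with the TextureChunk holding your depth image, and the background will
fill the Z-buffer when it is drawn.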
> Another possibility I was thinking of could be to make a projective mapping
> of the image captured by the video camera on the geometry of all the
> real objects and then
> draw the virtual ones.
That should only be necessary if you want a viewing position different
from the camera's; otherwise the projection onto the geometry will
always give you the same image.
> I have no idea how to do this using OpenSG; I'm not very familiar with
> this library.
> If anyone has a hint about that, it would be great!
Projective mappings are pretty simple. You use a TexGenChunk for each
coordinate axis that you need, use eye-relative coordinates and use the
unit vectors (1,0,0,0) and (0,1,0,0) as the coordinate planes. But as I
said above, that should not really be necessary.
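In code that could look roughly like this (again untested, OpenSG 1.x
style; the TexGenChunk plane setters and the TextureTransformChunk are
from memory, so treat the names as approximate):

#include <OpenSG/OSGGL.h>
#include <OpenSG/OSGTexGenChunk.h>
#include <OpenSG/OSGTextureTransformChunk.h>

// Eye-linear texture coordinate generation with the unit vectors as
// coordinate planes. Add R/Q the same way if you need the full
// perspective divide.
osg::TexGenChunkPtr tg = osg::TexGenChunk::create();
osg::beginEditCP(tg);
    tg->setGenFuncS(GL_EYE_LINEAR);
    tg->setGenFuncSPlane(osg::Vec4f(1.f, 0.f, 0.f, 0.f));
    tg->setGenFuncT(GL_EYE_LINEAR);
    tg->setGenFuncTPlane(osg::Vec4f(0.f, 1.f, 0.f, 0.f));
osg::endEditCP(tg);

// The camera's projection (with a bias into the [0,1] texture range)
// then goes into the texture matrix.
osg::Matrix texMat;   // fill in from your video camera's parameters
osg::TextureTransformChunkPtr tt = osg::TextureTransformChunk::create();
osg::beginEditCP(tt);
    tt->setMatrix(texMat);
osg::endEditCP(tt);

Add both chunks, plus the TextureChunk with the camera image, to the
material of the real-object geometry.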
Hope it helps
Dirk