Hi,
I'm developing an Augmented Reality environment based on OpenSG.
From an acquisition system I get the complete geometry of all the real
objects in the capture room. At the same time I get the images from a
video camera. I will place the virtual camera at the same position as
the real one, relative to the coordinate system of the capture system.

I would like to mix real and virtual objects without occlusion problems.

My idea is to use the camera images as the background of the scene (this
already works) and then update the Z-buffer with the meshes from my
capture system (I have already constructed the node with its
geometry...) but without drawing them. Then I could draw the other,
virtual objects, and with the depth values of the real-object meshes
already in the Z-buffer, only the parts of the virtual objects that lie
in front would actually be drawn.
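In plain OpenGL terms I believe this would be a depth-only pass (color writes masked off with glColorMask, depth writes left on), followed by a normal pass for the virtual objects; what I don't know is how to express that through OpenSG's material system. The masking behaviour I'm after, sketched with a toy software z-buffer (plain C++, not OpenSG; all names here are mine, just for illustration):

```cpp
#include <array>
#include <limits>

// Toy 1-D framebuffer: smaller depth = closer to the camera.
constexpr int W = 4;

struct Framebuffer {
    std::array<float, W> depth;  // per-pixel depth
    std::array<int,   W> color;  // 0 = background (the camera image)
    Framebuffer() {
        depth.fill(std::numeric_limits<float>::infinity());
        color.fill(0);
    }
};

// Pass 1: "draw" a real-object mesh fragment into the depth buffer only
// (the software equivalent of color writes masked off, depth writes on).
void depthOnly(Framebuffer& fb, int x, float z) {
    if (z < fb.depth[x]) fb.depth[x] = z;  // depth write, no color write
}

// Pass 2: draw a virtual-object fragment with the usual depth test,
// so it only appears where it is in front of the real geometry.
void drawVirtual(Framebuffer& fb, int x, float z, int c) {
    if (z < fb.depth[x]) {
        fb.depth[x] = z;
        fb.color[x] = c;
    }
}
```

A virtual fragment behind a real mesh is rejected by the depth test and the background (camera image) pixel survives; a virtual fragment in front is drawn, which is exactly the occlusion behaviour I want.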

Does anyone know if it is possible to do something like this (update the
depth buffer with a geometry without drawing it)?

Thanks,
Adrien


