Re: [osg-users] Writing to the Depth Buffer (Kinect)
Hi Sam,

On 12 December 2011 22:35, Sam Corbett-Davies samcorbettdav...@gmail.com wrote:
> I am working on an augmented reality project using the Kinect. I am trying
> to occlude geometry based on the depth image produced by the Kinect. To do
> this I thought I'd write the depth image (after scaling the depth values
> appropriately) to the depth buffer. It looks as though I'd use
> glDrawPixels() with GL_DEPTH_COMPONENT if I was doing it in OpenGL, but how
> do I achieve this in OSG? I'm also open to a better way of using the Kinect
> to occlude geometry if anyone has any ideas.

Use an osg::Image attached to an osg::Texture2D, then render this texture on a full-window quad and have a fragment shader read the texture and write the result to the depth buffer by setting gl_FragDepth.

Robert.

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
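Robert's suggestion boils down to a fragment shader along these lines. This is a minimal sketch, not code from the thread: the uniform and varying names are hypothetical, and it assumes the depth image has already been scaled to [0,1] and bound as a single-channel texture on unit 0.

```glsl
uniform sampler2D depthTex;   // Kinect depth image, pre-scaled to [0,1]
varying vec2 texCoord;        // passed through from a trivial vertex shader

void main()
{
    // Read the scaled depth sample and write it straight to the depth buffer.
    float d = texture2D(depthTex, texCoord).r;
    gl_FragDepth = d;

    // The colour output is irrelevant if the colour buffer is masked off
    // (e.g. via an osg::ColorMask with all channels disabled).
    gl_FragColor = vec4(0.0);
}
```

Rendering this on a full-window quad before the rest of the scene leaves the real-world depth in the depth buffer, so subsequent geometry is occluded by it.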
Re: [osg-users] Writing to the Depth Buffer (Kinect)
Thanks guys, I ended up using a fragment shader, which was a new experience for me.

I ran into a loss-of-precision problem: the Kinect produces a 16-bit depth image, but when it was stored as GL_DEPTH_COMPONENT each RGB channel only had 8 bits of precision in the shader. I ended up having to store it with internal format GL_RGB5_A1 and data type GL_UNSIGNED_SHORT_5_5_5_1 (any other 16-bit format/data type pair would do) and manually unpack it in the shader to get the full precision.

Cheers,
Sam

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=44634#44634
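The manual unpack Sam describes could look roughly like this. It is a sketch under stated assumptions, not his actual code: names are hypothetical, and it assumes the GL_UNSIGNED_SHORT_5_5_5_1 layout (bits 15-11 red, 10-6 green, 5-1 blue, bit 0 alpha), with the sampler returning each channel as a normalized float.

```glsl
uniform sampler2D packedDepthTex;  // uploaded as GL_RGB5_A1 / GL_UNSIGNED_SHORT_5_5_5_1
varying vec2 texCoord;

void main()
{
    vec4 t = texture2D(packedDepthTex, texCoord);

    // Recover the integer channel values: 5 bits (0..31) per colour
    // channel, 1 bit (0..1) in alpha.
    float r = floor(t.r * 31.0 + 0.5);
    float g = floor(t.g * 31.0 + 0.5);
    float b = floor(t.b * 31.0 + 0.5);
    float a = floor(t.a + 0.5);

    // Reassemble the original 16-bit value, rrrrrgggggbbbbba:
    // r << 11 | g << 6 | b << 1 | a.
    float depth16 = r * 2048.0 + g * 64.0 + b * 2.0 + a;

    // Normalize back to [0,1] for the depth buffer.
    gl_FragDepth = depth16 / 65535.0;
}
```

The same idea works with any 16-bit packed format; only the per-channel scale factors and shift amounts change.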
Re: [osg-users] Writing to the Depth Buffer (Kinect)
Hi Sam,

On 5 January 2012 19:19, Sam Corbett-Davies samcorbettdav...@gmail.com wrote:
> Thanks guys, I ended up using a fragment shader, which was a new experience
> for me. I ran into a loss-of-precision problem: the Kinect produces a
> 16-bit depth image, but when stored as GL_DEPTH_COMPONENT each RGB channel
> only has 8 bits of precision in the shader. I ended up having to store it
> with internal format GL_RGB5_A1 and data type GL_UNSIGNED_SHORT_5_5_5_1
> (any other 16-bit format/data type pair would do) and manually unpack it in
> the shader to get the full precision.

Standard OpenGL doesn't have the high-precision types, so have a look at the extensions. I'm afraid I don't recall them off the top of my head, so you'll need to go have a look at the OpenGL extension docs.

Robert.
[osg-users] Writing to the Depth Buffer (Kinect)
Hi,

I am working on an augmented reality project using the Kinect. I am trying to occlude geometry based on the depth image produced by the Kinect. To do this I thought I'd write the depth image (after scaling the depth values appropriately) to the depth buffer. It looks as though I'd use glDrawPixels() with GL_DEPTH_COMPONENT if I was doing it in OpenGL, but how do I achieve this in OSG?

I'm also open to a better way of using the Kinect to occlude geometry if anyone has any ideas.

Cheers,
Sam

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=44352#44352
Re: [osg-users] Writing to the Depth Buffer (Kinect)
On 12/12/2011 3:35 PM, Sam Corbett-Davies wrote:
> I am working on an augmented reality project using the Kinect. I am trying
> to occlude geometry based on the depth image produced by the Kinect. To do
> this I thought I'd write the depth image (after scaling the depth values
> appropriately) to the depth buffer. It looks as though I'd use
> glDrawPixels() with GL_DEPTH_COMPONENT if I was doing it in OpenGL, but how
> do I achieve this in OSG?

You would use osg::DrawPixels. See the osg/DrawPixels header file.

You might get better performance by converting the depth data from the Kinect into a texture map. glDrawPixels() will do the same thing for you, but you can almost always do a better job with your own code. (Performance might not be your major concern anyhow, as you will almost certainly be bottlenecked by reading data from the Kinect.)

If you want to use texture mapping, the method is straightforward: mask off the color buffer, then draw a full-screen quad (triangle pair) and assign a fragment shader to the quad's StateSet. The fragment shader would simply look up depth values from the texture and set gl_FragDepth from those values.

-Paul