On 7 Feb 2011, at 17:06, Chris 'Xenon' Hanson wrote:
> On 2/7/2011 5:00 AM, Stefan Walk wrote:
>> I'm looking for a way to visualize an image (dimensions somewhere between
>> 640x480 and 1280x960) from a calibrated camera + a dense depth map
>> (estimated from a stereo pair). With this information, I can compute the 3D
>> coordinates of the regions that produced each pixel, and I'd like to
>> visualize that. Ideally, when the virtual camera is positioned like the
>> physical camera was, the visualization should just look like the image, and
>> the 3D structure becomes apparent when moving the camera. Does
>> OpenSceneGraph provide a convenient way to do that (I'm unfamiliar with it,
>> I'm looking for the right tools to do this task now)? If yes, are there
>> examples that are close to what I want to do? I've looked at osgpointsprite,
>> which seems roughly appropriate once I take the blending out and re-enable
>> the depth test; however, the sprites don't change size when the camera is
>> moved. Is that changeable?
>
> Is this from a Kinect device, or something similar?
That depends on how you define "similar". The Kinect gains depth information by
projecting a laser pattern and observing it with an offset camera; in my case I
have two camera views (for example from a camera like
http://www.ptgrey.com/products/bbxb3/bumblebeeXB3_stereo_camera.asp) and
compute the depth map from that.
Best regards,
Stefan
_______________________________________________
osg-users mailing list
[email protected]
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org