Re: [osg-users] Vicon tracking

2010-03-12 Thread Mel Av
Thanks for your answers. 
I understand the theory behind this, which means I will need to set the 
projection matrices of the left- and right-eye cameras according to my distance 
from the display surface. The problem is: can these cameras be retrieved, and 
their projection matrices modified, before each viewer.frame() call? As I 
mentioned, at the moment I am using the --stereo command line argument to 
enable stereo. I read in an older thread (around July 2009) that cameras could 
not be retrieved through the viewer object, but that this was going to change. 
Now I see a getCameras method in the Viewer class. Will that work for me, or 
will setting the camera matrices interfere with the already implemented stereo 
rendering? In that case, will I need to turn the stereo settings off and 
implement stereo manually using slave cameras, as was suggested in that thread?
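In case it helps to make the question concrete, here is a minimal sketch of the kind of pre-frame update loop I have in mind, assuming stereo is implemented manually (e.g. via slave cameras) with OSG's built-in --stereo path turned off. This is untested; the frustum values are placeholders, not a real off-axis computation:

```cpp
#include <osgViewer/Viewer>

// Sketch only: drive the frame loop by hand and update every camera
// before rendering. Assumes manual stereo (slave cameras), not the
// built-in --stereo path.
void runHeadTrackedLoop(osgViewer::Viewer& viewer)
{
    while (!viewer.done())
    {
        osgViewer::ViewerBase::Cameras cameras;
        viewer.getCameras(cameras); // master plus any slave cameras
        for (size_t i = 0; i < cameras.size(); ++i)
        {
            // Placeholder symmetric frustum; real values would come from
            // the tracked head position relative to the screen corners.
            cameras[i]->setProjectionMatrixAsFrustum(
                -0.1, 0.1, -0.075, 0.075, 0.1, 1000.0);
        }
        viewer.frame();
    }
}
```

Is this roughly the intended usage of getCameras, or does the viewer overwrite these matrices per frame?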
By the way, the stereo settings wiki:
http://www.openscenegraph.org/projects/osg/wiki/Support/UserGuides/StereoSettings
mentions:
If the user is planning to use head tracked stereo, or a cave then it is 
currently recommend to set it up via a VR toolkit such as VRjuggler, in this 
case refer to the VR toolkits handling of stereo, and keep all the OSG's stereo 
specific environment variables (below) set to OFF

Does anyone know if this is still the case, or is that note outdated?

Thanks a lot

Mel

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=25587#25587





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] Vicon tracking

2010-03-03 Thread Mel Av
Hey,

I was wondering if anyone knows the best way to build an OpenSceneGraph 
application where the camera is head-tracked using motion-capture data 
provided in real time by a Vicon system. I can get the position and rotation 
of the 'real' camera (a cap with infrared markers) every frame, but I have not 
been successful in using these to move the camera in OpenSceneGraph. I should 
also mention that the application is rendered on a stereo projector. Right now 
I'm only using the --stereo QUAD_BUFFER command line argument, but I'm not 
sure if this is the appropriate way to do it.
I apologise if this has been answered somewhere else.
Any help is much appreciated.
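For completeness, the equivalent of the command line argument can also be set in code before the viewer realizes its windows; a minimal sketch, assuming a quad-buffer capable GPU/driver (the eye separation value is just a guess to tune):

```cpp
#include <osg/DisplaySettings>

// Same effect as passing "--stereo QUAD_BUFFER"; call before
// viewer.realize()/viewer.run().
void enableQuadBufferStereo()
{
    osg::DisplaySettings::instance()->setStereo(true);
    osg::DisplaySettings::instance()->setStereoMode(
        osg::DisplaySettings::QUAD_BUFFER);
    // Eye separation in metres; an assumed starting value, tune for
    // the projector setup.
    osg::DisplaySettings::instance()->setEyeSeparation(0.06f);
}
```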

Thank you!

Cheers,
Mel

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=25082#25082







Re: [osg-users] Vicon tracking

2010-03-03 Thread Mel Av
Hey all,

Many thanks for your answers.
The Vicon system uses a Z-up coordinate system with millimetre units. It sends 
x, y, z coordinates for a given tracked object (in this case the cap you are 
wearing) as well as rotation information in axis/angle form. The client I'm 
using converts this into three forms I can use interchangeably:
1. Euler angles
2. Global rotation matrix
3. Quaternion

Right now I am just calling viewer->getCamera()->setViewMatrix() before the 
viewer.frame() call. The problem is that the matrix I pass to setViewMatrix() 
does not seem to be correct. I use the x, y, z coordinates of the cap to build 
the translation matrix ( tr.makeTranslate() ) and the quaternion to build the 
rotation matrix ( rot.makeRotate() ). I then multiply tr * rot and pass the 
result to setViewMatrix(). When I move closer to the projected image the 
rendered objects come closer, which is correct behaviour, but when I rotate my 
head the rendered objects rotate in the opposite direction. The code I use is 
from the OSG Quick Start Guide, and according to its description it was 
supposed to change the view. However, this does not seem to be the case, for 
two reasons I found out last night while reading an old camera tutorial:
1. You have to invert the tr * rot result first.
2. After that, you have to rotate -90 degrees about the x-axis, because the 
Matrix classes use a Y-up coordinate frame whereas the viewer uses a Z-up 
coordinate frame.

Will these two corrections solve the problem? Will I also need to do something 
more advanced, like dynamically changing the frustum according to the head 
position?

Thank you!

Cheers,
Mel

P.S. Sorry if my questions seem noobish. I thought I had the necessary 
background for understanding rotation and translation transformations, but I'm 
completely confused by how OSG handles them and why it uses different 
coordinate frames in different situations. Anyway, perhaps I'm thrown off by 
DirectX, where camera manipulation was much easier because the modelview 
matrix is separated into two different matrices, whereas in OpenGL you may be 
moving/rotating objects instead of the actual camera viewpoint.

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=25121#25121




