Hello Carsten,

thanks for the quick answer.

It sounds good that I can probably set up the MatrixCamera's projection 
matrix just like the one in OpenGL.
It is also good to know that I can use _sfBeacon to set the modelview 
matrix, so that I do not have to modify that matrix explicitly.

I will try that out.

Thanks once more.

Cheers,
Jens 
> I am new to OpenSG and want to get the intrinsics of a calibrated camera 
> (focal length fx, fy; principal point px, py) to be matched by my OpenSG 
> camera.
> I saw that there is an osg::MatrixCamera class that perhaps could be 
> used for this.

yes, the MatrixCamera simply feeds the matrices it stores to OpenGL, so 
as long as OpenGL is capable of doing what you need this should work.

> My aim is primarily to match the scale of a rendered object and a video 
> background showing the image of the same object (as it is done in many 
> augmented reality applications).
> 
> The camera intrinsics are stored in a matrix like this:
> 
> fx  0  px
>  0 fy  py
>  0  0   1
> 
> There are ways to convert the intrinsic matrix obtained e.g. by 
> calibrating with the Matlab Camera Calibration Toolbox or OpenCV into an 
> OpenGL projection matrix, as shown here:
> 
> http://www.hitlabnz.org/forum/showthread.php?t=604
> http://www.hitl.washington.edu/artoolkit/mail-archive/message-thread-00654-Re--Questions-concering-.html
> 
> Does the osg::MatrixCamera::_sfProjectionMatrix field work like the 
> OpenGL projection matrix? I.e., is the order of entries the same?

OpenSG matrices are stored in column major order, as is the convention 
with OpenGL.
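
As a rough, untested sketch (OpenSG 2 style), the conversion described in 
the links above could look something like this. The setProjectionMatrix 
accessor is assumed to be the usual generated one for _sfProjectionMatrix, 
the proj[column][row] indexing assumes the column-major layout just 
mentioned, and the principal-point / image-y sign conventions may need 
flipping depending on your calibration setup:

#include <OpenSG/OSGMatrixCamera.h>
#include <OpenSG/OSGMatrix.h>

// Build a GL-style projection matrix from the camera intrinsics and hand
// it to a MatrixCamera. width/height are the calibrated image size,
// zNear/zFar the desired clipping planes.
OSG::MatrixCameraRecPtr makeCalibratedCamera(OSG::Real32 fx, OSG::Real32 fy,
                                             OSG::Real32 px, OSG::Real32 py,
                                             OSG::Real32 width, OSG::Real32 height,
                                             OSG::Real32 zNear, OSG::Real32 zFar)
{
    OSG::Matrix proj;
    proj.setIdentity();

    // proj[column][row], column major as discussed above.
    proj[0][0] =  2.f * fx / width;
    proj[1][1] =  2.f * fy / height;
    proj[2][0] =  1.f - 2.f * px / width;     // principal point, x
    proj[2][1] =  2.f * py / height - 1.f;    // principal point, y (image y flipped)
    proj[2][2] = -(zFar + zNear) / (zFar - zNear);
    proj[2][3] = -1.f;
    proj[3][2] = -2.f * zFar * zNear / (zFar - zNear);
    proj[3][3] =  0.f;

    OSG::MatrixCameraRecPtr cam = OSG::MatrixCamera::create();
    cam->setProjectionMatrix(proj);

    return cam;
}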

> If not, how can I convert my intrinsics matrix into an 
> osg::MatrixCamera::_sfProjectionMatrix?
> Are there other ways to match the scale of my rendered object and the 
> view through the real camera?
> 
> Also, for now I do not want to specify the 
> osg::MatrixCamera::_sfModelviewMatrix field (for the external 
> orientation) automatically. 
> The position of the camera should be specified by the user (for now).

OpenSG cameras are usually positioned in the scene with their _sfBeacon 
field. It points to a Node, and the camera is then oriented in the 
pointed-to Node's coordinate system, looking along the negative z axis. 
In order to move the camera you simply modify the beacon node's 
coordinate system (for example, if the beacon has a Transform core, just 
change that transform).
The MatrixCamera can either use the beacon to determine the modelview 
matrix or use the one specified in _sfModelviewMatrix (if 
_sfUseBeacon is false).
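
For example, a minimal sketch of the beacon approach (again OpenSG 2 
style, untested; the accessors follow the usual generated field-accessor 
naming, and the beacon node still needs to be attached to your scene 
graph so it has a valid to-world transformation):

#include <OpenSG/OSGNode.h>
#include <OpenSG/OSGTransform.h>
#include <OpenSG/OSGMatrixCamera.h>
#include <OpenSG/OSGMatrixUtility.h>

// Create a MatrixCamera that is positioned via a beacon node carrying a
// Transform core; changing that transform moves the camera.
OSG::MatrixCameraRecPtr makeBeaconCamera(OSG::NodeRecPtr &beaconOut)
{
    OSG::NodeRecPtr      camBeacon = OSG::Node::create();
    OSG::TransformRecPtr camTrans  = OSG::Transform::create();
    camBeacon->setCore(camTrans);

    // Place the camera at (0, 0, 5), looking at the origin.
    OSG::Matrix camPose;
    OSG::MatrixLookAt(camPose,
                      OSG::Pnt3f(0.f, 0.f, 5.f),   // camera position
                      OSG::Pnt3f(0.f, 0.f, 0.f),   // point to look at
                      OSG::Vec3f(0.f, 1.f, 0.f));  // up direction
    camTrans->setMatrix(camPose);   // change this transform to move the camera

    OSG::MatrixCameraRecPtr cam = OSG::MatrixCamera::create();
    cam->setBeacon(camBeacon);
    cam->setUseBeacon(true);        // false -> use _sfModelviewMatrix instead

    beaconOut = camBeacon;          // add this node under your scene root
    return cam;
}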

> Are there any tutorials on using the osg::MatrixCamera class?

not directly, as it simply passes the matrices it stores to the OpenGL 
matrices of the same name (with the above-mentioned option to use the 
beacon for the modelview instead).
The online tutorial has some info on cameras in general in the "Windows 
and Viewports" section 
(<http://opensg.vrsource.org/trac/wiki/Tutorial/OpenSG1/Windows>) - it 
is not converted for OpenSG 2 yet, so the code snippets would need small 
adjustments, but the information in general still applies.


                 Cheers,
                                 Carsten

