Re: [osg-users] Camera intrinsics

2011-04-06 Thread benedikt naessens

Keith Parkins wrote:
 After looking at this again, I am unclear as to whether you have built
 the projection matrix from the intrinsic parameters. I was assuming that
 you had made it by hand. To do that you would do something like this:
 
 //----------------------------------------------------------------//
  
 // The _intrinsic variable holds the five values for the intrinsic
 // matrix. Multiplying the intrinsic matrix by our pixel transform
 // gives the projection matrix. The intrinsic values (itr[]) are given
 // as five doubles such that:
 //
 //   | itr[0]  itr[1]  itr[2] |
 //   |   0     itr[3]  itr[4] |
 //   |   0       0       1    |
 //
 //   | alpha_u  gamma   u_0 |
 //   |   0     alpha_v  v_0 |
 //   |   0       0       1  |
 //----------------------------------------------------------------//
 
 void
 Camera::calcProjection() {
 double alpha_u, alpha_v;
 
 // calc alphas
 alpha_u = _intrinsic[0];
 double cot_theta = - _intrinsic[1]/_intrinsic[0];
 double sin_theta = sqrt(1/(1+cot_theta*cot_theta));
 alpha_v = _intrinsic[3] * sin_theta;
 
 double skew = _intrinsic[1];
 
 double u0, v0;
 u0 = _intrinsic[2]; v0 = _intrinsic[4];
 
 double left = -u0 / alpha_u * _near;
 double bottom = (_screen_height-v0) / alpha_v * _near;
 double right = (_screen_width - u0) / alpha_u * _near;
 double top = -v0 / alpha_v * _near;
 
 _projection[0] = 2 * _near / (right - left);
 _projection[4] = 2 * _near * skew / ((right - left) * alpha_u);
 _projection[5] = 2 * _near / (top - bottom);
 _projection[8] = -(right + left)/(right - left);
 _projection[9] = -(top + bottom)/(top - bottom);
 _projection[10] = (_far + _near) / (_far - _near);
 _projection[11] = 1;
 _projection[14] = -2 * _far * _near/(_far-_near);
 
 }
 
 
 
 


There seems to be a mismatch between the projection matrix generated by e.g. 
makePerspective and the projection matrix elements you suggest here.

When I compare the two, it seems that _projection[5], _projection[8], 
_projection[9] and _projection[11] have opposite signs. 

I'm also a bit surprised that the projection matrix generated by OSG 
(makePerspective) has -1 instead of 1 for _projection[11]. Maybe that has 
something to do with the inversion of the Y axis (0,0 in the bottom left 
instead of the top left) ?
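
For comparison, here is a minimal sketch (my own variable names; skew ignored 
for simplicity) of building the frustum from the same intrinsics through 
osg::Matrix::frustum, which follows the OpenGL convention of a camera looking 
down -Z; that convention is exactly where the -1 at _projection[11] comes from:

Code:

#include <osg/Matrix>

// Sketch only: an OpenGL-convention projection from camera intrinsics.
// The Y flip (screen_h - v0) accounts for the image origin being
// top-left while OpenGL's window origin is bottom-left.
osg::Matrix projectionFromIntrinsics(double alpha_u, double alpha_v,
                                     double u0, double v0,
                                     double screen_w, double screen_h,
                                     double zNear, double zFar)
{
    double left   = -u0 / alpha_u * zNear;
    double right  =  (screen_w - u0) / alpha_u * zNear;
    double bottom = -(screen_h - v0) / alpha_v * zNear;
    double top    =  v0 / alpha_v * zNear;

    // frustum() yields the standard OpenGL matrix, with -1 at (2,3).
    return osg::Matrix::frustum(left, right, bottom, top, zNear, zFar);
}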

Can you give me a suggestion why these differences exist ?

Thanks !
Benedikt.

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=38253#38253







Re: [osg-users] [forum] Video capture with 3D augmented reality

2010-11-02 Thread benedikt naessens
I am trying to solve this problem more or less as you proposed:

* I have a quad with a texture that is my input video (input images are 640 x 
480)
* The first camera (with depth and color clear masks) looks at the quad and 
renders to the same image (I'm not sure if this is useful; it feels like an 
unnecessary extra step to me). The projection matrix is an orthographic 640 x 
480 matrix, and the view matrix is the identity.
* The second camera (with only depth clear mask) looks at the scene and renders 
this to the same image. The projection and view matrices were recorded during 
the recording of the input video.

Both cameras use the COLOR_BUFFER. 

I use a NodeCallback as an UpdateCallback to my quad. This callback refreshes 
the quad texture with a new video image and updates the projection and view 
matrices of the second camera.

I also set a final draw callback on each camera, just to see the contents 
of the image (i.e. I call writeImageFile). For the first camera (whose final 
draw callback is called first), I write to file1.bmp, and for the second camera 
I write to file2.bmp.
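
A minimal sketch of what such a dump-to-file final draw callback can look like 
(illustrative only, not my original code; the image pointer and filename are 
assumptions):

Code:

#include <osg/Camera>
#include <osg/Image>
#include <osgDB/WriteFile>
#include <string>

// Illustrative final draw callback: writes the camera's attached
// osg::Image to disk once the camera has finished drawing.
struct DumpImageCallback : public osg::Camera::DrawCallback
{
    DumpImageCallback(osg::Image* img, const std::string& file)
        : _img(img), _file(file) {}

    virtual void operator() (osg::RenderInfo& /*renderInfo*/) const
    {
        if (_img.valid()) osgDB::writeImageFile(*_img, _file); // e.g. "file1.bmp"
    }

    osg::ref_ptr<osg::Image> _img;
    std::string _file;
};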

My scene consists of a cylinder and a 2D picture that cuts the cylinder in the 
middle.

I see that file1.bmp is what I expected: the video stream image has been 
rendered into the image. file2.bmp is another matter (this one is saved when 
the second camera has finished drawing). I don't see my video stream (input) 
image anymore (it has been cleared somehow), and the 3D data of successive 
frames is rendered on top of each other (due to the disabled 
GL_COLOR_BUFFER_BIT clear ?).

I have the impression that somehow two color buffers are used at the same time: 
one for the first camera and one for the second camera. Still, it's clear in 
the code that I attach both cameras to osg::Camera::COLOR_BUFFER with the same 
image (m_VideoImage). 

How is this possible ?

Here is the code:

Code:

void VideoRecThread::setupTexture()
{
    m_RenderTexture = new osg::Texture2D;
    m_RenderTexture->setTextureSize(640, 480);
    m_RenderTexture->setInternalFormat(GL_RGBA);
    m_RenderTexture->setFilter(osg::Texture2D::MIN_FILTER, osg::Texture2D::LINEAR);
    m_RenderTexture->setFilter(osg::Texture2D::MAG_FILTER, osg::Texture2D::LINEAR);
    m_RenderTexture->setDataVariance(osg::Object::DYNAMIC);
}

void VideoRecThread::setupGeometry()
{
    osg::ref_ptr<osg::Geometry> screenQuad;
    screenQuad = osg::createTexturedQuadGeometry(osg::Vec3(),
                                                 osg::Vec3(640.0, 0.0, 0.0),
                                                 osg::Vec3(0.0, 480.0, 0.0),
                                                 0.0f, 1.0f, 1.0f, 0.0f);
    m_QuadGeode = new osg::Geode;
    m_QuadGeode->addDrawable(screenQuad.get());
    screenQuad->setName("PolyGeom");
    screenQuad->setDataVariance(osg::Object::DYNAMIC);
    screenQuad->setSupportsDisplayList(false);

    osg::StateSet* stateset = new osg::StateSet;
    stateset->setTextureAttributeAndModes(0, m_RenderTexture.get(),
                                          osg::StateAttribute::ON);
    screenQuad->setStateSet(stateset);
}

void VideoRecThread::setupImages()
{
    m_VideoImage = new osg::Image();
    m_VideoImage->allocateImage(640, 480, 1, GL_RGBA, GL_UNSIGNED_BYTE);
}

void VideoRecThread::setupHudCamera()
{
    m_pHudCamera = new osg::Camera;
    m_pHudCamera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
    m_pHudCamera->setProjectionMatrix(osg::Matrix::ortho2D(0, 640, 0, 480));
    m_pHudCamera->setViewMatrix(osg::Matrix::identity());
    m_pHudCamera->setRenderOrder(osg::Camera::PRE_RENDER);
    m_pHudCamera->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    m_pHudCamera->setViewport(0, 0, 640, 480);
    m_pHudCamera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
    m_pHudCamera->attach(osg::Camera::COLOR_BUFFER, m_VideoImage.get());

    m_pVideoRecGroup->addChild(m_pHudCamera.get());
    m_pHudCamera->addChild(m_QuadGeode.get());

    m_TextureUpdateCallback = new TextureCallback();
    m_TextureUpdateCallback->updateTexture.connect(
        boost::bind(&VideoRecThread::updateTexture, this));
    m_QuadGeode->setUpdateCallback(m_TextureUpdateCallback.get());

    m_TextureCallback = new VideoPostDrawCallback();
    m_TextureCallback->renderingCompleted.connect(
        boost::bind(&VideoRecThread::videoRenderingCompleted, this));
    m_pHudCamera->setFinalDrawCallback(m_TextureCallback.get());
}

void VideoRecThread::setupSnapshotCamera()
{
    m_pSnapshotcamera = new osg::Camera();
    m_pSnapshotcamera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
    m_pSnapshotcamera->setRenderOrder(osg::Camera::PRE_RENDER);
    m_pSnapshotcamera->setClearMask(GL_DEPTH_BUFFER_BIT);
    m_pSnapshotcamera->setViewport(0, 0, 640, 480);
    m_pSnapshotcamera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
    m_pSnapshotcamera->attach(osg::Camera::COLOR_BUFFER, m_VideoImage.get());

    osg::ref_ptr<osg::Node> pScene = getSceneManager()->getSceneData();
    m_pSnapshotcamera->addChild(pScene.get());

Re: [osg-users] Video capture with 3D augmented reality

2010-11-02 Thread benedikt naessens
Thanks already for all the effort you have put into this !

I set the threading model of my viewers (I have two in my application) to 
osgViewer::ViewerBase::SingleThreaded. Is this what you meant by forcing OSG 
to be single-threaded ?
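
Concretely, this is the call I use on each viewer (a sketch):

Code:

#include <osgViewer/Viewer>

void configureViewer(osgViewer::Viewer& viewer)
{
    // Run the whole frame loop in the calling thread: no separate
    // cull/draw threads touching the shared image.
    viewer.setThreadingModel(osgViewer::ViewerBase::SingleThreaded);
}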

I have now also introduced a snapshot image that both the first and the second 
camera write to (using the aforementioned technique: attached via the 
COLOR_BUFFER component). 

Neither change helped, whether applied together or separately.

I have set the clear color of the second camera (m_pSnapshotcamera) to red 
(using the camera's setClearColor function). I don't see any red in 
file2.bmp, so I think there is no clearing issue. 

About your remark on clearing the buffer: if the buffer is not cleared, then 
that carries over into the output image, right ? Or am I mistaken ? That could 
somehow explain the overlapping 3D data. On the other hand, I am sure that both 
final draw callbacks are called (and in the right order), so this still doesn't 
explain why my input video image is gone in file2.bmp.

Strange. I'm a bit out of ideas right now. Any other suggestions ?

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=33311#33311







Re: [osg-users] Video capture with 3D augmented reality

2010-11-02 Thread benedikt naessens
Can this thread be moved to the general OSG forum ? I made the mistake of 
posting it in the wrong OpenSceneGraph forum. 

Thank you!

Cheers,
benedikt

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=33313#33313







Re: [osg-users] Video capture with 3D augmented reality

2010-11-02 Thread benedikt naessens
Why has the format been changed from GL_RGBA to GL_RGB ? Does it not work with 
an alpha channel, or is that just a coincidence ?

Also, why do you attach the second camera to the texture and not to the image ?
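
For context, a sketch of the two attach() variants I am comparing (cam stands 
for a pre-render FBO camera; the two calls are alternatives, not meant to be 
combined on the same buffer):

Code:

#include <osg/Camera>
#include <osg/Image>
#include <osg/Texture2D>

void attachExamples()
{
    osg::ref_ptr<osg::Camera> cam = new osg::Camera;       // assumed FBO pre-render camera
    osg::ref_ptr<osg::Texture2D> tex = new osg::Texture2D; // result stays on the GPU
    osg::ref_ptr<osg::Image> img = new osg::Image;         // result is read back to CPU memory

    // Render-to-texture: the texture itself backs the FBO color buffer,
    // so reusing it on a quad needs no GPU->CPU round trip.
    cam->attach(osg::Camera::COLOR_BUFFER, tex.get());

    // Render-to-image: OSG reads the pixels back into the osg::Image
    // after drawing (convenient for osgDB::writeImageFile()).
    cam->attach(osg::Camera::COLOR_BUFFER, img.get());
}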

The example image is great :)

Thank you!

Cheers,
Benedikt

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=7#7







Re: [osg-users] [forum] Video capture with 3D augmented reality

2010-10-29 Thread benedikt naessens
Thanks for your reply !

Can you explain what the camera structure will then look like ? And how do you 
flip the coordinates ?


Thank you!

Cheers,
benedikt

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=33241#33241







[osg-users] [forum] Video capture with 3D augmented reality

2010-10-28 Thread benedikt naessens
I want to record a video and put 3D augmented reality on top of each frame of 
the AVI. Because I don't want to skip frames, I store all AVI frames in memory 
during recording, and after each video frame has been captured, I also store 
the view and projection matrices of the 3D view that applies to that video 
frame. 

The input of the camera is 640 x 480, and my application usually renders in 
1280 x 1024 windows. The video frames can be retrieved from an array of 
unsigned char arrays (stored in m_RingBuffer).

I want to achieve the rendering of the 3D data on top of the video frames in a 
post-processing step (which is part of the work done in a thread called 
VideoPostRecThread). 

I follow this strategy: 

1) I set up a pre-rendering HUD camera (m_pHudCamera) which looks at a quad 
geometry with one of the video frames as a texture (m_RenderTexture).
2) The HUD camera is a child of a snapshot camera (m_pSnapshotcamera). The 
snapshot camera renders the 3D data. The output of the snapshot camera should 
go back to the video frame, but I also store it temporarily in an image 
(m_SnapshotImage). For this, I disabled the GL_COLOR_BUFFER_BIT clear mask of 
the snapshot camera, to make sure the rendered output of the HUD camera is not 
cleared.

I also use two callbacks:
1) a pre-draw callback applied to the HUD camera: the texture of the quad 
geometry (m_RenderTexture) is updated each time with a new frame. Name: 
m_UpdateCallback (an instance of the TexturePreDrawCallback struct).
2) a post-draw callback: my thread is blocked until all the 3D data is rendered 
on top of the HUD contents; the callback unblocks my thread (using a mutex 
called m_SnapshotMutex). After this, I can do some post-processing (for example 
saving the AVI, or notifying my GUI that another frame has been 
post-processed). Name: m_VideoCallback (an instance of the VideoPostDrawCallback 
struct).

Here are my callback definitions:


Code:

struct VideoPostDrawCallback : public osg::Camera::DrawCallback
{
    VideoPostDrawCallback() {}

    virtual void operator() (osg::RenderInfo& renderInfo) const
    {
        renderingCompleted();
    }

    boost::signals2::signal<void(void)> renderingCompleted;
};

struct TexturePreDrawCallback : public osg::Camera::DrawCallback
{
    TexturePreDrawCallback() {}

    virtual void operator() (osg::RenderInfo& renderInfo) const
    {
        updateCamera();
    }

    boost::signals2::signal<void(void)> updateCamera;
};




And here is the definition of my thread class


Code:

class VideoPostRecThread : public VideoRecWithArThread
{
    Q_OBJECT

    friend class VideoPostDrawCallback;

public:
    VideoPostRecThread(boost::shared_ptr<IDSCameraManager> pCamMgr,
                       unsigned int maxFrames, QObject *parent = NULL);
    ~VideoPostRecThread();

    void renderingCompleted();
    void updateTextureCamera();

private:
    virtual void postProcess();
    void setupImages();
    void setupHudCamera();
    void setupSnapshotCamera();

    osg::ref_ptr<osg::Camera> m_pSnapshotcamera;
    osg::ref_ptr<osg::Camera> m_pHudCamera;
    osg::ref_ptr<osg::Image> m_TextureImage;
    osg::ref_ptr<osg::Image> m_SnapshotImage;
    osg::ref_ptr<osg::Texture2D> m_RenderTexture;
    osg::ref_ptr<osg::Geode> m_QuadGeode;
    osg::ref_ptr<VideoPostDrawCallback> m_VideoCallback;
    osg::ref_ptr<TexturePreDrawCallback> m_UpdateCallback;

    QWaitCondition m_SnapshotCondition;
    QMutex m_SnapshotMutex;

    unsigned int m_CurrentArFrameIndex;
};




Here is the implementation of the VideoPostRecThread class.


Code:

void VideoPostRecThread::setupImages()
{
    m_SnapshotImage = new osg::Image();
    m_SnapshotImage->allocateImage(1280, 1024, 1, GL_RGBA, GL_UNSIGNED_BYTE);
    m_TextureImage = new osg::Image();
}

void VideoPostRecThread::setupHudCamera()
{
    // Create the texture to render to
    m_RenderTexture = new osg::Texture2D;
    m_RenderTexture->setDataVariance(osg::Object::DYNAMIC);
    m_RenderTexture->setInternalFormat(GL_RGBA);

    osg::ref_ptr<osg::Geometry> screenQuad;
    screenQuad = osg::createTexturedQuadGeometry(osg::Vec3(),
                                                 osg::Vec3(1280.0, 0.0, 0.0),
                                                 osg::Vec3(0.0, 1024.0, 0.0));
    m_QuadGeode = new osg::Geode;
    m_QuadGeode->addDrawable(screenQuad.get());

    m_pHudCamera = new osg::Camera;
    m_pHudCamera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
    m_pHudCamera->setRenderOrder(osg::Camera::PRE_RENDER);
    m_pHudCamera->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    m_pHudCamera->setProjectionMatrix(osg::Matrix::ortho2D(0, 1280, 0, 1024));
    m_pHudCamera->setViewMatrix(osg::Matrix::identity());
    m_pHudCamera->setViewport(0, 0, 1280, 1024);
    m_pHudCamera->addChild(m_QuadGeode.get());
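
The listing is cut off here in the archive. A hypothetical sketch (not the 
original code) of how the QMutex/QWaitCondition pair from the class definition 
could implement the block-until-rendered handshake described above; 
waitForSnapshot() is an invented helper name:

Code:

#include <QMutex>
#include <QWaitCondition>

// Runs in the rendering thread, triggered by the final draw callback.
void VideoPostRecThread::renderingCompleted()
{
    QMutexLocker lock(&m_SnapshotMutex);
    m_SnapshotCondition.wakeAll();   // release the waiting worker thread
}

// Invented helper: runs in the worker thread inside postProcess(),
// blocking until the snapshot camera has finished drawing. Production
// code would add a predicate flag to guard against missed wakeups.
void VideoPostRecThread::waitForSnapshot()
{
    QMutexLocker lock(&m_SnapshotMutex);
    m_SnapshotCondition.wait(&m_SnapshotMutex);
}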


Re: [osg-users] [forum] Video capture with 3D augmented reality

2010-10-28 Thread benedikt naessens
Hi,

Can this post be removed ? I posted this in the wrong forum group. I already 
moved it to the General Forum of the OpenSceneGraph list.


Thank you!

Cheers,
benedikt

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=33173#33173







Re: [osg-users] Video capture with 3D augmented reality

2010-10-28 Thread benedikt naessens
The weird thing is that it does not work with the frame buffer object render 
target (I get a black image).

Should I consider a pbuffer ?
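
For what it's worth, setRenderTargetImplementation() also takes a fallback 
argument, so a pbuffer can be tried automatically when the FBO path fails 
(a sketch; camera stands for the snapshot camera):

Code:

#include <osg/Camera>

void setTargetWithFallback(osg::Camera* camera)
{
    // Request an FBO but fall back to a pbuffer when FBOs are
    // unavailable on the current driver/context.
    camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT,
                                          osg::Camera::PIXEL_BUFFER);
}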

Thank you!

Cheers,
benedikt

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=33190#33190







Re: [osg-users] Screenshot to non OSG image

2010-09-16 Thread benedikt naessens
Robert,

Basically, I want to render my scene data onto an image which is not an OSG 
image, but has the same structure, i.e.

* 8 bits red
* 8 bits green
* 8 bits blue

The image (memory block) already contains something (think of it as a 
background), and thus I only want my 3D scene objects rendered on top of it.

Can you explain which parts of my original post are unclear, so that I can 
rectify this ?

Thank you!

Cheers,
benedikt

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=31692#31692







Re: [osg-users] Screenshot to non OSG image

2010-09-15 Thread benedikt naessens
Or maybe someone can at least give me an indication whether this is possible 
or not ?

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=31637#31637







[osg-users] Screenshot to non OSG image

2010-09-14 Thread benedikt naessens
Dear all,

I want to combine a movie (shot by a camera) with some OSG 3D objects. I 
have already managed to store the position and orientation of the camera each 
time a picture is grabbed (as part of the movie). The movie sequence is stored 
in memory (I can choose between RGB24 and RGB32) in a buffer (an array of char* 
blocks). I just need to call an API function of the camera to convert this to 
an AVI sequence. 

Now before I call this function to convert the contents of the buffer to an AVI 
movie, I want to do the OSG rendering (this is my postprocessing step). 

The technique I use is:
* Add an OSG camera to the OSG root node and add the scene data as a child of 
the OSG camera
* Set the render order to POST_RENDER
* Set the render target to FRAME_BUFFER_OBJECT
* I explicitly don't set the clear mask for the camera: 
setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) is not present in 
the code. I hope this is right ?

This results in the following code:


Code:

m_pSnapshotcamera = new Camera;
m_refpARRoot->addChild( m_pSnapshotcamera );

m_pSnapshotcamera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
m_pSnapshotcamera->setProjectionMatrixAsPerspective(snapVerFov,
                                                    snapAspect, 0.01, 4);
m_pSnapshotcamera->setDrawBuffer(GL_BACK);
m_pSnapshotcamera->setReadBuffer(GL_BACK);
m_pSnapshotcamera->setRenderOrder(osg::Camera::POST_RENDER);
m_pSnapshotcamera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);




Now, how do I assign the (AVI sequence) memory as the frame buffer to write to 
? I am aware of the attach() member function of the Camera class, but it 
needs an (OSG) image as a parameter. 

Also, how can I make sure that OSG renders the 3D objects into a window with 
the size of the camera (video) pictures ? To clarify: in the code above, I 
already defined the horizontal and vertical field of view (using the aspect 
ratio), but the system still can't deduce what the size of the frame buffer is 
(let's assume that the video is 640 x 480, which is different from the standard 
size of my OSG widgets).
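
One possible direction (a sketch under my own assumptions, not a confirmed 
answer): wrap one RGB24 frame of the buffer in an osg::Image without copying, 
attach that, and give the camera a 640 x 480 viewport so the FBO matches the 
video size:

Code:

#include <osg/Camera>
#include <osg/Image>

// Sketch: frameData is assumed to point at one 640x480 RGB24 frame
// from the ring buffer; OSG must not take ownership of it.
void attachFrame(osg::Camera* snapshotCamera, unsigned char* frameData)
{
    osg::ref_ptr<osg::Image> frame = new osg::Image;
    frame->setImage(640, 480, 1,                       // s, t, r
                    GL_RGB, GL_RGB, GL_UNSIGNED_BYTE,  // RGB24
                    frameData,
                    osg::Image::NO_DELETE,             // caller keeps ownership
                    1);                                // byte-aligned rows

    snapshotCamera->setViewport(0, 0, 640, 480);       // the FBO follows the viewport size
    snapshotCamera->attach(osg::Camera::COLOR_BUFFER, frame.get());
}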

Thank you!

Kind regards,
Benedikt Naessens.

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=31594#31594







[osg-users] Culling traversal

2010-08-04 Thread benedikt naessens
I have an OSG tree in which I want some nodes to be visible and some hidden. 
Currently, I set the visibility status with nodemasks; I don't want to use 
switches, because I want to set the visibility of nodes dynamically (otherwise 
each node would need its own switch). 

I have a situation where I want to hide a node X and all its children, with the 
exception of one of its children. I made a NodeVisitor which adapts the 
nodemasks of node X and all its children. I know that if you want to hide the 
node and its children, you basically only need to change the nodemask of node 
X, because the cull visitor stops at node X. But this is obviously not the 
behaviour I want. 

How can I change the behaviour of the CullVisitor so it doesn't stop at nodes 
with the hidden nodemask ? I use setInheritanceMask (with inheritancemask = 
osg::CullSettings::ALL_VARIABLES & ~osg::CullSettings::CULL_MASK) and 
setCullMask, but it seems that does something different. I am a bit confused 
about how inheritance masks work. 

Do I have to set the traversal mask of the cull visitor ? And how do I then 
apply a new cull visitor to a camera (or viewer ...) ? 
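
For what it's worth, a sketch of the constraint as I understand it (the names 
are mine): since the cull traversal never descends into a node whose mask fails 
the camera's cull mask, node X itself has to keep a visible mask, and the 
hiding has to be applied to X's children instead:

Code:

#include <osg/Group>
#include <osg/Node>

// Sketch: hide every direct child of X except the one to keep. Hiding
// a child implicitly hides that child's whole subtree, and X's own
// mask stays visible so the traversal can still reach the exception.
void hideAllChildrenExcept(osg::Group* x, osg::Node* keep)
{
    for (unsigned int i = 0; i < x->getNumChildren(); ++i)
    {
        osg::Node* child = x->getChild(i);
        child->setNodeMask(child == keep ? 0xffffffff : 0x0);
    }
}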

Thank you!

Benedikt.

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=30522#30522







[osg-users] 3D objects that don't rescale

2010-07-22 Thread benedikt naessens
Can you make 3D objects that don't rescale in OSG ? To clarify: if you move 
the camera closer or further away, they still keep the same size on screen, yet 
they rotate and translate according to the movement of the camera. An example 
of that is the OBJECT_COORDS_WITH_MAXIMUM_SCREEN_SIZE_CAPPED_BY_FONT_HEIGHT 
character size mode for text, where the text always has the same font size, 
independent of how far you are from the text geode. What I would like is 
something similar for, for example, points, lines and boxes.
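
A sketch of one way this can apparently be done, via osg::AutoTransform 
(markerGeode is an assumed geode holding the point/line/box geometry, and the 
position is illustrative):

Code:

#include <osg/AutoTransform>
#include <osg/Geode>

// Sketch: an AutoTransform keeps its children at a constant on-screen
// size while still translating with the scene.
osg::ref_ptr<osg::AutoTransform> makeScreenSizedMarker(osg::Geode* markerGeode)
{
    osg::ref_ptr<osg::AutoTransform> at = new osg::AutoTransform;
    at->setAutoScaleToScreen(true);              // same pixel size at any distance
    at->setPosition(osg::Vec3d(10.0, 0.0, 0.0)); // world position of the marker
    at->addChild(markerGeode);
    return at;
}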

Thank you!

Kind regards,
Benedikt

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=30175#30175







Re: [osg-users] 3D objects that don't rescale

2010-07-22 Thread benedikt naessens
Thanks !

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=30180#30180







Re: [osg-users] OSG stereo features

2010-04-03 Thread benedikt naessens

Alberto Luaces wrote:
 Benedikt,
 
 take a look at this link:
 
 [cut URL]
 
 with this you can change the stereo behaviour of your program without
 touching a line of code. Of course you can also do the same
 programmatically.
 
 --
 Alberto
 


Thanks for the fast reply. I have already seen that page. I was just wondering 
how to do it programmatically: what do these environment variables exactly do, 
and what should I use then: still a CompositeViewer with two cameras, or a 
Viewer, or ... ? How do I get access to the stereo cameras to set the view and 
projection matrices (such that the interocular distance remains the same) ? 
This is all still very unclear to me ...
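
A sketch of the programmatic counterpart of those environment variables, via 
osg::DisplaySettings (the values here are illustrative):

Code:

#include <osg/DisplaySettings>

void enableStereo()
{
    // Set these before the viewer is realized.
    osg::ref_ptr<osg::DisplaySettings> ds = osg::DisplaySettings::instance();
    ds->setStereo(true);
    ds->setStereoMode(osg::DisplaySettings::ANAGLYPHIC); // or QUAD_BUFFER, HORIZONTAL_SPLIT, ...
    ds->setEyeSeparation(0.06f);                         // interocular distance, in model units
}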

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=26418#26418







Re: [osg-users] OSG stereo features

2010-04-03 Thread benedikt naessens

robertosfield wrote:
 
 
 Have you spotted the osgviewerQT examples?
 
 Robert.
 


I am using the QSceneGraph class to put dialogs and other items on, and I do 
the OSG rendering in the drawBackground() member function. The framerate drops 
quite drastically, but that could also be because of the dual screen and the 
higher resolution (than I was used to working with in the past).

Do you think the osgviewerQT examples would be more efficient ?

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=26419#26419







[osg-users] OSG stereo features

2010-04-02 Thread benedikt naessens
Can anybody on the forum explain how to use the stereo support in OSG? 

I tried every possible combination of DisplaySettings, OsgViewer, OSG, stereo, 
etc. as search terms on Google, but I didn't get further than some obscure 
forum threads where only part of it was explained. The main page of the OSG 
site is not clear either.

Currently, I am using a CompositeViewer with two views (two cameras), but I 
guess there are better ways out there. I have found some information on how to 
do it with the OSGViewer application, but looking at the code of the OSGViewer 
app doesn't help. I want to integrate this into a Qt application anyway, so 
simply starting the OsgViewer is no solution.

The DisplaySettings class can probably help me too, but I have no clue how to 
combine that information with a viewer. 

Also, information on techniques (in OSG ?) for finding the interocular 
distance is not easy to find. Can anyone give me a clue on this ?

From time to time, I also find pages that report problems with the 
fusion matrix. What is this problem exactly (I'm missing what a fusion 
matrix is anyway), and has it been fixed ?

Sorry for the amount of questions and my ignorance.

Thank you!

Cheers,
Benedikt.

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=26384#26384




