Re: [osg-users] osgPPU vs RTT

2008-11-16 Thread Art Tevs
Hi,

as far as I was able to see, the MPV and CIGI projects are something like a rendering-over-the-network interface. I wasn't able to find information on how RTT is handled there. It also seems to use an almost outdated OSG version, since osgUtil::SceneView has been deprecated since OSG 2.x.

What you get directly from MPV seems not to be enough to use osgPPU or other direct post-processing libraries (if such exist ;), since no textures are available. However, if you have access to the SceneView, you can get the camera from there and hence the corresponding master camera texture. I would suppose this is the way to go. All other alternatives, for example directly sampling the video memory and copying it into a texture (if such things are even possible with MPV and SDL (SDL is used as the backend there)), would cost too much performance.
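If the SceneView route works, the wiring could look roughly like this. This is only a sketch, not tried against MPV: `viewport`, `width` and `height` are assumed to come from a patched RenderCameraCigiSDLOSG plugin, and the osgPPU side follows the usual Processor setup:

```cpp
#include <osg/Texture2D>
#include <osgUtil/SceneView>
#include <osgPPU/Processor.h>

// Hypothetical: obtained by patching the RenderCameraCigiSDLOSG plugin.
osgUtil::SceneView* sceneView = viewport->getSceneView();

// The master camera that renders the scene.
osg::Camera* camera = sceneView->getCamera();

// Attach a texture so the camera renders the scene into it (RTT via FBO).
osg::ref_ptr<osg::Texture2D> sceneTex = new osg::Texture2D;
sceneTex->setTextureSize(width, height);
sceneTex->setInternalFormat(GL_RGBA);
camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
camera->attach(osg::Camera::COLOR_BUFFER, sceneTex.get());

// Hand the camera to osgPPU; the processor picks up its texture attachment.
osg::ref_ptr<osgPPU::Processor> processor = new osgPPU::Processor;
processor->setCamera(camera);
```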

I hope I was able to help.

Best regards,
art




Re: [osg-users] osgPPU vs RTT

2008-11-13 Thread Patrick Castonguay
Art,
Here are the links to the MPV project:

CIGI website: http://cigi.sourceforge.net/
MPV website: http://cigi.sourceforge.net/product_mpv.php


From what I can see (and I am not an expert), there is no direct use of 
osg::Texture, UnitTexture or osg::Camera.  It seems to me that osg::Geode, 
Geometry and StateSet are used in most places where rendering is done.

As well, the scene graph looks something like this, before adding much data in 
there:
RootNode
  EntityBranchNode
    Entities...
  TerrainBranchNode
    Terrain info...
  SkyBranchNode
    Sky info...

If I do a sceneGraphDump when there is a terrain and other stuff loaded, I get a 
really long dump, but I still can't see any information about cameras or such...

What I do get from MPV which is related to the rendering is:
In a RenderCameraCigiSDLOSG plugin they handle the viewport and get the 
SceneView as such: osgUtil::SceneView *sceneView = viewport->getSceneView(); 
(one would need to modify the current MPV code to get a hold of this)

What I can get a hold of directly is:
//=========================================================
//! The camera matrices, one for each view.  Each one of these is a 
//! complete modelview matrix.  Retrieved from the blackboard.
//! 
std::map<int, osg::Matrix> *cameraMatrixMap; 

//=========================================================
//! Map containing the view group parameters (provides additional 
//! view transformation, etc).  Retrieved from the blackboard.
//! 
std::map<int, ViewGroupParams *> *viewGroupParamsMap;

//=========================================================
//! Map containing the view parameters (used for frustum setup,
//! entity-following, etc).  Retrieved from the blackboard.
//! 
std::map<int, XL::XLPtrView> *viewMap;

Again thanks for your help!


Patrick

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] osgPPU vs RTT

2008-11-10 Thread Art Tevs
Hi Patrick,

osgPPU uses the camera only to get its texture (i.e. the texture to which the 
scene is rendered). You do not require a camera at all; you can use 
UnitTexture to provide the osgPPU pipeline with any texture (for example, the 
texture you get from MPV). To provide a texture, you need some way to get the 
texture from MPV into an osg::Texture container. Do you have a link to MPV, so 
that I can get more info about it?

How are you managing the combination of OSG and MPV?


cheers

--- Patrick Castonguay [EMAIL PROTECTED] wrote on Mon, 10.11.2008:

 From: Patrick Castonguay [EMAIL PROTECTED]
 Subject: [osg-users] osgPPU vs RTT
 To: osg-users@lists.openscenegraph.org
 Date: Monday, 10 November 2008, 20:46
 Art,
 From what I can understand, it uses cameras, views and
 viewports.  I believe mostly from the osgSDLView, but it
 also uses xlsignal? to pass the pointers around.
 The way MPV works at the high level is that a single window
 is defined (per process).  This window makes use of a viewer
 which possibly has multiple viewports. (I guess this is
 pretty normal...) The views are defined in a definition file
 and the information driving them comes through the CIGI
 protocol.  Can I use the viewports and apply the osgPPU
 processors to it (them), or do I have to attach the processor 
 
  
 Patrick Castonguay
 H: 613 435 2235
 C: 613 325 1341
  
 Technology Innovation Management (TIM) Student - Modeling
 and Simulation stream
 
 Carleton University, Ottawa, ON


  


Re: [osg-users] osgPPU vs RTT

2008-11-07 Thread Art Tevs
Hi Patrick.

I am not familiar with MPV; however, let me try to help you ;)

To let the PPU pipeline be applied to the complete scene, you have to provide 
it with a texture containing your scene as input. If you use the standard way, 
scene -> osgPPU::Processor -> osgPPU::Unit..., and attach a camera which views 
your scene to the processor, then the camera output (which should be an RTT 
texture) will be fed into the pipeline. 

How does MPV handle scene rendering? Does it use OSG's camera, or does it 
render directly to some texture without camera intervention? If the latter is 
true, then you can use UnitTexture as the root unit of your pipeline to feed 
your rendering texture into the pipeline.
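For the second case, the root of the pipeline could be sketched like this (untested; `mpvTexture` is a placeholder for whatever osg::Texture you manage to fill from MPV, and the unit names follow the osgPPU examples):

```cpp
#include <osgPPU/Processor.h>
#include <osgPPU/UnitTexture.h>
#include <osgPPU/UnitOut.h>

// The processor is the root of the osgPPU pipeline; add it to the scene graph.
osg::ref_ptr<osgPPU::Processor> processor = new osgPPU::Processor;

// UnitTexture injects an existing texture into the pipeline,
// so no osg::Camera is needed at all.
osg::ref_ptr<osgPPU::UnitTexture> input = new osgPPU::UnitTexture;
input->setTexture(mpvTexture.get());   // placeholder: texture filled by MPV
processor->addChild(input.get());

// ... chain your post-processing units (blur, bloom, ...) after `input` ...

// A final UnitOut renders the processed result to the frame buffer.
osg::ref_ptr<osgPPU::UnitOut> output = new osgPPU::UnitOut;
input->addChild(output.get());
```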

Best regards,
Art




--- Patrick Castonguay [EMAIL PROTECTED] wrote on Thu, 6.11.2008:

 From: Patrick Castonguay [EMAIL PROTECTED]
 Subject: [osg-users] osgPPU vs RTT
 To: osg-users@lists.openscenegraph.org
 Date: Thursday, 6 November 2008, 20:38
 Hi, 
 I am trying to get osgPPU to work with the MPV project (a
 CIGI-compliant IG)...  I am having a little bit of a hard
 time figuring out how to get the two to talk to each other. 
 As MPV is very modular and based on a plugin structure, what
 I have to work with (so far) is the root node of the scene. 
 The examples all have their own viewer that is used by PPU,
 but I would like to stay away from that and just
 insert the pipeline to affect the entire scene (the
 displaying of the scene is handled by another plugin inside
 MPV).
 
 Anybody have an idea of how I could attach a PPU pipeline
 to the scene? 
 
 My other option is to not use osgPPU and start doing my own
 RTT, a little like Åsa Engvall is intending to do...
 
 
 Patrick Castonguay
 H: 613 435 2235
 C: 613 325 1341
  
 Technology Innovation Management (TIM) Student - Modeling
 and Simulation stream
 
 Carleton University, Ottawa, ON


  