Hi,

as far as I can see, the MPV and CIGI projects are something like a "rendering 
over the network" interface. I wasn't able to find information on how RTT is 
handled there. It also seems to be using a rather outdated osg version, since 
osgUtil::SceneView has been deprecated since osg 2.x.

What you get directly from the MPV seems not to be enough to use osgPPU or 
other direct post-processing libraries (if such exist ;), since no textures 
are available. However, if you have access to the SceneView, you can get the 
camera from there and hence the corresponding master camera texture. So I 
would suppose this is the way to go. All other alternatives, for example 
directly sampling the video memory and copying it into a texture (if such 
things are even possible with MPV and SDL; SDL is used as the backend there), 
would cost too much performance.
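To make the SceneView route more concrete, here is a minimal sketch. It assumes MPV has been patched to expose its osgUtil::SceneView (the viewport->getSceneView() call mentioned below is MPV code, not stock OSG), and getCameraColorTexture is a hypothetical helper name, not part of either project:

```cpp
#include <osg/Camera>
#include <osg/Texture>
#include <osgUtil/SceneView>

// Hypothetical helper: given the SceneView obtained from a patched MPV,
// return the texture its master camera renders into (if any).
osg::Texture* getCameraColorTexture(osgUtil::SceneView* sceneView)
{
    osg::Camera* camera = sceneView->getCamera();
    if (!camera) return 0;

    // RTT cameras keep their render targets in the buffer attachment
    // map, keyed by buffer component (color, depth, ...).
    osg::Camera::BufferAttachmentMap& attachments =
        camera->getBufferAttachmentMap();
    osg::Camera::BufferAttachmentMap::iterator it =
        attachments.find(osg::Camera::COLOR_BUFFER);

    return (it != attachments.end()) ? it->second._texture.get() : 0;
}
```

The resulting texture could then be fed into an osgPPU pipeline via a UnitTexture at its head. If the map contains no color texture, the camera is rendering straight to the frame buffer, and an RTT attachment would have to be set up first.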

I hope I was able to help.

Best regards,
art


--- Patrick Castonguay <[EMAIL PROTECTED]> wrote on Thu, 13 Nov 2008:

> From: Patrick Castonguay <[EMAIL PROTECTED]>
> Subject: Re: [osg-users] osgPPU vs RTT
> To: osg-users@lists.openscenegraph.org
> Date: Thursday, 13 November 2008, 17:29
> Art,
> Here are the link to the MPV project:
> 
> CIGI website: http://cigi.sourceforge.net/
> MPV website: http://cigi.sourceforge.net/product_mpv.php
> 
> 
> From what I can see (and I am not an expert) there is
> no direct use of osg::Texture, UnitTexture or osg::Camera.
> It seems to me that osg::Geode, Geometry and StateSet are
> used in most places where rendering is done.
> 
> As well the scene graph looks something like this, before
> adding much data in there:
> RootNode
>     EntityBranchNode
>         Entities...
>     TerrainBranchNode
>         Terrain info...
>     SkyBranchNode
>         Sky info...
> 
> If I do a sceneGraphDump when there is a terrain and other
> stuff loaded, I get a really long dump, but I still can't
> see any information about cameras or such...
> 
> What I do get from the MPV that is related to the
> rendering is: in the RenderCameraCigiSDLOSG plugin they
> handle the viewport and get the sceneView like this:
> osgUtil::SceneView *sceneView = viewport->getSceneView();
> (I would need to modify the current MPV code to get a hold
> of this.)
> 
> What I can get a hold of directly is:
> //=========================================================
> //! The camera matrices, one for each view.  Each one of these is a
> //! complete modelview matrix.  Retrieved from the blackboard.
> //!
> std::map< int, osg::Matrix > *cameraMatrixMap;
>
> //=========================================================
> //! Map containing the view group parameters (provides additional
> //! view transformation, etc).  Retrieved from the blackboard.
> //!
> std::map< int, ViewGroupParams * > *viewGroupParamsMap;
>
> //=========================================================
> //! Map containing the view parameters (used for frustum setup,
> //! entity-following, etc).  Retrieved from the blackboard.
> //!
> std::map< int, XL::XLPtr<View> > *viewMap;
> 
> Again thanks for your help!
> 
> 
> Patrick
> 
> Hi Patrick,
> 
> osgPPU uses the camera only to get its texture (i.e. the
> texture to which the scene is rendered). You do not require
> a camera at all; you can use UnitTexture to provide the
> osgPPU pipeline with any texture (for example the texture
> you get from MPV). To provide a texture, you need some way
> to get the texture from MPV into an osg::Texture container.
> Do you have a link to the MPV, so that I can get more info
> about it?
> 
> How are you managing the combination of osg and mpv?
> 
> cheers
> 
> 
> Art,
> From what I can understand, it uses cameras, views and
> viewports.  I believe mostly from the osgSDLView, but it
> also uses "xlsignal"? to pass the pointers around.
> The way MPV works at the high level is that a single window
> is defined (per process).  This window makes use of a viewer
> which possibly has multiple viewports. (I guess this is
> pretty normal...) The views are defined in a definition file
> and the information driving them comes through the CIGI
> protocol.  Can I use the viewports and apply the osgPPU
> processors to it (them), or do I have to attach the processor
>
> Patrick Castonguay
> _______________________________________________
> osg-users mailing list
> osg-users@lists.openscenegraph.org
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org

