Hi Patrick,

osgPPU uses the camera only to get its texture (i.e. the texture to which the scene is rendered). You do not require a camera at all: you can use a UnitTexture to feed the osgPPU pipeline with any texture (for example, the texture you get from MPV). To provide such a texture you need some protocol to get the texture data from MPV into an osg::Texture container. Do you have a link to MPV, so that I can get more info about it?
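A minimal sketch of what a camera-less pipeline could look like, assuming the external texture is already filled from MPV. The downstream units (UnitInOut, UnitOut) are illustrative; check the osgPPU headers for the exact class and method signatures before relying on them:

```cpp
// Sketch: driving an osgPPU pipeline from an arbitrary osg::Texture
// instead of a camera attachment (an assumption-laden example, not
// verified against a specific osgPPU release).
#include <osg/Texture2D>
#include <osgPPU/Processor.h>
#include <osgPPU/UnitTexture.h>
#include <osgPPU/UnitInOut.h>
#include <osgPPU/UnitOut.h>

osg::ref_ptr<osgPPU::Processor> buildPipeline(osg::Texture2D* externalTex)
{
    // The Processor is the root of the post-processing graph;
    // note that no camera is attached to it here.
    osg::ref_ptr<osgPPU::Processor> processor = new osgPPU::Processor();

    // UnitTexture injects an arbitrary texture (e.g. one updated each
    // frame with MPV's output) as the input of the pipeline.
    osg::ref_ptr<osgPPU::UnitTexture> input = new osgPPU::UnitTexture();
    input->setTexture(externalTex);
    processor->addChild(input.get());

    // An intermediate unit; attach post-processing shaders here as needed.
    osg::ref_ptr<osgPPU::UnitInOut> pass = new osgPPU::UnitInOut();
    input->addChild(pass.get());

    // UnitOut renders the final result to the frame buffer.
    osg::ref_ptr<osgPPU::UnitOut> out = new osgPPU::UnitOut();
    pass->addChild(out.get());

    return processor;
}
```

The Processor node would then be added to the scene graph like any other osg::Group, and refreshing the contents of externalTex each frame (however MPV delivers its frames) drives the whole chain.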
How are you managing the combination of osg and mpv?

cheers

--- Patrick Castonguay <[EMAIL PROTECTED]> wrote on Mon, 10.11.2008:

> From: Patrick Castonguay <[EMAIL PROTECTED]>
> Subject: [osg-users] osgPPU vs RTT
> To: firstname.lastname@example.org
> Date: Monday, 10 November 2008, 20:46
>
> Art,
>
> From what I can understand, it uses cameras, views and viewports. I believe
> mostly from the osgSDLView, but it also uses "xlsignal"? to pass the
> pointers around.
>
> The way MPV works at the high level is that a single window is defined
> (per process). This window makes use of a viewer which possibly has
> multiple viewports. (I guess this is pretty normal...) The views are
> defined in a definition file and the information driving them comes
> through the CIGI protocol. Can I use the viewports and apply the osgPPU
> processors to it (them), or do I have to attach the processor
>
> Patrick Castonguay
> H: 613 435 2235
> C: 613 325 1341
>
> Technology Innovation Management (TIM) Student - Modeling and Simulation
> stream
>
> Carleton University, Ottawa, ON
>
> _______________________________________________
> osg-users mailing list
> email@example.com
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org