Re: [osg-users] Refactoring DatabasePager: Need To Remove string flagging technique

2009-12-02 Thread Dan Small
hi all,

I'd like to jump in on this conversation.  We are currently in the camp of
doing intersection testing against a non-rendered, high-resolution,
non-paged terrain model, and then using a PagedLOD terrain model for viz.
From all the comments, it seems like trying to combine the
IntersectionVisitor and the database pager is a bad idea if you also
want to visualize the same scene graph.  Our application is similar to
Wojciech's, in that we don't want to pay the overhead of reloading the
highest-level LOD every time we do an intersection test.  We are extracting
regular gridded height fields off the terrain data, and then doing lots of
line-of-sight checks between the height-field points and some other point
in the scene.

I have two questions relevant to this thread:

1) Does the task of combining the database pager with the
IntersectionVisitor (while ideally caching the highest LOD tile) get easier
if you're not actually rendering the scenegraph?

2) Again with respect to a non-rendered, PagedLOD terrain, do you see any
problems with running multiple threads where IntersectionVisitor is executed
in those threads?  We currently do this against our own static BSP tree
representation of the scenegraph, and it works fine.

We would very much like to do these analyses on very large terrains, but
even at 64 bits we will very easily run out of RAM when we try to load them.
This makes using an OSGDEM/PagedLOD terrain for the intersection testing the
next logical step.

Thanks,

Dan

On Tue, Nov 24, 2009 at 6:54 AM, Robert Osfield wrote:

> Hi Wojtek,
>
> On Tue, Nov 24, 2009 at 1:33 PM, Wojciech Lewandowski
>  wrote:
> >> When I get on to reviewing the multiple viewpoint issue with
> >> DatabasePager I'll have a think about the consequences of users
> >> caching subgraphs.
> >
> > Thank You. Does it mean I should try to prepare the proposed
> > submission or not?
>
> You can submit the changes; I can always use them as another point
> of information when doing my investigation, even if the code doesn't
> make it into svn/trunk.
>
> Robert.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] compositeViewer newbie question

2008-11-11 Thread Dan Small
Thanks Robert,

We used the second approach you mentioned, put all the views in one
window, and not surprisingly the memory utilization went way down.
Thanks for correcting our GL naivete.

It seems, though, that the second approach also limits us to running
single-threaded; would you concur?  We have an 8-core Dell we are running
this on, and it would be nice to avail ourselves of the multithreading
features if we can.

By the way, it wasn't my intent to imply there was a bug in the OSG, only
in the way that we were using it :)

cheers,

Dan


On Tue, Nov 11, 2008 at 9:27 AM, Robert Osfield <[EMAIL PROTECTED]>wrote:

> Hi Dan,
>
> Creating 9 separate windows without any of them sharing OpenGL objects
> will cause the OSG to create 9 sets of OpenGL objects per object in
> the scene graph.  This is just a fact of life if you use the OSG the
> way you are doing; it isn't a bug.
>
> Thankfully OpenGL/OSG provide the ability to share OpenGL objects
> between contexts; see the sharedContext variable in the
> osg::GraphicsContext::Traits object - use this to pass in the
> previously created context.  The downside of this approach is
> threading issues relating to the creation and destruction order of the
> shared OpenGL objects.  The best way to avoid issues is to run the OSG
> single threaded.
>
> The other alternative, and by far the best solution, is to use a
> single window with 9 viewports - which boils down to all the Cameras
> sharing the same GraphicsWindow.  This will give you the best all-round
> performance and memory utilisation.
>
> Robert.
>
> On Tue, Nov 11, 2008 at 4:07 PM, Dan Small <[EMAIL PROTECTED]> wrote:
> > Hello OSGers,
> >
> > I'm trying to create a security system simulator with 9 cameras visible
> to
> > the user at all times.
> >
> > Eight of these will be static, fixed views of the same terrain.
> >
> > The last one will be a pan tilt zoom view.
> >
> > We are currently using a 150mb 3-D terrain model (IVE format, 5-LOD's,
> > exported from terrex).  The vast majority of this data is texture.
> >
> > In general, we lockdown the LOD to the highest level for each window.
> >
> > There are two things happening that we would like to get more information
> > on.
> >
> > 1) each window view seems to create a full copy of the scenegraph in
> > memory.  with eight Windows on the screen, we take up about 1.5 gigs of
> > RAM.  This number goes down by about 150 MB if we create one less window
> > each time.
> >
> > 2) We seem to be running out of memory on the graphics card.  we get an
> > OpenGL out of memory error, and in some of the windows created toward the
> > end,  the terrain textures go white.  the graphics card has 768
> megabytes,
> > and handles several windows just fine. Our thought was that by using
> > compositeViewer, we could save on the memory necessary to render all of
> > these different views.
> >
> > We are using independently allocated trait and graphicsContext objects.
>  I
> > do notice that the osgcompositeviewer example uses shared graphics
> context
> > and traits.  I suspect that this may be the source of our problem.
> >
> > My question is why do we get such a major increase in memory when
> ostensibly
> > we're using the same model?
> >
> > I will include some of the code implementation at the end for those who
> > would like to see some specifics.
> >
> >
> > Thanks,
> >
> > Dan Small
> >
> > [code listing snipped]

[osg-users] compositeViewer newbie question

2008-11-11 Thread Dan Small
Hello OSGers,

I'm trying to create a security system simulator with 9 cameras visible to
the user at all times.

Eight of these will be static, fixed views of the same terrain.

The last one will be a pan tilt zoom view.

We are currently using a 150 MB 3-D terrain model (IVE format, 5 LODs,
exported from Terrex).  The vast majority of this data is texture.

In general, we lock down the LOD to the highest level for each window.

There are two things happening that we would like to get more information
on.

1) Each window view seems to create a full copy of the scene graph in
memory.  With eight windows on the screen, we take up about 1.5 GB of
RAM.  This number goes down by about 150 MB each time we create one less
window.

2) We seem to be running out of memory on the graphics card.  We get an
OpenGL out-of-memory error, and in some of the windows created toward the
end, the terrain textures go white.  The graphics card has 768 MB,
and handles several windows just fine.  Our thought was that by using
CompositeViewer, we could save on the memory necessary to render all of
these different views.

We are using independently allocated Traits and GraphicsContext objects.  I
do notice that the osgcompositeviewer example uses a shared graphics
context and traits.  I suspect that this may be the source of our problem.

My question is: why do we get such a major increase in memory when we're
ostensibly using the same model?

I will include some of the code implementation at the end for those who
would like to see some specifics.


Thanks,

Dan Small



major components of the Window Object:
boost::shared_ptr getScene() const;
void setScene( boost::shared_ptr x );
void setLighting( bool lighting );
bool getLighting() const;
double getNear() const;
void setNear(double x);
double getFar() const;
void setFar(double x);

virtual umb::Vec4d getClearColor() const;
void setClearColor( const umb::Vec4d &color );

void setFOVY( double fovy );
double getFOVY() const;

void setFOVX( double fovx );
double getFOVX() const;

void setFullScreen( bool fs );
bool getFullScreen() const;

void setUseMouseInteractions( bool is_use );
bool getUseMouseInteractions() const;

void toggleFullScreen();

void setPosrot( const umb::PosRotd& );
umb::PosRotd getPosrot() const;

void setLODScale( double x );
double getLODScale() const;

void setAutoNearFar( bool x );
bool getAutoNearFar() const;

void setWindowPos( const umb::Vec2i &windowPos );
umb::Vec2i getWindowPos() const;

void setWindowSize( const umb::Vec2ui &windowSize );
umb::Vec2ui getWindowSize() const;

ViewerManipulatorControl* getManipulatorControl() const;

void hideMouseCursor( bool hideCursor );

void addCameraNode( osg::Node* node );

void setCenterPoint( const umb::Vec3d& pr );

umb::Vec3d getCenterPoint();

osgViewer::View* getView()
{
return mView.get();
}

using umb::Interpreter::Object::getName;
using umb::Interpreter::Object::setName;

private:

int mViewNumber;
bool mLighting;

osg::ref_ptr<osgViewer::View> mView;
boost::shared_ptr mScene;
umb::Vec4d mClearColor;
boost::shared_ptr mSceneLight;

osg::ref_ptr<osg::Group> mGroup;


size_t mToken;
double mLODScaleCache;

bool mIsFullScreen;
bool mUseMouseInteractions;

uviewer::ViewerManipulatorControl* mManipulatorControl;
usg::ShapeIntersections mPickInformation;

umb::Vec2i mWindowPos;
umb::Vec2ui mWindowSize;

std::vector mReservedKeys;
std::vector mReservedKeysFirst;

std::map mViewpoints;

osg::ref_ptr mColorImage;
osg::ref_ptr mDepthImage;

WindowCapturePostDrawCallback* mCameraPostRenderCB;




// Constructor
uviewer::Window::Window() :
mViewNumber( 0 ),
mLighting( true ),
mView( new osgViewer::View ),
mClearColor( 0, 0, 0, 1 ),
mGroup( new osg::Group() ),
mToken( 0 ),
mLODScaleCache( 1.0 ),
mIsFullScreen( false ),
mUseMouseInteractions( false ),
mManipulatorControl( 0 ),
mPickInformation(),
mWindowPos( umb::Vec2i( 100, 100 ) ),
mWindowSize( umb::Vec2ui( 800, 450 ) )
{
float characterSize = 20.0f;
osg::Vec3 pos( 0.0f, 0.0f, 0.0f );

osg::Geode* textGeode = new osg::Geode;
textGeode->addDrawable( infoText );
mGroup->addChild( textGeode );

mView->setSceneData( mGroup.get() );

mColorImage = oc_color.get();
mDepthImage = oc_depth.get();

mCameraPostRenderCB = new WindowCapturePostDrawCallback( mView.get(),
mColorImage.get(), mDepthImage.get() );
mCameraPostRenderCB->setColorCapture( false );
mCameraPo