Hi All,

I haven't chipped in so far as I haven't really had much to add to the
investigation.  My sense, from my own tests and from looking at others'
over the years, is that there looks to be contention in the drivers
that leads to poor scaling.  Reducing the amount of data and commands
being pushed into each context's OpenGL FIFO does seem to help avoid
the contention, but doesn't seem to be able to remove it entirely.

Curiously, running separate contexts via separate applications has in
the past shown good scalability, so I suspect the driver contention is
something specific to each application.

Along these lines I do wonder if running separate processes for each
context, rather than separate threads, might help avoid the contention.
Unfortunately the OSG's osgViewer design is based around the
lightweight data sharing that threading provides compared to separate
processes, and OpenThreads is geared up for the creation and management
of threads rather than processes.

Perhaps one test we could do is to create multiple processes at a very
high level in the application and then run a separate scene graph and
viewer within each of these processes.  It wouldn't scale well in
terms of CPU performance, as cache coherency would be rather shot to
shreds, but it might just get around the driver contention.
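For anyone who wants to try it, below is a minimal sketch of what such
a test might look like, assuming a POSIX platform where fork() is
available.  The model file ("cow.osgt") and the process count are just
placeholders, not a recommendation:

#include <osgDB/ReadFile>
#include <osgViewer/Viewer>

#include <sys/wait.h>
#include <unistd.h>

#include <cstdlib>

int main(int, char**)
{
    const int numProcesses = 4; // placeholder: one process per context

    for (int i = 0; i < numProcesses; ++i)
    {
        pid_t pid = fork();
        if (pid == 0)
        {
            // Child process: build an entirely separate scene graph and
            // viewer so no OpenGL state or driver-side resources are
            // shared with the other processes.
            osgViewer::Viewer viewer;
            viewer.setUpViewInWindow(50 + i * 100, 50, 640, 480);
            viewer.setSceneData(osgDB::readNodeFile("cow.osgt"));
            return viewer.run();
        }
        else if (pid < 0)
        {
            return EXIT_FAILURE; // fork failed
        }
    }

    // Parent process: wait for all the child viewers to exit.
    for (int i = 0; i < numProcesses; ++i)
    {
        wait(nullptr);
    }
    return EXIT_SUCCESS;
}

If the frame rates of the child viewers scale better than the same
number of threaded contexts within one process, that would point the
finger firmly at per-application contention in the driver.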

If we can confirm driver contention, then rapping at NVidia's and
AMD's doors would be the appropriate thing to do.

Robert.