[osg-users] a bug or feature in the osgb writer?

2012-08-10 Thread John Kelso

Hi all,

I am noticing some odd behavior in the osgb file writer. I will try to
describe it as succinctly as possible. Maybe it's a bug, maybe it's my code.

We are using OSG 3.1.0 on a CentOS 6 Linux system.

A fairly large program creates a points file that includes some shaders and
textures which operate on the points.

If the program writes the file as an ive or osg file, the geometry's primitive type
in the file is POINTS, as specified in the program, and I get no runtime
errors.

If the program writes the file as an osgb file, the geometry's primitive type
in the file is TRIANGLES, or at least that's what I see when I convert the
osgb file to an osg file. At runtime I get a bunch of spewed messages:

  Warning: detected OpenGL error 'invalid operation' at after 
RenderBin::draw(..)

That is, doing nothing but changing the file type from ive to osgb causes
the error.
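
For context, the point setup is basically this (a stripped-down sketch, not the
actual program; the shaders, textures, and real point data are omitted). The only
difference between the two runs is the extension passed to osgDB::writeNodeFile():

#include <osg/Geode>
#include <osg/Geometry>
#include <osgDB/WriteFile>

int main()
{
    // A one-point POINTS drawable standing in for the real point cloud.
    osg::ref_ptr<osg::Vec3Array> v = new osg::Vec3Array;
    v->push_back( osg::Vec3( 0.f, 0.f, 0.f ) );

    osg::ref_ptr<osg::Geometry> geom = new osg::Geometry;
    geom->setVertexArray( v.get() );
    geom->addPrimitiveSet( new osg::DrawArrays( GL_POINTS, 0, 1 ) );

    osg::ref_ptr<osg::Geode> geode = new osg::Geode;
    geode->addDrawable( geom.get() );

    // Same scene graph, two writers; only the extension differs.
    osgDB::writeNodeFile( *geode, "points.ive" );
    osgDB::writeNodeFile( *geode, "points.osgb" );
    return 0;
}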

Extra information:

I diffed the outputs of:
  env OSG_NOTIFY_LEVEL=debug OSG_OPTIMIZER=NONE osgconv points.osgb pointFromOsgb.osg
and
  env OSG_NOTIFY_LEVEL=debug OSG_OPTIMIZER=NONE osgconv points.ive pointFromIve.osg

and the ive run has the line
  Using vertex attribute instead
repeated eight times. I doubt this has anything to do with it, as the ive
  file is the one that works, but you never know...

I can dump more gory details on request, but as an initial email I thought I'd
see if anyone familiar with the osgb writer had any ideas.

Thanks,

John

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] framerate drops drastically when turning child nodes on and off

2012-05-09 Thread John Kelso

Do you see the original problem as well? Does it act the same with a .05-second
delta and no switching, and degrade with a 1-second delta?

Thanks,

John

On Wed, 9 May 2012, Ulrich Hertlein wrote:


When running this on OS X I'm seeing some odd behaviour as well:
- roughly every 7s there is a jump in the draw time (see attached screenshot)
- this happens even when *not* switching between the two groups (no update 
callback
installed; both are visible at the same time)

Cheers,
/ulrich


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] framerate drops drastically when turning child nodes on and off

2012-05-09 Thread John Kelso

Many thanks. This problem sure is beginning to smell like driver and/or
card to me. I'd love to hear from other Linux users too.

John

On Wed, 9 May 2012, Stephan Maximilian Huber wrote:


On 09.05.12 15:26, John Kelso wrote:

Do you see the original problem as well? Does it act the same with a .05-second
delta and no switching, and degrade with a 1-second delta?


I don't see the original problem; the draw time is constant for both
FPS settings, no peaks, nada.

Only if I rotate the cube about 180° does performance decrease to 3-5 fps,
regardless of the specified switch timing.

cheers,
Stephan
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] framerate drops drastically when turning child nodes on and off

2012-05-07 Thread John Kelso

Hi all,

We are creating a simple flipbook animation. There's a node at the top, and
after a certain delta time the child node that's on is turned off, and
the next child is turned on.

If all the child nodes are turned on we get a nice solid 60 FPS. Spin it
around, move it here and there, no change in FPS. So the problem isn't just
too much data to fit on the card.

When we animate with a delta time of .05 seconds we also get a nice smooth
animation at 60 FPS.

But, here's where things get weird, if we use a delta time of 1 second we
get a sudden and very drastic drop in frame rate. The size of the drop seems
to be related to the amount of data getting turned on and off. Same thing if
you manually single step the animation.

more details...

We've tried this using both node masks and a switch node, with no
change. We've seen this with OSG 2.8 and 3.1. We only have Linux boxes with
Nvidia cards, although we've tried many models and many drivers and they all
do it.
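
For reference, an osg::Switch-based version of the flip callback might look roughly
like this (a sketch only, not the code we actually ran; the program below uses node
masks via MyUpdateCallback):

#include <osg/NodeCallback>
#include <osg/NodeVisitor>
#include <osg/Switch>

class SwitchFlipCallback : public osg::NodeCallback
{
public:
    SwitchFlipCallback( double deltaFlipTime = 0.1 )
      : _deltaFlipTime( deltaFlipTime )
    {}
    virtual void operator()( osg::Node* node, osg::NodeVisitor* nv )
    {
        // Same timing logic as MyUpdateCallback below, but driving an osg::Switch.
        unsigned int intTime( nv->getFrameStamp()->getReferenceTime() / _deltaFlipTime );
        osg::Switch* sw( dynamic_cast<osg::Switch*>( node ) );
        if( sw && sw->getNumChildren() > 0 )
            sw->setSingleChildOn( intTime % sw->getNumChildren() );
        traverse( node, nv );
    }
protected:
    virtual ~SwitchFlipCallback() {}
    double _deltaFlipTime;
};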

Try it yourself! Below is a program that demonstrates the problem. Save it to
jumpy.cpp, build it, and try this:

  ./jumpy .05

Hit the S key a couple of times to bring up the stats. The time is mostly GPU. Then exit
and try:

  ./jumpy 1

Hit the S key a couple of times to bring up the stats and notice the U-shaped
dips in the top line of the strip chart. Each dip matches a step, and
also corresponds to a flash of yellow Draw activity, then it's back to GPU.

Has anyone else seen a problem like this? Can you give it a try and let us
know if you also see, or don't see, the problem? Maybe with more tests we can
pinpoint the source of the problem.

Many thanks,

John


#include <osg/NodeCallback>
#include <osg/NodeVisitor>
#include <osg/Geode>
#include <osg/Geometry>
#include <osg/Group>
#include <osgDB/ReadFile>
#include <osgViewer/Viewer>
#include <osgViewer/ViewerEventHandlers>

#include <cmath>    // sin()
#include <cstdlib>  // atof()


class MyUpdateCallback : public osg::NodeCallback
{
public:
MyUpdateCallback( double deltaFlipTime=0.1 )
  : _deltaFlipTime( deltaFlipTime )
{}
virtual void operator()( osg::Node* node, osg::NodeVisitor* nv )
{
unsigned int intTime( nv->getFrameStamp()->getReferenceTime() / _deltaFlipTime );

osg::Group* grp( node->asGroup() );
grp->getChild( 0 )->setNodeMask( intTime & 0x1 );
grp->getChild( 1 )->setNodeMask( (intTime+1) & 0x1 );
traverse( node, nv );
}
protected:
~MyUpdateCallback() {}
double _deltaFlipTime;
};

osg::Node* createFrame( double t )
{
  osg::Geode* geode( new osg::Geode );
  geode->getOrCreateStateSet()->setMode( GL_LIGHTING, osg::StateAttribute::OFF );


  osg::Geometry* geom( new osg::Geometry );
  geom->setDataVariance( osg::Object::STATIC );
  geom->setUseDisplayList( false );
  geom->setUseVertexBufferObjects( true );
  geode->addDrawable( geom );

  unsigned int w( 121 ), h( 121 ), d( 120 );
  unsigned int totalSamples( w*h*d );
  osg::Vec3Array* v( new osg::Vec3Array );
  osg::Vec4Array* c( new osg::Vec4Array );
  v->resize( totalSamples );
  c->resize( totalSamples );
  unsigned int index( 0 );
  unsigned int wIdx, hIdx, dIdx;
  for( wIdx=0; wIdx<w; ++wIdx )
{
  for( hIdx=0; hIdx<h; ++hIdx )
{
  for( dIdx=0; dIdx<d; ++dIdx )
{
  const double r( ((double)wIdx)/(w-1.) );
  const double g( ((double)hIdx)/(h-1.) );
  const double b( ((double)dIdx)/(d-1.) );
  const double x( r * (double)w - (w*.5) );
  const double y( g * (double)h - (h*.5) );
  const double z( b * (double)d - (d*.5) );
	  (*v)[ index ].set( x + sin( (x+y+t)*.8 ), y + sin( (x+y+t) ), z + sin( (x+y+t)*1.2 ) );

  (*c)[ index ].set( r, g, b, 1. );
  ++index;
}
}
}
  geom->setVertexArray( v );
  geom->setColorArray( c );
  geom->setColorBinding( osg::Geometry::BIND_PER_VERTEX );

#if 1
  osg::DrawElementsUInt* deui( new osg::DrawElementsUInt( GL_POINTS, totalSamples ) );

  for( index=0; index<totalSamples; index++ )
    (*deui)[ index ] = index;
  geom->addPrimitiveSet( deui );
#else
  geom->addPrimitiveSet( new osg::DrawArrays( GL_POINTS, 0, totalSamples ) );
#endif

  return( geode );
}

int main( int argc, char** argv )
{
  double deltaFlipTime( 0.1 );
  if( argc > 1 )
deltaFlipTime = atof( argv[ 1 ] );

  osg::Group* root( new osg::Group() );
  root->addChild( createFrame( 0.0 ) );
  root->addChild( createFrame( 0.5 ) );

  root->setUpdateCallback( new MyUpdateCallback( deltaFlipTime ) );

  osgViewer::Viewer viewer;
  viewer.addEventHandler( new osgViewer::StatsHandler );
  viewer.addEventHandler( new osgViewer::ThreadingHandler );
  viewer.setSceneData( root );
  return viewer.run();
}

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Frame syncing over multiple contexts

2012-01-19 Thread John Kelso

Hi all,

We have seen the same behavior as Anna in our immersive system. It has four
screens; each screen has a single graphics context and either one or two
cameras (depending on whether we're running in mono or stereo). The system is driven by
an Nvidia QuadroPlex containing four FX5800 cards, one card per
screen. We're running CentOS Linux.

As a test I tried a configuration with one graphics context containing four
cameras with non-overlapping viewports and in this case the graphics in all
of the viewports appear to be updating at the same time.

As a second test I tried a configuration with four graphics contexts on the
same card, with each graphics context having a single camera. In this case I
could see each window getting updated at a different time.

I also tried setting the traits->swapGroupEnabled value to true but nothing
changed.

So as far as I can tell we are syncing swaps within a graphics context, but
not between graphic contexts. At least that's how I interpret what I'm seeing.

This may or may not be relevant, but we use one osgViewer::Viewer object
and all of the cameras we use are slave cameras of the master camera in the
viewer. Our graphics loop just calls osgViewer::Viewer::frame().

I see some methods in the osgViewer::ViewerBase class that might be relevant
to the problem, but I'm unclear about which ones to set to what value.

Any suggestions?

Many thanks,

John


Hi Anna, Robbert,
I think the buffer swaps on windows are by default not synchronized; a
call to
wglJoinSwapGroupNV(HDC hdc, GLuint group) is needed to make different
windows synchronize.
the osg lib has the code to make the call, just set
 traits->swapGroupEnabled = true;
before
 createGraphicsContext(traits);


Output should look like: (set OSG_NOTIFY_LEVEL=INFO)
GraphicsCostEstimator::calibrate(..)
GraphicsWindowWin32::setSyncToVBlank on
GraphicsWindowWin32::wglJoinSwapGroupNV (0) returned 1
GraphicsWindowWin32::wglBindSwapBarrierNV (0, 0) returned 0

the wglBindSwapBarrierNV fails if you don't have a G-Sync card (hardware
connection card for multiple graphics cards)
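
In context, the traits setup might look something like this (a sketch, with made-up
window dimensions; swapGroupEnabled is the relevant bit):

osg::ref_ptr<osg::GraphicsContext::Traits> traits = new osg::GraphicsContext::Traits;
traits->x = 0;
traits->y = 0;
traits->width = 1280;             // made-up size
traits->height = 1024;
traits->windowDecoration = true;
traits->doubleBuffer = true;
traits->vsync = true;
traits->swapGroupEnabled = true;  // join the NV swap group when the window is created

osg::ref_ptr<osg::GraphicsContext> gc =
    osg::GraphicsContext::createGraphicsContext( traits.get() );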

Still, as Robbert says, a single graphics window is likely to perform
better, and is of course automatically in sync. But I
suppose you don't want fullscreen with stereo mode VERTICAL_SPLIT.

Laurens.

On 1/16/2012 10:17 AM, Robert Osfield wrote:

Hi Anna,

This should work out of the box - the two windows should be rendering
and synchronised.  How are you setting up your viewer and graphics
contexts?

As a general note, if you are using a single graphics card, for best
performance one usually tries to use a single graphics window and have
two cameras or more share this context.  Is there a reason why you
need two separate windows rather than a single window with two views?

Robert.

On 14 January 2012 21:21, Anna Sokol annaso...@gmail.com wrote:

Hi,

I am trying to figure out how to keep multiple graphics contexts in frame
sync.

My operating system is Windows XP SP3.
My graphics is NVidia Quadro NVS 290 with the latest driver 276.42.
I'm using OpenSceneGraph 3.0.1 compiled with Visual C++ 2005 for win32.

I have vsync on in the driver and in Traits, also I am using a
CullDrawThreadPerContext as the threading model.
I have 2 graphics windows with separate contexts showing the same scene with
a left and right view on one display.
I have the scene moving across both windows so that I can see if it's
properly syncing.
It sometimes visibly looks to be a number of frames out of sync (i.e. one of
the rendered contexts is dragging behind).
What could be causing this? In the threads? Or down in the graphics card?
Is there any specific settings I should set to make the rendered contexts
stay in frame sync?


Regards,
Anna Sokol

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Frame syncing over multiple contexts

2012-01-19 Thread John Kelso

OK then! This is getting good!

I tried setting setEndBarrierPosition(BeforeSwapBuffers), setting
setThreadingModel(CullDrawThreadPerContext), and running with four windows,
each with a single camera, on a desktop system with a single graphics card,
and the problem didn't go away.

But should the problem go away in this environment?

We'll get a chance to test the same fix in our immersive environment soon
and I'll report back.

Many thanks,

John

On Thu, 19 Jan 2012, Robert Osfield wrote:


Hi Paul,

On 19 January 2012 18:48, Paul Martz pma...@skew-matrix.com wrote:

Hi Robert -- The default value for ViewerBase::_endBarrierPosition appears
to be AfterSwapBuffers. Does John need to change this to BeforeSwapBuffers
in order to get the behavior you describe above?


Man I'm impressed, I'd forgotten implementing the EndBarrierPosition
and the default. I presume I set the default to AfterSwapBuffers to
avoid the possible performance drop in waiting for syncing the swap
buffers dispatch.

John should indeed change EndBarrierPosition to BeforeSwapBuffers using:

 viewer.setEndBarrierPosition(osgViewer::Viewer::BeforeSwapBuffers);

;-)

Robert.

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] problems building osg-2.9.14, dicom plugin won't compile

2011-05-12 Thread John Kelso

Hi all,

In our CentOS environment I can build osg-2.9.10 just fine with the command:
   cmake \
 -D CMAKE_INSTALL_PREFIX=$DIR/osg-$v/installed \
 -D INVENTOR_INCLUDE_DIR=`coin-config --prefix`/include \
 -D INVENTOR_LIBRARY=`coin-config --prefix`/lib/libCoin.so \
 -D OSG_USE_AGGRESSIVE_WARNINGS=OFF \
 -D DCMTK_DIR=$HEVROOT/external/dcmtk/dcmtk-3.x \
   ../OpenSceneGraph

   make install

When trying to build osg-2.9.14 using the same cmake command (and after
adding a trailing space to include/osg/GraphicsCostEstimator to get rid of a
lot of annoying compiler warnings) I get:

[ 83%] Building CXX object 
src/osgPlugins/dicom/CMakeFiles/osgdb_dicom.dir/ReaderWriterDICOM.o
/usr/local/HEV-beta/external/osg/osg-2.9.14/OpenSceneGraph/include/osg/View:98: 
warning: ‘struct osg::View::Slave’ has virtual functions but non-virtual 
destructor
/usr/local/HEV-beta/external/osg/osg-2.9.14/OpenSceneGraph/src/osgPlugins/dicom/ReaderWriterDICOM.cpp:
 In member function ‘virtual osgDB::ReaderWriter::ReadResult 
ReaderWriterDICOM::readImage(const std::string, const osgDB::Options*) const’:
/usr/local/HEV-beta/external/osg/osg-2.9.14/OpenSceneGraph/src/osgPlugins/dicom/ReaderWriterDICOM.cpp:287:
 error: expected `)' before ‘{’ token
/usr/local/HEV-beta/external/osg/osg-2.9.14/OpenSceneGraph/src/osgPlugins/dicom/ReaderWriterDICOM.cpp:298:
 error: ‘Images’ was not declared in this scope
/usr/local/HEV-beta/external/osg/osg-2.9.14/OpenSceneGraph/src/osgPlugins/dicom/ReaderWriterDICOM.cpp:298:
 error: expected `;' before ‘images’
/usr/local/HEV-beta/external/osg/osg-2.9.14/OpenSceneGraph/src/osgPlugins/dicom/ReaderWriterDICOM.cpp:304:
 error: ‘images’ was not declared in this scope
/usr/local/HEV-beta/external/osg/osg-2.9.14/OpenSceneGraph/src/osgPlugins/dicom/ReaderWriterDICOM.cpp:308:
 error: ‘images’ was not declared in this scope
/usr/local/HEV-beta/external/osg/osg-2.9.14/OpenSceneGraph/src/osgPlugins/dicom/ReaderWriterDICOM.cpp:310:
 error: ‘images’ was not declared in this scope
/usr/local/HEV-beta/external/osg/osg-2.9.14/OpenSceneGraph/src/osgPlugins/dicom/ReaderWriterDICOM.cpp:316:
 error: ‘Images’ is not a class or namespace
/usr/local/HEV-beta/external/osg/osg-2.9.14/OpenSceneGraph/src/osgPlugins/dicom/ReaderWriterDICOM.cpp:316:
 error: expected `;' before ‘itr’
/usr/local/HEV-beta/external/osg/osg-2.9.14/OpenSceneGraph/src/osgPlugins/dicom/ReaderWriterDICOM.cpp:317:
 error: name lookup of ‘itr’ changed for new ISO ‘for’ scoping
/usr/local/HEV-beta/external/osg/osg-2.9.14/OpenSceneGraph/src/osgPlugins/dicom/ReaderWriterDICOM.cpp:299:
 error:   using obsolete binding at ‘itr’
/usr/local/HEV-beta/external/osg/osg-2.9.14/OpenSceneGraph/src/osgPlugins/dicom/ReaderWriterDICOM.cpp:317:
 error: ‘images’ was not declared in this scope
/usr/local/HEV-beta/external/osg/osg-2.9.14/OpenSceneGraph/src/osgPlugins/dicom/ReaderWriterDICOM.cpp:320:
 error: ‘struct std::basic_string<char, std::char_traits<char>, std::allocator<char> >’ has no member named ‘get’
make[2]: *** 
[src/osgPlugins/dicom/CMakeFiles/osgdb_dicom.dir/ReaderWriterDICOM.o] Error 1
make[1]: *** [src/osgPlugins/dicom/CMakeFiles/osgdb_dicom.dir/all] Error 2
make: *** [all] Error 2

I'm using dcmtk-3.6 in each case.

Any ideas what I might be doing wrong?

I'm happy to include extra details, but at this point I'm not sure what's 
relevant to the problem.

Thanks,

John
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] problems building osg-2.9.14, dicom plugin won't compile

2011-05-12 Thread John Kelso

Hi Robert,

Please see below...

On Thu, 12 May 2011, Robert Osfield wrote:


Hi John,

Is your version of CentOS an old or recent one?


As far as I know it's a fairly new one. If I run cat /proc/version I get this:

Linux version 2.6.18-238.9.1.el5 (mockbu...@builder10.centos.org) (gcc
version 4.1.2 20080704 (Red Hat 4.1.2-50)) #1 SMP Tue Apr 12 18:10:13 EDT
2011



2011/5/12 John Kelso ke...@nist.gov:

When trying to build osg-2.9.14 using the same cmake command (and after
adding a trailing space to include/osg/GraphicsCostEstimator to get rid of a
lot of annoying compiler warnings) I get:


Could you pass on the warnings?  I'm not getting any warnings on my
Kubuntu system.  Also, could you explain exactly
why and where you added a trailing space to GraphicsCostEstimator,
ideally posting the modified file as well.


I just finished rebuilding it with warnings enabled; the compressed
typescript and CMakeCache.txt files are attached.  I hope that's not a
no-no on this list.

All I did to GraphicsCostEstimator was to add a newline at the end of the
last line.  My mistake- it wasn't a space I added but a newline.


[ 83%] Building CXX object
src/osgPlugins/dicom/CMakeFiles/osgdb_dicom.dir/ReaderWriterDICOM.o
/usr/local/HEV-beta/external/osg/osg-2.9.14/OpenSceneGraph/include/osg/View:98:
warning: ‘struct osg::View::Slave’ has virtual functions but non-virtual
destructor
/usr/local/HEV-beta/external/osg/osg-2.9.14/OpenSceneGraph/src/osgPlugins/dicom/ReaderWriterDICOM.cpp:
In member function ‘virtual osgDB::ReaderWriter::ReadResult
ReaderWriterDICOM::readImage(const std::string, const osgDB::Options*)
const’:
/usr/local/HEV-beta/external/osg/osg-2.9.14/OpenSceneGraph/src/osgPlugins/dicom/ReaderWriterDICOM.cp


Curious.  I'm building against DCMTK 3.6.1 and OSG svn/trunk without any problems.


You got dcmtk 3.6.1? The latest I saw to download was 3.6.0.  It's probably
not important... I hope.

I did need to use a newer version of cmake than came with CentOS.  I
installed a private copy of cmake 2.8.4 which I used to build OSG.  As a
test I used this same version to build 2.9.10 and it worked fine, so I don't
think cmake's an issue.



What version of g++ do you have?  On my system I have:


g++ --version

g++ (Ubuntu/Linaro 4.5.2-8ubuntu4) 4.5.2


gcc-version: gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-50)

Thanks again,

John

typescript.gz
Description: GNU Zip compressed data


CMakeCache.txt.gz
Description: GNU Zip compressed data
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] ArgumentParser::isString bug?

2010-12-27 Thread John Kelso

Hi,

I've decided to give the osg ArgumentParser class a try.

Is the method:

bool ArgumentParser::isString(const char* str)
{
if (!str) return false;
return true;
//return !isOption(str);
}

doing as advertised?  The comment says:
/** Return true if string is non-NULL and not an option in the form
  * -option or --option. */
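
For example, a tiny hypothetical test (not from the OSG distribution) shows the mismatch:

#include <osg/ArgumentParser>
#include <iostream>

int main( int argc, char** argv )
{
    osg::ArgumentParser arguments( &argc, argv );
    // With the body above, both lines print 1, even though "--option" is
    // exactly the form the comment says should be rejected.
    std::cout << arguments.isString( "--option" ) << std::endl;
    std::cout << arguments.isOption( "--option" ) << std::endl;
    return 0;
}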

Thanks,

John
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] OSG problem with multiple cards

2010-12-17 Thread John Kelso

Hi Robert,

Based on your question I went back and did some grepping through the DGL
codebase and I see that DGL does NOT use SceneView or any other OSG code.
It simply uses Performer.  I was mistaken when I said earlier that DGL uses
SceneView.

There is an OSG layer that can be used with DGL to allow OSG programs to work
with DGL, and it is this layer that uses SceneView.  It does not use the
OSG Viewer or Camera classes.

This might be helpful: While a non-OSG DGL program does not show the
slowdown when using multiple displays, an OSG program using DGL DOES show a
slowdown similar to what we see with the pure OSG program.

To summarize:

1) DGL, all OpenGL no OSG used, uses only Producer, no slowdown

2) DGL with OSG, uses SceneView and Producer (no Viewers or Cameras),
   slowdown observed

3) pure OSG, uses Viewers, Cameras (and eventually SceneView if I read the
   OSG code correctly), slowdown observed

I apologize that my original posting was incorrect, and I hope it didn't
cause anyone to go down the wrong rabbit hole.

Thanks,

John


On Fri, 17 Dec 2010, Robert Osfield wrote:


Hi John, Steve, et. al,

On Tue, Dec 14, 2010 at 7:32 PM, John Kelso ke...@nist.gov wrote:

DGL has its own threading and draw code.  It uses OpenThreads
for threading. The OpenGL calls generated by draw() are sent to the
defined windows using OSG's SceneView class and Producer.  So, it's
not completely OSG-free, but as its threading works, perhaps this
indicates that the OSG problem is not in SceneView.


Am I reading this correctly?  You are using Producer and SceneView,
and only custom OpenGL calls for the rendering?

I wouldn't expect any performance issues due to straight OpenGL
dispatch or SceneView; the scene graph's job is to make sure there aren't
issues, and it will typically far outperform a naive OpenGL program.

The most likely culprit would be at the high level - creating graphics
windows and synchronizing the threads and swap buffers.  This
leads me to ask: could the difference be Producer vs
osgViewer?

Both are pretty similar in window setup, and the threading when running
CullDrawThreadPerContext is very similar to that of Producer's
multi-thread approach.  Events are handled a little differently, but
this won't be a factor for performance.  The only real difference I
can recall is that osgViewer uses a barrier at the end of dispatch
and before the call to swap buffers, while Producer just dispatches swap
buffers independently and then joins a barrier afterwards.  Is there
any chance that this is the issue? It'd be easy to move the barrier.

Unfortunately I've got my head down looking at paging issues right now
so can't head off to start testing multi-card setup.

Robert.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] OSG problem with multiple cards

2010-12-17 Thread John Kelso

Oops!!!

In the message below, "It simply uses Performer." should have read "It simply uses
Producer."

Performer is not involved with this problem in any way. 8^)

John


On Fri, 17 Dec 2010, John Kelso wrote:


Hi Robert,

Based on your question I went back and did some grepping through the DGL
codebase and I see that DGL does NOT use SceneView or any other OSG code.
It simply uses Performer.  I was mistaken when I said earlier that DGL uses
SceneView.

There is an OSG layer that can be used with DGL to allow OSG programs to work
with DGL, and it is this layer that uses SceneView.  It does not use the
OSG Viewer or Camera classes.

This might be helpful: While a non-OSG DGL program does not show the
slowdown when using multiple displays, an OSG program using DGL DOES show a
slowdown similar to what we see with the pure OSG program.

To summarize:

1) DGL, all OpenGL no OSG used, uses only Producer, no slowdown

2) DGL with OSG, uses SceneView and Producer (no Viewers or Cameras),
   slowdown observed

3) pure OSG, uses Viewers, Cameras (and eventually SceneView if I read the
   OSG code correctly), slowdown observed

I apologize that my original posting was incorrect, and I hope it didn't
cause anyone to go down the wrong rabbit hole.

Thanks,

John


On Fri, 17 Dec 2010, Robert Osfield wrote:


Hi John, Steve, et. al,

On Tue, Dec 14, 2010 at 7:32 PM, John Kelso ke...@nist.gov wrote:

DGL has its own threading and draw code.  It uses OpenThreads
for threading. The OpenGL calls generated by draw() are sent to the
defined windows using OSG's SceneView class and Producer.  So, it's
not completely OSG-free, but as its threading works, perhaps this
indicates that the OSG problem is not in SceneView.


Am I reading this correctly?  You are using Producer and SceneView,
and only custom OpenGL calls for the rendering?

I wouldn't expect any performance issues due to straight OpenGL
dispatch or SceneView; the scene graph's job is to make sure there aren't
issues, and it will typically far outperform a naive OpenGL program.

The most likely culprit would be at the high level - creating graphics
windows and synchronizing the threads and swap buffers.  This
leads me to ask: could the difference be Producer vs
osgViewer?

Both are pretty similar in window setup, and the threading when running
CullDrawThreadPerContext is very similar to that of Producer's
multi-thread approach.  Events are handled a little differently, but
this won't be a factor for performance.  The only real difference I
can recall is that osgViewer uses a barrier at the end of dispatch
and before the call to swap buffers, while Producer just dispatches swap
buffers independently and then joins a barrier afterwards.  Is there
any chance that this is the issue? It'd be easy to move the barrier.

Unfortunately I've got my head down looking at paging issues right now
so can't head off to start testing multi-card setup.

Robert.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] OSG problem with multiple cards

2010-12-14 Thread John Kelso

Hi all,

As Tim and Robert requested, attached is the OSG program I've been using to show
the problem with threading.  It's called multiWindows.cpp

Tim, I'd be very interested if you could run it and see what happens.
Anybody else out there have a system with more than one graphics card that
can give it a try?

To run it, specify 1 or more screen numbers, then a file to load.

For example:

  multiWindows 0 1 2 3 bigHonkingModelFile.ive

will create windows on 4 displays (:0.0 :0.1 :0.2 :0.3 -or- :0.0 :1.0 :2.0
:3.0 - look at the #if in the source for how to choose which one) and also
set processor affinity to processors 0 1 2 3.

As Steve mentioned, we have been using a pretty big file to show the drop in
frame rate.  Steve's working on getting it onto an ftp server.

As for the non-OSG program that doesn't show the problem, it uses a package
called DGL, which is the OpenGL component of the DIVERSE package.  In
brief, DGL lets you run an OpenGL program as a callback.  The program I
wrote was the basic OpenGL helix program, but modified to spew enough
triangles to give a frame rate that was less than 60hz on our system.
I always got the same frame rate no matter if I ran on 1, 2, 3, or 4 cards.

-- I think the following might be important ---

DGL has its own threading and draw code.  It uses OpenThreads for threading.
The OpenGL calls generated by draw() are sent to the defined windows using
OSG's SceneView class and Producer.  So, it's not completely OSG-free, but
as its threading works, perhaps this indicates that the OSG problem is not
in SceneView.

If anyone wants to install DGL I can send them details on how to get it and
install it, and the modified helix test file. The DIVERSE home page is
http://diverse.sourceforge.net/diverse/

I hope this is helpful.

Many thanks,

John

On Tue, 14 Dec 2010, Steve Satterfield wrote:


Hi Tim,

I have pulled your questions out of the body of the text and am responding to
them up front.


Are you using Linux?


Yes, we are running CentOS and our sys admin keeps it very much up to date.


Could you share the source of this program?


  Yes, we can post the source code. John Kelso did the actual work and
  he will follow up with the code and details in a separate
  message. There are actually two test programs.

  The first test is a straight OSG only test. It is the primary code
  used for most of the tests. It reads any OSG loadable file. We have
  an .ive test case. I need to make it available via FTP. Details will
  follow.

  The second test does not use OSG and does the graphics directly with
  OpenGL. It does require some additional software to download and install.
  John will provide details.


It is paradoxical. That it works at all is due to the fact that, with
vsync enabled, all GPU activity is buffered up until after the
next SwapBuffers call.


  I am not entirely clear what you mean in this statement. I will say
  that for the majority of our testing, we have the Nvidia environment
  variable __GL_SYNC_TO_VBLANK set to 0 so the swap is not tied to
  vblank. I believe this is specific to the Nvidia driver. For normal
  production it's set to 1. The X/N performance is observed in both
  cases.



I put together a multicard system specifically to look at these
issues, and I too am very interested in getting it to work.


  Does this mean you are seeing performance problems like I have
  described on your system? We would certainly be interested in
  hearing how our test program(s) run on your multi-card system.

  I will add that we had Nvidia contacts interested in determining whether
  the problem is related to Nvidia drivers. They got the X/N
  performance on a non-Nvidia machine, and that's what prompted me to
  build a dual ATI based machine as I reported in the original
  message. It's always useful to demonstrate a problem on multiple
  platforms.


-Steve






On Mon, 13 Dec 2010, Tim Moore wrote:




On Mon, Dec 13, 2010 at 9:51 PM, Steve Satterfield 
st...@nist.gov wrote:

Hi,

I would like to update the discussion we started back in October
regarding an apparent problem scaling OSG to multiple windows on
multiple graphics cards. For full details on the previous discussion
search the email archive for problem scaling to multiple cards.

Summary of the problem:

  We have a multi-threaded OSG application using version 2.8.3.  (We also
  ran the same tests using version 2.9.9 and got the same results.)  We
  have a system with four Nvidia FX5800 cards (an immersive cave like
  config) and 8 cores with 32 GB memory.

Are you using Linux?
  Since the application is parallel drawing to independent cards using
  different cores, we expect the frame rate to be independent of the number
  of cards in use.  However, frame rate is actually X/N where N is the
  number of cards being used.

  For example if the frame rate is 100 using one card, the frame rate
  drops to 50 for 2 cards and 25 for 4 cards in use.  If the
  application worked

[osg-users] non-virtual thunk errors with 2.9.9, 2.9.10

2010-12-13 Thread John Kelso

Hi,

When I try to link my executables using osg-2.9.9 or osg-2.9.10 I get messages
like:

libiris.so: undefined reference to `non-virtual thunk to 
osgViewer::Viewer::setSceneData(osg::Node*)'

I'm using CentOS and gcc 4.1.2.

I googled a bit and found this problem mentioned when using one optimization
level for building a library and a different optimization level when
compiling programs that link with the library.

The executable, and the programs comprising libiris.so, are compiled with
whatever's the default optimization.  I built osg-2.9.9 and osg-2.9.10 just
before building libiris.so, so I'm using the same compiler, system libraries
and so forth.  I don't get this error on a freshly rebuilt osg-2.8.3.

I also didn't specify any optimization level when building OSG.

osgviewer builds and runs.

Any ideas about how to start tracking this problem down?

More gory details on request.

Many thanks,

John




___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] non-virtual thunk errors with 2.9.9, 2.9.10

2010-12-13 Thread John Kelso

Hmmm...  I can't seem to find any .ccache directory or file...

I *thought* I recompiled and reinstalled everything in OSG and my software,
but I'll do so one more time.

Any other ideas out there?

Thanks,

John

On Mon, 13 Dec 2010, Tim Moore wrote:


I saw a similar error when upgrading  from Fedora 13 to Fedora 14, which uses a 
different version of gcc. In my case the solution was to remove the contents of 
my .ccache directory. More generally, I think this problem comes from mixing 
object files produced by different g++ versions.

Tim

On Mon, Dec 13, 2010 at 10:36 PM, John Kelso 
ke...@nist.gov wrote:
Hi,

When I try to link my executables using osg-2.9.9 or osg-2.9.10 I get messages
like:

libiris.so: undefined reference to `non-virtual thunk to 
osgViewer::Viewer::setSceneData(osg::Node*)'

I'm using CentOS and gcc 4.1.2.

I googled a bit and found this problem mentioned when using one optimization
level for building a library and a different optimization level when
compiling programs that link with the library.

The executable, and the programs comprising libiris.so, are compiled with
whatever's the default optimization.  I built osg-2.9.9 and osg-2.9.10 just
before building libiris.so, so I'm using the same compiler, system libraries
and so forth.  I don't get this error on a freshly rebuilt osg-2.8.3.

I also didn't specify any optimization level when building OSG.

osgviewer builds and runs.

Any ideas about how to start tracking this problem down?

More gory details on request.

Many thanks,

John




___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org



___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] OSG seems to have a problem scaling to multiple windows on multiple graphics cards

2010-10-21 Thread John Kelso

Hi,

Just a recap:

We aren't using CompositeViewer- just a Viewer with 4 slave cameras each in
its own window.  Each window is opened on a separate graphics card.  We're
running X11 which has no issues with graphics card affinity.  We aren't
using mosaic mode, twinview, xinerama or anything else fancy- just a simple
mapping of one X display to one graphics card.

We can run four copies of our program at the same time, each displaying to a
separate card and all four can run as fast as a single copy.

If I try running one copy of the program displaying to multiple windows the
frame rate drops as the number of windows increases.  If the frame rate for
one window is N, then opening 2 windows gives roughly N/2 FPS, three windows
gives N/3 and so forth.  I also get the same results if I open multiple
windows on just one card.  I can see N draw threads by running top.

This is true if I use just one copy of the scenegraph, set in the viewer, or
four separate copies of the scenegraph, each loaded in a single slave
camera.

I get similar results for setting serial draw to on or off.  (Actually
setting it to on, the default, is a bit faster.)  I'm using the
CullDrawThreadPerContext threading model.

At this point I've run out of OSG things to try, but am open to suggestions.
At this point I'm assuming it's either an Nvidia driver bug or an OSG bug.
(Latest Nvidia driver, OSG 2.8.3)

Thanks,

John

On Wed, 20 Oct 2010, Paul Martz wrote:


On 10/20/2010 1:44 AM, Serge Lages wrote:

Hi,

Here we had a setup with 2 NVidia cards and 4 screens on Linux (2 twinviews), on
the application side we used a CompositeViewer with 2 views and 4 cameras (2
contexts), and we had a solid 60fps without problems :


I think John can also hit 60fps, if he uses a small enough data set, so the
framerate itself is irrelevant. The question is: Do the draw traversals occur
simultaneously, and do they take the same time in the multi-window case as they
do in the single-window case?

In other words, if your data set running single windowed has a combined cull and
draw that just barely fits in 16ms and therefore runs at 60hz, how does that
software perform in the multi-window case? For John, that case drops to 30fps or
15fps, because the combined cull-draw for each window quadruples.



___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] OSG seems to have a problem scaling to multiple windows on multiple graphics cards

2010-10-21 Thread John Kelso

Hi,

I will try to build 2.9.9 with both mesa and the debugger- I've never done
either, so it'll be a learning experience.

I will be at Viz next week, and have a side panic project to work on too, so
I might not get this done until early November.

Thanks,

John

On Thu, 21 Oct 2010, Robert Osfield wrote:


Hi John,

On Thu, Oct 21, 2010 at 5:35 PM, John Kelso ke...@nist.gov wrote:

At this point I've run out of OSG things to try, but am open to suggestions.
At this point I'm assuming it's either an Nvidia driver bug or an OSG bug.
(Latest Nvidia driver, OSG 2.8.3)



From the sound of the hardware and software setup you have, you
should get the scaling that one would expect from such a setup.  Clearly
the hardware is capable of doing it - running the separate apps shows
this.  This does leave the OSG or the NVidia driver.

The OSG has scaled just fine in the past on other multi-graphics card
systems, and the design and implementation should lead to good
scaling.  Personally I'd suspect the NVidia driver is the problem, but
we can't rule out the OSG having some bottleneck that we've introduced
inadvertently.

One way to investigate whether the OSG is the problem would be to run
the OSG against a dummy graphics context that just eats all the
OpenGL calls.  Is it possible to use Mesa in this way?  Or perhaps
just write a non-GL library that implements all the function entry
points but then does a no-op or just logs the timings.

Another route to take is to drop in some ATI cards and see how things
scale.  Others on this thread have indicated they have seen better
scaling with ATI cards.   Doing benchmarks of ATI vs NVidia showing
that the NVidia drivers suck really badly for multi-context,
multi-threaded apps should get their attention.

Another approach might be to look to NVidia for an OpenGL example that
claims to scale well, then go test it.  Perhaps a tool like Equalizer
has multi-threaded, multi-context support that could serve as testbed.

Robert.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] OSG seems to have a problem scaling to multiple windows on multiple graphics cards

2010-10-19 Thread John Kelso

Hi,

We have a pretty simple setup.  4 graphics cards, each is its own X11
display, as in :0.0, :0.1, :0.2, :0.3.  No Twinview or Xinerama.

We've found out that our hardware doesn't support mosaic mode.  We installed
the latest Nvidia driver and it made no difference.  We're really sort of
completely stumped at this point.

Can anyone think of any test we can try to determine the cause of the problem?

Is anyone else on the list using a similar setup?  If so, have you seen this 
problem?

Thanks,

John

On Mon, 11 Oct 2010, J.P. Delport wrote:


Hi,

On 07/10/10 23:39, John Kelso wrote:

Hi,

Many thanks for your speedy reply.

We were considering trying mosaic mode if we couldn't come up with
something
that would fix the problem with our current display configuration.

Switching to mosaic mode will require a good bit of code rewrite, but if
that's the way to go I guess it's worth it in the long run.

I'll look into WGL_NV_swap_group extension too.

Any other ideas from the group?


How is the nvidia driver set up? Single X screen, or screen per card?
Twinview? Xinerama?

We've found strange driver issues with X screens spanning multiple cards.

jp



Thanks,

John

On Thu, 7 Oct 2010, Wojciech Lewandowski wrote:


Hi John,

This is odd but it sounds a bit like the swap buffers of the windows are somehow
waiting for each other. I believe that the WGL_NV_swap_group extension is not
used by OSG. This extension could possibly help you there.

But I could be wrong on above. It is not really my main point I wanted to
mention. Instead I wanted to suggest you check SLI mosaic mode. We
have done
some experiments on 4 channels on Linux / Nvidia QuadroPlex D2 in the
past.
At first we tried to go down the same path as you describe. But later we
read somewhere that the fastest method is to use one window filling the whole
desktop and split this window into 4 quarter-screen slave views. Each slave view
can be positioned so that it covers one monitor output. Such a 4-monitor
setup is possible with QP D2 drivers in SLI mosaic mode.

Using Producer config files one may easily create a .cfg that can be
passed on the command line to osgviewer to set up 4 channel slaves on a single
window. The best thing about using one window is that all four views use the same
context, so GL resources are shared and all four are swapped at once with a
single SwapBuffers call.

In our project we ended up with 4 channel rendering using SLI mosaic
and we
were pleasantly surprised at how fast it performed in comparison to
separate GL contexts on 4 windows. You may want to check SLI mosaic if you
haven't
done this before

Hope this helps,
Wojtek Lewandowski
--
From: John Kelso ke...@nist.gov
Sent: Thursday, October 07, 2010 9:35 PM
To: osg-users@lists.openscenegraph.org
Subject: [osg-users] OSG seems to have a problem scaling to multiple
windows
on multiple graphics cards


Hi all,

Our immersive system is a single host computer with 8 cores and 4
graphics
cards running Linux. (1) We are using OSG 2.8.3.

We are having a heck of a hard time getting OSG to take advantage of
our multiple graphics cards. Help!

Here's what we did:

If we load a fairly large model into our test program we can get a frame
rate of about 150 FPS when displaying in a single window. (2) We are
running single-threaded, and assign to a specific core.

When we background this and run a second copy of the program to another
graphics card and core then both programs run at 150 FPS. Same thing for
running three and four copies at once.

That is, four processes using four graphics cards on four cores run just
as
fast as a single process. All four cores are at near 100% CPU
utilization
according to top. So far, so good.

Now we modify the program to load the model and create multiple
windows on
multiple cards. There's one window per card and each uses a different
core. (3)

The threading model is CullThreadPerCameraDrawThreadPerContext, the
default chosen by OSG. The environment variable
OSG_SERIALIZE_DRAW_DISPATCH
is not set, so it defaults to ON, which we think means draw in serial.

If we draw to four windows on four different cards we get about 36 FPS.
There are four different cores being used, and each has about 25% of the
CPU. This probably makes sense as the draws are in serial. 150
FPS/4
is about 36 FPS. As expected, we get nearly identical results if we
create
four windows on a single card using four different cores.

If we set OSG_SERIALIZE_DRAW_DISPATCH=OFF we hope to see better
performance,
but with four windows on four graphics cards we only get 16 FPS! There
are
four different cores bring used, one at about 82%, and the other
three at
75%, but what are they doing? Again, we get nearly identical results if
using four windows on a single card.

So

How can we get OSG to draw to four windows on four cards in one
process as
fast as running four separate processes?

Any pointers or suggestions are welcome.

Thanks,

John


1 - Our immersive

Re: [osg-users] OSG seems to have a problem scaling to multiple windows on multiple graphics cards

2010-10-19 Thread John Kelso

Well, at least we don't feel so lonely anymore!

But seriously, I'm sort of at a loss about what to try next.  If worse comes
to worst maybe I could try to code up an equivalent SceniX test program to
see if it has the same problem, but I really hope to not have to go there...

Thanks,

John

On Tue, 19 Oct 2010, Martins Innus wrote:


John,

Can't help much except to say I saw the same thing.  I had access
to a system when the GTX 470 cards came out that had 2 cards and 4
displays hooked up.  We could never get one instance of our software
spanned across the 4 displays to run as fast as 4 separate instances.
Tried twinview, xinerama, separate x-screens.  I don't have access to
the system anymore, but at the time I chalked it up to bad drivers since
the cards had just come out.  I can't remember if the frame drop was as
dramatic, but it was certainly there.

Martins

On 10/19/10 3:11 PM, John Kelso wrote:

Hi,

We have a pretty simple setup.  4 graphics cards, each is its own X11
display, as in :0.0, :0.1, :0.2, :0.3.  No Twinview or Xinerama.

We've found out that our hardware doesn't support mosaic mode.  We
installed
the latest Nvidia driver and it made no difference.  We're really sort of
completely stumped at this point.

Can anyone think of any test we can try to determine the cause of the
problem?

Is anyone else on the list using a similar setup?  If so, have you
seen this problem?

Thanks,

John

On Mon, 11 Oct 2010, J.P. Delport wrote:


Hi,

On 07/10/10 23:39, John Kelso wrote:

Hi,

Many thanks for your speedy reply.

We were considering trying mosaic mode if we couldn't come up with
something
that would fix the problem with our current display configuration.

Switching to mosaic mode will require a good bit of code rewrite,
but if
that's the way to go I guess it's worth it in the long run.

I'll look into WGL_NV_swap_group extension too.

Any other ideas from the group?


How is the nvidia driver set up? Single X screen, or screen per card?
Twinview? Xinerama?

We've found strange driver issues with X screens spanning multiple
cards.

jp



Thanks,

John

On Thu, 7 Oct 2010, Wojciech Lewandowski wrote:


Hi John,

This is odd but it sounds a bit like the swap buffers of the windows are
somehow waiting for each other. I believe that the WGL_NV_swap_group extension
is not used by OSG. This extension could possibly help you there.

But I could be wrong on above. It is not really my main point I
wanted to
mention. Instead I wanted to suggest you check SLI mosaic mode. We
have done
some experiments on 4 channels on Linux / Nvidia QuadroPlex D2 in the
past.
At first we tried to go down the same path as you describe. But later we
read somewhere that the fastest method is to use one window filling the whole
desktop and split this window into 4 quarter-screen slave views. Each slave
view can be positioned so that it covers one monitor output. Such a
4-monitor setup is possible with QP D2 drivers in SLI mosaic mode.

Using Producer config files one may easily create a .cfg that can be
passed on the command line to osgviewer to set up 4 channel slaves on a
single window. The best thing about using one window is that all four views
use the same context, so GL resources are shared and all four are swapped
at once with a single SwapBuffers call.

In our project we ended up with 4 channel rendering using SLI mosaic
and we
were pleasantly surprised at how fast it performed in comparison to
separate GL contexts on 4 windows. You may want to check SLI mosaic if you
haven't
done this before

Hope this helps,
Wojtek Lewandowski
--
From: John Kelso ke...@nist.gov
Sent: Thursday, October 07, 2010 9:35 PM
To: osg-users@lists.openscenegraph.org
Subject: [osg-users] OSG seems to have a problem scaling to multiple
windows
on multiple graphics cards


Hi all,

Our immersive system is a single host computer with 8 cores and 4
graphics
cards running Linux. (1) We are using OSG 2.8.3.

We are having a heck of a hard time getting OSG to take advantage of
our multiple graphics cards. Help!

Here's what we did:

If we load a fairly large model into our test program we can get a
frame
rate of about 150 FPS when displaying in a single window. (2) We are
running single-threaded, and assign to a specific core.

When we background this and run a second copy of the program to
another
graphics card and core then both programs run at 150 FPS. Same
thing for
running three and four copies at once.

That is, four processes using four graphics cards on four cores
run just
as
fast as a single process. All four cores are at near 100% CPU
utilization
according to top. So far, so good.

Now we modify the program to load the model and create multiple
windows on
multiple cards. There's one window per card and each uses a different
core. (3)

The threading model is CullThreadPerCameraDrawThreadPerContext, the
default chosen by OSG. The environment variable
OSG_SERIALIZE_DRAW_DISPATCH
is not set, so it defaults to ON, which we

Re: [osg-users] OSG seems to have a problem scaling to multiple windows on multiple graphics cards

2010-10-08 Thread John Kelso

Hi Robert,

We normally run with vsync on, but for these tests we turned it off so we
could observe frame rates greater than 96 FPS.

We ran some sanity tests, with vsync on, CullDrawThreadPerContext threading
and both with and without OSG_SERIALIZE_DRAW_DISPATCH=OFF, but the results
were largely the same.

Our next step will be to try mosaic mode as suggested by both Wojtek and
some folks from Nvidia.  We'll create one big window that covers all four
cards and open 4 viewports on it, one viewport per card, and see what the
performance is.
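
Roughly, the plan looks like this (a sketch only; the sizes are placeholders and the
per-slave view/projection offsets are omitted):

#include <osg/Camera>
#include <osg/GraphicsContext>
#include <osg/Viewport>
#include <osgViewer/Viewer>

void addFourViewports( osgViewer::Viewer& viewer )
{
    // One context spanning the whole (mosaic) desktop...
    osg::ref_ptr<osg::GraphicsContext::Traits> traits = new osg::GraphicsContext::Traits;
    traits->x = 0;
    traits->y = 0;
    traits->width = 4096;   // placeholder combined width
    traits->height = 1024;  // placeholder height
    traits->windowDecoration = false;
    traits->doubleBuffer = true;
    osg::ref_ptr<osg::GraphicsContext> gc =
        osg::GraphicsContext::createGraphicsContext( traits.get() );

    // ...with four slave cameras, one quarter-width viewport each, so all four
    // views share one context and a single SwapBuffers.
    for( unsigned int i = 0; i < 4; ++i )
    {
        osg::ref_ptr<osg::Camera> camera = new osg::Camera;
        camera->setGraphicsContext( gc.get() );
        camera->setViewport( new osg::Viewport( i * traits->width / 4, 0,
                                                traits->width / 4, traits->height ) );
        camera->setDrawBuffer( GL_BACK );
        camera->setReadBuffer( GL_BACK );
        viewer.addSlave( camera.get() );
    }
}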

Thanks,

John

On Fri, 8 Oct 2010, Robert Osfield wrote:


Hi John,

I haven't run with multiple cards for a little while, but when I've
done it in the past I certainly didn't have the scaling problems you
are seeing.  Like Wojtek I'd suspect the OpenGL driver is trying to
serialize the swap buffers even though the OSG is attempting to run
them from separate threads.

As a sanity test try running the app with CullDrawThreadPerContext,
this will only require 5 threads, but since you are getting 150fps
frame rate to begin with not separating Cull and Draw into different
threads shouldn't affect performance too much.  Using fewer threads on
the OSG side might give the OS and driver a bit more breathing space.

I'd also suggest enabling vsync to see what the driver makes of it -
this is how you should be deploying your app, and is the way you
should conduct most of your work; disabling vsync is only something I'd
recommend for very specific types of performance profiling.

Robert.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] OSG seems to have a problem scaling to multiple windows on multiple graphics cards

2010-10-07 Thread John Kelso

Hi all,

Our immersive system is a single host computer with 8 cores and 4 graphics
cards running Linux. (1)  We are using OSG 2.8.3.

We are having a heck of a hard time getting OSG to take advantage of
our multiple graphics cards.  Help!

Here's what we did:

If we load a fairly large model into our test program we can get a frame
rate of about 150 FPS when displaying in a single window. (2) We are
running single-threaded, and assign to a specific core.

When we background this and run a second copy of the program to another
graphics card and core then both programs run at 150 FPS.  Same thing for
running three and four copies at once.

That is, four processes using four graphics cards on four cores run just as
fast as a single process.  All four cores are at near 100% CPU utilization
according to top.  So far, so good.

Now we modify the program to load the model and create multiple windows on
multiple cards.  There's one window per card and each uses a different
core. (3)

The threading model is CullThreadPerCameraDrawThreadPerContext, the
default chosen by OSG.  The environment variable OSG_SERIALIZE_DRAW_DISPATCH
is not set, so it defaults to ON, which we think means draw in serial.

If we draw to four windows on four different cards we get about 36 FPS.
There are four different cores being used, and each has about 25% of the
CPU.  This probably makes sense as the draws are in serial.  150 FPS/4
is about 36 FPS.  As expected, we get nearly identical results if we create
four windows on a single card using four different cores.

If we set OSG_SERIALIZE_DRAW_DISPATCH=OFF we hope to see better performance,
but with four windows on four graphics cards we only get 16 FPS!  There are
four different cores being used, one at about 82%, and the other three at
75%, but what are they doing?  Again, we get nearly identical results if
using four windows on a single card.

So

How can we get OSG to draw to four windows on four cards in one process as
fast as running four separate processes?

Any pointers or suggestions are welcome.

Thanks,

John


1 - Our immersive system consists of 3 projectors and a console each driven
by an Nvidia FX5800 graphics card all genlocked for 3D stereo
    display. The four graphics cards are in two QuadroPlex Model D2 units
connected to the host.  The host computer is an 8 core Dell Precision
T5400 running 64 bit Linux (CentOS 5.5). We are using Nvidia driver
version 195.36.24

2 - the program is attached- it uses only OSG.  We run our tests with
__GL_SYNC_TO_VBLANK=0 to get the maximum frame rate.

3 - one graphics context per window and one camera per window

#include <osgDB/ReadFile>
#include <osgViewer/Viewer>
#include <osgViewer/ViewerEventHandlers>
#include <osgGA/TrackballManipulator>
#include <iostream>
#include <cstring>
#include <OpenThreads/Thread>

#include "Nerves.h"

void newWindow(osgViewer::Viewer& viewer, unsigned int sn, char *name=NULL)
{
    osg::ref_ptr<osg::GraphicsContext::Traits> traits = new osg::GraphicsContext::Traits;
    //printf("traits->referenceCount() = %d\n", traits->referenceCount()) ;
    traits->screenNum = sn ;
    traits->x = 111 ;
    traits->y = 0 ;
    traits->width = 1058 ;
    traits->height = 990 ;
    traits->windowDecoration = true;
    traits->doubleBuffer = true;
    traits->sharedContext = 0;
    char foo[256] = "display-" ;
    if (name)
    {
        strcat(foo,name) ;
        traits->windowName = foo ;
    }

    osg::ref_ptr<osg::GraphicsContext> gc = osg::GraphicsContext::createGraphicsContext(traits.get());
    //printf("traits->referenceCount() = %d\n", traits->referenceCount()) ;
    //printf("gc->referenceCount() = %d\n", gc->referenceCount()) ;
    if (gc.valid())
    {
        osg::notify(osg::INFO) << "GraphicsWindow has been created successfully." << std::endl;
    }
    else
    {
        osg::notify(osg::NOTICE) << "GraphicsWindow has not been created successfully." << std::endl;
    }

    osg::ref_ptr<osg::Camera> camera = new osg::Camera;
    //printf("camera->referenceCount() = %d\n", camera->referenceCount()) ;
    camera->setGraphicsContext(gc.get());
    //printf("gc->referenceCount() = %d\n", gc->referenceCount()) ;
    camera->setViewport(new osg::Viewport(0, 0, traits->width, traits->height));
    // running in mono
    GLenum buffer = traits->doubleBuffer ? GL_BACK : GL_FRONT;
    camera->setDrawBuffer(buffer);
    // does this make any difference?
    camera->setReadBuffer(buffer);

    viewer.addSlave(camera.get()) ;
    //printf("camera->referenceCount() = %d\n", camera->referenceCount()) ;
    //printf("traits->referenceCount() = %d\n", traits->referenceCount()) ;
    //printf("gc->referenceCount() = %d\n", gc->referenceCount()) ;
    //printf("camera->referenceCount() = %d\n", camera->referenceCount()) ;
}

int main( int argc, char **argv )
{
    osgViewer::Viewer viewer ;
    viewer.addEventHandler(new osgViewer::StatsHandler) ;
    viewer.addEventHandler(new osgViewer::ThreadingHandler) ;

    //viewer.setThreadingModel( 

Re: [osg-users] OSG seems to have a problem scaling to multiple windows on multiple graphics cards

2010-10-07 Thread John Kelso

Hi,

Many thanks for your speedy reply.

We were considering trying mosaic mode if we couldn't come up with something
that would fix the problem with our current display configuration.

Switching to mosaic mode will require a good bit of code rewrite, but if
that's the way to go I guess it's worth it in the long run.

I'll look into WGL_NV_swap_group extension too.

Any other ideas from the group?

Thanks,

John

On Thu, 7 Oct 2010, Wojciech Lewandowski wrote:


Hi John,

This is odd but it sounds a bit like swap buffers of the windows are somehow
waiting for each other. I believe that WGL_NV_swap_group extension is not
used by OSG. This extension could possible help you there.

But I could be wrong on above. It is not really my main point I wanted to
mention. Instead I wanted to suggest you check SLI mosaic mode. We have done
some experiments on 4 channels on Linux / Nvidia QuadroPlex D2 in the past.
At first we tried to go the same path as you describe. But later we have
read somewhere that the fastest method is to use one window filling the whole desktop
and split this window into 4 screen quarter slave views.  Each slave view
could be positioned so that it covers one monitor output. Such 4 monitor
setup is possible with QP D2 drivers in SLI mosaic mode.

Using producer config files one may easily create a .cfg that could be
passed from command line to osgViewer and set 4 channel slaves on single
window. Best thing with using one window is that all four views use the same
context so GL resources are shared and all four are swapped at once with
single SwapBuffer call.
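
As a concrete sketch of that setup (the window size and the 4x1 layout below are
made-up placeholders, not a real configuration): one window spanning the desktop
with four quarter-width slave cameras sharing the single graphics context:

    #include <osgDB/ReadFile>
    #include <osgViewer/Viewer>

    int main(int argc, char** argv)
    {
        if (argc < 2) return 1;

        osgViewer::Viewer viewer;
        viewer.setSceneData(osgDB::readNodeFile(argv[1]));

        // One graphics context covering the whole (mosaic) desktop.
        osg::ref_ptr<osg::GraphicsContext::Traits> traits = new osg::GraphicsContext::Traits;
        traits->x = 0; traits->y = 0;
        traits->width = 4096; traits->height = 1024;   // assumed 4x1 monitor layout
        traits->doubleBuffer = true;
        osg::ref_ptr<osg::GraphicsContext> gc =
            osg::GraphicsContext::createGraphicsContext(traits.get());

        // Four slave cameras, each drawing into one quarter of the same window,
        // so all four share GL resources and a single SwapBuffers call.
        for (unsigned int i = 0; i < 4; ++i)
        {
            osg::ref_ptr<osg::Camera> camera = new osg::Camera;
            camera->setGraphicsContext(gc.get());
            camera->setViewport(new osg::Viewport(i * traits->width / 4, 0,
                                                  traits->width / 4, traits->height));
            camera->setDrawBuffer(GL_BACK);
            // Per-channel view/projection offsets belong in the 2nd and 3rd
            // arguments of addSlave(); identity offsets are used here, so all
            // four quarters simply show the master view.
            viewer.addSlave(camera.get(), osg::Matrixd(), osg::Matrixd());
        }

        return viewer.run();
    }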

In our project we ended up with 4 channel rendering using SLI mosaic and we
were pleasantly surprised at how fast it was performing in comparison to
separate gl contexts on 4 windows. You may check SLI mosaic if you haven't
done this before

Hope this helps,
Wojtek Lewandowski
--
From: John Kelso ke...@nist.gov
Sent: Thursday, October 07, 2010 9:35 PM
To: osg-users@lists.openscenegraph.org
Subject: [osg-users] OSG seems to have a problem scaling to multiple windows
on multiple graphics cards


Hi all,

Our immersive system is a single host computer with 8 cores and 4 graphics
cards running Linux. (1)  We are using OSG 2.8.3.

We are having a heck of a hard time getting OSG to take advantage of
our multiple graphics cards.  Help!

Here's what we did:

If we load a fairly large model into our test program we can get a frame
rate of about 150 FPS when displaying in a single window. (2) We are
running single-threaded, and assign to a specific core.

When we background this and run a second copy of the program to another
graphics card and core then both programs run at 150 FPS.  Same thing for
running three and four copies at once.

That is, four processes using four graphics cards on four cores run just as
fast as a single process.  All four cores are at near 100% CPU utilization
according to top.  So far, so good.

Now we modify the program to load the model and create multiple windows on
multiple cards.  There's one window per card and each uses a different
core. (3)

The threading model is CullThreadPerCameraDrawThreadPerContext, the
default chosen by OSG.  The environment variable
OSG_SERIALIZE_DRAW_DISPATCH
is not set, so it defaults to ON, which we think means draw in serial.
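
Both settings can also be forced explicitly rather than left to the defaults.
A minimal sketch, assuming the environment variable is read when the renderers
are created (so it has to be set before realize()):

    #include <cstdlib>
    #include <osgViewer/Viewer>

    void configureThreading(osgViewer::Viewer& viewer)
    {
        // One cull thread per camera, one draw thread per context, instead of
        // whatever default the viewer would pick at run time.
        viewer.setThreadingModel(
            osgViewer::ViewerBase::CullThreadPerCameraDrawThreadPerContext);

        // Let draw dispatch run in parallel; the variable is read when the
        // renderers are set up, so set it before realize().
        setenv("OSG_SERIALIZE_DRAW_DISPATCH", "OFF", 1);
    }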

If we draw to four windows on four different cards we get about 36 FPS.
There are four different cores being used, and each has about 25% of the
CPU.  This probably makes sense as the draws are in serial.  150 FPS/4
is about 36 FPS.  As expected, we get nearly identical results if we create
four windows on a single card using four different cores.

If we set OSG_SERIALIZE_DRAW_DISPATCH=OFF we hope to see better performance,
but with four windows on four graphics cards we only get 16 FPS!  There are
four different cores being used, one at about 82%, and the other three at
75%, but what are they doing?  Again, we get nearly identical results if
using four windows on a single card.

So

How can we get OSG to draw to four windows on four cards in one process as
fast as running four separate processes?

Any pointers or suggestions are welcome.

Thanks,

John


1 - Our immersive system consists of 3 projectors and a console each
driven
by an Nvidia FX5800 graphics card all genlocked for 3D stereo
display. The four graphics cards are in two QuadroPlex Model D2 units
connected to the host.  The host computer is an 8 core Dell Precision
T5400 running 64 bit Linux (CentOS 5.5). We are using Nvidia driver
version 195.36.24

2 - the program is attached- it uses only OSG.  We run our tests with
__GL_SYNC_TO_VBLANK=0 to get the maximum frame rate.

3 - one graphics context per window and one camera per window





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org

Re: [osg-users] One more time: Bug? problem using osg::GraphicsContext::setClearMask in quadBufferStereo to clear both buffers

2010-10-01 Thread John Kelso

Is there a way to get osgviewer to create a viewport that is smaller than
its window?

Thanks,

John

On Fri, 1 Oct 2010, Robert Osfield wrote:


Hi John,

On Fri, Oct 1, 2010 at 12:32 AM, John Kelso ke...@nist.gov wrote:

If there were any responses to this I missed them.  Can anyone duplicate
this, or perhaps tell me what I'm doing wrong?  Is this a bug?


It does sound like it may well be a bug, but I can't test it
unfortunately as I don't have any systems that support quad buffer
stereo.

What happens when you enable quad buffer stereo with the standard
osgviewer?  Does this work fine?

Robert.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] One more time: Bug? problem using osg::GraphicsContext::setClearMask in quadBufferStereo to clear both buffers

2010-10-01 Thread John Kelso

Hi,

We have a sort of oddball immersive system where the physical screens are
a bit smaller than the area illuminated by the projectors.  (The projected area
matches the X11 screen size).  My workaround is to make the viewport
the size of the physical screen, and make the window fullsize.

In our current software, which is an osg/Producer hybrid, the area outside
the viewport is black, done by a setClearColor call.

I'll write a small osgviewer-like program and test it and send my results
and the program back to the list.

Thanks,

John

On Fri, 1 Oct 2010, Robert Osfield wrote:


Hi John,

On Fri, Oct 1, 2010 at 2:21 PM, John Kelso ke...@nist.gov wrote:

Is there a way to get osgviewer to create a viewport that is smaller than
its window?


No... you'd need to create the context and setup the camera's viewport
manually to do this.

Your mention of this does make me rather curious, one typically does
stereo fullscreen.

Robert.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] One more time: Bug? problem using osg::GraphicsContext::setClearMask in quadBufferStereo to clear both buffers

2010-10-01 Thread John Kelso

Hi again,

Attached is the program we just ran.  It shows the loaded file in a green
camera, with the clear color set to red.

Basically, when you run in quadbuffered stereo the area outside of the
viewport shows whatever cruft was there from the run before in the right
buffer.

For example, if the previous run had a white background, when you run in
stereo the area outside the viewport is pink- red for the left buffer and
white for the right.

Thanks,

John

On Fri, 1 Oct 2010, John Kelso wrote:


Hi,

We have a sort of oddball immersive system where the physical screens are
a bit smaller than the area illuminated by the projectors.  (The projected area
matches the X11 screen size).  My workaround is to make the viewport
the size of the physical screen, and make the window fullsize.

In our current software, which is an osg/Producer hybrid, the area outside
the viewport is black, done by a setClearColor call.

I'll write a small osgviewer-like program and test it and send my results
and the program back to the list.

Thanks,

John

On Fri, 1 Oct 2010, Robert Osfield wrote:


Hi John,

On Fri, Oct 1, 2010 at 2:21 PM, John Kelso ke...@nist.gov wrote:

Is there a way to get osgviewer to create a viewport that is smaller than
its window?


No... you'd need to create the context and setup the camera's viewport
manually to do this.

Your mention of this does make me rather curious, one typically does
stereo fullscreen.

Robert.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org

#include <osgDB/ReadFile>
#include <osgViewer/Viewer>
#include <osgViewer/ViewerEventHandlers>
#include <osgGA/TrackballManipulator>
#include <iostream>
#include <cstdio>    // printf, sscanf
#include <cstring>   // strcmp

#include "Nerves.h"

void newWindow(osgViewer::Viewer& viewer, bool stereo, unsigned int sn, char *name=NULL)
{
    osg::ref_ptr<osg::GraphicsContext::Traits> traits = new osg::GraphicsContext::Traits;
    //printf("traits->referenceCount() = %d\n",traits->referenceCount()) ;
    traits->screenNum = sn ;
    traits->x = 0 ;
    traits->y = 0 ;
    traits->width = 1000 ;
    traits->height = 1000 ;
    traits->windowDecoration = true;
    traits->doubleBuffer = true;
    traits->sharedContext = 0;
    if (name) traits->windowName = name ;
    traits->quadBufferStereo = stereo ;

    osg::ref_ptr<osg::GraphicsContext> gc = osg::GraphicsContext::createGraphicsContext(traits.get());
    if (gc.valid())
    {
        osg::notify(osg::INFO) << "GraphicsWindow has been created successfully." << std::endl;
    }
    else
    {
        osg::notify(osg::NOTICE) << "GraphicsWindow has not been created successfully." << std::endl;
    }

    // color outside the viewport
    gc->setClearColor(osg::Vec4(1,0,0,1));
    gc->setClearMask( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

    {
        osg::ref_ptr<osg::Camera> camera = new osg::Camera;
        camera->setGraphicsContext(gc.get());
        camera->setViewport(new osg::Viewport(50, 50, traits->width-100, traits->height-100));
        GLenum buffer = GL_BACK ;
        if (stereo) buffer = GL_BACK_LEFT ;
        camera->setDrawBuffer(buffer);
        // color inside the viewport
        camera->setClearColor(osg::Vec4(0,1,0,1)) ;
        viewer.addSlave(camera.get()) ;
    }

    if (stereo)
    {
        osg::ref_ptr<osg::Camera> camera = new osg::Camera;
        camera->setGraphicsContext(gc.get());
        camera->setViewport(new osg::Viewport(50, 50, traits->width-100, traits->height-100));
        GLenum buffer = GL_BACK_RIGHT ;
        camera->setDrawBuffer(buffer);
        // color inside the viewport
        camera->setClearColor(osg::Vec4(0,1,0,1)) ;
        viewer.addSlave(camera.get()) ;
    }
}

int main( int argc, char **argv )
{
    osgViewer::Viewer viewer ;
    viewer.addEventHandler(new osgViewer::StatsHandler) ;
    viewer.addEventHandler(new osgViewer::ThreadingHandler) ;

    osg::GraphicsContext::WindowingSystemInterface* wsi = osg::GraphicsContext::getWindowingSystemInterface();
    if (!wsi)
    {
        osg::notify(osg::NOTICE) << "Error, no WindowSystemInterface available, cannot create windows." << std::endl;
        return 1;
    }

    bool stereo ;
    if (!strcmp("-m",argv[1]))
    {
        stereo = false ;
        printf("running in mono\n") ;
    }
    else if (!strcmp("-s",argv[1]))
    {
        stereo = true ;
        printf("running in stereo\n") ;
    }
    else
    {
        printf("Usage: %s -m|-s n ... file\n", argv[0]) ;
        return 1 ;
    }

    for (unsigned int i=2; i<argc-1; i++)
    {
        int sn ;
        sscanf(argv[i],"%d",&sn) ;
        newWindow(viewer,stereo,sn,argv[i]);
    }

    // load the scene.
    osg::ref_ptr<osg::Node> loadedModel = osgDB::readNodeFile(argv[argc-1]);

    if (!loadedModel)
    {
        std::cout << argv[0] << ": No data loaded." << std::endl;
        return 1;
    }

    viewer.setSceneData

[osg-users] One more time: Bug? problem using osg::GraphicsContext::setClearMask in quadBufferStereo to clear both buffers

2010-09-30 Thread John Kelso

Hi,

If there were any responses to this I missed them.  Can anyone duplicate
this, or perhaps tell me what I'm doing wrong?  Is this a bug?

Basically, when I'm running in stereo the area outside the viewport is only
getting set for one of the buffers.

Thanks,

John

On Mon, 20 Sep 2010, John Kelso wrote:


Hi,

I have a window whose viewport doesn't fill it, and I want to set the area
outside the viewport to a specific color.

I'm using:
  _gc = osg::GraphicsContext::createGraphicsContext(_traits);
  _gc->setClearColor( osg::Vec4f(1.f, 0.f, 0.f, 1.0f) );
  _gc->setClearMask( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

This is osg 2.8.3 on Centos, Quadro FX 4600.

This works fine when traits->quadBufferStereo is false, but when it's true,
I only get one of the buffers cleared.

Any ideas?  I suspect the fix is easy but hard to find, at least for me.

Many thanks,

John


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] problem using osg::GraphicsContext::setClearMask in quadBufferStereo to clear both buffers

2010-09-20 Thread John Kelso

Hi,

I have a window whose viewport doesn't fill it, and I want to set the area
outside the viewport to a specific color.

I'm using:
  _gc = osg::GraphicsContext::createGraphicsContext(_traits);
  _gc->setClearColor( osg::Vec4f(1.f, 0.f, 0.f, 1.0f) );
  _gc->setClearMask( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

This is osg 2.8.3 on Centos, Quadro FX 4600.

This works fine when traits->quadBufferStereo is false, but when it's true,
I only get one of the buffers cleared.

Any ideas?  I suspect the fix is easy but hard to find, at least for me.
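
A sketch of one possible workaround (untested, and the callback class name is
made up): clear the whole window in both back buffers from an initial draw
callback instead of relying on the context clear, since GL_BACK addresses both
left and right back buffers in a quad-buffered context:

    #include <osg/Camera>
    #include <osg/GL>

    // Clear the full window, outside any viewport/scissor, in both back buffers.
    struct ClearWholeWindow : public osg::Camera::DrawCallback
    {
        virtual void operator()(osg::RenderInfo& /*renderInfo*/) const
        {
            glDrawBuffer(GL_BACK);                // both back buffers when quad-buffered
            glDisable(GL_SCISSOR_TEST);           // don't restrict the clear to a viewport
            glClearColor(1.0f, 0.0f, 0.0f, 1.0f); // the "outside the viewport" color
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        }
    };

    // e.g. firstSlaveCamera->setInitialDrawCallback(new ClearWholeWindow);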

Many thanks,

John


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] cmake errors with 2.9.8

2010-07-16 Thread John Kelso

Hi,

I'm glad to hear it's something simple.  Unfortunately for me, given the nature of
the system I work on, it's not something I can change.  I'll just have to
wait until a newer version of cmake gets installed before I can update our
version of OSG.

BTW, the README.txt, and the link it references, both specify I only need cmake
2.4.6.

Thanks,

John



On Fri, 16 Jul 2010, Robert Osfield wrote:


Hi JS, Chuck, John et. al.

On Thu, Jul 15, 2010 at 9:48 PM, Jean-Sébastien Guay
jean-sebastien.g...@cm-labs.com wrote:

But that means that on Win32 and anything other than APPLE, it will accept
old versions of CMake, and the FRAMEWORK keyword is not guarded to be used
only on APPLE everywhere it's used, so there's the problem.


Guarding the FRAMEWORK keyword sounds like the sensible thing to do,
it's a bit of pain, but it would allow those using cmake out of the
box on older OS spins to keep working.


I'd say we should use only one CMAKE_MINIMUM_REQUIRED version for all
platforms. Else it becomes a nightmare to maintain, you have to guard stuff
everywhere.


So far it hasn't been a big problem, but it's something we should
monitor - just how hassle is it to support older CMake versions.  On a
pure engineering standpoint I'd rather we'd just have a single CMake
min version as well, but from a pragmatic standpoint it can shift the
a small amount of disruption in one place to more disruption
elsewhere.  Where to draw the line is the difficult thing, something I
try to do on a case by case basis when reviewing submissions, and by
monitoring the pain threshold out in the community.

Robert.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] cmake errors with 2.9.8

2010-07-16 Thread John Kelso

Hi,

Yes, I'm very willing to do this, but realistically won't have a chance to do
it until August.

But, I'll take a look at the effort involved right now, and if it isn't too
messy I'll see if I can fit into the interstitial spaces of my schedule and
knock it out.

Thanks,

John

On Fri, 16 Jul 2010, Jean-Sébastien Guay wrote:


Hi Robert, John,


Guarding the FRAMEWORK keyword sounds like the sensible thing to do,
it's a bit of pain, but it would allow those using cmake out of the
box on older OS spins to keep working.


Yes, but who will do it? It would need to be someone who runs into the
problem... John, do you have a bit of time to look into it?

In theory you'd just have to surround lines that have FRAMEWORK with
if(APPLE)...ELSE()...ENDIF() constructs. Hopefully you won't have to
copy whole blocks into the ELSE side, otherwise you could make a macro
you put in the CMakeModules/OSGMacroUtils.cmake that would do what you
need, i.e. omit FRAMEWORK on non-APPLE configs.

Argh, messy... :-)


So far it hasn't been a big problem, but it's something we should
monitor - just how hassle is it to support older CMake versions.  On a
pure engineering standpoint I'd rather we'd just have a single CMake
min version as well, but from a pragmatic standpoint it can shift the
a small amount of disruption in one place to more disruption
elsewhere.  Where to draw the line is the difficult thing, something I
try to do on a case by case basis when reviewing submissions, and by
monitoring the pain threshold out in the community.


I agree with you, and I remembered that some systems and distributions
are stuck on old versions for a long time (as is the case for John), so
yeah, going around guarding the FRAMEWORK keyword is pretty much the
only thing we can do.

Thanks,

J-S
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] cmake errors with 2.9.8

2010-07-16 Thread John Kelso

Good news!

As a quick test I deleted the line

FRAMEWORK DESTINATION /Library/Frameworks

in CMakeModules/ModuleInstall.cmake, and everything builds and installs just
fine.

I tried this:

INSTALL(
TARGETS ${LIB_NAME}
RUNTIME DESTINATION ${INSTALL_BINDIR} COMPONENT libopenscenegraph
LIBRARY DESTINATION ${INSTALL_LIBDIR} COMPONENT libopenscenegraph
ARCHIVE DESTINATION ${INSTALL_ARCHIVEDIR} COMPONENT libopenscenegraph-dev
IF(APPLE)
FRAMEWORK DESTINATION /Library/Frameworks
ENDIF
)

but get the error:

CMake Error: Error in cmake code at
/usr/local/HEV-beta/apps/osg/osg-2.9.8/OpenSceneGraph/CMakeModules/ModuleInstall.cmake:33:
Parse error.  Function missing ending ")".  Instead found left paren with
text "(".

I suspect this is an easy fix for someone who knows cmake.
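
For comparison, a form of the guard that CMake 2.4 should parse: IF() can only
wrap whole commands, not arguments inside INSTALL(), so the whole call gets
duplicated.  This is just a sketch, not necessarily the change that will end up
in OSG:

    IF(APPLE)
        INSTALL(
            TARGETS ${LIB_NAME}
            RUNTIME DESTINATION ${INSTALL_BINDIR} COMPONENT libopenscenegraph
            LIBRARY DESTINATION ${INSTALL_LIBDIR} COMPONENT libopenscenegraph
            ARCHIVE DESTINATION ${INSTALL_ARCHIVEDIR} COMPONENT libopenscenegraph-dev
            FRAMEWORK DESTINATION /Library/Frameworks
        )
    ELSE(APPLE)
        INSTALL(
            TARGETS ${LIB_NAME}
            RUNTIME DESTINATION ${INSTALL_BINDIR} COMPONENT libopenscenegraph
            LIBRARY DESTINATION ${INSTALL_LIBDIR} COMPONENT libopenscenegraph
            ARCHIVE DESTINATION ${INSTALL_ARCHIVEDIR} COMPONENT libopenscenegraph-dev
        )
    ENDIF(APPLE)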

Thanks,

John

On Fri, 16 Jul 2010, John Kelso wrote:


Hi,

Yes, I'm very willing to do this, but realistically won't have a chance to do
it until August.

But, I'll take a look at the effort involved right now, and if it isn't too
messy I'll see if I can fit into the interstitial spaces of my schedule and
knock it out.

Thanks,

John

On Fri, 16 Jul 2010, Jean-Sébastien Guay wrote:


Hi Robert, John,


Guarding the FRAMEWORK keyword sounds like the sensible thing to do,
it's a bit of pain, but it would allow those using cmake out of the
box on older OS spins to keep working.


Yes, but who will do it? It would need to be someone who runs into the
problem... John, do you have a bit of time to look into it?

In theory you'd just have to surround lines that have FRAMEWORK with
if(APPLE)...ELSE()...ENDIF() constructs. Hopefully you won't have to
copy whole blocks into the ELSE side, otherwise you could make a macro
you put in the CMakeModules/OSGMacroUtils.cmake that would do what you
need, i.e. omit FRAMEWORK on non-APPLE configs.

Argh, messy... :-)


So far it hasn't been a big problem, but it's something we should
monitor - just how hassle is it to support older CMake versions.  On a
pure engineering standpoint I'd rather we'd just have a single CMake
min version as well, but from a pragmatic standpoint it can shift the
a small amount of disruption in one place to more disruption
elsewhere.  Where to draw the line is the difficult thing, something I
try to do on a case by case basis when reviewing submissions, and by
monitoring the pain threshold out in the community.


I agree with you, and I remembered that some systems and distributions
are stuck on old versions for a long time (as is the case for John), so
yeah, going around guarding the FRAMEWORK keyword is pretty much the
only thing we can do.

Thanks,

J-S
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] cmake errors with 2.9.8

2010-07-16 Thread John Kelso

Same thing.

Parse error.  Function missing ending ")".  Instead found left paren with
text "(".
CMake Error: Error in cmake code at
/usr/local/HEV-beta/apps/osg/osg-2.9.8/OpenSceneGraph/CMakeModules/ModuleInstall.cmake:33:

Line 33 is the IF(APPLE) line.

Just to be sure, this is what I have (run through cat -n):
33  IF(APPLE)
34  FRAMEWORK DESTINATION /Library/Frameworks
35  ENDIF()

Thanks,

John

On Fri, 16 Jul 2010, Robert Osfield wrote:


Hi Johm,

Try ENDIF() rather than ENDIF.

Robert.

On Fri, Jul 16, 2010 at 3:13 PM, John Kelso ke...@nist.gov wrote:

Good news!

As a quick test I deleted the line

   FRAMEWORK DESTINATION /Library/Frameworks

in CMakeModules/ModuleInstall.cmake, and everything builds and installs just
fine.

I tried this:

INSTALL(
   TARGETS ${LIB_NAME}
   RUNTIME DESTINATION ${INSTALL_BINDIR} COMPONENT libopenscenegraph
   LIBRARY DESTINATION ${INSTALL_LIBDIR} COMPONENT libopenscenegraph
   ARCHIVE DESTINATION ${INSTALL_ARCHIVEDIR} COMPONENT libopenscenegraph-dev
   IF(APPLE)
       FRAMEWORK DESTINATION /Library/Frameworks
   ENDIF
)

but get the error:

CMake Error: Error in cmake code at
/usr/local/HEV-beta/apps/osg/osg-2.9.8/OpenSceneGraph/CMakeModules/ModuleInstall.cmake:33:
Parse error.  Function missing ending ).  Instead found left paren with
text (.

I suspect this is an easy fix for someone who knows cmake.

Thanks,

John

On Fri, 16 Jul 2010, John Kelso wrote:


Hi,

Yes, I'm very willing to do this, but realistically won't have a chance to
do
it until August.

But, I'll take a look at the effort involved right now, and if it isn't
too
messy I'll see if I can fit into the interstitial spaces of my schedule and
knock it out.

Thanks,

John

On Fri, 16 Jul 2010, Jean-Sébastien Guay wrote:


Hi Robert, John,


Guarding the FRAMEWORK keyword sounds like the sensible thing to do,
it's a bit of pain, but it would allow those using cmake out of the
box on older OS spins to keep working.


Yes, but who will do it? It would need to be someone who runs into the
problem... John, do you have a bit of time to look into it?

In theory you'd just have to surround lines that have FRAMEWORK with
if(APPLE)...ELSE()...ENDIF() constructs. Hopefully you won't have to
copy whole blocks into the ELSE side, otherwise you could make a macro
you put in the CMakeModules/OSGMacroUtils.cmake that would do what you
need, i.e. omit FRAMEWORK on non-APPLE configs.

Argh, messy... :-)


So far it hasn't been a big problem, but it's something we should
monitor - just how hassle is it to support older CMake versions.  On a
pure engineering standpoint I'd rather we'd just have a single CMake
min version as well, but from a pragmatic standpoint it can shift the
a small amount of disruption in one place to more disruption
elsewhere.  Where to draw the line is the difficult thing, something I
try to do on a case by case basis when reviewing submissions, and by
monitoring the pain threshold out in the community.


I agree with you, and I remembered that some systems and distributions
are stuck on old versions for a long time (as is the case for John), so
yeah, going around guarding the FRAMEWORK keyword is pretty much the
only thing we can do.

Thanks,

J-S


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org



___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] cmake errors with 2.9.8

2010-07-16 Thread John Kelso

On Fri, 16 Jul 2010, Jean-Sébastien Guay wrote:


Perhaps we could have that function in a separate file which would only
be loaded IF(APPLE)? Would that work?

J-S



I hope you're not asking me!  8^)

But, send me something I'd be happy to try it out.

Thanks,

John___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] cmake errors with 2.9.8

2010-07-15 Thread John Kelso

Hi all,

I just tried to build OSG 2.9.8 on our CentOS system, using cmake 2.4.7.

My cmake command gave me errors.  The main thing I see are lines like this:

CMake Error: Error in cmake code at 
.../OpenSceneGraph-2.9.8/CMakeModules/ModuleInstall.cmake:28:
INSTALL TARGETS given unknown argument "FRAMEWORK".

Line 28 of the file has:
FRAMEWORK DESTINATION /Library/Frameworks

We're not using OS X.

Any idea what's going wrong?  I suspect it's something simple.  More gory 
details on request.

Thanks,

John


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] How to use vertex attributes?

2010-01-12 Thread John Kelso

I have learned to avoid numbers 0, 2, 3 and 8.

See http://www.mail-archive.com/osg-users@lists.openscenegraph.org/msg26406.html
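
In other words, pick a slot that isn't aliased to one of the built-in arrays.
A minimal sketch using the names from the post below (slot 6 is an arbitrary
choice of an unaliased slot):

    #include <osg/Geometry>
    #include <osg/Program>

    // Bind the colour attribute away from slot 0, which aliases gl_Vertex on
    // many drivers; slot 6 is arbitrary but normally unused.
    void bindColorAttribute(osg::Program* program, osg::Geometry* geometry,
                            osg::Vec4Array* colors)
    {
        const unsigned int COLOR_SLOT = 6;
        program->addBindAttribLocation("a_col", COLOR_SLOT);
        geometry->setVertexAttribArray(COLOR_SLOT, colors);
        geometry->setVertexAttribBinding(COLOR_SLOT, osg::Geometry::BIND_PER_VERTEX);
    }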

John

On Tue, 12 Jan 2010, Andrew Holland wrote:


Hi,

I want to use vertex attributes and I'm having trouble getting it to work. I 
think the problem should lie within this code here:


Code:

program->addBindAttribLocation("a_col", 0);

osg::Vec4Array* colors = new osg::Vec4Array;

for(int i=0;i<model->verticesNum;i++) {
float r = model->cols[i*4];
float g = model->cols[i*4+1];
float b = model->cols[i*4+2];
float a = model->cols[i*4+3];
colors->push_back(osg::Vec4(r,g,b,a));
}


//for this commented out code, when used with gl_Color in the vertex shader, it
//works, displaying the mesh and colours properly.
//pyramidGeometry->setColorArray(colors);
//pyramidGeometry->setColorBinding(osg::Geometry::BIND_PER_VERTEX);

//but using this code with the a_col attribute instead of the gl_Color
//doesn't work, it shows the mesh to be white, and slightly deformed, e.g.
//vertices being out of place
pyramidGeometry->setVertexAttribArray(0, colors);
pyramidGeometry->setVertexAttribBinding(0, osg::Geometry::BIND_PER_VERTEX);





Am I missing something here?

I'll post the whole code below, incase the above code isn't enough.

..omitted, forums says something about urls and 2 posts, don't know why..


Also my shader programs are:

vertex shader

Code:



attribute vec4 a_col;
varying vec4 col;

void main() {
col = a_col;//gl_Color;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;

}




fragment shader

Code:

varying vec4 col;

void main() {
gl_FragColor =col;
}




Thanks!

andrew

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=22452#22452





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] is the osg inventor loader broken?

2009-10-20 Thread John Kelso

Hi,

We just tried it on a 32 bit system and had the same incorrect results.

John

On Tue, 20 Oct 2009, Eric Sokolowsky wrote:


I tried out the file below, and I can confirm all of John's results. Since I 
use SGI's Inventor instead of Coin, it appears that the bug is in the Inventor 
plugin, and not Coin. I wonder if it's a problem with 64-bit builds? I have 
been using Centos 5.2/5.3 on a 64-bit machine for a while now, and my previous 
use of Inventor was probably on our old 32-bit machines.

-Eric

On Fri, Oct 9, 2009 at 3:12 PM, John Kelso 
ke...@nist.govmailto:ke...@nist.gov wrote:
Hi,

We're running 2.8.2 on a 64-bit Centos system.  OSG was configured to use
Coin-3.1.1.

Not too long ago I ran an old demo using the new releases and noticed that
an Inventor file that used to load properly no longer did.  Checking around,
many others also didn't.

Here's a simple example file that demonstrates the problem:

#Inventor V2.0 ascii
Material { diffuseColor 1 0 0 }
Cone { bottomRadius 1 height 2 }

Rotation { rotation 1 0 0 3.14159 }
Material { diffuseColor 0 1 0  }
Cone { bottomRadius 1 height 2 }

If you look at this with ivview, also linked with Coin-3.1.1, you see two
intersecting cones, red on the bottom and green on the top.

If you load this same file with osgviewer you just see the red cone.

If you use osgconv to create an osg file from the iv file you see both cones
in the osg file with identical vertex data and the green one isn't rotated.
I'm surprised I don't see any z-fighting in osgviewer.

Anyway, can anyone else try loading this Inventor file and see if it works
for them?

Many thanks,

John



___
osg-users mailing list
osg-users@lists.openscenegraph.orgmailto:osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org



___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] is the osg inventor loader broken?

2009-10-09 Thread John Kelso

Hi,

We're running 2.8.2 on a 64-bit Centos system.  OSG was configured to use
Coin-3.1.1.

Not too long ago I ran an old demo using the new releases and noticed that
an Inventor file that used to load properly no longer did.  Checking around,
many others also didn't.

Here's a simple example file that demonstrates the problem:

  #Inventor V2.0 ascii
  Material { diffuseColor 1 0 0 }
  Cone { bottomRadius 1 height 2 }

  Rotation { rotation 1 0 0 3.14159 }
  Material { diffuseColor 0 1 0  }
  Cone { bottomRadius 1 height 2 }

If you look at this with ivview, also linked with Coin-3.1.1, you see two
intersecting cones, red on the bottom and green on the top.

If you load this same file with osgviewer you just see the red cone.

If you use osgconv to create an osg file from the iv file you see both cones
in the osg file with identical vertex data and the green one isn't rotated.
I'm surprised I don't see any z-fighting in osgviewer.

Anyway, can anyone else try loading this Inventor file and see if it works
for them?

Many thanks,

John



___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] building 2.8.1 with dcmtk

2009-06-08 Thread John Kelso

On Sat, 6 Jun 2009, Robert Osfield wrote:


Hi John,

Try removing your OpenSceneGraph/CMakeCache.text file and the re-run
./configure to see if that kicks CMake into properly checking all the
dependencies.

Also try disabling the aggressive warnings to see if that prevents gcc
spitting out errors when compiling against ITK.

Robert.



Hi,

I did both cmakes in empty directories, so there was no cache file to erase.
All other cmakes mentioned in this email were also run in empty directories.

Both times I got the same errors in InsightToolkit.

I reran the cmake that uses just DCMTK_DIR with an added
OSG_USE_AGGRESSIVE_WARNINGS=OFF and it built successfully.  OK!

I admit that this is a bit of a surprise, at least to me, as I wasn't
expecting compiler warnings to have an effect on compiler errors.

I reran cmake again without DCMTK_DIR and with OSG_USE_AGGRESSIVE_WARNINGS=OFF
and it also built successfully, so I'm not sure DCMTK is ever used.

So I ran cmake one more time, in a clean directory, with both DCMTK_DIR and
OSG_USE_AGGRESSIVE_WARNINGS=OFF, and got the below in my CMake\* files.

CMakeCache.txt finds DCMTK, but the dicom plugin still refers to
InsightToolkit.

Does this help resolve what's going on?  Did I perhaps not install the DCMTK
components that the dicom plugin needs?

Thanks again,

John


find . -name CMake\* | xargs grep -i dcmtk

./CMakeCache.txt://Root of DCMTK source tree (optional).
./CMakeCache.txt:DCMTK_DIR:PATH=/usr/local/HEV-beta/apps/dcmtk/dcmtk-3.x
./CMakeCache.txt:DCMTK_ROOT_INCLUDE_DIR:PATH=/usr/local/HEV-beta/apps/dcmtk/dcmtk-3.x/config/include
./CMakeCache.txt:DCMTK_config_INCLUDE_DIR:PATH=/usr/local/HEV-beta/apps/dcmtk/dcmtk-3.x/config/include/dcmtk/config
./CMakeCache.txt:DCMTK_dcmdata_INCLUDE_DIR:PATH=DCMTK_dcmdata_INCLUDE_DIR-NOTFOUND
./CMakeCache.txt:DCMTK_dcmdata_LIBRARY:FILEPATH=/usr/local/HEV-beta/apps/dcmtk/dcmtk-3.x/dcmdata/libsrc/libdcmdata.a
./CMakeCache.txt:DCMTK_dcmimgle_INCLUDE_DIR:PATH=DCMTK_dcmimgle_INCLUDE_DIR-NOTFOUND
./CMakeCache.txt:DCMTK_dcmimgle_LIBRARY:FILEPATH=/usr/local/HEV-beta/apps/dcmtk/dcmtk-3.x/dcmimgle/libsrc/libdcmimgle.a
./CMakeCache.txt:DCMTK_dcmnet_LIBRARY:FILEPATH=/usr/local/HEV-beta/apps/dcmtk/dcmtk-3.x/dcmnet/libsrc/libdcmnet.a
./CMakeCache.txt:DCMTK_imagedb_LIBRARY:FILEPATH=DCMTK_imagedb_LIBRARY-NOTFOUND
./CMakeCache.txt:DCMTK_ofstd_INCLUDE_DIR:PATH=DCMTK_ofstd_INCLUDE_DIR-NOTFOUND
./CMakeCache.txt:DCMTK_ofstd_LIBRARY:FILEPATH=/usr/local/HEV-beta/apps/dcmtk/dcmtk-3.x/ofstd/libsrc/libofstd.a
./CMakeCache.txt://Advanced flag for variable: DCMTK_DIR
./CMakeCache.txt:DCMTK_DIR-ADVANCED:INTERNAL=1



find . -name CMake\* | xargs grep -i insighttoolkit

./src/osgPlugins/dicom/CMakeFiles/CMakeDirectoryInformation.cmake:  
/usr/include/InsightToolkit/Review
./src/osgPlugins/dicom/CMakeFiles/CMakeDirectoryInformation.cmake:  
/usr/include/InsightToolkit/Patented
./src/osgPlugins/dicom/CMakeFiles/CMakeDirectoryInformation.cmake:  
/usr/include/InsightToolkit/Utilities/vxl/core
./src/osgPlugins/dicom/CMakeFiles/CMakeDirectoryInformation.cmake:  
/usr/include/InsightToolkit/Utilities/vxl/vcl
./src/osgPlugins/dicom/CMakeFiles/CMakeDirectoryInformation.cmake:  
/usr/include/InsightToolkit/Utilities
./src/osgPlugins/dicom/CMakeFiles/CMakeDirectoryInformation.cmake:  
/usr/include/InsightToolkit/Utilities/DICOMParser
./src/osgPlugins/dicom/CMakeFiles/CMakeDirectoryInformation.cmake:  
/usr/include/InsightToolkit/Utilities/NrrdIO
./src/osgPlugins/dicom/CMakeFiles/CMakeDirectoryInformation.cmake:  
/usr/include/InsightToolkit/Utilities/MetaIO
./src/osgPlugins/dicom/CMakeFiles/CMakeDirectoryInformation.cmake:  
/usr/include/InsightToolkit/SpatialObject
./src/osgPlugins/dicom/CMakeFiles/CMakeDirectoryInformation.cmake:  
/usr/include/InsightToolkit/Numerics/NeuralNetworks
./src/osgPlugins/dicom/CMakeFiles/CMakeDirectoryInformation.cmake:  
/usr/include/InsightToolkit/Numerics/Statistics
./src/osgPlugins/dicom/CMakeFiles/CMakeDirectoryInformation.cmake:  
/usr/include/InsightToolkit/Numerics/FEM
./src/osgPlugins/dicom/CMakeFiles/CMakeDirectoryInformation.cmake:  
/usr/include/InsightToolkit/IO
./src/osgPlugins/dicom/CMakeFiles/CMakeDirectoryInformation.cmake:  
/usr/include/InsightToolkit/Numerics
./src/osgPlugins/dicom/CMakeFiles/CMakeDirectoryInformation.cmake:  
/usr/include/InsightToolkit/gdcm/src
./src/osgPlugins/dicom/CMakeFiles/CMakeDirectoryInformation.cmake:  
/usr/include/InsightToolkit/expat
./src/osgPlugins/dicom/CMakeFiles/CMakeDirectoryInformation.cmake:  
/usr/include/InsightToolkit/Common
./src/osgPlugins/dicom/CMakeFiles/CMakeDirectoryInformation.cmake:  
/usr/include/InsightToolkit/BasicFilters
./src/osgPlugins/dicom/CMakeFiles/CMakeDirectoryInformation.cmake:  
/usr/include/InsightToolkit/Algorithms
./src/osgPlugins/dicom/CMakeFiles/CMakeDirectoryInformation.cmake:  
/usr/include/InsightToolkit
./CMakeCache.txt:// root of the build tree, or PREFIX/lib/InsightToolkit for an
./CMakeCache.txt:ITK_DIR:PATH=/usr/lib/InsightToolkit




Re: [osg-users] build errors with 2.8.1

2009-06-05 Thread John Kelso

Hi,

I'm a bit surprised by this because as far as I know we have a fairly recent
version of Centos.  Are there no other Centos users out there trying 2.8.1?


cat /etc/redhat-release

CentOS release 5.3 (Final)

which includes this version of cmake:

rpm -q cmake

cmake-2.4.8-3.el5.i386

Is cmake-2.4.8 really old?

That said, I'll try to get our system guy to install a newer version of
cmake.

Thanks again,

John

On Fri, 5 Jun 2009, Robert Osfield wrote:


Hi Paul,

On Thu, Jun 4, 2009 at 8:11 PM, Paul
Melisosg-us...@assumetheposition.nl wrote:

First, try this:

In applications/osgversion/CMakeLists.txt change ENDIF() to
ENDIF(OSG_MAINTAINER)
That line looks fishy.


Cmake used to require that the IF() ENDIF() matched, but this
requirement was dropped, and now the OSG-2.9.x and svn/trunk has been
cleaned up to not have the matching entries as it makes the code far
more readable and maintainable.

I was under the impression that this wouldn't cause too many problems
except for really old CMake versions.

Robert.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org



___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] build errors with 2.8.1

2009-06-05 Thread John Kelso

Below is from our sysadmin when I asked him about getting a newer version cmake.

Any comments, anyone?

Can we really be the only site having this problem?

Thanks again,

John

-- Forwarded message --
We're using a current version of the most recent release of one of the most popular
Enterprise Linux distributions on the planet.  They chose to take a shortcut in 
their coding style that's not backward compatible.  And CentOS 5.3's cmake 
2.4.7 was released 2007-07-17, not even two years ago...is that classified as 
really old?  Keep in mind CentOS/RHEL has a 7 year lifespan, and they don't 
update package versions very often, just nice, warm & fuzzy stable fixes.  It 
keeps things running, minimizing breakage, which is the whole point of an 
enterprise distribution.


From the cmake FAQ 
http://www.vtk.org/Wiki/CMake_FAQ#Isn.27t_the_.22Expression.22_in_the_.22ELSE_.28Expression.29.22_confusing.3F, 
it appears the ability to have empty ENDIF() started in 2.6.0, released 
2008-05-06 http://www.cmake.org/files/. 
IMHO, they made a bad choice coding to a recent buildsystem requirement.



On 06/05/2009 11:40 AM, John Kelso wrote:

Any idea if we can get a newer cmake?  Please see below for gory details.

Many thanks,

John


-- Forwarded message --
Date: Fri, 5 Jun 2009 11:38:39 -0400 (EDT)
From: John Kelso ke...@nist.gov
To: OpenSceneGraph Users osg-users@lists.openscenegraph.org
Subject: Re: build errors with 2.8.1

Hi,

I'm a bit surprised by this because as far as I know we have a fairly recent
version of Centos.  Are there no other Centos users out there trying 2.8.1?


cat /etc/redhat-release

CentOS release 5.3 (Final)

which includes this version of cmake:

rpm -q cmake

cmake-2.4.8-3.el5.i386

Is cmake-2.4.8 really old?

That said, I'll try to get our system guy to install a newer version of
cmake.

Thanks again,

John

On Fri, 5 Jun 2009, Robert Osfield wrote:


Hi Paul,

On Thu, Jun 4, 2009 at 8:11 PM, Paul
Melisosg-us...@assumetheposition.nl wrote:

First, try this:

In applications/osgversion/CMakeLists.txt change ENDIF() to
ENDIF(OSG_MAINTAINER)
That line looks fishy.


Cmake used to require that the IF() ENDIF() matched, but this
requirement was dropped, and now the OSG-2.9.x and svn/trunk has been
cleaned up to not have the matching entries as it makes the code far
more readable and maintainable.

I was under the impression that this wouldn't cause too many problems
except for really old CMake versions.

Robert.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org 




___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] build errors with 2.8.1

2009-06-05 Thread John Kelso

Hi!

I missed the earlier note about the cmake problem being fixed in the branch.
Sorry to make noise about a fixed problem.  I do test (and squawk) when I
can, but as we all know, life sometimes has other plans.

I installed DCMTK in a local directory, and just tried to rebuild 2.8.1 in a
clean build directory.

The cmake command I used was:


  cmake \
 -D CMAKE_INSTALL_PREFIX=$HEVROOT/apps/osg/osg-2.x/installed \
 -D BUILD_OSG_EXAMPLES=1 \
 -D INVENTOR_INCLUDE_DIR=`coin-config --prefix`/include \
 -D INVENTOR_LIBRARY=`coin-config --prefix`/lib/libCoin.so \
 -D DCMTK_DIR=$HEVROOT/apps/dcmtk/dcmtk-3.x \
 ../OpenSceneGraph


When I make, it seems to still want to use our installed ITK for the DICOMED
plugin, and bombs with the same errors.

I also tried setting DCMTK_INCLUDE_DIRS and DCMTK_LIBRARIES instead, but got
the same result.

Is there something else I need to do to get OSG to use DCMTK instead of
ITK?

Thanks for your patience,

John


On Fri, 5 Jun 2009, Robert Osfield wrote:


Hi John and Jason,

Can we get a little perspective on this issue.  The build problem was
a warning that we've already fixed in OSG-2.8 branch.  As for snooty
admin's, best to leave them alone if helping you out is too much for
them.

The warning that occurred in OSG-2.8.1 because of something I merged
in from svn/trunk was in the release candidates and I made repeated
calls for testing... Had we known about the issue it would have been
fixed in less than five minutes and well before the release.  So
PLEASE don't ignore the calls for testing.

John, the errors you are getting in ITK look to have nothing to do
with the OSG.  Can you please look into this and provide feedback on
what is wrong.  There is chance the high levels of warning we use in
the OSG build now could be causing the compile to emit errors instead
of warnings when compiling ITK.

Robert.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org



___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] build errors with 2.8.1

2009-06-04 Thread John Kelso

Hi again,

As a stab in the dark to try and fix my memory leak I decided to try using
the latest OSG release.

After downloading 2.8.1 and typing cmake, I got this error:

 The end of a CMakeLists file was reached with an IF statement that was not
 closed properly.  Within the directory:
 /usr/local/HEV-beta/apps/osg/osg-2.8.1/OpenSceneGraph/applications/osgversion
 The arguments are: OSG_MAINTAINER

Are all bets off from this point, or can this be safely ignored?

I plowed ahead anyway and the make died a horrible death.  Before I waste
time figuring out why, I'm wondering if I could get some advice about the
relevance and importance of the cmake message.

I can send a typescript if it would be of use.

Thanks,

John
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] memory leak in osg::StateSet::merge() ?

2009-06-03 Thread John Kelso

Hi,

I have discovered that this line of code:

toNode->getOrCreateStateSet()->merge(*(fromNode->getOrCreateStateSet())) ;

seems to cause a memory leak.  I busted it up:
osg::StateSet *fromNodeStateSet = fromNode->getOrCreateStateSet() ;
osg::StateSet *toNodeStateSet = toNode->getOrCreateStateSet() ;
toNodeStateSet->merge(*fromNodeStateSet) ;

and it still leaks.  If I comment out the third line the leak goes away.  (I
print the pointers, so hopefully the optimizer doesn't just toss all three
lines.)

Using either of these commands, although less than useful:
toNodeStateSet->merge(*toNodeStateSet) ;
fromNodeStateSet->merge(*fromNodeStateSet) ;
doesn't cause the leak.

Both nodes have StateSets, so this also works, and also leaks:
osg::StateSet *fromNodeStateSet = fromNode->getStateSet() ;
osg::StateSet *toNodeStateSet = toNode->getStateSet() ;
toNodeStateSet->merge(*fromNodeStateSet) ;

I'm sort of stumped, and hoping the group might have some suggestions of what
might be the source of the leak.

I'm using OSG Version 2.6.1, Linux.

Many thanks,

John

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] ANN: OSG Training in Washington DC

2009-01-30 Thread John Kelso

Hi,

I second that idea.  I think any evening would work for me.

Do you have a venue for the class yet?  That last place was ring-a-ding!

John

On Thu, 29 Jan 2009, Eric Sokolowsky wrote:


Paul Martz wrote:

Hi all -- Just a reminder regarding the upcoming public OSG training course,
to be held in Washington DC, March 9-12, 2009.



While I probably won't be attending the training, I hope that there will
be a small user's group meeting one evening while you're in town. The
last one was worthwhile. I'm not available on Monday, but any of the
other days should work for me.

-Eric
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org



___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] osg::Matrixd -- How to remove rotation for a certain axis?

2008-02-04 Thread John Kelso
Won't this also remove the scale?

-John
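
For what it's worth, a sketch of an alternative that keeps the translation and
scale and only throws away the x/y rotation; it assumes osg::Matrixd::decompose()
is available in the OSG version being used, and the twist extraction is
illustrative, not tested:

    #include <osg/Matrixd>
    #include <osg/Quat>
    #include <cmath>

    // Keep only the rotation about +z, preserving translation and scale.
    osg::Matrixd keepOnlyZRotation(const osg::Matrixd& matrix)
    {
        osg::Vec3d translation, scale;
        osg::Quat rotation, scaleOrientation;
        matrix.decompose(translation, rotation, scale, scaleOrientation);

        // The "twist" about z: drop the x/y parts of the quaternion and
        // renormalise what is left.
        double len = std::sqrt(rotation.z()*rotation.z() + rotation.w()*rotation.w());
        osg::Quat zOnly = (len > 1e-8)
            ? osg::Quat(0.0, 0.0, rotation.z()/len, rotation.w()/len)
            : osg::Quat();   // no z component at all; use identity

        return osg::Matrixd::scale(scale) *
               osg::Matrixd::rotate(zOnly) *
               osg::Matrixd::translate(translation);
    }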

On Mon, 4 Feb 2008, Thrall, Bryan wrote:


 Sorry, hit send too soon, updated below...

 Thrall, Bryan wrote on Monday, February 04, 2008 12:21 PM:
 Tobias Münch wrote on Monday, February 04, 2008 11:29 AM:
 Hello at all,

 I have osg::Matrixd view matrix and want to remove the rotation
 around x- and y-axis. Only rotation around z-axis should stay in the
 matrix. I try a lot of possibilties but couldn't find a solution.

 When I make the following steps, the rotation around all axis is
 removed, not only the two specified axis. The same with
 osg::Matrixd::makeRotate(..);

 matrix = osg::Matrixd::rotate(osg::DegreesToRadians(0.0),
 osg::Vec3(0,1,0));

 matrix = osg::Matrixd::rotate(osg::DegreesToRadians(0.0),
 osg::Vec3(1,0,0));


 I also tried to set the matrix with complete new values and to take
 given value for z-rotation, but therefore I miss a function to read
 the one rotation part (around the z-axis).

 How can help me?

 Both of those lines *set* matrix to a non-rotating matrix; what you
 want is to *modify* the matrix to remove the X and Y rotations.

 The easiest way is to modify the matrix directly:


 matrix(0,0) = 1;
 matrix(0,1) = 0;
 matrix(0,2) = 0;
 matrix(1,0) = 0;
 matrix(1,1) = 1;
 matrix(1,2) = 0;

 If I didn't mess up my indices, this zeroes out the X and Y rotations while 
 leaving the Z intact.

 HTH,
 --
 Bryan Thrall
 FlightSafety International
 [EMAIL PROTECTED]
 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] simulation time and sequence node

2007-08-20 Thread John Kelso
Hi,

A simple question- is there any reason the simulation time can't go
backwards?

The current sequence node only supports time going forward, or stopped,
which made sense in OSG 1.

Now that it's using the simulation time in OSG 2, it would be nice if I
could run the sequence nodes back and forth by moving the simulation time
back and forth.

Before I get into making the changes to the sequence node, I wanted to
know if having the simulation time run backwards is officially approved
of.
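
For reference, the simulation time is just whatever the application hands to
frame(); a minimal sketch of driving it backwards explicitly is below.  Whether
the callbacks and nodes downstream cope with a decreasing time is exactly the
open question.

    #include <osgDB/ReadFile>
    #include <osgViewer/Viewer>

    int main(int argc, char** argv)
    {
        if (argc < 2) return 1;

        osgViewer::Viewer viewer;
        viewer.setSceneData(osgDB::readNodeFile(argv[1]));
        viewer.realize();

        // Hand the viewer an explicit simulation time each frame, running it
        // backwards from 10 seconds to 0 instead of using the reference time.
        double simulationTime = 10.0;
        while (!viewer.done() && simulationTime > 0.0)
        {
            viewer.frame(simulationTime);
            simulationTime -= 1.0/60.0;
        }
        return 0;
    }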

Thanks,

-John

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org