Re: [osg-users] EXTERNAL: Re: voxelization using offscreen rendering

2012-06-08 Thread Pecoraro, Alexander N
Thanks, that sounds much better than the way I was attempting to do it.

Alex

-Original Message-
From: osg-users-boun...@lists.openscenegraph.org 
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of Robert Osfield
Sent: Thursday, June 07, 2012 8:44 PM
To: OpenSceneGraph Users
Subject: EXTERNAL: Re: [osg-users] voxelization using offscreen rendering

Hi Alex,

To use the same viewpoint for LOD calculations across nested Cameras, the 
osg::Camera class has support for setting the ReferenceFrame of the nested 
Camera to ABSOLUTE_RF_INHERIT_VIEWPOINT, which allows them to have independent 
view matrices but use the same viewpoint as the parent Camera for LOD calcs.  
The osgShadow NodeKit uses this in its implementations, so have a look at the 
source code entries for setReferenceFrame in it for examples.

Robert.
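
For readers skimming the archive, here is a minimal sketch of the setup Robert
describes; the function and its matrix arguments are illustrative, not taken
from the thread:

#include <osg/Camera>

osg::ref_ptr<osg::Camera> makeNestedCamera(const osg::Matrixd& view,
                                           const osg::Matrixd& proj)
{
    osg::ref_ptr<osg::Camera> camera = new osg::Camera;

    // Independent view/projection matrices for this render pass...
    camera->setReferenceFrame(osg::Camera::ABSOLUTE_RF_INHERIT_VIEWPOINT);
    camera->setViewMatrix(view);
    camera->setProjectionMatrix(proj);

    // ...but LOD range calculations still use the parent Camera's eye
    // point, so every pass selects the same level of detail.
    return camera;
}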



[osg-users] voxelization using offscreen rendering

2012-06-07 Thread Pecoraro, Alexander N
Hi,

I'm wondering if someone can suggest an elegant solution to the problem of 
trying to voxelize some triangle-based geometry. The general algorithm is to 
render the scene from multiple different camera viewpoints and then use the 
resulting color and depth buffer from each render pass to construct a voxel 
data set. However, the problem is that the geometry that I'm attempting to 
voxelize has multiple levels of detail, so each camera view sees a slightly 
different level of detail, which ends up generating a fuzzy voxel data set. What 
I need is for the first render pass, using the top-down overhead camera, to be 
the determining factor in the level of detail selection, and then all the other 
cameras should use the same level of detail (i.e. the same list of osg::Geometry 
nodes) when they render. How can I do this using the osg::Camera class?

Thanks.

Alex


[osg-users] Switching osg::Program

2012-05-21 Thread Pecoraro, Alexander N
What is the proper way to switch the osg::Program that an osg::StateSet is 
using? Is the StateSet's update callback the only way to do it? I tried doing 
it in the Camera's post draw callback, but I still get a segfault from the 
render thread that is still using the old osg::Program.

Thanks.

Alex
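
A minimal sketch of the usual update-callback approach (the callback class is
illustrative; the DYNAMIC DataVariance line matters when a threaded viewer is
overlapping frames):

#include <osg/NodeCallback>
#include <osg/Program>
#include <osg/StateSet>

class ProgramSwitchCallback : public osg::NodeCallback
{
public:
    void requestSwitch(osg::Program* program) { _pending = program; }

    virtual void operator()(osg::Node* node, osg::NodeVisitor* nv)
    {
        // Runs in the update traversal, so the draw traversal never
        // sees a half-replaced Program.
        if (_pending.valid())
        {
            osg::StateSet* ss = node->getOrCreateStateSet();
            ss->setDataVariance(osg::Object::DYNAMIC); // safe with frame overlap
            ss->setAttributeAndModes(_pending.get(), osg::StateAttribute::ON);
            _pending = 0;
        }
        traverse(node, nv);
    }

private:
    osg::ref_ptr<osg::Program> _pending;
};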


[osg-users] colored geometry help

2012-04-30 Thread Pecoraro, Alexander N
I'm trying to draw some translucent colored triangles on top of some textured 
triangles and the triangle color does not appear to be working consistently. 
I'm setting the osg::Geometry color array and its binding to 
BIND_PER_PRIMITIVE_SET and adding one color to the array per primitive set. 
However, what I'm seeing is that the triangles just look black - they are 
transparent though - so the color array's transparency seems to be having an 
effect, but the color does not. However, if I rotate the camera around a little 
and position it just right then the color starts taking effect, which makes me 
think that some other branch of the scene graph is setting some state that is 
preventing the color array from being used correctly. Can anyone provide me 
some hints as to what to look for? Here are the states that I am setting on the 
colored geometry:

GL_COLOR_MATERIAL - OFF | PROTECTED
GL_CULL_MODE - OFF | PROTECTED
GL_LIGHTING - OFF | PROTECTED
GL_BLEND - ON | PROTECTED

I'm kind of stumped as to what I'm doing wrong, so any help would be 
appreciated.

Thanks.

Alex
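
For reference, a minimal sketch of the setup described above, using the
2012-era binding API; the colour value is illustrative, and the TRANSPARENT_BIN
hint is an extra detail worth checking, since translucent geometry also needs
to be drawn after the opaque pass:

#include <osg/Geometry>
#include <osg/StateSet>

void setupColoredGeometry(osg::Geometry* geom)
{
    // One RGBA colour per primitive set, as in the message above.
    osg::ref_ptr<osg::Vec4Array> colors = new osg::Vec4Array;
    for (unsigned int i = 0; i < geom->getNumPrimitiveSets(); ++i)
        colors->push_back(osg::Vec4(1.0f, 0.0f, 0.0f, 0.5f)); // translucent red

    geom->setColorArray(colors.get());
    geom->setColorBinding(osg::Geometry::BIND_PER_PRIMITIVE_SET);

    osg::StateSet* ss = geom->getOrCreateStateSet();
    ss->setMode(GL_LIGHTING, osg::StateAttribute::OFF | osg::StateAttribute::PROTECTED);
    ss->setMode(GL_BLEND,    osg::StateAttribute::ON  | osg::StateAttribute::PROTECTED);
    ss->setRenderingHint(osg::StateSet::TRANSPARENT_BIN);
}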


[osg-users] multiple windows question

2012-02-16 Thread Pecoraro, Alexander N
What is the best way to have multiple windows with different views of the same 
scene graph? I took a look at the composite viewer example, but it has 
different views of the same scene graph with a single window. So not quite what 
I want.

Thanks.

Alex
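
A minimal sketch of the usual CompositeViewer answer, with one osgViewer::View,
and hence one window, per view of the shared scene; "cow.osg" is just a
stand-in model:

#include <osgDB/ReadFile>
#include <osgViewer/CompositeViewer>
#include <osgViewer/View>

int main(int, char**)
{
    osg::ref_ptr<osg::Node> scene = osgDB::readNodeFile("cow.osg");

    osgViewer::CompositeViewer viewer;
    for (int i = 0; i < 2; ++i)
    {
        osg::ref_ptr<osgViewer::View> view = new osgViewer::View;
        view->setSceneData(scene.get());
        // Each view gets its own window via setUpViewInWindow().
        view->setUpViewInWindow(50 + i * 700, 50, 640, 480);
        viewer.addView(view.get());
    }
    return viewer.run();
}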


[osg-users] bug in Image::readImageFromCurrentTexture

2012-02-15 Thread Pecoraro, Alexander N
I was trying to use osg::Image::readImageFromCurrentTexture() to read back a 
texture from the GPU. The glGetTexImage() call was failing because it was using 
an invalid pixel format. The pixel format that it uses comes from calling 
osg::Image::computePixelFormat() with the internal format of the texture as 
input. In this case my texture's internal format was GL_RGBA8, for which 
computePixelFormat() incorrectly returns GL_RGBA8. It should return GL_RGBA. I 
also tried GL_RGBA16 and had the same experience. Probably just need to add:

case GL_RGBA8:
case GL_RGBA16:
    return GL_RGBA;

to the switch statement in computePixelFormat() to fix the problem.

Alex


[osg-users] Draw Elements Instanced Shader Help

2011-10-26 Thread Pecoraro, Alexander N
I can't figure out why the shader embedded in the attached file doesn't work 
without the hard coded position and scale variables. I'm trying to write a 
hardware instancing shader that uses glDrawElementsInstancedEXT() to draw 
multiple instances of a box where the position and scale of the box is computed 
on the GPU using uniform arrays indexed by gl_InstanceID. It seems like my 
uniform arrays are not being initialized even though the scene graph has a 
StateSet that contains the Uniform attributes. The only way for me to make it 
work the way I want is to explicitly set the position and scale in the shader 
with some hard coded values, but once I remove that code it stops working.

Any idea what I'm doing wrong?

Thanks.

Alex


osgTile_41x27_0_models_lod-1.osg
Description: osgTile_41x27_0_models_lod-1.osg
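
The attachment is not reproduced in this archive view, so here is a minimal,
hedged sketch of the kind of setup being described; the array size, names, and
values are illustrative, and the follow-up messages below explain why the
osg::Uniform has to be declared as an array:

#include <osg/Program>
#include <osg/Shader>
#include <osg/StateSet>
#include <osg/Uniform>

static const char* instancingVertSrc =
    "#version 120\n"
    "#extension GL_EXT_gpu_shader4 : enable\n" // provides gl_InstanceID
    "uniform vec3  InstancePositions[64];\n"
    "uniform float InstanceScales[64];\n"
    "void main()\n"
    "{\n"
    "    vec4 v = gl_Vertex;\n"
    "    v.xyz = v.xyz * InstanceScales[gl_InstanceID]\n"
    "          + InstancePositions[gl_InstanceID];\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * v;\n"
    "}\n";

void setupInstancing(osg::StateSet* ss)
{
    osg::ref_ptr<osg::Program> program = new osg::Program;
    program->addShader(new osg::Shader(osg::Shader::VERTEX, instancingVertSrc));
    ss->setAttributeAndModes(program.get(), osg::StateAttribute::ON);

    // Declare the uniforms as arrays with an explicit element count.
    osg::ref_ptr<osg::Uniform> positions =
        new osg::Uniform(osg::Uniform::FLOAT_VEC3, "InstancePositions", 64);
    osg::ref_ptr<osg::Uniform> scales =
        new osg::Uniform(osg::Uniform::FLOAT, "InstanceScales", 64);
    for (unsigned int i = 0; i < 64; ++i)
    {
        positions->setElement(i, osg::Vec3(2.0f * i, 0.0f, 0.0f));
        scales->setElement(i, 1.0f);
    }
    ss->addUniform(positions.get());
    ss->addUniform(scales.get());
}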


Re: [osg-users] EXTERNAL: Re: Draw Elements Instanced Shader Help

2011-10-26 Thread Pecoraro, Alexander N
Seems other people have had the same issue as me:

http://forum.openscenegraph.org/viewtopic.php?t=7287

Are there any plans to address the issue in an upcoming release?

Alex



Re: [osg-users] EXTERNAL: Re: Draw Elements Instanced Shader Help

2011-10-26 Thread Pecoraro, Alexander N
I think I figured it out. When it links my program, for some reason it uses the 
names InstancePositions[0] and InstanceScales[0] when it builds its 
_uniformInfoMap in the function linkProgram().  It obtains the names of the 
uniforms that my program uses via glGetActiveUniform() (see the snippet from 
Program.cpp below):

Program::PerContextProgram::linkProgram(osg::State& state)
{
...
    for( GLint i = 0; i < numUniforms; ++i )
    {

        _extensions->glGetActiveUniform( _glProgramHandle,
                i, maxLen, 0, &size, &type, name );

        GLint loc = _extensions->glGetUniformLocation( _glProgramHandle, name );

        if( loc != -1 )
        {
            _uniformInfoMap[Uniform::getNameID(reinterpret_cast<const char*>(name))] = ActiveVarInfo(loc,type,size);

            OSG_INFO << "\tUniform \"" << name << "\""
                     << " loc="<< loc
                     << " size="<< size
                     << " type=" << Uniform::getTypename((Uniform::Type)type)
                     << std::endl;
        }
    }
...
}

The value of name that is returned is InstancePositions[0] and 
InstanceScales[0] - instead of just InstancePositions and InstanceScales 
(without the square brackets) as in my scene graph's StateSet. So when it comes 
time to apply the uniforms it can't find a mapping from my StateSet's Uniforms 
to the Program's list of uniforms because the names differ.

Is that normal behavior for glGetActiveUniform to return array names with square 
brackets appended?

Alex
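
A short sketch of the workaround that usually goes with this finding:
construct the osg::Uniform as an array with an explicit element count so that
OSG registers an array uniform (whether a given OSG release also strips the
"[0]" suffix at link time varies by version; the size 64 is illustrative):

#include <osg/StateSet>
#include <osg/Uniform>

void declareArrayUniform(osg::StateSet* ss)
{
    // Matches the "uniform vec3 InstancePositions[64];" shader declaration.
    osg::ref_ptr<osg::Uniform> positions =
        new osg::Uniform(osg::Uniform::FLOAT_VEC3, "InstancePositions", 64);
    ss->addUniform(positions.get());
}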


From: osg-users-boun...@lists.openscenegraph.org 
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of Paul Martz
Sent: Wednesday, October 26, 2011 1:55 PM
To: OpenSceneGraph Users
Subject: EXTERNAL: Re: [osg-users] Draw Elements Instanced Shader Help

On 10/26/2011 12:49 PM, Pecoraro, Alexander N wrote:
Any idea what I'm doing wrong?

Have you tried looking at the osgdrawinstanced example to see how your code 
differs from functioning code?


--

  -Paul Martz  Skew Matrix Software

   http://www.skew-matrix.com/


[osg-users] external reference to shader file

2011-10-24 Thread Pecoraro, Alexander N
Is it possible to embed an external reference to a shader file with any of the 
osg file formats (.osg, .ive, .osgt, .osgx, etc)?

Looking at the plugin code it appears that the only attribute of the shader 
that gets written is the shader source code, but just wanted to be sure I 
wasn't missing something.

Thanks.

Alex
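
For anyone who ends up loading shaders from external files at runtime instead,
a minimal sketch using osgDB::readShaderFile; the paths are illustrative:

#include <osg/Program>
#include <osg/Shader>
#include <osgDB/ReadFile>

osg::ref_ptr<osg::Program> loadProgramFromFiles()
{
    osg::ref_ptr<osg::Program> program = new osg::Program;
    osg::ref_ptr<osg::Shader> vert =
        osgDB::readShaderFile(osg::Shader::VERTEX, "shaders/example.vert");
    osg::ref_ptr<osg::Shader> frag =
        osgDB::readShaderFile(osg::Shader::FRAGMENT, "shaders/example.frag");
    if (vert.valid()) program->addShader(vert.get());
    if (frag.valid()) program->addShader(frag.get());
    return program;
}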


[osg-users] Fixed function pipeline in OSG 3.0.1

2011-09-14 Thread Pecoraro, Alexander N
In OSG version 3.0.1, if I don't explicitly attach a shader to my scene graph 
does it fall back on the fixed function pipeline or is a fixed function 
pipeline equivalent shader generated for it?

Thanks.

Alex


[osg-users] osgb vs. ive ProxyNode database path

2011-08-02 Thread Pecoraro, Alexander N
I noticed that in the osgb format the database path of the ProxyNode is no 
longer determined at runtime. This is different from the ive format, which, if 
you don't set the database path, determines it using the osgDB::Options' 
database path list. This was nice because it enabled you to use a relative path 
in the file that contains the ProxyNode. Now that I've switched to the osgb 
format, when I create a ProxyNode with a filename of "./NameOfFile.osgb" or 
just "NameOfFile.osgb" it won't load the file even though it is in the same 
directory as the file that contains the ProxyNode.

Is this behavior a bug or on purpose? If it is on purpose then I would say it 
seems a little inconsistent, because the PagedLOD node in the osgb format still 
uses the Options' database path list to set the database path, so you can still 
have relative paths in the PagedLOD nodes, just not in the ProxyNodes.

Thanks.

Alex
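
In the meantime, a hedged workaround sketch: osg::ProxyNode carries an explicit
database path, so a relative reference can be resolved against the containing
file's directory by hand (osgDB::getFilePath is from osgDB/FileNameUtils):

#include <osg/ProxyNode>
#include <osgDB/FileNameUtils>

void anchorProxyPath(osg::ProxyNode* proxy, const std::string& containingFile)
{
    // Resolve "./NameOfFile.osgb"-style references against the directory
    // of the file that contains the ProxyNode.
    proxy->setDatabasePath(osgDB::getFilePath(containingFile));
}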


[osg-users] transform to screen function

2010-08-31 Thread Pecoraro, Alexander N
Is there some easy function on the osg::Camera (or some other osg class)  that 
I can call to compute the 2d screen position of a 3d point? I couldn't find one 
when looking through the documentation, but I figured I would ask just in case 
I am missing it.

Thanks.

Alex


Re: [osg-users] transform to screen function

2010-08-31 Thread Pecoraro, Alexander N
Well that's easy enough:

osg::Matrix MVPW(camera->getViewMatrix() *
                 camera->getProjectionMatrix() *
                 camera->getViewport()->computeWindowMatrix());

osg::Vec3 posIn2D = posIn3D * MVPW;

Thanks.

Alex


-Original Message-
From: osg-users-boun...@lists.openscenegraph.org 
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of Tueller, 
Shayne R Civ USAF AFMC 519 SMXS/MXDEC
Sent: Tuesday, August 31, 2010 1:05 PM
To: OpenSceneGraph Users
Subject: EXTERNAL: Re: [osg-users] transform to screen function

Alex,

The short answer is no. 

However, this question has already been addressed...

http://thread.gmane.org/gmane.comp.graphics.openscenegraph.user/59941/focus=59966

-Shayne



[osg-users] threading mode question

2009-12-17 Thread Pecoraro, Alexander N
Is there a way to set the threading mode so that the draw thread does not run 
until after the update traversal is done?

Thanks.

Alex
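
For reference, a minimal sketch; SingleThreaded and CullDrawThreadPerContext
both hold off the draw traversal until the current frame's update traversal
has finished, at the cost of less frame overlap than DrawThreadPerContext:

#include <osgViewer/Viewer>

void configureThreading(osgViewer::Viewer& viewer)
{
    // Draw will not start until the current frame's update has completed.
    viewer.setThreadingModel(osgViewer::ViewerBase::CullDrawThreadPerContext);
}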


[osg-users] possible bug in osg reader/writer for PagedLOD nodes

2009-08-31 Thread Pecoraro, Alexander N
I think there is a bug in the osg reader/writer for PagedLOD nodes. If you set 
the LOD::CenterMode to USE_BOUNDING_SPHERE_CENTER then it doesn't get written 
to the output .osg file, because the writer ignores its value unless it is set 
to USER_DEFINED_CENTER, in which case it just writes out the user-defined 
center, but not the center mode (see the LOD_writeLocalData() function in the 
osg plugin). This doesn't cause a problem for regular LOD nodes because their 
bounds can be computed from their children, but if you have a PagedLOD node 
that has no children (because its children are loaded by the pager) then it's 
not possible to compute a valid bounding sphere to use (and it won't try to 
anyway, because the center mode defaults to USER_DEFINED_CENTER). This is 
different from the way the ive writer works - when I write my PagedLOD nodes to 
the ive file format and then view them with osgviewer it works fine, but 
when I write to osg the externally referenced files are never paged in because 
it thinks the center of the PagedLOD nodes is (0,0,0).

Alex
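
Until the writer is fixed, a minimal workaround sketch is to give the PagedLOD
an explicit user-defined center and radius before writing, so nothing needs to
be computed from children that are not there:

#include <osg/PagedLOD>

void setExplicitBound(osg::PagedLOD* plod, const osg::Vec3& center, float radius)
{
    plod->setCenterMode(osg::LOD::USER_DEFINED_CENTER);
    plod->setCenter(center);
    plod->setRadius(radius);
}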


[osg-users] Draw time vs. GPU time Stats question

2009-07-29 Thread Pecoraro, Alexander N
Is it fair to say that the Draw time in the OSG stats measures the time needed 
to sort and call the display lists of the Geometry nodes obtained by the Cull 
stage and make the other frame rendering related OpenGL calls? And GPU time is 
the time that the GPU spends rendering the geometry (vertices, textures, etc)?

Thanks.

Alex


[osg-users] osgDB::readNodeFile Thread Safe?

2009-07-17 Thread Pecoraro, Alexander N
Is the function osgDB::readNodeFile thread safe?

I seem to remember in previous versions of the OSG API that the 
osgDB::DatabasePager would use a mutex to prevent threading issues when calling 
readNodeFile(), but in 2.8.0 it doesn't seem to do that anymore.

Just wondering because I'm working on some conversion software that uses 
multiple threads and needs to call readNodeFile quite often.

Thanks.

Alex


[osg-users] Simplifier for Texture Images

2009-07-02 Thread Pecoraro, Alexander N
Is there something similar to the polygon simplifier (osgUtil::Simplifier) but 
for texture images? I'm using the simplifier to generate a reduced version of 
some geometry, but I would also like it to have that geometry reference lower 
resolution versions of the textures.

Thanks.

Alex


Re: [osg-users] Simplifier for Texture Images

2009-07-02 Thread Pecoraro, Alexander N
Oh nice, I think perhaps the scaleImage() function of the Texture2D class will 
work for me.

Alex
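
One note for archive readers: scaleImage() actually lives on osg::Image rather
than osg::Texture2D. A minimal sketch of halving a texture's resolution in
place:

#include <osg/Image>
#include <osg/Math>
#include <osg/Texture2D>

void halveTextureResolution(osg::Texture2D* texture)
{
    osg::Image* image = texture->getImage();
    if (!image) return;

    // Rescale in place to half the original dimensions (minimum one texel).
    int s = osg::maximum(image->s() / 2, 1);
    int t = osg::maximum(image->t() / 2, 1);
    image->scaleImage(s, t, image->r());
}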


From: osg-users-boun...@lists.openscenegraph.org 
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of Pecoraro, 
Alexander N
Sent: Thursday, July 02, 2009 6:10 PM
To: 'OpenSceneGraph Users'
Subject: [osg-users] Simplifier for Texture Images

Is there something similar to the polygon simplifier (osgUtil::Simplifier) but 
for texture images? I'm using the simplifier to generate a reduced version of 
some geometry, but I would also like it to have that geometry reference lower 
resolution versions of the textures.

Thanks.

Alex


Re: [osg-users] How to test for anti-alias support

2009-06-19 Thread Pecoraro, Alexander N
I'm running on Redhat Linux:

$ uname -a
Linux cavs 2.6.18-92.1.22.el5PAE #1 SMP Fri Dec 5 09:58:49 EST 2008 i686 i686 
i386 GNU/Linux

My video card is an NVIDIA GeForce 9600 GT, here is some driver information 
from my X log:

NVIDIA GLX Module  173.14.12  Thu Jul 17 18:36:35 PDT 2008
NVIDIA dlloader X Driver  173.14.12  Thu Jul 17 18:15:54 PDT 2008
NVIDIA Unified Driver for all Supported NVIDIA GPUs

Here is the rest of the NVidia related output from my X log (just in case it is 
useful):

(--) Chipset NVIDIA GPU found
(II) Module wfb: vendor=NVIDIA Corporation
(**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32
(==) NVIDIA(0): RGB weight 888
(==) NVIDIA(0): Default visual is TrueColor
(==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0)
(**) NVIDIA(0): Enabling RENDER acceleration
(II) NVIDIA(0): Support for GLX with the Damage and Composite X extension is
(II) NVIDIA(0): enabled.
(II) NVIDIA(0): NVIDIA GPU GeForce 9600 GT (G94) at PCI:8:0:0 (GPU-0)
(--) NVIDIA(0): Memory: 524288 kBytes
(--) NVIDIA(0): VideoBIOS: 62.94.11.00.02
(II) NVIDIA(0): Detected PCI Express Link width: 16X
(--) NVIDIA(0): Interlaced video modes are supported on this GPU
(--) NVIDIA(0): Connected display device(s) on GeForce 9600 GT at PCI:8:0:0:
(--) NVIDIA(0): DELL 1907FP (CRT-0)
(--) NVIDIA(0): DELL 1907FP (CRT-0): 400.0 MHz maximum pixel clock
(II) NVIDIA(0): Assigned Display Device: CRT-0
(II) NVIDIA(0): Validated modes:
(II) NVIDIA(0): 1280x1024
(II) NVIDIA(0): 1280x960
(II) NVIDIA(0): 1280x800
(II) NVIDIA(0): 1152x864
(II) NVIDIA(0): 1024x768
(II) NVIDIA(0): 800x600
(II) NVIDIA(0): 800x600
(II) NVIDIA(0): 640x480
(II) NVIDIA(0): 640x480
(II) NVIDIA(0): Virtual screen size determined to be 1280 x 1024
(--) NVIDIA(0): DPI set to (85, 86); computed from UseEdidDpi X config
(--) NVIDIA(0): option
(==) NVIDIA(0): Disabling 32-bit ARGB GLX visuals.
(II) NVIDIA(0): Initialized GPU GART.
(II) NVIDIA(0): Setting mode 1280x1024
(II) Loading extension NV-GLX
(II) NVIDIA(0): NVIDIA 3D Acceleration Architecture Initialized
(II) NVIDIA(0): Using the NVIDIA 2D acceleration architecture
(==) NVIDIA(0): Backing store disabled
(==) NVIDIA(0): Silken mouse enabled
(**) NVIDIA(0): DPMS enabled
(II) Loading extension NV-CONTROL

Alex

-Original Message-
From: osg-users-boun...@lists.openscenegraph.org 
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of Robert Osfield
Sent: Friday, June 19, 2009 1:15 AM
To: OpenSceneGraph Users
Subject: Re: [osg-users] How to test for anti-alias support

Hi Alex,

What OS, Hardware, drivers are you using?

Robert.


[osg-users] How to test for anti-alias support

2009-06-18 Thread Pecoraro, Alexander N
Is there a different or better way to test for anti-aliasing support than to 
just call osg::DisplaySettings::instance()->setNumMultiSamples() repeatedly 
with smaller and smaller values? That approach doesn't seem to work for me 
anyway (so hopefully the answer is yes). I tried to have it set the number of 
multisamples and then call realize() on the viewer, and if it failed to realize 
I have it try a smaller value for the number of samples. What ends up happening 
is that it starts at 8, fails, then tries 4, which I know is supported by my 
video card so it should work - and it appears to work, but the window opens and 
it is just black and nothing seems to render to it. So I'm wondering if my way 
of checking for anti-aliasing support is wrong. Here is the debug output from 
my attempts to make this work:

Setting anti-aliasing samples to: 8
GraphicsContext::registerGraphicsContext 0x8e4ff20
Relaxing traits
Error: Not able to create requested visual.
close(1)0x8e4ff20
close(0)0x8e4ff20
GraphicsContext::unregisterGraphicsContext 0x8e4ff20
Viewer::realize() - No valid contexts found, setting up view across all screens.
GraphicsContext::getWindowingSystemInterface() 0x8e4c030   0x102ee20
GraphicsContext::registerGraphicsContext 0x8e51e28
Relaxing traits
Error: Not able to create requested visual.
close(1)0x8e51e28
close(0)0x8e51e28
GraphicsContext::unregisterGraphicsContext 0x8e51e28
  GraphicsWindow has not been created successfully.
Viewer::realize() - failed to set up any windows

Trying anti aliasing samples at 4
Viewer::realize() - No valid contexts found, setting up view across all screens.
GraphicsContext::getWindowingSystemInterface() 0x8e4c030   0x102ee20
GraphicsContext::registerGraphicsContext 0x8e51e28
GraphicsContext::getWindowingSystemInterface() 0x8e4c030   0x102ee20
GraphicsContext::createNewContextID() creating contextID=0
Updating the MaxNumberOfGraphicsContexts to 1
  GraphicsWindow has been created successfully.
X window successfully opened

So even though it appears that the GraphicsWindow has been created 
successfully, my app is not able to render anything into the window. The funny 
thing is that if I start with a multi-samples value of 4 then everything works 
fine. It's only if I first test realize() with a multi-samples value that is 
not supported by my card that subsequent valid multi-samples values stop 
working.

Any advice for how to fix this?

Thanks.

Alex
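
A hedged alternative sketch: probe each sample count with a small throwaway
pbuffer context via GraphicsContext::Traits instead of realizing the whole
viewer, so a failed attempt leaves no half-initialized window behind. Whether
this sidesteps the stale-state problem described above is untested:

#include <osg/GraphicsContext>
#include <osg/ref_ptr>

unsigned int findSupportedSamples()
{
    const unsigned int candidates[] = { 8, 4, 2, 0 };
    for (unsigned int i = 0; i < sizeof(candidates)/sizeof(candidates[0]); ++i)
    {
        osg::ref_ptr<osg::GraphicsContext::Traits> traits =
            new osg::GraphicsContext::Traits;
        traits->x = 0; traits->y = 0;
        traits->width = 1; traits->height = 1;
        traits->windowDecoration = false;
        traits->pbuffer = true; // offscreen probe, no visible window
        traits->samples = candidates[i];

        osg::ref_ptr<osg::GraphicsContext> gc =
            osg::GraphicsContext::createGraphicsContext(traits.get());
        if (gc.valid() && gc->valid()) return candidates[i];
    }
    return 0; // no multisampled context could be created
}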


[osg-users] OpenGL Overlays

2009-03-26 Thread Pecoraro, Alexander N
Is there any support for the use of OpenGL overlays in OSG?

 

Thanks.

 

Alex

 

 



[osg-users] how to use osgText to make sign

2009-03-11 Thread Pecoraro, Alexander N
I was wondering if there was an easy way (i.e. some osgText function) to
make a backdrop quad that sits behind an osgText in 3d world space in
order to make something like a road sign with the word STOP on it. I
looked through the osgText documentation, but didn't see anything
obvious, but I thought I would ask just in case I missed something.

 

If there is not an easy way to add a background quad I was going to
combine an osgText::Text node with an osg::Billboard node that contains
a geometry node that draws a quad whose vertices are determined from the
bounding box of the osgText. I was hoping that if I used the osgText's
SCREEN axis alignment mode and the Billboard's POINT_ROT_EYE mode that
they would rotate in the same plane and that if I used the PolygonOffset
attribute on the Billboard then it would stay behind the text. Does that
sound valid?

 

Thanks.

 

Alex 
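
A minimal sketch of the backdrop-quad idea from the message above, sizing the
quad from the text's bounding box. Whether SCREEN-aligned text and
POINT_ROT_EYE rotate in exactly the same plane is the open question here, so
treat this as a starting point (getBoundingBox() is the recent-OSG name; older
releases expose the box via getBound()):

#include <osg/Billboard>
#include <osg/Geometry>
#include <osg/PolygonOffset>
#include <osgText/Text>

osg::ref_ptr<osg::Billboard> makeBackdrop(osgText::Text* text)
{
    const osg::BoundingBox bb = text->getBoundingBox();

    osg::ref_ptr<osg::Vec3Array> verts = new osg::Vec3Array;
    verts->push_back(osg::Vec3(bb.xMin(), bb.yMin(), bb.zMin()));
    verts->push_back(osg::Vec3(bb.xMax(), bb.yMin(), bb.zMin()));
    verts->push_back(osg::Vec3(bb.xMax(), bb.yMax(), bb.zMin()));
    verts->push_back(osg::Vec3(bb.xMin(), bb.yMax(), bb.zMin()));

    osg::ref_ptr<osg::Geometry> quad = new osg::Geometry;
    quad->setVertexArray(verts.get());
    quad->addPrimitiveSet(new osg::DrawArrays(GL_QUADS, 0, 4));

    // Push the quad back slightly so it stays behind the glyphs.
    quad->getOrCreateStateSet()->setAttributeAndModes(
        new osg::PolygonOffset(1.0f, 1.0f), osg::StateAttribute::ON);

    osg::ref_ptr<osg::Billboard> billboard = new osg::Billboard;
    billboard->setMode(osg::Billboard::POINT_ROT_EYE);
    billboard->addDrawable(quad.get());
    return billboard;
}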



Re: [osg-users] Compile Context Background Thread Question

2009-02-18 Thread Pecoraro, Alexander N
So when OSG_COMPILE_CONTEXTS=ON, a background thread traverses the
scene graph and compiles the un-compiled display lists using a different
OpenGL context than the render thread's, which saves the render
thread from having to do the display list compilation. Is that correct?

Thanks.

Alex

-Original Message-
From: osg-users-boun...@lists.openscenegraph.org
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of Robert
Osfield
Sent: Wednesday, February 18, 2009 12:59 AM
To: OpenSceneGraph Users
Subject: Re: [osg-users] Compile Context Background Thread Question

Hi Alex,

One would typically use a compile context in conjunction with the
database pager, but it can be used for other purposes.  It's a feature
that is a bit bleeding edge on some platforms though, with OpenGL
drivers simply not coping with multiple contexts sharing the same GL
objects and running multi-threaded.

Robert.



Re: [osg-users] Draw threads serialized by default?

2009-02-18 Thread Pecoraro, Alexander N
I think it would be nice if the processor chosen for the draw thread by
the osgViewer was somehow configurable instead of it just defaulting to
starting at processor number 1 and going up from there. I, like Todd,
seem to have found that running the draw thread on my second processor
(on any of the four cores on my second processor) produces better
performance than running it on any of the cores of my first processor. I
can't explain why I get better performance on my second processor, but
the only way I was able to make the draw thread run on my second
processor was by modifying the osgViewer::startThreading() function
because I found that calling the draw thread's setProcessorAffinity()
function had no effect after the thread started running.

Perhaps something for 2.8.1?

Alex

-Original Message-
From: osg-users-boun...@lists.openscenegraph.org
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of Robert
Osfield
Sent: Monday, September 01, 2008 8:34 AM
To: OpenSceneGraph Users
Subject: Re: [osg-users] Draw threads serialized by default?

HI Todd.

osgViewer already sets the affinity, and indeed this makes a big
difference to performance when running multi-threaded,
multi-context/multi-gpu work.  The draw dispatch serialization that
osgViewer::Renderer does on top of this makes even more difference on
the PCs I've tested.  I would guess that a decent multi-processing
architecture like the Onyx would scale much better; it might be that
some very high-end PC hardware set ups would also scale much better (the
AMD 4x4 motherboards spring to mind as a potential candidate for this
better scaling).

Robert.

On Mon, Sep 1, 2008 at 4:22 PM, Todd J. Furlong t...@inv3rsion.com
wrote:
 Robert,

 The post of yours that Paul linked to sounds very similar to something we
 saw with VR Juggler & OSG a while back: terrible performance with OSG apps
 that had parallel draw threads.  In our case, VR Juggler manages threading,
 but the same may apply to OSG with osgViewer.

 For us, it turned out that we had to set the threads' affinity to lock
them
 to a particular CPU/core.  The Linux scheduler moved the threads
around and
 thrashed the cache, I believe.  Setting the affinity boosted the
parallel
 draw performance back up.

 The solution we ended up with is twofold:
 1. Add a default behavior that sequentially locks draw threads to CPU
cores
 (0,1,2,etc.  repeat)
 2. Use an environment variable to override the default behavior
 (VJ_DRAW_THREAD_AFFINITY, a space-delimited list of integers).

 The default behavior is good for most users, but we can squeeze out a
 little more performance by tweaking the environment variable.  For a system
 with two draw threads and two dual-core CPUs, the default behavior locks
 the draw threads to CPUs 0 & 1, but we get slightly better performance if
 we set VJ_DRAW_THREAD_AFFINITY="2 3".

 Regards,
 Todd

 Robert Osfield wrote:

 Hi Paul,

 On Sat, Aug 30, 2008 at 10:19 PM, Paul Martz pma...@skew-matrix.com
 wrote:

 Hi Robert -- Prior to the 2.2 release, code was added to serialize
the
 draw
 dispatch. Is there a reason that this behavior defaults to ON? (See
 DisplaySettings.cpp line 135.) I have somehow incorrectly documented
this
 as
 defaulting to OFF in the ref man. Now that I see it's ON by default,
I
 half
 wonder if this is a bug. Wanted to check with you: should I change
the
 documentation, or the code? Which is right?

 The setting has been ON since I introduced the option to serialize
 the draw dispatch.

 Just before the 2.6 release I did testing at my end and still found
 serializing the draw dispatch to be far more efficient on my
 Linux/NVidia drivers, so I left the option on.

 In the original thread when I introduced the optional draw mutex into
 the draw dispatch I did call for testing on the performance impact
but
 I didn't get sufficient feedback to make a more informed decision
than
 just basing it on my own testing.  I would still appreciate more
 testing, as I'd expect that best default setting to vary on different
 hardware and drivers - I for one would love to see better scalability
 in driver/hardware.

 Robert.


[osg-users] Compile Context Background Thread Question

2009-02-17 Thread Pecoraro, Alexander N
I noticed that there is an environment variable
OSG_COMPILE_CONTEXTS=OFF/ON that "enables/disables the use of background
compile contexts and threads". I was wondering if this particular
functionality is only used in conjunction with the database pager. In
other words, if my database does not have any PagedLOD nodes then will
enabling this functionality have no effect?


Thanks.

 

Alex



Re: [osg-users] Cull time doubled?

2009-02-12 Thread Pecoraro, Alexander N
To get my viewer to work in multithreaded mode I had to make sure that all 
updates to the scene graph were happening in an update callback. I also had to 
set all my osgText objects to use draw callbacks and put mutexes on the 
strings that were being updated, to prevent simultaneous access by both the 
update callback thread and the render thread.

 

Alex 

 

 



From: osg-users-boun...@lists.openscenegraph.org 
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of Simon Loic
Sent: Thursday, February 12, 2009 3:57 AM
To: OpenSceneGraph Users
Subject: Re: [osg-users] Cull time doubled?

 

Hi all, I hope I'm not hijacking this thread too much. I just wanted the same 
kind of behaviour as Alex, hoping it could help, so let me know if I should 
start a new thread.

Robert,
I checked for the Atomic build and I got the same include/OpenThreads/Config 
as yours.
Meanwhile, when I switch the threading model (pressing 'm') I don't see any 
notable improvement.

I also tried to optimize my scene graph using the osgUtil::Optimizer with 
different options, and I don't get an improvement either. I tried the 
following options:
-- osgUtil::Optimizer::FLATTEN_STATIC_TRANSFORMS_DUPLICATING_SHARED_SUBGRAPHS 
(to rule out the problem of PATs you mentioned previously)
-- osgUtil::Optimizer::STATIC_OBJECT_DETECTION
-- osgUtil::Optimizer::SPATIALIZE_GROUPS (to get an octree structure)
-- osgUtil::Optimizer::SHARE_DUPLICATE_STATE (to handle duplicated statesets)
-- osgUtil::Optimizer::COPY_SHARED_NODES

Here are my stats. Tell me if you think they are not normal, considering that 
my graphics card is not a brand new one (an NVIDIA Quadro FX 550) and the 
complexity of the 3D scene.
Frame Rate : 8.5
Threading model: SingleThreaded
Event/Update/Cull/Draw/GPU:0.1/0.06/30/23/115
Vertices: 4.5M
Drawable:1382
Matrices:1382
triangle strips: 2.5M
polygons: 3k

These are the stats I get in my own osg app. Note that the threading model is 
SingleThreaded because so far I get a segmentation fault when switching it in 
this app. But this is not the case in osgviewer.

Here are the stats for the same model in osgviewer:
Frame Rate : 7.5
Threading model: DrawThreadPerContext
Event/Update/Cull/Draw/GPU:0.1/0.06/42/86/117
Vertices: 4.5M
Drawable:1382
Matrices:1382
triangle strips: 2.5M
polygons: 3k

By the way, I've also tried using small feature culling. I know it's enabled by 
default, but there is a parameter whose default value I don't know:
SmallFeatureCullingPixelSize. What would be a reasonable value for it?



On Thu, Feb 12, 2009 at 10:25 AM, Robert Osfield robert.osfi...@gmail.com 
wrote:

Hi Alex,

You would be best served in your investigation by attaching the
osgViewer::StatsHandler to your viewer.  See the osgviewer.cpp example
code to see how.  This event handler will give you some pretty useful
on screen stats.

With the DrawThreadPerContext threading model what you should get is
the update + cull overlapping with the previous draw dispatch/gpu.
What I have seen is that if the processor is overly contended then the
cull and draw times suffer.  Processor affinity is set by
osgViewer, which should avoid this contention.

The other thing to look at is the DataVariance; in 2.x it's by default
using a value UNSPECIFIED, which means you haven't set it yet.  For
your StateSets and Drawables, make sure that if they don't contain
any dynamic data they are set to STATIC, and if they contain
dynamic data make sure it's set to DYNAMIC.   The more STATIC you have
the more the frames will be able to overlap. The Optimizer has a
visitor that can help set the DataVariance to either STATIC or
DYNAMIC, but will only override it if the value is currently
UNSPECIFIED.

Robert.


Re: [osg-users] Cull time doubled?

2009-02-12 Thread Pecoraro, Alexander N
Oh yeah, I forgot that my original solution was to set the osgText objects to 
DYNAMIC, but when I did that I lost all frame overlap - the cull and draw were 
happening as if it was running in single-threaded mode. Not really sure why 
this happened (perhaps something to do with where the osgText objects were 
attached to the scene), but then I went with my current solution where I update 
the text via a mutex and use trylock in the draw callback to prevent blocking 
it. It seems to work fine - I don't notice any difference in performance with 
or without my osgText on the screen.

But yea, Simon, the easiest way to make the osgText thread safe is to use the 
DYNAMIC data variance setting.

Alex

-Original Message-
From: osg-users-boun...@lists.openscenegraph.org 
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of Robert Osfield
Sent: Thursday, February 12, 2009 11:31 AM
To: OpenSceneGraph Users
Subject: Re: [osg-users] Cull time doubled?

Hi Alex,

You should indeed just do updates in the update traversal unless
you've specifically handling the buffering in case of viewers with
multiple cameras running.  You shouldn't need to put in mutexes in
osgText, all you should need to do is set the DataVariance to DYNAMIC.
 Making sure all dynamically update StateSet and Drawables (ilke
osgText::Text) have their DataVariance set to DYNAMIC tells the
rendering threads when it's safe to let the next frame advance.

Robert.
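
A minimal sketch of that advice:

#include <osgText/Text>

void markTextDynamic(osgText::Text* text)
{
    // Tells the draw threads this drawable may change each frame, so the
    // next frame's update will not overlap an in-flight draw of it.
    text->setDataVariance(osg::Object::DYNAMIC);
}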

Re: [osg-users] Cull time doubled?

2009-02-11 Thread Pecoraro, Alexander N
Hi again,

 Might I suggest you examine the actual frame rates you get once again
now that the atomic ref counts are in place.

Here are some performance metrics that I get when running with atomic
reference counting in OSG 2.6 (these don't correspond to the numbers in
my previous email, which were from osgviewer, whereas these numbers come
from my OSG based application, which is doing a little more work than
osgviewer during the update stage - the scene graph is the same as
before, but the viewpoint is different):

OSG 2.6 Frame Rate/Update/Cull/Draw/GPU: 29/1/21/34/25
OSG 1.2 Frame Rate/Update/Cull/Draw/GPU: 27/1/ 9/27/25

I can get a faster frame rate in 2.6 because the frame rate is tied to
the draw time only (due to DrawThreadPerContext functionality), whereas
in 1.2 the cull and draw time are the biggest contributors to the frame
rate.

If I could somehow get my 2.6 draw time to be the same as the 1.2 draw
time then I could get my frame rate up to 36 - 37. 

 One does need to make sure that your scene graph is set up with
STATIC + DYNAMIC DataVariance correctly to allow the frames to overlap
the most without endangering thread safety.

I was actually wondering about this. Is the fact that my cull and draw
times improve to 16 and 31 when I run single threaded in 2.6 possibly
indicative of my data variance settings preventing me from obtaining
optimal frame overlap?

Is the basic rule for setting the data variance on a node that if any of
the values on the Node is going to change at any time during runtime
then it should be set to DYNAMIC?

Thanks.

Alex

-Original Message-
From: osg-users-boun...@lists.openscenegraph.org
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of Robert
Osfield
Sent: Wednesday, February 11, 2009 12:53 AM
To: OpenSceneGraph Users
Subject: Re: [osg-users] Cull time doubled?

Hi Alex,

Good to hear that the settings worked in getting your build working
with atomic ref counts.

W.r.t. performance, even atomic ref counting is slower than using no
thread safety on ref counts, and no ref counting at all is faster still.
One difference between OSG 1.x and 2.x is that
2.x provides extra threading models, including ones that overlap the
draw with the update + cull traversals of the next frame.  One of the
consequences of this threading is that the rendering back end has to
use thread-safe ref counting, so while you gain performance from
the better threading model, you lose a little from the extra cost of
the ref counting.

Even in your own tests the actual frame rate was shown to be the same
or higher when using the new threading model than in 1.2, even though
your cull and draw were way more expensive due to the lack of atomic
ref counts.  Might I suggest you examine the actual frame rates you
get once again now that the atomic ref counts are in place.

A lot has been written about the various threading models in 2.x over
the last two years so have a look through the osg-users archives and
search for items like DrawThreadPerContext and
CullThreadPerCameraDrawThreadPerContext.

FYI, with most of my models I get better performance, often significantly
better, when using the new threading models; one does need to make sure
that your scene graph is set up with STATIC + DYNAMIC DataVariance
correctly to allow the frames to overlap the most without endangering
thread safety.

Robert.


Re: [osg-users] Cull time doubled?

2009-02-11 Thread Pecoraro, Alexander N
Oh and by the way, for comparison's sake - the numbers in the previous email 
were from after I got atomic reference counting working. Before I got it
working I had these numbers:

Frame Rate/Update/Cull/Draw/GPU: 26/1/31/38/25

So it did make a difference to get atomic ref/unref working.

Alex


-Original Message-
From: osg-users-boun...@lists.openscenegraph.org
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of
Pecoraro, Alexander N
Sent: Wednesday, February 11, 2009 4:18 PM
To: OpenSceneGraph Users
Subject: Re: [osg-users] Cull time doubled?

Hi again,

 Might I suggest you examine the actual frame rates you get once again
now that the atomic ref counts are in place.

Here are some performance metrics that I get when running with atomic
reference counting in OSG 2.6 (these don't correspond to the numbers in
my previous email, which were from osgviewer, whereas these numbers come
from my OSG based application, which is doing a little more work than
osgviewer during the update stage - the scene graph is the same as
before, but the viewpoint is different):

OSG 2.6 Frame Rate/Update/Cull/Draw/GPU: 29/1/21/34/25
OSG 1.2 Frame Rate/Update/Cull/Draw/GPU: 27/1/ 9/27/25

I can get a faster frame rate in 2.6 because the frame rate is tied to
the draw time only (due to DrawThreadPerContext functionality), whereas
in 1.2 the cull and draw time are the biggest contributors to the frame
rate.

If I could somehow get my 2.6 draw time to be the same as the 1.2 draw
time then I could get my frame rate up to 36 - 37. 

 One does need to make sure that your scene graph is set up with
STATIC + DYNAMIC DataVariance correctly to allow the frames to overlap
the most without endangering thread safety.

I was actually wondering about this. Is the fact that my cull and draw
times improve to 16 and 31 when I run single threaded in 2.6 possibly
indicative of my data variance settings preventing me from obtaining
optimal frame overlap?

Is the basic rule for setting the data variance on a node is if any of
the values on the Node is going to change at any time during runtime
then it should be set to DYNAMIC?

Thanks.

Alex

-Original Message-
From: osg-users-boun...@lists.openscenegraph.org
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of Robert
Osfield
Sent: Wednesday, February 11, 2009 12:53 AM
To: OpenSceneGraph Users
Subject: Re: [osg-users] Cull time doubled?

Hi Alex,

Good to hear that the settings worked in getting your build working
with atomic ref counts.

W.r.t performance, even atmoic ref counting is faster than using no
thread safety on ref counts, and no ref counting is faster than thread
unsafe ref counting.  Once difference between OSG 1.x and 2.x is that
2.x provides extra threading models, including ones that overlap the
draw with the update + cull traversals of the next frame.  One of
consequences of this threading is that the rendering back end has to
use thread safe ref counting, so you while you gain performance from
the better threading model, you loose a little from the extra cost of
the ref counting.

Even in your own tests the actual frame rate was shown to be the same
or higher when using the new threading model than in 1.2, even though
your cull and draw were way more expensive due to the lack of atomic
ref counts.  Might I suggest you examine the actual frame rates you
get once again now that the atomic ref counts are in place.

A lot has been written about the various threading models in 2.x over
the last two years so have a look through the osg-users archives and
search for items like DrawThreadPerContext,
CullThreadPerCameraDrawTheadPerContext.

FYI, Most of my models I get better performance, often significantly
better when using the new threading models, one does need to make sure
that your scene graph is set up with STATIC + DYNAMIC DataVariance
correctly to allow the frames to overlap the most without endangering
thread safety.

Robert.

On Wed, Feb 11, 2009 at 1:03 AM, Pecoraro, Alexander N
alexander.n.pecor...@lmco.com wrote:
 I followed the instructions in the previous email and I was able to
get
 the 2.6.1 API to build with atomic ref counting on my Enterprise
Redhat
 box. This change caused a 33% improvement in my culling time and an 8%
 improvement in my draw time when in cull thread per context mode. In
 single threaded mode it made a similar amount of improvement in both
 cull and draw.

 This improvement is definitely nice so thanks for the help, but I am
 still confused as to how the 1.2 API is still able to perform about
%12
 - %15 better than 2.6.1 even with atomic ref counting used. Although,
I
 will say that I only see this on certain databases - I have another
 OpenFlight database that essentially gets the same level of
performance
 on both versions of the API.

 Has anyone else noticed a difference in performance between 1.2 and
2.6
 on any of their old databases?

 Are there other settings (build or runtime) that I can use to improve
 performance?

Re: [osg-users] Cull time doubled?

2009-02-10 Thread Pecoraro, Alexander N
I followed the instructions in the previous email and I was able to get
the 2.6.1 API to build with atomic ref counting on my Enterprise Redhat
box. This change caused a 33% improvement in my culling time and an 8%
improvement in my draw time when in cull thread per context mode. In
single threaded mode it made a similar amount of improvement in both
cull and draw. 

This improvement is definitely nice so thanks for the help, but I am
still confused as to how the 1.2 API is still able to perform about 12%
- 15% better than 2.6.1 even with atomic ref counting used. Although, I
will say that I only see this on certain databases - I have another
OpenFlight database that essentially gets the same level of performance
on both versions of the API.

Has anyone else noticed a difference in performance between 1.2 and 2.6
on any of their old databases?

Are there other settings (build or runtime) that I can use to improve
performance?

Are there different default settings or perhaps increased error catching
that was added in 2.X that could account for the difference that I am
seeing?

Thanks.

Alex

-Original Message-
From: osg-users-boun...@lists.openscenegraph.org
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of Robert
Osfield
Sent: Tuesday, February 10, 2009 10:35 AM
To: OpenSceneGraph Users
Subject: Re: [osg-users] Cull time doubled?

On Tue, Feb 10, 2009 at 4:39 PM, Jason Daly jd...@ist.ucf.edu wrote:
 On RHEL 5, you have to *explicitly* set CXXFLAGS to -march=i486 or
higher
 *before* running CMake.  For some reason, the default configuration
will
 evaluate to using mutexes, even if your CPU supports the GCC builtins.

In long hand I think Jason is suggesting something like:


# remove the previous CMakeCache.txt to force a full reconfigure
rm CMakeCache.txt

# set CXXFLAGS to tell cmake that you plan to use a specific architecture
export CXXFLAGS=-march=i686

# call the ./configure script, or its equivalent:
#   cmake . -DCMAKE_BUILD_TYPE=Release
./configure

# then run a parallel build to use all those loverly cores that
# modern machines have :-)
make -j 4


Could you let us know how you get on with this recipe.

Robert.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.or
g
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Cull time doubled?

2009-02-10 Thread Pecoraro, Alexander N
If you want to know in detail what each optimization the osgUtil::Optimizer 
does, you'll probably have to read the code, but the doxygen documentation has 
some info (scroll down to the Classes section): 

 

http://www.openscenegraph.org/documentation/OpenSceneGraphReferenceDocs/a01526.html
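
For example, running just a chosen subset of passes makes it easier to see
which one changes your graph (a sketch assuming the 2.x API; sceneRoot is a
placeholder):

#include <osgUtil/Optimizer>

void runSelectedPasses(osg::Node* sceneRoot)
{
    osgUtil::Optimizer optimizer;
    // OR together only the passes you want, rather than DEFAULT_OPTIMIZATIONS:
    optimizer.optimize(sceneRoot,
                       osgUtil::Optimizer::FLATTEN_STATIC_TRANSFORMS |
                       osgUtil::Optimizer::REMOVE_REDUNDANT_NODES |
                       osgUtil::Optimizer::MERGE_GEOMETRY);
}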

 

Alex

 

 

 



From: osg-users-boun...@lists.openscenegraph.org 
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of Simon Loic
Sent: Tuesday, February 10, 2009 2:20 PM
To: OpenSceneGraph Users
Subject: Re: [osg-users] Cull time doubled?

 

One thing I would like to try before balancing the scene graph is to use 
osgUtil::Optimizer in order to diagnose the problem with my scene graph. Do you 
think this makes sense? If yes, can someone quickly explain the effect of the 
different optimizer options, or just point me to a document which does so? 

Thanks a lot.

On Tue, Feb 10, 2009 at 10:46 PM, Simon Loic simon1l...@gmail.com wrote:

I'm quite interested in getting a more balanced scene graph. I 
will take the time to think about it (and probably ask for help then).

Concerning _OPENTHREADS_ATOMIC_USE_GCC_BUILTINS, I didn't get how to 
generate the include/OpenThreads/Config file. Anyway, I don't see how 
it can come into play either, as I'm profiling with the SingleThreaded option and the 
results are bad even there.

regards,

 

On Tue, Feb 10, 2009 at 5:32 PM, Robert Osfield robert.osfi...@gmail.com 
wrote:

Hi Simon,

Like Alex I recommend that you have a look at whether your build is
using atomic ref counts.

Second up, your explanation of your scene graph suggests to me that it is
very poorly balanced.  Your cull/draw times are all very long; even for
complex scenes I would expect cull and draw times 1/10th of these.

Lots of PAT's all under a single Group node is really badly balanced -
you should try to create a quad tree structure as this improves cull
performance significantly, as whole subgraphs containing thousands of
nodes can be culled in a single op.  Also see if you can use a design
that doesn't rely heavily on PAT's, as each transform in your scene
requires the view frustum to be transformed into the local coords of
the PAT, which is a relatively expensive operation.
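
For example, the Optimizer can do the regrouping for you (a sketch; root is
a placeholder):

#include <osgUtil/Optimizer>

void balanceSceneGraph(osg::Group* root)
{
    // SPATIALIZE_GROUPS rebuilds oversized flat Groups into a spatially
    // localised hierarchy, so whole subgraphs can be culled in a single op.
    osgUtil::Optimizer optimizer;
    optimizer.optimize(root, osgUtil::Optimizer::SPATIALIZE_GROUPS);
}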

Robert.


On Tue, Feb 10, 2009 at 8:50 AM, Simon Loic simon1l...@gmail.com wrote:

 I think I have a problem similar to Alex's.
 The stats I get follow :

 MultiThreaded/SingleThreaded
 Cull Time:  40/33
 Draw Time:  55/52
 GPU Time:   85/85

 I also attached a callgrind output (you can use kcachegrind to analyse it)
 where it seems that the osgUtil::SceneView::cull() costs 70% of
 osgViewer::Renderer::cull_draw() while osgUtil::SceneView::draw() counts for
 only 30%!! However I'm not much aware of how osgViewer works, yet I
 find this result weird in comparison with the stats results.

 In my case I have a scene graph composed of many nodes (I don't think I can
 post the .osg, which is 16MB). The structure is not complex: I have a
 shadow scene as root with a group as its child (the real root of the graph,
 in a way). Then the root is connected to plenty of PositionAttitudeTransforms,
 each having one child which is a geode.

 In fact the geodes are quite complex polygonal meshes. So I would expect that
 the draw part is more costly than the cull part.
 I hope this can  help to see what is wrong for Alex and for me.


 On Tue, Feb 10, 2009 at 7:54 AM, J.P. Delport jpdelp...@csir.co.za wrote:

 Hi,

 Mathias Fröhlich wrote:

 Hi,

 On Monday 09 February 2009 23:54, Pecoraro, Alexander N wrote:

 On Linux this translated to -O3 -DNDEBUG (at least that's what
 cmake-gui says is defined for the CMAKE_CXX_FLAGS_RELEASE variable).

 Maybe I know cmake too little, but the only way to really make sure I got
 cmake correct was to set CMAKE_VERBOSE_MAKEFILE=TRUE and see how the build
 scripts really call the compiler.

 you can also run 'make <other options> VERBOSE=1'
 on the normal cmake generated Makefile.

 jp


 Robert,
 I know that it is a matter of taste if one sets this flag. But maybe it
 will help people to catch unoptimized builds due to the possible
 opacity of cmake's compile flag handling. So should that be on by
 default?

 Greetings

 Mathias



 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org



 --
 Loïc Simon

 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http

Re: [osg-users] Cull time doubled?

2009-02-09 Thread Pecoraro, Alexander N
I've gotten similar results for the osgviewer running on a Redhat
Enterprise 5 Linux desktop and a Windows XP laptop. I used an animation
path to make sure that I was looking at the database from the same
viewpoint when collecting statistics.

Linux Stats:
2.6.1 Draw Thread Per Ctx / 2.6.1 Single Threaded / 1.2 Single Threaded
Frame Rate: 16 / 11 / 15
Cull Time:  42 / 34 / 16
Draw Time:  62 / 55 / 49
GPU Time:   62 / 55 / 48

Windows XP Stats (for this one I used OSG 2.8 rc1 because it has more
stats information so it allowed me to verify that the vertex and
primitive counts were the same for both versions of the viewer):
2.8 Draw Thread Per Ctx / 2.8 Single Threaded / 1.2 Single Threaded
Frame Rate: 15 / 10 / 12
Cull Time:  50 / 40 / 24
Draw Time:  65 / 60 / 59
GPU Time:   60 / 60 / 58

The biggest difference between the two versions of the viewer was always
the cull time, but on Linux the draw times were also fairly different
(and only slightly different on the Windows laptop).

 Is full compile optimization enabled?

I just used the settings that were given to the Release build for the
Visual Studio project files and the Linux Makefiles. 

On Linux this translated to -O3 -DNDEBUG (at least that's what
cmake-gui says is defined for the CMAKE_CXX_FLAGS_RELEASE variable).

On Viz Studio this translated to:

Optimization: Maximize Speed
Inline Function Expansion: Any Suitable
Enable Intrinsic Functions: No
Favor Size or Speed: Neither
Omit Frame Pointers: No
Enable Fiber Safe Optimizations: No
Whole Program Optimization: No

 Is the atomic reference counting being compiled in correctly?

How would I verify that atomic reference counting is compiled in
correctly?

Is there something with how culling is done that has changed between OSG
1.2 and OSG 2.6/2.8?

Thanks.

Alex


-Original Message-
From: osg-users-boun...@lists.openscenegraph.org
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of Robert
Osfield
Sent: Friday, February 06, 2009 1:58 AM
To: OpenSceneGraph Users
Subject: Re: [osg-users] Cull time doubled?

Hi Alex,

There shouldn't be a performance drop if everything is compiled
correctly.  What platform are you working on?  Is full compile
optimization enabled?  Is the atomic reference counting being compiled
in correctly?  Could the CPU thread management be causing problems?  Try
out different threading models to see what happens.

Robert.

On Fri, Feb 6, 2009 at 12:28 AM, Pecoraro, Alexander N
alexander.n.pecor...@lmco.com wrote:
 I've recently upgraded an old 3d viewer that was using OSG API version
1.2
 to version 2.6.1. Oddly enough some databases that I was using with
the old
 viewer actually perform worse with the new version of the API. For
some
 reason the cull time on these databases is 1.5 to 2 times higher on
version
 2.6.1 than it was on version 1.2. The scene graph node structure is
exactly
 the same, but the culling time has increased. Why would that happen?
Has
 anyone else seen this?



 I can provide a small test case if anyone is interested in seeing what
I
 mean.



 Thanks.



 Alex

 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org

http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.or
g


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.or
g
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Cull time doubled?

2009-02-09 Thread Pecoraro, Alexander N
On my Linux box it was set to:

_OPENTHREADS_ATOMIC_USE_MUTEX

I'm going to see if switching to ATOMIC_USE_GCC_BUILTINS works better.

On my Windows box it was set to:

_OPENTHREADS_ATOMIC_USE_WIN32_INTERLOCKED

Is that the best choice for Windows?

Alex

-Original Message-
From: osg-users-boun...@lists.openscenegraph.org
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of Robert
Osfield
Sent: Monday, February 09, 2009 3:11 PM
To: OpenSceneGraph Users
Subject: Re: [osg-users] Cull time doubled?

Hi Alex,

The cull is done pretty well the same between 1.x and 2.x so is very
unlikely to be related to the difference.

It could be that the database has been optimised differently in each
case.  Try switching off the Optimizer to see if this makes any
difference.

The next area to look at is the thread safe ref/unref that is now used
by default, and should be using atomic ref counting.  To see what is
being used, have a look at the file:

include/OpenThreads/Config

Mine looks like:

/** Comments cut out...
  ...
  */
#ifndef _OPENTHREADS_CONFIG
#define _OPENTHREADS_CONFIG

#define _OPENTHREADS_ATOMIC_USE_GCC_BUILTINS
/* #undef _OPENTHREADS_ATOMIC_USE_MIPOSPRO_BUILTINS */
/* #undef _OPENTHREADS_ATOMIC_USE_SUN */
/* #undef _OPENTHREADS_ATOMIC_USE_WIN32_INTERLOCKED */
/* #undef _OPENTHREADS_ATOMIC_USE_BSD_ATOMIC */
/* #undef _OPENTHREADS_ATOMIC_USE_MUTEX */
/* #undef OT_LIBRARY_STATIC */

#endif

Note that ATOMIC_USE_GCC_BUILTINS is used.

What processor and OS type (32bit or 64bit) are you using?

The long cull/draw/GPU times in draw thread per context suggest to me
that the processor is being overly contended, as if CPU affinity isn't
functioning well.  If you only have a single core CPU then this will
be the reason.

Finally in all your mentioned cases the cull, draw and GPU times are
all very long.  I'd suspect that the scene graph might not be well
balanced and could probably be done far more efficiently.  Without
knowing the database I wouldn't be able to say exactly what.

Robert.


On Mon, Feb 9, 2009 at 10:54 PM, Pecoraro, Alexander N
alexander.n.pecor...@lmco.com wrote:
 I've gotten similar results for the osgviewer running on a Redhat
 Enterprise 5 Linux desktop and a Windows XP laptop. I used an
animation
 path to make sure that I was looking at the database from the same
 viewpoint when collecting statistics.

 Linux Stats:
 2.6.1 Draw Thread Per Ctx / 2.6.1 Single Threaded / 1.2 Single
Threaded
 Frame Rate: 16 / 11 / 15
 Cull Time:  42 / 34 / 16
 Draw Time:  62 / 55 / 49
 GPU Time:   62 / 55 / 48

 Windows XP Stats (for this one I used OSG 2.8 rc1 because it has more
 stats information so it allowed me to verify that the vertex and
 primitive counts were the same for both versions of the viewer):
 2.8 Draw Thread Per Ctx / 2.8 Single Threaded / 1.2 Single Threaded
 Frame Rate: 15 / 10 / 12
 Cull Time:  50 / 40 / 24
 Draw Time:  65 / 60 / 59
 GPU Time:   60 / 60 / 58

 The biggest difference between the two versions of the viewer was
always
 the cull time, but on Linux the draw times were also fairly different
 (and only slightly different on the Windows laptop).

 Is full compile optimization enabled?

 I just used the settings that were given to the Release build for the
 Visual Studio project files and the Linux Makefiles.

 On Linux this translated to -O3 -DNDEBUG (at least that's what
 cmake-gui says is defined for the CMAKE_CXX_FLAGS_RELEASE variable).

 On Viz Studio this translated to:

 Optimization: Maximize Speed
 Inline Function Expansion: Any Suitable
 Enable Intrinsic Functions: No
 Favor Size or Speed: Neither
 Omit Frame Pointers: No
 Enable Fiber Safe Optimizations: No
 Whole Program Optimization: No

 Is the atomic reference counting being compiled in correctly?

 How would I verify that atomic reference counting is compiled in
 correctly?

 Is there something with how culling is done that has changed between
OSG
 1.2 and OSG 2.6/2.8?

 Thanks.

 Alex


 -Original Message-
 From: osg-users-boun...@lists.openscenegraph.org
 [mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of
Robert
 Osfield
 Sent: Friday, February 06, 2009 1:58 AM
 To: OpenSceneGraph Users
 Subject: Re: [osg-users] Cull time doubled?

 Hi Alex,

 There shouldn't be a performance drop if everything is compiled
 correctly.  What platform are you working on?  Is full compile
 optimization enabled?  Is the atomic reference counting being compiled
 in correctly?  Could the CPU thread management be causing problems?  Try
 out different threading models to see what happens.

 Robert.

 On Fri, Feb 6, 2009 at 12:28 AM, Pecoraro, Alexander N
 alexander.n.pecor...@lmco.com wrote:
 I've recently upgraded an old 3d viewer that was using OSG API
version
 1.2
 to version 2.6.1. Oddly enough some databases that I was using with
 the old
 viewer actually perform worse with the new version of the API. For
 some
 reason the cull time on these databases is 1.5 to 2 times higher on
 version

Re: [osg-users] Cull time doubled?

2009-02-09 Thread Pecoraro, Alexander N
So I tried to build my OSG
with -D_OPENTHREADS_ATOMIC_USE_GCC_BUILTINS, and that seemed to generate
the include/OpenThreads/Config file with:

#define _OPENTHREADS_ATOMIC_USE_GCC_BUILTINS

but then I got linker undefined symbol errors for the following
functions:

__sync_bool_compare_and_swap_4
__sync_add_and_fetch_4
__sync_sub_and_fetch_4

Any idea why this happened?

Thanks.

Alex

-Original Message-
From: osg-users-boun...@lists.openscenegraph.org
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of
Pecoraro, Alexander N
Sent: Monday, February 09, 2009 3:41 PM
To: OpenSceneGraph Users
Subject: Re: [osg-users] Cull time doubled?

On my Linux box it was set to:

_OPENTHREADS_ATOMIC_USE_MUTEX

I'm going to see if switching to ATOMIC_USE_GCC_BUILTINS works better.

On my Windows box it was set to:

_OPENTHREADS_ATOMIC_USE_WIN32_INTERLOCKED

Is that the best choice for Windows?

Alex

-Original Message-
From: osg-users-boun...@lists.openscenegraph.org
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of Robert
Osfield
Sent: Monday, February 09, 2009 3:11 PM
To: OpenSceneGraph Users
Subject: Re: [osg-users] Cull time doubled?

Hi Alex,

The cull is done pretty well the same between 1.x and 2.x so is very
unlikely to be related to the difference.

It could be that the database has been optimised differently in each
case.  Try switching off the Optimizer to see if this makes any
difference.

The next area to look at is the thread safe ref/unref that is now used
by default, and should be using atomic ref counting.  To see what is
being used, have a look at the file:

include/OpenThreads/Config

Mine looks like:

/** Comments cut out...
  ...
  */
#ifndef _OPENTHREADS_CONFIG
#define _OPENTHREADS_CONFIG

#define _OPENTHREADS_ATOMIC_USE_GCC_BUILTINS
/* #undef _OPENTHREADS_ATOMIC_USE_MIPOSPRO_BUILTINS */
/* #undef _OPENTHREADS_ATOMIC_USE_SUN */
/* #undef _OPENTHREADS_ATOMIC_USE_WIN32_INTERLOCKED */
/* #undef _OPENTHREADS_ATOMIC_USE_BSD_ATOMIC */
/* #undef _OPENTHREADS_ATOMIC_USE_MUTEX */
/* #undef OT_LIBRARY_STATIC */

#endif

Note that ATOMIC_USE_GCC_BUILTINS is used.

What processor and OS type (32bit or 64bit) are you using?

The long cull/draw/GPU times in draw thread per context suggest to me
that the processor is being overly contended, as if CPU affinity isn't
functioning well.  If you only have a single core CPU then this will
be the reason.

Finally in all your mentioned cases the cull, draw and GPU times are
all very long.  I'd suspect that the scene graph might not be well
balanced and could probably be done far more efficiently.  Without
knowing the database I wouldn't be able to say exactly what.

Robert.


On Mon, Feb 9, 2009 at 10:54 PM, Pecoraro, Alexander N
alexander.n.pecor...@lmco.com wrote:
 I've gotten similar results for the osgviewer running on a Redhat
 Enterprise 5 Linux desktop and a Windows XP laptop. I used an
animation
 path to make sure that I was looking at the database from the same
 viewpoint when collecting statistics.

 Linux Stats:
 2.6.1 Draw Thread Per Ctx / 2.6.1 Single Threaded / 1.2 Single
Threaded
 Frame Rate: 16 / 11 / 15
 Cull Time:  42 / 34 / 16
 Draw Time:  62 / 55 / 49
 GPU Time:   62 / 55 / 48

 Windows XP Stats (for this one I used OSG 2.8 rc1 because it has more
 stats information so it allowed me to verify that the vertex and
 primitive counts were the same for both versions of the viewer):
 2.8 Draw Thread Per Ctx / 2.8 Single Threaded / 1.2 Single Threaded
 Frame Rate: 15 / 10 / 12
 Cull Time:  50 / 40 / 24
 Draw Time:  65 / 60 / 59
 GPU Time:   60 / 60 / 58

 The biggest difference between the two versions of the viewer was
always
 the cull time, but on Linux the draw times were also fairly different
 (and only slightly different on the Windows laptop).

 Is full compile optimization enabled?

 I just used the settings that were given to the Release build for the
 Visual Studio project files and the Linux Makefiles.

 On Linux this translated to -O3 -DNDEBUG (at least that's what
 cmake-gui says is defined for the CMAKE_CXX_FLAGS_RELEASE variable).

 On Viz Studio this translated to:

 Optimization: Maximize Speed
 Inline Function Expansion: Any Suitable
 Enable Intrinsic Functions: No
 Favor Size or Speed: Neither
 Omit Frame Pointers: No
 Enable Fiber Safe Optimizations: No
 Whole Program Optimization: No

 Is the atomic reference counting being compiled in correctly?

 How would I verify that atomic reference counting is compiled in
 correctly?

 Is there something with how culling is done that has changed between
OSG
 1.2 and OSG 2.6/2.8?

 Thanks.

 Alex


 -Original Message-
 From: osg-users-boun...@lists.openscenegraph.org
 [mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of
Robert
 Osfield
 Sent: Friday, February 06, 2009 1:58 AM
 To: OpenSceneGraph Users
 Subject: Re: [osg-users] Cull time doubled?

 Hi Alex,

 There shouldn't be a performance drop if everything is compiled

Re: [osg-users] Cull time doubled?

2009-02-09 Thread Pecoraro, Alexander N
 It could be that the database has been optimised differently in each
case.

The frame rates and times from first email were from an unoptimized
scene graph (i.e. I set the OSG_OPTIMIZER env var to OFF).

 What processor and OS type (32bit or 64bit) are you using?

On the Enterprise Redhat Linux computer I have four 64-bit dual-core
3GHz Xeon processors. 

On my Windows XP computer I have an Intel 64-bit 2GHz Core 2 Duo CPU.

BTW, I found in the GCC documentation that if the gcc linker reports an
undefined reference to __sync_add_and_fetch_4 then that means the
built-in atomic functions are not supported on your processor. So my
Linux computer must not support the built-in atomic functions, but that
means that both the 1.2 version of the API and the 2.6 version of the
API are not using the built-in atomic functions, but still exhibiting
large differences in cull time.

 I'd suspect that the scene graph might not be well balanced and could
probably be done far more efficiently.

I'd suspect that you are probably right, but I would expect that the
amount of inefficiency would be the same in both versions of the API.

Alex


-Original Message-
From: osg-users-boun...@lists.openscenegraph.org
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of Robert
Osfield
Sent: Monday, February 09, 2009 3:11 PM
To: OpenSceneGraph Users
Subject: Re: [osg-users] Cull time doubled?

Hi Alex,

The cull is done pretty well the same between 1.x and 2.x so is very
unlikely to be related to the difference.

It could be that the database has been optimised differently in each
case.  Try switching off the Optimizer to see if this makes any
difference.

The next area to look at is the thread safe ref/unref that is now used
by default, and should be using atomic ref counting.  To see what is
being used, have a look at the file:

include/OpenThreads/Config

Mine looks like:

/** Comments cut out...
  ...
  */
#ifndef _OPENTHREADS_CONFIG
#define _OPENTHREADS_CONFIG

#define _OPENTHREADS_ATOMIC_USE_GCC_BUILTINS
/* #undef _OPENTHREADS_ATOMIC_USE_MIPOSPRO_BUILTINS */
/* #undef _OPENTHREADS_ATOMIC_USE_SUN */
/* #undef _OPENTHREADS_ATOMIC_USE_WIN32_INTERLOCKED */
/* #undef _OPENTHREADS_ATOMIC_USE_BSD_ATOMIC */
/* #undef _OPENTHREADS_ATOMIC_USE_MUTEX */
/* #undef OT_LIBRARY_STATIC */

#endif

Note that ATOMIC_USE_GCC_BUILTINS is used.

What processor and OS type (32bit or 64bit) are you using?

The long cull/draw/GPU times in draw thread per context suggest to me
that the processor is being overly contended, as if CPU affinity isn't
functioning well.  If you only have a single core CPU then this will
be the reason.

Finally in all your mentioned cases the cull, draw and GPU times are
all very long.  I'd suspect that the scene graph might not be well
balanced and could probably be done far more efficiently.  Without
knowing the database I wouldn't be able to say exactly what.

Robert.


On Mon, Feb 9, 2009 at 10:54 PM, Pecoraro, Alexander N
alexander.n.pecor...@lmco.com wrote:
 I've gotten similar results for the osgviewer running on a Redhat
 Enterprise 5 Linux desktop and a Windows XP laptop. I used an
animation
 path to make sure that I was looking at the database from the same
 viewpoint when collecting statistics.

 Linux Stats:
 2.6.1 Draw Thread Per Ctx / 2.6.1 Single Threaded / 1.2 Single
Threaded
 Frame Rate: 16 / 11 / 15
 Cull Time:  42 / 34 / 16
 Draw Time:  62 / 55 / 49
 GPU Time:   62 / 55 / 48

 Windows XP Stats (for this one I used OSG 2.8 rc1 because it has more
 stats information so it allowed me to verify that the vertex and
 primitive counts were the same for both versions of the viewer):
 2.8 Draw Thread Per Ctx / 2.8 Single Threaded / 1.2 Single Threaded
 Frame Rate: 15 / 10 / 12
 Cull Time:  50 / 40 / 24
 Draw Time:  65 / 60 / 59
 GPU Time:   60 / 60 / 58

 The biggest difference between the two versions of the viewer was
always
 the cull time, but on Linux the draw times were also fairly different
 (and only slightly different on the Windows laptop).

 Is full compile optimization enabled?

 I just used the settings that were given to the Release build for the
 Visual Studio project files and the Linux Makefiles.

 On Linux this translated to -O3 -DNDEBUG (at least that's what
 cmake-gui says is defined for the CMAKE_CXX_FLAGS_RELEASE variable).

 On Viz Studio this translated to:

 Optimization: Maximize Speed
 Inline Function Expansion: Any Suitable
 Enable Intrinsic Functions: No
 Favor Size or Speed: Neither
 Omit Frame Pointers: No
 Enable Fiber Safe Optimizations: No
 Whole Program Optimization: No

 Is the atomic reference counting being compiled in correctly?

 How would I verify that atomic reference counting is compiled in
 correctly?

 Is there something with how culling is done that has changed between
OSG
 1.2 and OSG 2.6/2.8?

 Thanks.

 Alex


 -Original Message-
 From: osg-users-boun...@lists.openscenegraph.org
 [mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf

Re: [osg-users] Cull time doubled?

2009-02-09 Thread Pecoraro, Alexander N
What version of Linux/GCC/processor do you have that supports the built-in
atomic functions?

Thanks.

Alex

-Original Message-
From: osg-users-boun...@lists.openscenegraph.org
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of
Pecoraro, Alexander N
Sent: Monday, February 09, 2009 5:19 PM
To: OpenSceneGraph Users
Subject: Re: [osg-users] Cull time doubled?

 It could be that the database has been optimised differently in each
case.

The frame rates and times from first email were from an unoptimized
scene graph (i.e. I set the OSG_OPTIMIZER env var to OFF).

 What processor and OS type (32bit or 64bit) are you using?

On the Enterprise Redhat Linux computer I have four 64-bit dual-core
3GHz Xeon processors. 

On my Windows XP computer I have an Intel 64-bit 2GHz Core 2 Duo CPU.

BTW, I found in the GCC documentation that if the gcc linker reports an
undefined reference to __sync_add_and_fetch_4 then that means the
built-in atomic functions are not supported on your processor. So my
Linux computer must not support the built-in atomic functions, but that
means that both the 1.2 version of the API and the 2.6 version of the
API are not using the built-in atomic functions, but still exhibiting
large differences in cull time.

 I'd suspect that the scene graph might not be well balanced and could
probably be done far more efficiently.

I'd suspect that you are probably right, but I would expect that the
amount of inefficiency would be the same in both versions of the API.

Alex


-Original Message-
From: osg-users-boun...@lists.openscenegraph.org
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of Robert
Osfield
Sent: Monday, February 09, 2009 3:11 PM
To: OpenSceneGraph Users
Subject: Re: [osg-users] Cull time doubled?

Hi Alex,

The cull is done pretty well the same between 1.x and 2.x so is very
unlikely to be related to the difference.

It could be that the database has been optimised differently in each
case.  Try switching off the Optimizer to see if this makes any
difference.

The next area to look at is the thread safe ref/unref that is now used
by default, and should be using atomic ref counting.  To see what is
being used, have a look at the file:

include/OpenThreads/Config

Mine looks like:

/** Comments cut out...
  ...
  */
#ifndef _OPENTHREADS_CONFIG
#define _OPENTHREADS_CONFIG

#define _OPENTHREADS_ATOMIC_USE_GCC_BUILTINS
/* #undef _OPENTHREADS_ATOMIC_USE_MIPOSPRO_BUILTINS */
/* #undef _OPENTHREADS_ATOMIC_USE_SUN */
/* #undef _OPENTHREADS_ATOMIC_USE_WIN32_INTERLOCKED */
/* #undef _OPENTHREADS_ATOMIC_USE_BSD_ATOMIC */
/* #undef _OPENTHREADS_ATOMIC_USE_MUTEX */
/* #undef OT_LIBRARY_STATIC */

#endif

Note that ATOMIC_USE_GCC_BUILTINS is used.

What processor and OS type (32bit or 64bit) are you using?

The long cull/draw/GPU times in draw thread per context suggest to me
that the processor is being overly contended, as if CPU affinity isn't
functioning well.  If you only have a single core CPU then this will
be the reason.

Finally in all your mentioned cases the cull, draw and GPU times are
all very long.  I'd suspect that the scene graph might not be well
balanced and could probably be done far more efficiently.  Without
knowing the database I wouldn't be able to say exactly what.

Robert.


On Mon, Feb 9, 2009 at 10:54 PM, Pecoraro, Alexander N
alexander.n.pecor...@lmco.com wrote:
 I've gotten similar results for the osgviewer running on a Redhat
 Enterprise 5 Linux desktop and a Windows XP laptop. I used an
animation
 path to make sure that I was looking at the database from the same
 viewpoint when collecting statistics.

 Linux Stats:
 2.6.1 Draw Thread Per Ctx / 2.6.1 Single Threaded / 1.2 Single
Threaded
 Frame Rate: 16 / 11 / 15
 Cull Time:  42 / 34 / 16
 Draw Time:  62 / 55 / 49
 GPU Time:   62 / 55 / 48

 Windows XP Stats (for this one I used OSG 2.8 rc1 because it has more
 stats information so it allowed me to verify that the vertex and
 primitive counts were the same for both versions of the viewer):
 2.8 Draw Thread Per Ctx / 2.8 Single Threaded / 1.2 Single Threaded
 Frame Rate: 15 / 10 / 12
 Cull Time:  50 / 40 / 24
 Draw Time:  65 / 60 / 59
 GPU Time:   60 / 60 / 58

 The biggest difference between the two versions of the viewer was
always
 the cull time, but on Linux the draw times were also fairly different
 (and only slightly different on the Windows laptop).

 Is full compile optimization enabled?

 I just used the settings that were given to the Release build for the
 Visual Studio project files and the Linux Makefiles.

 On Linux this translated to -O3 -DNDEBUG (at least that's what
 cmake-gui says is defined for the CMAKE_CXX_FLAGS_RELEASE variable).

 On Viz Studio this translated to:

 Optimization: Maximize Speed
 Inline Function Expansion: Any Suitable
 Enable Intrinsic Functions: No
 Favor Size or Speed: Neither
 Omit Frame Pointers: No
 Enable Fiber Safe Optimizations: No
 Whole Program Optimization: No

 Is the atomic reference counting

[osg-users] Cull time doubled?

2009-02-05 Thread Pecoraro, Alexander N
I've recently upgraded an old 3d viewer that was using OSG API version
1.2 to version 2.6.1. Oddly enough some databases that I was using with
the old viewer actually perform worse with the new version of the API.
For some reason the cull time on these databases is 1.5 to 2 times
higher on version 2.6.1 than it was on version 1.2. The scene graph node
structure is exactly the same, but the culling time has increased. Why
would that happen? Has anyone else seen this?

 

I can provide a small test case if anyone is interested in seeing what I
mean.

 

Thanks.

 

Alex

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] When is right time to update text on the screen

2009-02-04 Thread Pecoraro, Alexander N
OK, I figured out by looking closer at the stats handler that a better
way to do it is to implement a draw callback for the text object that
uses a mutex to prevent multiple threads from accessing the text string
simultaneously.
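
For the archive, here is roughly what I mean (a sketch; the class name is
mine, and this is my reading of the StatsHandler approach rather than its
actual code):

#include <osg/Drawable>
#include <OpenThreads/Mutex>
#include <OpenThreads/ScopedLock>

struct GuardedDrawCallback : public osg::Drawable::DrawCallback
{
    GuardedDrawCallback(OpenThreads::Mutex& mutex) : _mutex(mutex) {}

    virtual void drawImplementation(osg::RenderInfo& renderInfo,
                                    const osg::Drawable* drawable) const
    {
        // Serialise drawing against updates to the text string.
        OpenThreads::ScopedLock<OpenThreads::Mutex> lock(_mutex);
        drawable->drawImplementation(renderInfo);
    }

    OpenThreads::Mutex& _mutex;
};

// Install with text->setDrawCallback(new GuardedDrawCallback(mutex)) and
// lock the same mutex around text->setText(...) in the update callback.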

 

Alex



From: osg-users-boun...@lists.openscenegraph.org
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of
Pecoraro, Alexander N
Sent: Tuesday, February 03, 2009 5:17 PM
To: OpenSceneGraph Users
Subject: Re: [osg-users] When is right time to update text on the screen

 

I seem to have fixed the problem by setting the data variance on my
osgText object to DYNAMIC. I'm wondering if this is the proper way to
handle this situation though - because when I look at the StatsHandler
for an example it appears to be modifying it's osgText nodes, but it
does not set the data variance to DYNAMIC. Why does it work for the
StatsHandler, but not for my code? Is it because the StatsHandler
modifies its text during the event traversal and I modify my text during
the update traversal? Should I be modifying it during event traversal
only?

 

Thanks.

 

Alex

 



From: osg-users-boun...@lists.openscenegraph.org
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of
Pecoraro, Alexander N
Sent: Tuesday, February 03, 2009 4:26 PM
To: OpenSceneGraph Users
Subject: [osg-users] When is right time to update text on the screen

 

Is there a proper time to make changes to an osgText object's text? I
seem to be having a problem where if I update some text in an update
callback function it causes a segfault when I'm running the viewer in
multi-threaded mode, but not in single threaded mode. I'm guessing
because the text object is modified while it is being used by the render
thread. Is there something wrong with how I am doing it?

 

Thanks.

 

Alex

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] When is right time to update text on the screen

2009-02-03 Thread Pecoraro, Alexander N
Is there a proper time to make changes to an osgText object's text? I
seem to be having a problem where if I update some text in an update
callback function it causes a segfault when I'm running the viewer in
multi-threaded mode, but not in single threaded mode. I'm guessing
because the text object is modified while it is being used by the render
thread. Is there something wrong with how I am doing it?

 

Thanks.

 

Alex

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] possible bug in cfg plugin

2009-02-03 Thread Pecoraro, Alexander N
That change seemed to fix it on my computer.

Thanks.

Alex

-Original Message-
From: osg-users-boun...@lists.openscenegraph.org
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of Robert
Osfield
Sent: Wednesday, January 28, 2009 1:05 AM
To: OpenSceneGraph Users
Subject: Re: [osg-users] possible bug in cfg plugin

Hi Alexander,

I didn't see the crash (I work under Linux) but a code review of the
RenderSurface constructor did reveal that it was lacking a number of
member variable initializers.  I have added these back including the
missing _realized = false; line.   These changes are now checked in.
I've also attached the changed src/osgPlugins/cfg/RenderSurface.cpp.
Could you test and let me know if it fixes things.

FYI, you don't need svn access to check svn - you can do it all on
the website - just go to the Browse Source link on the front page.  Our
svn server also supports https, so there is a chance this might work for
you.

Robert.

On Tue, Jan 27, 2009 at 11:53 PM, Pecoraro, Alexander N
alexander.n.pecor...@lmco.com wrote:
 I noticed a bug in the 2.6.0 version of the API with the viewer config
file
 reader plugin. I can't access the latest developer source code right
now so
 I can't verify that the bug still exists in the 2.7 version of the
API, but
 I've attached the config file that I used to reproduce the problem.
The
 problem is that the plugin causes a segfault when it reads that
 configuration file because the _realized member variable of the
 RenderSurface class defined in src/osgPlugins/cfg/RenderSurface.cpp is
never
 given an initial value. Interestingly enough this problem does not
occur on
 my Linux box, which is running Redhat Enterprise Client 5. However, on
my
 Windows box with Visual Studio 2005 Professional the _realized
variable is
 auto-initialized to true which ends up causing the segfault to occur.
I'm
 guessing, but haven't verified, that the reason it works on my Linux
box is
 because the GCC compiler auto-initializes the variable to false which
 prevents the bug from occurring.



 Here is how I ran the osgviewer to get it to crash:



 osgviewer -c oneWindow.cfg <name of osg file>



 I also found that once I went into the RenderSurface's constructor and added
 _realized = false; to it then the bug went away.



 Alex

 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org

http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.or
g


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] When is right time to update text on the screen

2009-02-03 Thread Pecoraro, Alexander N
I seem to have fixed the problem by setting the data variance on my
osgText object to DYNAMIC. I'm wondering if this is the proper way to
handle this situation though - because when I look at the StatsHandler
for an example it appears to be modifying its osgText nodes, but it
does not set the data variance to DYNAMIC. Why does it work for the
StatsHandler, but not for my code? Is it because the StatsHandler
modifies its text during the event traversal and I modify my text during
the update traversal? Should I be modifying it during event traversal
only?

 

Thanks.

 

Alex

 



From: osg-users-boun...@lists.openscenegraph.org
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of
Pecoraro, Alexander N
Sent: Tuesday, February 03, 2009 4:26 PM
To: OpenSceneGraph Users
Subject: [osg-users] When is right time to update text on the screen

 

Is there a proper time to make changes to an osgText object's text? I
seem to be having a problem where if I update some text in an update
callback function it causes a segfault when I'm running the viewer in
multi-threaded mode, but not in single threaded mode. I'm guessing
because the text object is modified while it is being used by the render
thread. Is there something wrong with how I am doing it?

 

Thanks.

 

Alex

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] possible bug in cfg plugin

2009-01-27 Thread Pecoraro, Alexander N
I noticed a bug in the 2.6.0 version of the API with the viewer config
file reader plugin. I can't access the latest developer source code
right now so I can't verify that the bug still exists in the 2.7 version
of the API, but I've attached the config file that I used to reproduce
the problem. The problem is that the plugin causes a segfault when it
reads that configuration file because the _realized member variable of
the RenderSurface class defined in src/osgPlugins/cfg/RenderSurface.cpp
is never given an initial value. Interestingly enough this problem does
not occur on my Linux box, which is running Redhat Enterprise Client 5.
However, on my Windows box with Visual Studio 2005 Professional the
_realized variable is auto-initialized to true which ends up causing the
segfault to occur. I'm guessing, but haven't verified, that the reason
it works on my Linux box is because the GCC compiler auto-initializes
the variable to false which prevents the bug from occurring.

 

Here is how I ran the osgviewer to get it to crash:

 

osgviewer -c oneWindow.cfg <name of osg file>

 

I also found that once I went into the RenderSurface's constructor and
added _realized = false; to it then the bug went away.

 

Alex

Camera Camera 1
{
RenderSurface "GDE Viewer" {
Visual  { SetSimple }
Screen 0;
WindowRect 0 0 1024 768;
Border on;
}
Lens {
Perspective 40.0 30.0 1.0 1.0;
}
Offset {
Shear 0.0 0.0;
}
}
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] svn

2009-01-16 Thread Pecoraro, Alexander N
I can't checkout that url from svn either.

I also can't get to the main website.

Alex

-Original Message-
From: osg-users-boun...@lists.openscenegraph.org
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of Matt
Fair
Sent: Friday, January 16, 2009 11:49 AM
To: OpenSceneGraph Users
Subject: Re: [osg-users] svn

Robert,
Same things.  Could this be an issue on my side?
Matt

On Jan 16, 2009, at 12:13 PM, Robert Osfield wrote:

 Hi Matt,

 Could you try again, svn is working for me right now.

 Robert.

 On Fri, Jan 16, 2009 at 6:11 PM, Matt mbf...@lanl.gov wrote:
 I have been trying to checkout the code in svn and am getting an  
 error:

 svn co

http://www.openscenegraph.org/svn/osg/OpenSceneGraph/tags/OpenSceneGraph-2.6.1 .

 svn: PROPFIND request failed on
 '/svn/osg/OpenSceneGraph/tags/OpenSceneGraph-2.6.1'
 svn: PROPFIND of '/svn/osg/OpenSceneGraph/tags/OpenSceneGraph-2.6.1': 302 Found (http://www.openscenegraph.org)

 Is anyone else getting this?  I know that there was some problems  
 yesterday
 with svn, is this still a part of it?

 Thanks,
 Matt
 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org

http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.or
g

 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org

http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.or
g

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.or
g
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] svn

2009-01-16 Thread Pecoraro, Alexander N
I take that back - I can get to the website now. It is just really slow.

Alex

-Original Message-
From: osg-users-boun...@lists.openscenegraph.org
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of
Pecoraro, Alexander N
Sent: Friday, January 16, 2009 11:50 AM
To: OpenSceneGraph Users
Subject: Re: [osg-users] svn

I can't checkout that url from svn either.

I also can't get to the main website.

Alex

-Original Message-
From: osg-users-boun...@lists.openscenegraph.org
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of Matt
Fair
Sent: Friday, January 16, 2009 11:49 AM
To: OpenSceneGraph Users
Subject: Re: [osg-users] svn

Robert,
Same things.  Could this be an issue on my side?
Matt

On Jan 16, 2009, at 12:13 PM, Robert Osfield wrote:

 Hi Matt,

 Could you try again, svn is working for me right now.

 Robert.

 On Fri, Jan 16, 2009 at 6:11 PM, Matt mbf...@lanl.gov wrote:
 I have been trying to checkout the code in svn and am getting an  
 error:

 svn co

http://www.openscenegraph.org/svn/osg/OpenSceneGraph/tags/OpenSceneGraph-2.6.1 .

 svn: PROPFIND request failed on
 '/svn/osg/OpenSceneGraph/tags/OpenSceneGraph-2.6.1'
 svn: PROPFIND of '/svn/osg/OpenSceneGraph/tags/OpenSceneGraph-2.6.1': 302 Found (http://www.openscenegraph.org)

 Is anyone else getting this?  I know that there was some problems  
 yesterday
 with svn, is this still a part of it?

 Thanks,
 Matt
 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org

http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.or
g

 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org

http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.or
g

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.or
g
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.or
g
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] shared VertexBufferObject question

2008-09-22 Thread Pecoraro, Alexander N
I did some reading about VBOs on the NVidia developer site and it turns
out that the glBindBuffer() call is not the one that I should worry
about. The white paper I read said that limiting the number of calls to
glVertexPointer() was the proper way to optimize the use of VBOs.

So I noticed that there was some commented-out code in include/osg/State
that appeared to be for the purpose of preventing redundant calls to
glVertexPointer() - see below:

inline void setVertexPointer( GLint size, GLenum type,
  GLsizei stride, const GLvoid *ptr
)
{
  // ... (only showing relevant parts of code)
  //if (_vertexArray._pointer!=ptr || _vertexArray._dirty)
  {
_vertexArray._pointer=ptr;
glVertexPointer( size, type, stride, ptr );
  }
  _vertexArray._dirty = false;
}

Seems like if that IF statement were not commented out then
I could have multiple Geometry nodes share a vertex buffer object and a
vertex array and only require one call to glVertexPointer(). Wouldn't
that be more efficient?
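
For example, something like this sketch (assuming the 2.x API; geomA/geomB
are placeholders):

#include <osg/Geometry>
#include <osg/BufferObject>

void shareVertexBufferObject(osg::Geometry* geomA, osg::Geometry* geomB)
{
    osg::ref_ptr<osg::VertexBufferObject> vbo = new osg::VertexBufferObject;
    geomA->setUseVertexBufferObjects(true);
    geomB->setUseVertexBufferObjects(true);
    // Both vertex arrays now live in the same buffer object, so only the
    // pointer/offset passed to glVertexPointer() differs between them.
    geomA->getVertexArray()->setVertexBufferObject(vbo.get());
    geomB->getVertexArray()->setVertexBufferObject(vbo.get());
}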

Alex

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Robert
Osfield
Sent: Saturday, September 20, 2008 1:56 AM
To: OpenSceneGraph Users
Subject: Re: [osg-users] shared VertexBufferObject question

Hi Alex,

The unbind is done to prevent state leakage.  One could potentially
use lazy state updating on VBO state by placing more controls into
osg::State, but this would require all Drawables to be careful about
what they assume is current state.  It's possible but it's quite a bit
of work.

Robert.

On Sat, Sep 20, 2008 at 12:42 AM, Pecoraro, Alexander N
[EMAIL PROTECTED] wrote:
 I want to create a VertexBufferObject that is shared by several
Geometry
 nodes so that the number of calls to glBindBuffer() are decreased, but
I
 noticed that on lines 1561 - 1567 of Geometry.cpp there is some code
that
 automatically unbinds the vertex buffer object effectively forcing
each
 Geometry node to rebind the VBO each time. Why does it do this? Isn't
this
 preventing a shared VBO from being used in the most efficient way
possible?



 Thanks.



 Alex

 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org

http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.or
g


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.or
g
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] shared VertexBufferObject question

2008-09-19 Thread Pecoraro, Alexander N
I want to create a VertexBufferObject that is shared by several Geometry
nodes so that the number of calls to glBindBuffer() are decreased, but I
noticed that on lines 1561 - 1567 of Geometry.cpp there is some code
that automatically unbinds the vertex buffer object effectively forcing
each Geometry node to rebind the VBO each time. Why does it do this?
Isn't this preventing a shared VBO from being used in the most efficient
way possible?

 

Thanks.

 

Alex

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] osgFX::BumpMapping

2008-09-04 Thread Pecoraro, Alexander N
Is it possible to write osgFX::BumpMapping nodes to an ive file? I
noticed in OSG 2.4 and 2.5 there seems to be some code for writing it to
.osg, but not .ive. I also noticed that in the OSG trunk on SVN there
appears to be some code for writing it to ive, but I was hoping to be
able to use a released version. I figured that since osgFX::BumpMapping
appears to just use built-in OSG nodes, the writer would be able to write
out the OSG nodes that are created by osgFX::BumpMapping. Is that the
case?
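
For reference, the round trip I am attempting is just (a sketch; the
filename is a placeholder):

#include <osgDB/WriteFile>
#include <osgFX/BumpMapping>

bool saveBumpMappedModel(osgFX::BumpMapping* effect)
{
    // osgFX::BumpMapping is an osg::Group subclass, so the question is
    // whether the ive plugin can serialise the effect node itself.
    return osgDB::writeNodeFile(*effect, "bumpmapped.ive");
}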

 

Thanks.

 

Alex

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] Bug or misleading code in Texture2D

2008-08-28 Thread Pecoraro, Alexander N
It seems a little misleading or a bug that Texture2D::getNumImages()
always returns 1 even if the _image member variable is NULL. Shouldn't
it return 0 if the _image member is NULL? 
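
For illustration, the behaviour I am suggesting would be something like this
(a sketch, not the actual OSG source):

// Hypothetical Texture2D::getNumImages() that accounts for a NULL _image;
// the current code returns 1 unconditionally.
virtual unsigned int getNumImages() const
{
    return _image.valid() ? 1 : 0;
}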

I know it seems odd to have a Texture2D with a NULL _image, but I took a
look at the ive reader and it appears that if it fails to read the image
file then the Texture2D's _image will end up being NULL. See below:

From the ive reader plugin's Texture2D::read():

IncludeImageMode includeImg = (IncludeImageMode)in->readChar();

osg::Image *image = in->readImage(includeImg);
if(image) {
    setImage(image);
}

Alex
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] Bug in CombineLODVisitor

2008-06-06 Thread Pecoraro, Alexander N
Sorry, I just noticed that I forgot to put a subject line on my first
email, so I am resending it.
 
I think there is a bug in the osgUtil::Optimizer::CombineLODVisitor - at
line 1530 of Optimizer.cpp it does a dynamic_cast on an osg::Node* to
osg::LOD* and then at line 1563 it calls getChild(i) (even if
getNumChildren() == 0) on the dynamically cast LOD node. This works
fine when the node is an LOD node, but when it is a PagedLOD node then
it causes an invalid access to the _children vector. I attached a screen
shot to show what I mean.

This situation would only occur when a PagedLOD node was a sibling of an
LOD node, which is probably why it hasn't been spotted before.

Not sure if this is the accepted way to submit a fix, but anyway I made
a fix to the Optimizer code (version 2.4) and attached it to the email.

Alex
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Attention: osgTerrain class renaming in progress

2008-04-14 Thread Pecoraro, Alexander N
What version of the OSG API and the VPB do I need to use in order to
take advantage of the new osgTerrain functionality and optimizations?

Thanks.

Alex

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Robert
Osfield
Sent: Wednesday, March 26, 2008 2:13 PM
To: OpenSceneGraph Users
Subject: [osg-users] Attention: osgTerrain class renaming in progress

Hi All,

As part of the work on scaling up VirtualPlanetBuilder to comfortably
handle terabyte databases I am also working on osgTerrain; the two bits
of work are quite closely related as VPB is at its most efficient when
outputting osgTerrain based databases (the --terrain option makes it
about 100-200x faster than it is when building normal polygonal
databases). Not only is VPB faster when generating osgTerrain
based databases, but the resulting databases will be more flexible and
compact, with future opportunities for improving visual quality and
performance.  So all good stuff... but before we get to this terrain
nirvana a few things will need to be refactored...

My current work on osgTerrain is related to making better use of
recycling of deleted objects and sharing of things like tex coord arrays
etc where possible.  This is something that is now required to better
cope with users charging around a multi-terabyte database at high speed
- as it tends to push memory much more than we've previously done.
Recycling and sharing of objects requires a shared container for each
paged terrain database; to this end I'll be introducing a new terrain
node that decorates the whole paged terrain
database.   This new terrain node will also help track the hierarchy
and adjacency of the tiles being loaded via a tile system.

The idea of a terrain node containing all the PagedLOD nodes, and the
terrain tiles that do the actual rendering of the tiles' height fields,
brings about a new relationship to osgTerrain, and the naming really
needs to evolve to better suit it; unfortunately in picking more
suitable names for the new usage model we'll end up breaking strict
backwards compatibility.   osgTerrain is still young, so I'd rather
take the pain of a hit in backwards compatibility now rather than suffer
inappropriate names for the rest of the NodeKit's life.    The new
naming scheme goes:

   The osgTerrain::Terrain node is renamed osgTerrain::TerrainTile.

   The osgTerrain::TerrainTile API and usage model remain almost entirely
the same as the old osgTerrain::Terrain, so most developers would just
need to rename osgTerrain::Terrain to osgTerrain::TerrainTile and then
everything will compile once more.

   The new terrain node that decorates the whole terrain scene graph
will be called osgTerrain::Terrain.

   The API of the new Terrain node will have some overlap with
TerrainTile, such as providing a default TerrainTechnique that nested
TerrainTiles can clone, but otherwise it's a totally different type of
Node; it's a decorator node rather than a rendering element.

   Reusing an original name might cause initial confusion, and compile
errors might throw one off the scent.  But... I feel that once things
are settled down the new naming will make more sense, i.e. a Terrain has
a subgraph that contains one or more TerrainTiles.  Each Terrain is
conceptually a single overall block of terrain; for instance, if you
have a solar system, then each planet would have its own Terrain node.

All the Layer and TerrainTechnique classes remain unchanged relative
to recent 2.3.x, although these themselves have evolved since 2.0.

Existing .ive databases that contain old osgTerrain::Terrain(Tile)
objects will still load, so backwards compatibility has been maintained,
with the objects just loading as TerrainTile and everything behaving
itself as before.  The .osg format won't be able to be mapped so well,
unfortunately; to keep .osg files with Terrain nodes in them working
you'll need to do a search and replace of Terrain to TerrainTile.

In my local copy of the OSG I've already made most of these changes, and
plan to check them in tomorrow.  At this point if you directly use
osgTerrain then you'll need to rename Terrain to TerrainTile to get
things back working again; if you don't use osgTerrain then you'll not
notice any difference at all.

Thanks in advance for your patience in tracking these changes, Robert.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.or
g
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] NodeVisitor won't visit TerrainTile

2008-04-11 Thread Pecoraro, Alexander N
I am trying to write a NodeVisitor class that does something to
osgTerrain::TerrainTile nodes, but for some reason it never enters my
apply(TerrainTile& tile) function. It just seems to fall back into the
apply(Group& group) of the base NodeVisitor class. Finally I just gave
up and overrode the apply(Group& group) and attempted to dynamic cast
each group to a TerrainTile and then called my apply(TerrainTile&)
function and that worked, but it doesn't seem like the proper way to do
it.

What am I doing wrong? Here is my code (in short):

class TerrainTileVisitor : public osg::NodeVisitor
{
public:
    TerrainTileVisitor() :
        osg::NodeVisitor(osg::NodeVisitor::TRAVERSE_ALL_CHILDREN)
    {
    }

    virtual void apply(osgTerrain::TerrainTile& tile)
    {
        std::cout << "TerrainTile Found!" << std::endl;
        traverse(tile);
    }

    // Added this function to make it work
    virtual void apply(osg::Group& group)
    {
        osgTerrain::TerrainTile* tile =
            dynamic_cast<osgTerrain::TerrainTile*>(&group);
        if (tile)
            apply(*tile);
        else
            traverse(group);
    }
};

Thanks.

Alex
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] Clear depth buffer

2007-07-26 Thread Pecoraro, Alexander N
I was wondering if anyone could offer me a suggestion on how to force a
clear of the depth buffer in the middle of a rendering pass. I have a 3d
model that I want to show up right in front of the viewer and always be
on top. I used an absolute reference frame transform node and an
orthographic projection node to put the model in the middle of the
screen, but now when my viewer gets too close to other models they peek
through. I figure if, right before I draw my orthographic model, I clear
the depth buffer then I would guarantee that nothing would get drawn on
top of my model (as long as I draw my model last). I found that I can't
just disable depth testing on the model because then the various pieces
of the model don't get drawn in the right order, so certain pieces
end up on top of pieces that they shouldn't.
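
(For anyone finding this in the archive: one common approach, sketched here
assuming the 2.x osg::Camera API as used by the osghud example - orthoModel
and the window size are placeholders - is to put the overlay under its own
camera that clears only the depth buffer and renders after the main scene.)

#include <osg/Camera>

osg::Camera* makeOverlayCamera(osg::Node* orthoModel)
{
    osg::Camera* camera = new osg::Camera;
    camera->setProjectionMatrixAsOrtho2D(0.0, 1024.0, 0.0, 768.0);
    camera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
    camera->setViewMatrix(osg::Matrix::identity());
    camera->setClearMask(GL_DEPTH_BUFFER_BIT);        // wipe depth only
    camera->setRenderOrder(osg::Camera::POST_RENDER); // draw after main scene
    camera->addChild(orthoModel);
    return camera; // add this under the scene root
}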

Any help?

Thanks.

Alex
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org