Re: [osg-users] PBOs and stutterless texture uploading

2010-10-07 Thread Eduard - Gabriel Munteanu

Ulrich Hertlein wrote:
 Hi Eduard,
 
 On 28/09/10 6:08 , Eduard - Gabriel Munteanu wrote:
 
  I've been investigating an issue in FlightGear, an OSS flight sim using
  OSG. We have lots of stutter, usually in multiplayer, caused by loading
  new models (this happens whenever somebody joins). Loading textures from
  disk happens on another thread (via DatabasePager), but I traced the
  issue to glTexImage2D() calls. So it's texture uploading to the graphics
  driver / card that's causing it. It's not uncommon to see delays of
  300ms for 500KiB - 1MiB textures.
  
 
 Did you try to enable object pre-compiling on the DatabasePager thread via
 'DatabasePager::setDoPreCompile(true)'?
 
 /ulrich
 
 


Thanks for your reply and sorry for the delay, I've been a bit busy.

Yes, I tried setting precompiling on through both environment variables and 
code. It didn't help. :(

Here's some more information. I'm using OSG from SVN revision 11785 (quite 
recent) on a Linux system, although other 2.9.x releases I tried exhibit the 
same problem. I use the latest Catalyst drivers for my graphics card.

My hunch is that the card/driver doesn't like the pixel format of those 
textures and is doing some internal conversion (e.g. RGB -> RGBA). Geometry 
data doesn't seem to be a bottleneck here, and I'm not sure precompiling 
helps at all.

So if I could make use of PBOs, DMA and loading would happen asynchronously and 
the problem would go away. I don't mind if those parts of the scene graph get 
rendered 5-10 seconds later as long as it doesn't cause stutter. It would also 
remove any smaller stutters and improve framerate.

On the other hand, I can't really go and fix up all the textures in the data 
tree, even if there were a format that would improve the situation.

Any thoughts on how to get this done?

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=32513#32513





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] GLU integration

2010-10-07 Thread Robert Osfield
Hi Stephan,

I checked in fixes for these errors yesterday evening - they were only
warnings for me with g++ 4.3.3.  Could you do an svn update and let me
know if this fixes things?

Cheers,
Robert.
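
(For context: the errors quoted below come from C++ rejecting the implicit
void*-to-typed-pointer conversions that C allows, so fixes of this kind
typically just add explicit casts. A minimal, hypothetical illustration -
the variable names are made up:)

    // C allows this implicitly; C++ requires an explicit cast.
    GLushort* dstImage = (GLushort*) malloc(imageSizeInBytes);
    const GLubyte* src = static_cast<const GLubyte*>(userDataIn);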

On Wed, Oct 6, 2010 at 9:11 PM, Stephan Huber ratzf...@digitalmind.de wrote:
 Hi Robert,

 on OS X I get a lot of compile errors in mipmap.cpp:

 /Users/hudson/.hudson/jobs/osg.current/workspace/osg/src/osg/glu/libutil/mipmap.cpp:3532:0
 /Users/hudson/.hudson/jobs/osg.current/workspace/osg/src/osg/glu/libutil/mipmap.cpp:3532:
 error: invalid conversion from 'void*' to 'GLushort*'


 /Users/hudson/.hudson/jobs/osg.current/workspace/osg/src/osg/glu/libutil/mipmap.cpp:3534:0
 /Users/hudson/.hudson/jobs/osg.current/workspace/osg/src/osg/glu/libutil/mipmap.cpp:3534:
 error: invalid conversion from 'void*' to 'GLushort*'


 /Users/hudson/.hudson/jobs/osg.current/workspace/osg/src/osg/glu/libutil/mipmap.cpp:7394:0
 /Users/hudson/.hudson/jobs/osg.current/workspace/osg/src/osg/glu/libutil/mipmap.cpp:7394:
 error: invalid conversion from 'void*' to 'GLushort*'


 /Users/hudson/.hudson/jobs/osg.current/workspace/osg/src/osg/glu/libutil/mipmap.cpp:7396:0
 /Users/hudson/.hudson/jobs/osg.current/workspace/osg/src/osg/glu/libutil/mipmap.cpp:7396:
 error: invalid conversion from 'void*' to 'GLushort*'


 /Users/hudson/.hudson/jobs/osg.current/workspace/osg/src/osg/glu/libutil/mipmap.cpp:7905:0
 /Users/hudson/.hudson/jobs/osg.current/workspace/osg/src/osg/glu/libutil/mipmap.cpp:7905:
 error: invalid conversion from 'const void*' to 'const GLubyte*'


 /Users/hudson/.hudson/jobs/osg.current/workspace/osg/src/osg/glu/libutil/mipmap.cpp:7905:0
 /Users/hudson/.hudson/jobs/osg.current/workspace/osg/src/osg/glu/libutil/mipmap.cpp:7905:
 error:   initializing argument 4 of 'void halveImage_ubyte(GLint,
 GLuint, GLuint, const GLubyte*, GLubyte*, GLint, GLint, GLint)'


 /Users/hudson/.hudson/jobs/osg.current/workspace/osg/src/osg/glu/libutil/mipmap.cpp:7905:0
 /Users/hudson/.hudson/jobs/osg.current/workspace/osg/src/osg/glu/libutil/mipmap.cpp:7905:
 error: invalid conversion from 'void*' to 'GLubyte*'


 etc.

 Any hints how to fix this?

 cheers,
 Stephan



 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Creating holes in a PagedLOD

2010-10-07 Thread Robert Osfield
Hi John,

VPB itself doesn't support cutting out holes.  You will need to
post-process the tiles to insert these.  If you have built the
database with osgTerrain::TerrainTile (this is now the default) you
will have to think about writing your own TerrainTechnique for
converting the tiles' height fields into a mesh with the required holes
in them.

Robert.

On Wed, Oct 6, 2010 at 8:09 PM,  ra...@hush.ai wrote:
 Hello All,

 I'm using vpb to generate a terrain for a project I'm working on.
 And it is fantastic.

 I do have one question though, I have a need to 'cut' holes in the
 terrain so models can be placed on/into hillsides, or in a hole in
 the ground. e.g. a dam in the side of a hill.

 I've looked a bit and do not see an obvious solution?  any pointers
 in the right direction would be much appreciated.

 Thanks,
 John

 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] PBOs and stutterless texture uploading

2010-10-07 Thread Robert Osfield
Hi Eduard,

The use of PBOs may well help, but only if the pixel formats are
accelerated properly by the driver - this can be a bit of a lottery.
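
(A rough sketch of what attaching a PBO to an image looks like; the exact
API differs between 2.9.x revisions, and the file name and "texture"
variable here are hypothetical, so treat this as an assumption to verify
against your headers rather than a recipe:)

    osg::ref_ptr<osg::Image> image = osgDB::readImageFile("some_texture.png");
    if (image.valid())
    {
        // stream the pixel data through a pixel buffer object on upload
        image->setPixelBufferObject(new osg::PixelBufferObject(image.get()));
        texture->setImage(image.get());
        texture->setUnRefImageDataAfterApply(true);  // free the CPU copy once uploaded
    }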

It just so happens that my recent work on integrating a GLU
implementation directly into the core OSG has been prompted by the
wish to pre-process imagery into a form that is better for downloading
to the graphics card.  I will be tweaking the gluScaleImage function
to enable us to call it from any thread, rather than being restricted
to just threads with a valid graphics context.  This opens the door to
resizing, altering pixel formats and generating mipmaps all in normal
CPU threads, such as the threads that are loading data.

Another step along this route: I plan to integrate the
NvidiaTextureTools SDK into a plugin to allow us to compress imagery
to GL-friendly compressed pixel formats.  Again, this can be done as a
run-time pre-processing step so that all the data is in a form that is
ideal for downloading to the GPU, minimizing memory footprint and
bandwidth.

This will all be part of the next dev release, 2.9.10, and the
upcoming 3.0 release.

Robert.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] Regarding texture rectangle in MRT example

2010-10-07 Thread Mahendra G.R
Hello,

I'm doing exactly as shown in the osgmultiplerendertarget example, but
I get an error when I declare and try to initialize an object of
TextureRectangle.

Code:

osg::TextureRectangle* textureRect[256] = {0,0,0,0};
for (int i = 0; i < Z; i++)
{
    textureRect[i] = new osg::TextureRectangle;
    //..
}

I get an error saying "invalid use of undefined type 'struct
osg::TextureRectangle'".

Another question: should I create a quad, attach the textures to it and then
render it?



Please excuse me if it's a silly question.




-- 
http://www.mahendragr.com
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Save texture changes via fragment shader

2010-10-07 Thread Aitor Ardanza

Frederic Bouvier wrote:
 
  // load texture as an image
  imgTexture = osgDB::readImageFile("model/BAKE_OVERRIDE.jpg");
  // if the image is successfully loaded
  if (imgTexture)
  {
      imgTexture->allocateImage(3500, 3500, 1, GL_RGBA,
                                GL_UNSIGNED_BYTE);
  
 
 Why are you reallocating space for an image loaded from file ?
 Shouldn't the texture dimension be a power of 2 ?
 

OK, it is a mistake. The model I load (.obj) has a texture linked to it that is 
loaded automatically.
I want to paint into another texture (the same size as the model's texture) in 
the fragment shader.
I am following the example of osgmultiplerendertargets... but I cannot get 
good textures...

Code:
osg::Group *scene = new osg::Group();
modelTransf = new osg::PositionAttitudeTransform();
node = osgDB::readNodeFile("model\\file.obj");
node->setName("AVATAR");
modelTransf->addChild(node);
scene->addChild(modelTransf);
setSceneData(scene);

unsigned tex_width = 4096;
unsigned tex_height = 4096;
// textures to render to and to use for texturing of the final quad
osg::TextureRectangle* textureRect[2] = {0,0};
for (int i = 0; i < 2; i++) {
    textureRect[i] = new osg::TextureRectangle;
    textureRect[i]->setTextureSize(tex_width, tex_height);
    textureRect[i]->setInternalFormat(GL_RGBA);
    textureRect[i]->setFilter(osg::Texture2D::MIN_FILTER, osg::Texture2D::LINEAR);
    textureRect[i]->setFilter(osg::Texture2D::MAG_FILTER, osg::Texture2D::LINEAR);
}
// create the geometry of the quad
{
    osg::Geometry* polyGeom = new osg::Geometry();

    polyGeom->setSupportsDisplayList(false);

    osg::Vec3Array* vertices = new osg::Vec3Array;
    osg::Vec2Array* texcoords = new osg::Vec2Array;

    vertices->push_back(osg::Vec3d(0,0,0));
    texcoords->push_back(osg::Vec2(0,0));

    vertices->push_back(osg::Vec3d(200,0,0));
    texcoords->push_back(osg::Vec2(tex_width,0));

    vertices->push_back(osg::Vec3d(200,0,200));
    texcoords->push_back(osg::Vec2(tex_width,tex_height));

    vertices->push_back(osg::Vec3d(0,0,200));
    texcoords->push_back(osg::Vec2(0,tex_height));

    polyGeom->setVertexArray(vertices);
    polyGeom->setTexCoordArray(0, texcoords);

    osg::Vec4Array* colors = new osg::Vec4Array;
    colors->push_back(osg::Vec4(1.0f,1.0f,1.0f,1.0f));
    polyGeom->setColorArray(colors);
    polyGeom->setColorBinding(osg::Geometry::BIND_OVERALL);

    polyGeom->addPrimitiveSet(new osg::DrawArrays(osg::PrimitiveSet::QUADS, 0, vertices->size()));

    // now we need to add the textures (generated by RTT) to the Drawable.
    osg::StateSet* stateset = new osg::StateSet;
    for (int i = 0; i < 2; i++) {
        stateset->setTextureAttributeAndModes(i, textureRect[i], osg::StateAttribute::ON);
    }

    polyGeom->setStateSet(stateset);

    static const char *shaderSource = {
        "uniform sampler2DRect textureID0;\n"
        "uniform sampler2DRect textureID1;\n"
        "void main(void)\n"
        "{\n"
        "    gl_FragData[0] =\n"
        "        vec4(texture2DRect( textureID0, gl_TexCoord[0].st ).rgb, 1);\n"
        "}\n"
    };
    osg::ref_ptr<osg::Shader> fshader = new osg::Shader( osg::Shader::FRAGMENT, shaderSource );
    osg::ref_ptr<osg::Program> program = new osg::Program;
    program->addShader( fshader.get() );
    stateset->setAttributeAndModes( program.get(),
                                    osg::StateAttribute::ON | osg::StateAttribute::OVERRIDE );
    stateset->addUniform(new osg::Uniform("textureID0", 0));
    stateset->addUniform(new osg::Uniform("textureID1", 1));

    osg::Geode* geode = new osg::Geode();
    geode->addDrawable(polyGeom);

    scene->addChild(geode);
}

getCamera()->setViewport(new osg::Viewport(0, 0, width(), height()));
getCamera()->setProjectionMatrixAsPerspective(30.0f,
    static_cast<double>(width())/static_cast<double>(height()), 1.0f, 1.0f);
getCamera()->setGraphicsContext(getGraphicsWindow());

// now create the camera to do the multiple render to texture
{
    osg::Camera* camera = new osg::Camera;

    // set up the background color and clear mask.
    camera->setClearColor(osg::Vec4(0.1f,0.1f,0.3f,1.0f));
    camera->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // the camera is going to look at our input quad
    camera->setProjectionMatrix(osg::Matrix::ortho2D(0,1,0,1));
    camera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
    camera->setViewMatrix(osg::Matrix::identity());

    // set viewport
    camera->setViewport(0, 0, width(), height());

    // set the camera to render before the main camera.
    camera->setRenderOrder(osg::Camera::PRE_RENDER);

    // tell the camera to use OpenGL frame buffer objects
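
(The message is cut off at this point. For reference, in the
osgmultiplerendertarget example the camera setup continues roughly as below -
a sketch of that example's pattern, not the author's missing code:)

    camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);

    // attach each texture to its own color buffer so that the shader's
    // gl_FragData[i] output lands in textureRect[i]
    for (int i = 0; i < 2; i++)
        camera->attach(osg::Camera::BufferComponent(osg::Camera::COLOR_BUFFER0 + i),
                       textureRect[i]);

    scene->addChild(camera);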


Re: [osg-users] Workaround for nVidia + fullscreen + Windows 7

2010-10-07 Thread Wojciech Lewandowski
Hi Everyone,

Big thanks to Farshid for the solution :-)

Support for his workaround - using Copy as the swap method - was recently 
included in OSG trunk.
SwapCopy is not active by default, so people not using Aero should still be 
happy with the default SwapExchange.

Those who would like to activate the SwapCopy method can use environment 
variables or osgViewer command line arguments (provided they use the 
Viewer( ArgumentParser ) ctor).

env var method:

set OSG_SWAP_METHOD=COPY

command line method:

osgviewer --swap-method COPY

Inside the code one can select the swap method for a particular window via 
GraphicsContext::Traits, or for all windows by changing the default set in 
DisplaySettings. Traits default to the method set in DisplaySettings, and 
DisplaySettings uses what is set by the env var or command line. If no option 
is given, DEFAULT is used. I hope this solution is fairly complete and covers 
all possible use cases.

All 4 allowed swap method options are: 

SWAP_EXCHANGE - flip back & front buffers
SWAP_COPY - copy contents of back buffer into front buffer
SWAP_UNDEFINED - move contents of back buffer into front buffer, leaving back 
buffer contents undefined 
SWAP_DEFAULT - let the driver select the method (in my observation NVidia 
drivers on Win7 default to EXCHANGE) 
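
(For completeness, the in-code equivalents described above look roughly like 
this; the enum and member names follow the description in this message, but 
check the current headers since the feature only just landed in trunk:)

    // global default for all windows
    osg::DisplaySettings::instance()->setSwapMethod(osg::DisplaySettings::SWAP_COPY);

    // or per window, before the graphics context is created
    traits->swapMethod = osg::DisplaySettings::SWAP_COPY;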

Cheers,
Wojtek Lewandowski



From: Wojciech Lewandowski 
Sent: Monday, September 27, 2010 2:31 PM
To: OpenSceneGraph Users 
Subject: Re: [osg-users] Workaround for nVidia + fullscreen + Windows 7


Hi, 

I have submitted code changes. Look at osg-submissions for details.

Wojtek Lewandowski


From: Wojciech Lewandowski 
Sent: Friday, September 24, 2010 9:44 PM
To: OpenSceneGraph Users 
Subject: Re: [osg-users] Workaround for nVidia + fullscreen + Windows 7


Hi,

Exactly as Farshid said, I have modified the PreparePixelFormatSpecification 
function in GraphicsWindowWin32.cpp to test the workaround. Interestingly, 
PreparePixelFormatSpecification has an input allowSwapExchangeARB parameter, as 
if someone had a similar problem before. But this parameter is set where the 
function is called, not influenced directly by GraphicsContext::Traits. In my 
opinion the best option would be to expose the swap method in 
GraphicsContext::Traits.

I may try to come up with a patch on Monday.  Anyone want to beat me to it ;-) ?

Wojtek

From: Farshid Lashkari 
Sent: Friday, September 24, 2010 6:40 PM
To: OpenSceneGraph Users 
Subject: Re: [osg-users] Workaround for nVidia + fullscreen + Windows 7


Hi Robert,


On Fri, Sep 24, 2010 at 9:28 AM, Robert Osfield robert.osfi...@gmail.com 
wrote: 
  Did you modify the OSG to achieve this?  If so could you post the
  changes.  Perhaps this could be made a runtime option in
  osgViewer.




My application handles all the windowing code itself, so I didn't need to make 
any changes to OSG.

I noticed that GraphicsWindowWin32.cpp hard-codes the swap method to 
WGL_SWAP_EXCHANGE_ARB. To apply this workaround, users would just need to 
change this to WGL_SWAP_COPY_ARB and recompile. Having this configurable would 
be ideal; however, I'm not very familiar with osgViewer, so I'm probably not the 
best person to make this change, otherwise I would have already submitted a 
patch ;)


Cheers,
Farshid









___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Regarding texture rectangle in MRT example

2010-10-07 Thread Aitor Ardanza
Hi,

you need to add #include <osg/TextureRectangle>

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=32520#32520





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] OsgViewer QT threads

2010-10-07 Thread Gustavo Puche
Hi all,

Does anybody know where I can find an example of osgViewerQt with 
threads?

Thank you! :D 

Cheers,
Gustavo

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=32523#32523





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Conceptual questions about Cameras in general, and Slave Cameras

2010-10-07 Thread Fred Smith
Hi Robert,

Thanks.
It also works if I just give it the master camera's GraphicsContext, which 
is a little bit easier to do.
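
(A minimal sketch of that setup, assuming an osgViewer::Viewer named "viewer" 
and an illustrative viewport size:)

    osg::ref_ptr<osg::Camera> slave = new osg::Camera;
    slave->setGraphicsContext(viewer.getCamera()->getGraphicsContext());  // reuse the master's context
    slave->setViewport(new osg::Viewport(0, 0, 512, 512));
    viewer.addSlave(slave.get());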

Fred

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=32524#32524





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] GLU integration

2010-10-07 Thread Robert Osfield
Hi All,

I have now moved the subset of GLU functions that are part of the
core OSG library into the osg namespace, so the likes of gluScaleImage
are now used as osg::gluScaleImage.

I have also introduced a gluScaleImage version that doesn't use
glGet's; instead you pass in a PixelStorageModes object that provides
all the appropriate settings.  This version of gluScaleImage is
particularly useful as it allows the function to be used anywhere in
your application - you aren't limited to calling it from a thread
with a valid graphics context.

The osg::Image::scaleImage() and osg::Image::copySubImage() methods
now use this new gluScaleImage function, which means that both these
methods can now be called anywhere in your app, at any stage, so you
can move scaling and changing of pixel formats into plugins, or into
pre-processing functions in your application.
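
(A short sketch of what this enables - resizing in a loader/pager thread with 
no graphics context current; the file name and target size are hypothetical:)

    #include <osg/Image>
    #include <osgDB/ReadFile>

    osg::ref_ptr<osg::Image> image = osgDB::readImageFile("huge_texture.jpg");
    if (image.valid())
        image->scaleImage(1024, 1024, 1);  // uses the built-in gluScaleImage, no GL context needed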

The osgphotoalbum and osgtexture3D examples both had to use a local
graphics context to do the rescaling work they needed to do; now,
thanks to the new flexibility, they no longer need this temporary graphics
context, so the code is simplified - one just directly calls the
scale image functions without restriction.

Robert.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render to texture

2010-10-07 Thread Aitor Ardanza
Hi Mahendra,

Mahendra G.R wrote:
 Hello Robert,
 
 Thanks, i figured it out.
 
 On Fri, Oct 1, 2010 at 2:42 PM, Robert Osfield  () wrote:
 -- 
 http://www.mahendragr.com (http://www.mahendragr.com)
 
  --
 Post generated by Mail2Forum


I'm trying something similar to you, but in the fragment shader I need to work 
with two textures...
Can you briefly explain the steps you followed?
Thanks!

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=32527#32527





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render to texture

2010-10-07 Thread Mahendra G.R
Hi Aitor,

Sorry, what exactly do you want me to explain? I'm trying to RTT some data
with an FBO, in a fragment shader that is.  Then I take this texture data and
save it onto the disc as an image; this is where I'm stuck, because what
I'm doing is for 3D data. It would be really nice if someone could explain how
to save/read the data from the texture and write it out as an image.  I checked
the examples but it seems I'm missing something. I have directly attached an
image to the texture and am trying to save it, something like this:


textureCamera->attach( osg::Camera::COLOR_BUFFER, Image );
osgDB::writeImageFile(*Image, "sample.png");
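
(For reference, the usual pattern is to allocate the image, attach it, render 
at least one frame, and only then write it out - a sketch with hypothetical 
sizes and file name:)

    osg::ref_ptr<osg::Image> image = new osg::Image;
    image->allocateImage(1024, 1024, 1, GL_RGBA, GL_UNSIGNED_BYTE);
    textureCamera->attach(osg::Camera::COLOR_BUFFER, image.get());

    viewer.frame();                               // the copy into the image happens during rendering
    osgDB::writeImageFile(*image, "sample.png");  // now the image holds valid pixels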


Regards,

On Thu, Oct 7, 2010 at 4:31 PM, Aitor Ardanza aitoralt...@terra.es wrote:

 Hi Mahendra,

 Mahendra G.R wrote:
  Hello Robert,
 
  Thanks, i figured it out.
 
  On Fri, Oct 1, 2010 at 2:42 PM, Robert Osfield  () wrote:
  --
  http://www.mahendragr.com (http://www.mahendragr.com)
 
   --
  Post generated by Mail2Forum


 I'm trying something similar like you, but in fragment shader I need to
 work with two textures...
 Can you explain briefly the steps you followed?
 Thanks!

 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=32527#32527





 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org




-- 
http://www.mahendragr.com
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] [vpb] Building VPB, cmake caused configure problem [solved]

2010-10-07 Thread Paul Wessling
I had a slight issue building VirtualPlanetBuilder 0.9.11 in that ./configure 
would fail saying:

ERROR: Version 2.9.5 or higher of OpenSceneGraph is required. Version 2.9.5 was 
found.

Error: endif() unmatched at line 128 in CMakeModules/FindOSG.cmake, arguments 
unrecognized (or something similar, sorry, I lost the exact error msg).

This error was corrected by upgrading from cmake 2.6.0 to cmake 2.8.2.

I looked for a bit and didn't find any reports of this message here, so I figured 
I would post for the next guy.

Cheers,

Paul

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=32529#32529





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render to texture

2010-10-07 Thread Aitor Ardanza
Hi,

Is it only possible, with FBO, to use the same color for each texture channel? 
If I use the following code, channel 0 gives me a black texture...

Code:
static const char *shaderSource = {
    "uniform sampler2D baseMap;\n"
    "varying vec2 Texcoord;\n"
    "void main(void)\n"
    "{\n"
    "    gl_FragData[0] = texture2D( baseMap, Texcoord );\n"
    "    gl_FragData[1] = vec4(0,1,0,1);\n"
    "    gl_FragData[2] = vec4(0,0,1,1);\n"
    "    gl_FragData[3] = vec4(0,0,1,1);\n"
    "}\n"
};


With the other 3 channels I can take the color texture and save it as a PNG.

Thank you!

Cheers,
Aitor

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=32530#32530





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] render a node last

2010-10-07 Thread lucie lemonnier
Hi,

How do I render a node in my scene always last?
I want this object to be drawn over the other objects in my scene.

Thank you!

Cheers,
lucie

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=32531#32531





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] render a node last

2010-10-07 Thread Paul Martz
You can use a Camera post draw callback (setPostDrawCallback) or you can use 
render bins (setRenderBinDetails). Search the newsgroup or OSG source for more 
info. You might also need to deal with the depth buffer.
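
(A minimal sketch of the render-bin route, assuming the node should also 
ignore the depth buffer; "node" is whatever subgraph you want drawn on top:)

    #include <osg/Depth>

    osg::StateSet* ss = node->getOrCreateStateSet();
    ss->setRenderBinDetails(11, "RenderBin");   // a bin number after the default bins
    ss->setAttributeAndModes(new osg::Depth(osg::Depth::ALWAYS, 0.0, 1.0, false),
                             osg::StateAttribute::ON);  // always pass, don't write depth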

   -Paul


On 10/7/2010 10:22 AM, lucie lemonnier wrote:

Hi,

How to render a node in my scene always last?
I want this object is over the other objects in my scene.

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] GLU integration

2010-10-07 Thread Stephan Maximilian Huber
Hi Robert,

On 07.10.10 10:02, Robert Osfield wrote:
 Could you do an svn update and let me
 know if this fixes things.

Compile went fine for 32bit and 64bit OS X.

Thanks again,

Stephan
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] GLU integration

2010-10-07 Thread Robert Osfield
Hi Stephan,

On Thu, Oct 7, 2010 at 5:54 PM, Stephan Maximilian Huber
ratzf...@digitalmind.de wrote:
 Compile went fine for 32bit and 64bit OS X.

That's excellent news.  Looks like Windows, Linux and OS X now build
fine.  My guess is that other Unix platforms will be fine too.

Next step will be to test the build out on GLES targets; this may
introduce new problems, as the GLU code base won't ever have been used
when building against GLES.

Robert.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] OSG seems to have a problem scaling to multiple windows on multiple graphics cards

2010-10-07 Thread John Kelso

Hi all,

Our immersive system is a single host computer with 8 cores and 4 graphics
cards running Linux. (1)  We are using OSG 2.8.3.

We are having a heck of a hard time getting OSG to take advantage of
our multiple graphics cards.  Help!

Here's what we did:

If we load a fairly large model into our test program we can get a frame
rate of about 150 FPS when displaying in a single window. (2) We are
running single-threaded, and assign to a specific core.

When we background this and run a second copy of the program to another
graphics card and core then both programs run at 150 FPS.  Same thing for
running three and four copies at once.

That is, four processes using four graphics cards on four cores run just as
fast as a single process.  All four cores are at near 100% CPU utilization
according to top.  So far, so good.

Now we modify the program to load the model and create multiple windows on
multiple cards.  There's one window per card and each uses a different
core. (3)

The threading model is CullThreadPerCameraDrawThreadPerContext, the
default chosen by OSG.  The environment variable OSG_SERIALIZE_DRAW_DISPATCH
is not set, so it defaults to ON, which we think means draw in serial.
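
(For reference, the same settings can also be made in code - a sketch, assuming 
the viewer object is called "viewer"; the env vars achieve the same thing:)

    viewer.setThreadingModel(osgViewer::ViewerBase::CullThreadPerCameraDrawThreadPerContext);
    osg::DisplaySettings::instance()->setSerializeDrawDispatch(false);  // OSG_SERIALIZE_DRAW_DISPATCH=OFF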

If we draw to four windows on four different cards we get about 36 FPS.
There are four different cores being used, and each has about 25% of the
CPU.  This probably makes sense as the draws are in serial: 150 FPS/4
is about 36 FPS.  As expected, we get nearly identical results if we create
four windows on a single card using four different cores.

If we set OSG_SERIALIZE_DRAW_DISPATCH=OFF we hope to see better performance,
but with four windows on four graphics cards we only get 16 FPS!  There are
four different cores being used, one at about 82%, and the other three at
75%, but what are they doing?  Again, we get nearly identical results if
using four windows on a single card.

So

How can we get OSG to draw to four windows on four cards in one process as
fast as running four separate processes?

Any pointers or suggestions are welcome.

Thanks,

John


1 - Our immersive system consists of 3 projectors and a console each driven
by an Nvidia FX5800 graphics card all genlocked for 3D stereo
display. The four graphics cards are in two QuadroPlex Model D2 units
connected to the host.  The host computer is an 8 core Dell Precision
T5400 running 64 bit Linux (CentOS 5.5). We are using Nvidia driver
version 195.36.24

2 - the program is attached - it uses only OSG.  We run our tests with
__GL_SYNC_TO_VBLANK=0 to get the maximum frame rate.

3 - one graphics context per window and one camera per window

#include <osgDB/ReadFile>
#include <osgViewer/Viewer>
#include <osgViewer/ViewerEventHandlers>
#include <osgGA/TrackballManipulator>
#include <iostream>
#include <cstring>
#include <OpenThreads/Thread>

#include "Nerves.h"

void newWindow(osgViewer::Viewer& viewer, unsigned int sn, char *name=NULL)
{
    osg::ref_ptr<osg::GraphicsContext::Traits> traits = new osg::GraphicsContext::Traits;
    //printf("traits->referenceCount() = %d\n", traits->referenceCount());
    traits->screenNum = sn;
    traits->x = 111;
    traits->y = 0;
    traits->width = 1058;
    traits->height = 990;
    traits->windowDecoration = true;
    traits->doubleBuffer = true;
    traits->sharedContext = 0;
    char foo[256] = "display-";
    if (name)
    {
        strcat(foo, name);
        traits->windowName = foo;
    }

    osg::ref_ptr<osg::GraphicsContext> gc =
        osg::GraphicsContext::createGraphicsContext(traits.get());
    //printf("traits->referenceCount() = %d\n", traits->referenceCount());
    //printf("gc->referenceCount() = %d\n", gc->referenceCount());

    if (gc.valid())
    {
        osg::notify(osg::INFO) << "GraphicsWindow has been created successfully." << std::endl;
    }
    else
    {
        osg::notify(osg::NOTICE) << "GraphicsWindow has not been created successfully." << std::endl;
    }

    osg::ref_ptr<osg::Camera> camera = new osg::Camera;
    //printf("camera->referenceCount() = %d\n", camera->referenceCount());
    camera->setGraphicsContext(gc.get());
    //printf("gc->referenceCount() = %d\n", gc->referenceCount());
    camera->setViewport(new osg::Viewport(0, 0, traits->width, traits->height));

    // running in mono
    GLenum buffer = traits->doubleBuffer ? GL_BACK : GL_FRONT;
    camera->setDrawBuffer(buffer);
    // does this make any difference?
    camera->setReadBuffer(buffer);

    viewer.addSlave(camera.get());
    //printf("camera->referenceCount() = %d\n", camera->referenceCount());
    //printf("traits->referenceCount() = %d\n", traits->referenceCount());
    //printf("gc->referenceCount() = %d\n", gc->referenceCount());
    //printf("camera->referenceCount() = %d\n", camera->referenceCount());
}

int main( int argc, char **argv )
{
    osgViewer::Viewer viewer;

    viewer.addEventHandler(new osgViewer::StatsHandler);
    viewer.addEventHandler(new osgViewer::ThreadingHandler);

    //viewer.setThreadingModel( 

Re: [osg-users] OSG seems to have a problem scaling to multiple windows on multiple graphics cards

2010-10-07 Thread Wojciech Lewandowski

Hi John,

This is odd, but it sounds a bit like the swap buffers of the windows are 
somehow waiting for each other. I believe the WGL_NV_swap_group extension is 
not used by OSG. This extension could possibly help you there.


But I could be wrong on the above. It is not really the main point I wanted to 
mention. Instead I wanted to suggest you check SLI mosaic mode. We have done 
some experiments on 4 channels on Linux / Nvidia QuadroPlex D2 in the past. 
At first we tried to go down the same path as you describe. But later we 
read somewhere that the fastest method is to use one window filling the whole 
desktop and split this window into 4 screen-quarter slave views.  Each slave 
view can be positioned so that it covers one monitor output. Such a 4-monitor 
setup is possible with QP D2 drivers in SLI mosaic mode.


Using Producer config files one may easily create a .cfg that can be 
passed on the command line to osgviewer to set up 4 channel slaves on a single 
window. The best thing about using one window is that all four views use the 
same context, so GL resources are shared and all four are swapped at once with 
a single SwapBuffers call.
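
(A rough sketch of that arrangement - one full-desktop context and four slave 
cameras, each with a quarter-width viewport and a projection offset that picks 
out its horizontal slice of the master frustum; the sizes and offset math here 
are illustrative assumptions, not taken from our setup:)

    osg::ref_ptr<osg::GraphicsContext> gc =
        osg::GraphicsContext::createGraphicsContext(traits.get());  // one full-desktop window
    unsigned int quarter = traits->width / 4;
    for (unsigned int i = 0; i < 4; ++i)
    {
        osg::ref_ptr<osg::Camera> camera = new osg::Camera;
        camera->setGraphicsContext(gc.get());
        camera->setViewport(new osg::Viewport(i * quarter, 0, quarter, traits->height));
        // map the i-th quarter of clip space onto this viewport
        viewer.addSlave(camera.get(),
                        osg::Matrixd::scale(4.0, 1.0, 1.0) *
                        osg::Matrixd::translate(3.0 - 2.0 * double(i), 0.0, 0.0),
                        osg::Matrixd());
    }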


In our project we ended up with 4-channel rendering using SLI mosaic and we 
were pleasantly surprised how fast it was in comparison to 
separate GL contexts on 4 windows. You may want to check SLI mosaic if you 
haven't done this before.


Hope this helps,
Wojtek Lewandowski
--
From: John Kelso ke...@nist.gov
Sent: Thursday, October 07, 2010 9:35 PM
To: osg-users@lists.openscenegraph.org
Subject: [osg-users] OSG seems to have a problem scaling to multiple windows 
on multiple graphics cards



Hi all,

Our immersive system is a single host computer with 8 cores and 4 graphics
cards running Linux. (1)  We are using OSG 2.8.3.

We are having a heck of a hard time getting OSG to take advantage of
our multiple graphics cards.  Help!

Here's what we did:

If we load a fairly large model into our test program we can get a frame
rate of about 150 FPS when displaying in a single window. (2) We are
running single-threaded, and assign to a specific core.

When we background this and run a second copy of the program to another
graphics card and core then both programs run at 150 FPS.  Same thing for
running three and four copies at once.

That is, four processes using four graphics cards on four cores run just 
as

fast as a single process.  All four cores are at near 100% CPU utilization
according to top.  So far, so good.

Now we modify the program to load the model and create multiple windows on
multiple cards.  There's one window per card and each uses a different
core. (3)

The threading model is CullThreadPerCameraDrawThreadPerContext, the
default chosen by OSG.  The environment variable 
OSG_SERIALIZE_DRAW_DISPATCH

is not set, so it defaults to ON, which we think means draw in serial.

If we draw to four windows on four different cards we get about 36 FPS.
There are four different cores being used, and each has about 25% of the
CPU.  This probably this makes sense as the draws are in serial.  150 
FPS/4
is about 36 FPS.  As expected, we get nearly identical results if we 
create

four windows on a single card using four different cores.

If we set OSG_SERIALIZE_DRAW_DISPATCH=OFF we hope to see better 
performance,
but with four windows on four graphics cards we only get 16 FPS!  There 
are

four different cores bring used, one at about 82%, and the other three at
75%, but what are they doing?  Again, we get nearly identical results if
using four windows on a single card.

So

How can we get OSG to draw to four windows on four cards in one process as
fast as running four separate processes?

Any pointers or suggestions are welcome.

Thanks,

John


1 - Our immersive system consists of 3 projectors and a console each 
driven

by an Nvidia FX5800 graphics card all genlocked for 3D stereo
display. The four graphics cards are in two QuardoPlex Model D2 units
connected to the host.  The host computer is an 8 core Dell Precision
T5400 running 64 bit Linux (CentOS 5.5). We are using Nvidia driver
version 195.36.24

2 - the program is attached- it uses only OSG.  We run our tests with
_GL_SYNC_TO_VBLANK=0 to get the maximum frame rate.

3 - one graphics context per window and one camera per window





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] OSG seems to have a problem scaling to multiple windows on multiple graphics cards

2010-10-07 Thread John Kelso

Hi,

Many thanks for your speedy reply.

We were considering trying mosaic mode if we couldn't come up with something
that would fix the problem with our current display configuration.

Switching to mosaic mode will require a good bit of code rewriting, but if
that's the way to go I guess it's worth it in the long run.

I'll look into the WGL_NV_swap_group extension too.

Any other ideas from the group?

Thanks,

John

On Thu, 7 Oct 2010, Wojciech Lewandowski wrote:


Hi John,

This is odd but it sounds bit like swap buffers of the windows  are somehow
waiting for each other. I believe that WGL_NV_swap_group extension is not
used by OSG. This extension could possible help you there.

But I could be wrong on above. It is not really my main point I wanted to
mention. Instead I wanted to suggest you check SLI mosaic mode. We have done
some experiments on 4 channels on Linux / Nvidia QuadroPlex D2 in the past.
At first we tried to go the same path as you describe. But later we have
read somewhere that fastest method is to use one window filing whole desktop
and split this window into 4 screen quarter slave views.  Each slave view
could be positioned so that it covers one monitor output. Such 4 monitor
setup is possible with QP D2 drivers in SLI mosaic mode.

Using producer config files one may easily create a .cfg that could be
passed from command line to osgViewer and set 4 channel slaves on single
window. Best thing with using one window is that all four views use the same
context so GL resources are shared and all four are swaped at once with
single SwapBuffer call.

In our project we ended up with 4 channel rendering using SLI mosaic and we
were pleasently surprised how fast it was performing in comparison to
separate gl contexts on 4 windows. You may check SLI mosaic if you haven't
done this before

Hope this helps,
Wojtek Lewandowski
--
From: John Kelso ke...@nist.gov
Sent: Thursday, October 07, 2010 9:35 PM
To: osg-users@lists.openscenegraph.org
Subject: [osg-users] OSG seems to have a problem scaling to multiple windows
on multiple graphics cards


Hi all,

Our immersive system is a single host computer with 8 cores and 4 graphics
cards running Linux. (1)  We are using OSG 2.8.3.

We are having a heck of a hard time getting OSG to take advantage of
our multiple graphics cards.  Help!

Here's what we did:

If we load a fairly large model into our test program we can get a frame
rate of about 150 FPS when displaying in a single window. (2) We are
running single-threaded, and assign to a specific core.

When we background this and run a second copy of the program to another
graphics card and core then both programs run at 150 FPS.  Same thing for
running three and four copies at once.

That is, four processes using four graphics cards on four cores run just
as
fast as a single process.  All four cores are at near 100% CPU utilization
according to top.  So far, so good.

Now we modify the program to load the model and create multiple windows on
multiple cards.  There's one window per card and each uses a different
core. (3)

The threading model is CullThreadPerCameraDrawThreadPerContext, the
default chosen by OSG.  The environment variable
OSG_SERIALIZE_DRAW_DISPATCH
is not set, so it defaults to ON, which we think means draw in serial.

If we draw to four windows on four different cards we get about 36 FPS.
There are four different cores being used, and each has about 25% of the
CPU.  This probably this makes sense as the draws are in serial.  150
FPS/4
is about 36 FPS.  As expected, we get nearly identical results if we
create
four windows on a single card using four different cores.

If we set OSG_SERIALIZE_DRAW_DISPATCH=OFF we hope to see better
performance,
but with four windows on four graphics cards we only get 16 FPS!  There
are
four different cores bring used, one at about 82%, and the other three at
75%, but what are they doing?  Again, we get nearly identical results if
using four windows on a single card.

So

How can we get OSG to draw to four windows on four cards in one process as
fast as running four separate processes?

Any pointers or suggestions are welcome.

Thanks,

John


1 - Our immersive system consists of 3 projectors and a console each
driven
by an Nvidia FX5800 graphics card all genlocked for 3D stereo
display. The four graphics cards are in two QuardoPlex Model D2 units
connected to the host.  The host computer is an 8 core Dell Precision
T5400 running 64 bit Linux (CentOS 5.5). We are using Nvidia driver
version 195.36.24

2 - the program is attached- it uses only OSG.  We run our tests with
_GL_SYNC_TO_VBLANK=0 to get the maximum frame rate.

3 - one graphics context per window and one camera per window





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org