Re: [osg-users] latest NVIDIA drivers

2010-09-14 Thread Fred Smith

robertosfield wrote:
 
 The desktop market is already rather stagnant in comparison to the growth of
 mobile devices; mindshare and market share are moving away from the desktop to
 mobile, and with this sea change Microsoft, and hence DirectX, will be losing
 ground.


I am appalled to see that GLSL development tools are almost non-existent.
Microsoft historically succeeded by attracting developers, and I think the same
holds true for DirectX.

Look at shader debugging tools. I'm not talking about small tools you use to
play with a shader for a few minutes, but real debuggers with breakpoints.

Apart from glslDevil, with its very limited capabilities, there is just no
development tool available.

HLSL has plenty of support from Microsoft (PIX), NVIDIA (FX Composer and now
Parallel Nsight) and ATI (GPU PerfStudio).

I was told ATI does not plan to support GLSL debugging in the near term, and
NVIDIA doesn't look much more interested. Also, their latest Optimus technology
isn't compatible with Linux yet, which doesn't help...

That's a pity.

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=31587#31587





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] latest NVIDIA drivers

2010-09-14 Thread Trajce (Nick) Nikolov
our GLSL code is bug free ... no need of debugging .. *smile*
-Nick


On Tue, Sep 14, 2010 at 5:06 PM, Fred Smith osgfo...@tevs.eu wrote:





[osg-users] Screenshot to non OSG image

2010-09-14 Thread benedikt naessens
Dear all,

I want to combine a movie (shot by a real camera) with some OSG 3D objects. I
have already made sure that the position and orientation of the camera are
stored each time a picture is grabbed (as part of the movie). The movie frames
are stored in memory (I can choose between RGB24 and RGB32) in a buffer (an
array of char* blocks). I just need to call an API function of the camera to
convert this buffer to an AVI sequence.

Now, before I call this function to convert the contents of the buffer to an
AVI movie, I want to do the OSG rendering (this is my post-processing step).

The technique I use is:
* Add an OSG camera to the OSG root node and add the scene data as a child of
that camera
* Set the render order to POST_RENDER
* Set the render target implementation to FRAME_BUFFER_OBJECT
* Deliberately leave the clear mask unset for the camera:
setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) is not present in
the code. I hope this is right?

This results in the following code:


Code:

m_pSnapshotcamera = new Camera;
m_refpARRoot->addChild( m_pSnapshotcamera );

m_pSnapshotcamera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
m_pSnapshotcamera->setProjectionMatrixAsPerspective(snapVerFov, snapAspect, 0.01, 4);
m_pSnapshotcamera->setDrawBuffer(GL_BACK);
m_pSnapshotcamera->setReadBuffer(GL_BACK);
m_pSnapshotcamera->setRenderOrder(osg::Camera::POST_RENDER);
m_pSnapshotcamera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);




Now, how do I assign the (AVI sequence) memory as the frame buffer to write to?
I am aware of the attach() member function of the Camera class, but it needs an
(OSG) image as a parameter.

Also, how can I make sure that OSG renders the 3D objects at the size of the
camera (video) pictures? To clarify: in the code above I have already defined
the horizontal and vertical field of view (via the vertical FOV and the aspect
ratio), but the system still can't deduce the size of the frame buffer (let's
assume the video is 640 x 480, which is different from the standard size of my
OSG widgets).
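
From the examples I have found so far, I suspect the answer involves something
like the sketch below. This is completely untested, the 640 x 480 size is just
my assumption about the video, and aviBlock is a hypothetical pointer into my
AVI buffer.

Code:

// Untested sketch: attach an osg::Image so the FBO contents are read back
// into main memory, and give the camera an explicit 640 x 480 viewport.
osg::ref_ptr<osg::Image> image = new osg::Image;
image->allocateImage(640, 480, 1, GL_RGB, GL_UNSIGNED_BYTE);

m_pSnapshotcamera->setViewport(0, 0, 640, 480);
m_pSnapshotcamera->attach(osg::Camera::COLOR_BUFFER, image.get());

// After each rendered frame, image->data() should hold the pixels; copying
// them into one of my char* blocks would presumably look like this:
// memcpy(aviBlock, image->data(), image->getTotalSizeInBytes());


Is that roughly the right approach?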

Thank you!

Kind regards,
Benedikt Naessens.

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=31594#31594







[osg-users] [vpb] Problem with tiles

2010-09-14 Thread Massimo Tarantini
Hi all, 

I have the following problem with the terrain I have created.
The two snapshots show 4 tiles swapped in, but they are not aligned with the
grid (the texture is well aligned, when it is visible).

First
[Image: http://img251.imageshack.us/img251/8871/immagine1rw.jpg ]

After zoom in
[Image: http://img251.imageshack.us/img251/4338/immagine2pb.jpg ]

I used OSG 2.9.6 and vpbmaster to create the source files, then created some
.bat files to generate the terrain.
I have already used this workflow successfully, but this time I must have made
some mistake.

Any idea or hint as to what is going wrong?

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=31597#31597







[osg-users] Geometry Drawing Performance

2010-09-14 Thread Frank Sullivan
Hello everybody,

Can I trouble you with some questions about performance? My situation is
basically this: I have a list of 54 drawables that I'm generating in code. Not
all of them are necessarily displayed on screen at any given time, and in some
cases a certain drawable might be drawn more than once (accomplished by adding
a second MatrixTransform parent above the associated Geode).

In any case, I went for a naive implementation at first. Each drawable was an
osg::Geometry object attached to an osg::Geode (which, again, may have one or
more osg::MatrixTransforms as parents). Each osg::Geometry had its own
Vec3Array as a vertex array. The performance wasn't bad: around 700 fps in
wireframe mode on my machine. But I figured I could do better.

So I grouped all of the verts for these 54 drawables into a single Vec3Array
that all of the osg::Geometry objects share. Each primitive set (an
osg::DrawArrays) specifies which subset of the vertex array to draw.
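
To make the setup concrete, it looks roughly like this (the names firstVert,
vertCount and geodes are illustrative, not my actual code):

Code:

// Sketch of the shared-array setup.
osg::ref_ptr<osg::Vec3Array> masterVerts = new osg::Vec3Array; // ~1.4M verts

for (unsigned int i = 0; i < 54; ++i)
{
    osg::ref_ptr<osg::Geometry> geom = new osg::Geometry;
    geom->setVertexArray(masterVerts.get());

    // firstVert[i]/vertCount[i] describe this drawable's slice of the array.
    geom->addPrimitiveSet(new osg::DrawArrays(GL_TRIANGLES,
                                              firstVert[i], vertCount[i]));
    geodes[i]->addDrawable(geom.get());
}
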

I figured this would help because it would eliminate a lot of useless VBO 
bindings (or whatever else is going on under the hood). However, this actually 
cut the frame rate by about a third. 

This confused me, but in a way it made sense. My new master vertex array has
well over a million verts! So maybe the memory requirements of this outweigh
the savings I get from reducing VBO bindings?

So anyway, I figured it might be a good idea to try to index this geometry, to 
cut down on the number of verts in the vertex array. Doing this, I was actually 
able to cut down the number of verts to just over two thousand. That's a 
savings of 99.84%!

Of course, now I've got 54 separate primitive sets (I had to use
osg::DrawElements, because it doesn't look like osg::DrawArrays supports
indexed geometry). But since these primitive sets are essentially vectors of
USHORTs (rather than vectors of Vec3s), I'm still saving a significant amount
of memory (about 725,000 bytes for the Vec3s and USHORTs versus about 16
million bytes for the huge Vec3Array).
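
Again, roughly like this (illustrative names and index values only):

Code:

// Indexed version: a small shared vertex array plus one DrawElementsUShort
// per drawable.
osg::ref_ptr<osg::Vec3Array> sharedVerts = new osg::Vec3Array; // ~2000 unique verts

osg::ref_ptr<osg::Geometry> geom = new osg::Geometry;
geom->setVertexArray(sharedVerts.get());

osg::ref_ptr<osg::DrawElementsUShort> indices =
    new osg::DrawElementsUShort(GL_TRIANGLES);
indices->push_back(0);   // three indices per triangle,
indices->push_back(1);   // each referring into sharedVerts
indices->push_back(2);
geom->addPrimitiveSet(indices.get());
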

Yet, the frame rate remains at about 220 fps, which is significantly lower than
with the naive method involving 54 separate Vec3Arrays totaling 1.4 million
verts!

I still have a ways to go. I'm thinking about seeing if I can use triangle
strips instead of GL_TRIANGLES, to save even more memory. However, the logic
for building the meshes with triangle strips will be much tougher, and will
likely require smart use of degenerate triangles.
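
In case it isn't clear what I mean by degenerate triangles: as far as I
understand it, the usual trick is to repeat indices where two strips join so
the extra triangles have zero area and get discarded. A sketch with made-up
index values:

Code:

// Stitching two strips into one with degenerate (zero-area) triangles.
// Strip A uses indices 0..3, strip B uses 10..13 (made-up values).
osg::ref_ptr<osg::DrawElementsUShort> strip =
    new osg::DrawElementsUShort(GL_TRIANGLE_STRIP);
unsigned short a[] = { 0, 1, 2, 3 };
unsigned short b[] = { 10, 11, 12, 13 };
for (int i = 0; i < 4; ++i) strip->push_back(a[i]);
strip->push_back(a[3]);   // repeat last index of A
strip->push_back(b[0]);   // repeat first index of B
for (int i = 0; i < 4; ++i) strip->push_back(b[i]);
// (Depending on the index counts, an extra repeat may be needed to keep
// the winding order consistent.)
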

I'm happy to do the work, but before I do, I just want to make sure that there
isn't something I'm missing. I tried setting the data variance of the
osg::Geometry objects to STATIC, hoping that if I signal to OSG that I have no
intention of changing those objects, it might put them in a more efficient
memory pool. However, that didn't seem to affect the frame rate.

Any advice at all would be greatly appreciated.

Cheers,
Frank

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=31598#31598







Re: [osg-users] Geometry Drawing Performance

2010-09-14 Thread Frank Sullivan
Hello again,

There is something else that I thought of. Here is a diagram of what one of 
these drawables might look like:

 http://imgur.com/lefTl.gif

Because of the way this drawable is generated (by recursively subdividing a
quad), the indices are not necessarily in a cache-friendly order. For instance,
the triangle in green is composed of indices 8, 12 and 41. This may cause
OpenGL to jump around the VBO in a random-access fashion. Could this explain
the lower frame rates compared to my naive brute-force method?

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=31599#31599







Re: [osg-users] Geometry Drawing Performance

2010-09-14 Thread Frank Sullivan
Another thing,

Looking at the GL trace output in glslDevil, it looks like my program is using
plain vertex arrays. I can't tell for sure, but I think so because I don't see
any calls to glBindBuffer, and the pointer passed to glVertexPointer looks like
an actual pointer to data (0x035ce860) rather than an offset into a VBO.

Is there any way to force OSG to use VBOs? I'm using 2.8.2.
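
The only switch I've found so far is per drawable, something like the following
(untested on my side, geom being any of my osg::Geometry objects). Is that the
right way, or is there a global setting?

Code:

// Ask OSG to use VBOs instead of the default display lists for a drawable.
geom->setUseDisplayList(false);
geom->setUseVertexBufferObjects(true);
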

Thanks again,
Frank

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=31600#31600







Re: [osg-users] osgGA modifications for IPhone and other stuff

2010-09-14 Thread Thomas Hogarth
Looks ace, Stephan.

I've not had a chance to check out your osgGA changes yet, but now I can't
wait. It's also nice to see it's been tested on the iPad; I've only got an iPod
touch, so my testing has been pretty limited. I'm getting an iPod touch 4th gen
this week so I can finally use GLES 2.0. It's less than £200, so decent mobile
AR is now looking promising :).

I'll check out your changes (those include my CMake stuff, right?) and have a
play, but by the look of things you've done it already. Nice one.

Tom


Re: [osg-users] Geometry Drawing Performance

2010-09-14 Thread Roland Smeenk
Hi Frank,

your layout doesn't really look cache-friendly. Take a look at the Optimizer
and its options, especially INDEX_MESH, VERTEX_POSTTRANSFORM and
VERTEX_PRETRANSFORM.

http://www.openscenegraph.org/projects/osg/wiki/Support/UserGuides/OptimizerOptions

And of course make sure your Geometry drawables are using the fast path.
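
A rough example of running those passes, assuming your OSG build is recent
enough to include them (rootNode being whatever node your drawables live
under):

Code:

// Run the index/cache-optimization passes over the scene after building it.
osgUtil::Optimizer optimizer;
optimizer.optimize(rootNode,
                   osgUtil::Optimizer::INDEX_MESH |
                   osgUtil::Optimizer::VERTEX_POSTTRANSFORM |
                   osgUtil::Optimizer::VERTEX_PRETRANSFORM);
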

--
Roland

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=31602#31602







Re: [osg-users] osgGA modifications for IPhone and other stuff

2010-09-14 Thread David Glenn

Jason Daly wrote:
 
 There is a multiple pointer extension for X (called MPX) that has recently 
 been added to the X.org server:
 
 http://en.wikipedia.org/wiki/Multi-Pointer_X
 
 The latest distributions should have it (I know Fedora has had it since 
 release 12). It's most likely disabled by default.
 
 --J
 


Well, if it can support 8 mice, it can use a smartboard! At least that's what
I've been told!


D Glenn (a.k.a David Glenn) - Join the Navy and See the World ... from your 
Desk!

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=31603#31603




