Re: [osg-users] Sharing vertex and normal buffer with another library

2010-05-04 Thread David Spilling
Hi Dženan,

It's very hard to tell from the limited amount of code you have posted,
since it seems to be missing so much, but - and apologies if I'm pointing
out the obvious :

a) You don't ever seem to attach your geode to the scene anywhere

b) The scope of the osg::Geometry* geom in mcHelper looks suspect, as it goes
out of scope outside mcHelper. Can I suggest liberal use of ref_ptrs?


osg-users mailing list

[osg-users] Render to VBO?

2010-04-13 Thread David Spilling
Dear All,

Does OSG support rendering to a Vertex Buffer Object?

I'm happy with regular texture RTT, but is rendering to VBO as simple as

camera->attach(osg::CameraNode::COLOR_BUFFER, m_texture);

where m_texture's image is pointing to a vertex buffer? e.g. something like:

m_vertex = new osg::Vec4f[m_nx * m_ny];
m_vertexArray = new osg::Vec4Array(m_nx * m_ny, m_vertex);


Does this (apparently simple) operation ensure that the VBO stays entirely
on the GPU, and doesn't do a round trip via the driver/CPU?

Thanks all,


Re: [osg-users] Render to VBO?

2010-04-13 Thread David Spilling
Hi Robert

I haven't tried rendering to VBO before but it's my understanding that

the technique used is to render to FBO, then copy the appropriate buffer
 to PBO, then re-assign this PBO as a VBO.  You might be able to
 implement this right now using a post draw callback attached to an
 osg::Camera set up to do RTT to FRAME_BUFFER_OBJECT.

Thanks, I will try this.

I was really wondering if anything had been done since the conversation
between you and Art Tevs from osg-submissions, 28/11/08 (thread entitled
General PBO implementation for GPU only memory handling (usefull for CUDA
interoperability)), in which Art said:

This shouldn't be a big problem. The good thing with PBOs is that just
 changing its target will change its functionality. Hence I think, it
 shouldn't be a big issue to just bind the PBO as VBO before one does need it
 (assuming there is correct data in the PBO).

I'll let you know how I get on, if I get anywhere at all.

Best regards,


Re: [osg-users] Render to VBO?

2010-04-13 Thread David Spilling
One other thought - is this render to FBO/copy to PBO/assign as VBO method
likely to be any faster than rendering to texture and then using vertex
texture fetch?

I guess the answer will be just to try it and see...



Re: [osg-users] Render to VBO?

2010-04-13 Thread David Spilling

Thanks for your input, but I think you've misunderstood me. I already use
VBOs quite happily. The question is whether you can use a pre-render pixel
stage to render directly into the vertex buffer (the technique is sometimes
abbreviated RTVB); i.e. to use a fragment/pixel shader to populate vertex
array data.

Anyway, I've got plenty to go on now, thanks to all for your help.

Best regards,


Re: [osg-users] 3d file formats supporting bump map

2010-01-12 Thread David Spilling
Hi Davide,

Certainly the obj loader supports bump maps, but let me be clear about what
it actually does. Apologies if this comes over as patronising; I have no
idea of your level of expertise.

In OSG/OpenGL land, textures (diffuse, bump map, specular map, etc.) are all
attached to a particular geometry via a texture ID - this is an integer
index ranging from 0 upwards. Shaders are then applied to this geometry that
will colour the pixel based on a particular understanding of what each
texture ID actually is. So for example, I can write a shader that assumes
that texture unit 0 is the diffuse colour, texture unit 1 is some sort of
specularity map, texture unit 2 is a bump map, etc. Or I can assume (for
example as osgShadow tends to) that texture unit 0 is the diffuse colour, and
texture unit 1 is the shadow map.

However - and this is the key - there is nothing that enforces this mapping
between texture unit and meaning.

3DS, and the obj format, both explicitly include entries in the model for
diffuse texture and specular map and bump map and so on. However the
3DS or obj format loader doesn't know what shader you are going to apply to
this model, so has no idea which texture ID/unit to assign a loaded bump
map texture to. Should it be 1? 2? etc.

Some formats (e.g. .osg) allow you to save an object's shader within the
object definition itself, thereby guaranteeing that the textures that the
object loads are assigned to the right texture unit, and used in the right
way. 3DS and .obj don't.

AFAIK, there is currently no OSG-wide method for enforcing any consistency.

All that being said, the OBJ loader allows you to specify what texture units
you want which maps to go to when you load the model, via an options string.
The options are all listed in the first few lines of ReaderWriterOBJ.cpp.
The mapping between the flags in the .obj file and the name of the texture
is in lines 434 onwards of obj.cpp. (It's useful to have a look at that,
because the OBJ spec is not rigorously followed by all modelling tools -
e.g. you see map_opacity in some obj files, even though it should be map_d
according to the spec).

I'm not amazingly familiar with osgFX, but reading the source for the
BumpMapping technique, I notice that
a) you can define which texture units should be used for diffuse and normal
(note : not bump) textures, and
b) by default these are 0=normal, 1=diffuse.

I also don't know what Blender exports in terms of maps. When you say that
the blender export isn't exporting the bump map, have you looked in the OBJ
file to actually see whether there is a map_bump (or map_Bump, or just
bump) in it? Has it been called something else by the exporter?

Hope that helps,


Re: [osg-users] osg::TextureCubeMap and HDR

2010-01-11 Thread David Spilling
Dear Peter,

I think that some (older) graphics cards support floating point 2D textures,
but not cube maps.

I'm also aware that even if you do support floating point cube maps, many
older cards can't do interpolation on them. Can you turn it to GL_NEAREST
and see if that makes a difference?



Re: [osg-users] OSG_FEM

2009-09-29 Thread David Spilling
Hi Ugras,

What you ask is slightly strange, and I apologise if this reply appears too
patronising - it isn't meant to be!

OSG is basically a rendering package; it just draws things.

To try to interpret your question sensibly:

1) Can OSG import FEM meshes?

I imagine that if your mesher can export the mesh definition in a manner in
which OSG can read it (i.e. in one of the many graphics formats that OSG
supports) then you can display the mesh.

However note that all OSG formats are shells, not solids. Solid nodes, in
the FE sense, don't really exist in graphics land.

2) Can OSG generate meshes?

There are a number of utilities within OSG that might give you a starting
point for getting OSG to mesh objects (given the surface issue limitations
described above); for example, the DelaunayTriangulator, the Tessellator,
or the Optimizer. However, I am not aware of any OSG plugins that perform
meshing based on FEM rules.

Similarly, OSG doesn't natively know about constraints, or connectivity,
between surfaces, unless it has somehow been built into the scenegraph.

3) Can OSG perform numerical analysis?

This, again, is something you would have to develop. OSG provides a simple
library of vector/matrix operations, but decomposition and optimisation are
not in the current methods. I would also suggest that, rather than extend
OSG to cover these operations, you use freely available external high
performance libraries to do this, e.g. linpack/blas etc.

4) Can OSG display FEM results (e.g. mode shapes)

Well, OSG can display anything you like, but the trick, again, would be to
find some mutual compatible format. For example, you could use OSG to
animate a model's vertices such that it shows the eigenfrequencies like a
movie, but most of this would be programmatic, I imagine.

I hope that helps.


Re: [osg-users] create a image

2009-07-29 Thread David Spilling

I think you want this:

   data[i] = 255;//red


Re: [osg-users] ViewerBase::stopThreading() and renderers

2009-06-26 Thread David Spilling

FWIW, I do what you do : I have an OpenThreads::Thread class which _has_ a
viewer. The parent application starts this thread up, and lets it get on
with viewer creation, frame dispatch, viewer deletion and so on.

I had very similar problems to you a while ago ( 1 year? ) when I
originally set this up.

...assuming that it is ok to call viewer->setDone() from a different
 thread

This sounds familiar. I only call viewer->setDone from within the thread
that owns it. (The thread that owns the viewer has a Stop method, which
calls setDone).

This works for me.

Hope that helps,


Re: [osg-users] osgOcean release

2009-05-14 Thread David Spilling

it does sound difficult to implement as a general case.

Actually, I think that the only way to meet all current techniques with as
common a scenegraph structure as possible is to render the heightfield to
texture (generated somehow, e.g. FFT on CPU, sum of sines, FFT on GPU), then
use vertex texture fetch to draw the vertices (e.g. projective, tiled,
etc. grids).
The main drawback (IMHO) is current performance : vertex texture fetch is
slow on many cards; even fewer seem to support GL_LINEAR in hardware (which
you need). Undoubtedly this will get better, but when I looked at this a
little while ago, it was sluggish in comparison to techniques not involving
vertex texture fetch.
To be honest I've don't think I've seen a real time
 ocean simulation with geometric wake formations, they usually just use
 texture overlays.

I  agree; vertex deformation by wakes is... uncommon (we've done it in the
past, but for very very application specific reasons that probably wouldn't
be appropriate). However applying a deformation to your _normal_ field works
easily, and works especially well at high altitudes in calm seas when you
can't / shouldn't be able to see the vertices anyway. It's equivalent in
construction to the way the surface wake foam textures work.

One other point : for semi-infinite tiling oceans, have a look at the new
instancing stuff that I recently saw contributed into OSG. I haven't played
with it yet myself, but just repeating the base (high res) tile as an
instance is probably higher performance than having to bother with LOD,
skirts, and all the rest.

Best regards,


Re: [osg-users] [osgPlugins] osgDB OBJ Plugin

2009-05-14 Thread David Spilling

At the moment the OBJ loader builds a material to stateset map, which is
indexed by the material name. However when the stateset is applied to a
geometry, the material name is effectively lost.

Also the OBJ loader reads the group name (i.e. "g groupname" in the .obj)
and any object name (i.e. "o objectname" in the .obj). For any "g" field in
the OBJ file, it creates a new geode under the toplevel group, with a name
in the form "groupname:objectname".

You should be able to see this visually by converting models into .osg and
reading the results.

However, the OBJ writer outputs everything as "o objectname", and doesn't
preserve groups at all.

Unfortunately, I think that what you want is not possible with the OBJ
loader coded as it is. I think you (or someone) might need to dive in and:

1) Attach a loaded material name to a stateset on loading
2) Output things as "g geodename" rather than "o geodename". Probably ditch
the writing (and reading?) of "o" records completely.
3) The code for actually outputting material names seems to still be there.

Implementing (1) would not break anybody else's code. Implementing (2),
however, might. Does anybody actually use "o objectname" in their modelling
pipeline? From the spec, Wavefront claims to ignore this anyway...



Re: [osg-users] osgOcean release

2009-05-11 Thread David Spilling

For very large
 expanses of ocean the problem I foresee is the time it takes to update the
 vertices and primitive sets.
 However, since the FFT technique is tileable it would be possible to only
 update 1 tile and then translate
 them into position using a shader. This would rule out any surface
 interaction.

Surface interactions in terms of locally modifying the vertex heights?
Tileable FFT doesn't definitely rule it out: you can use the following
approach:
1) render a FFT derived heightfield into a texture
2) RTT local heightfields (wakes and so on) attached to some object, as if
you were the camera.
3) Use a vertex shader operating on a screen-aligned grid (i.e. projective
grid approach) and some vertex texture operations to sample the FFT
heightfield for the correct world position, and also to sample the RTT
texture of local vertex heights.

Unfortunately, the possibility of this type of technique makes providing
some sort of overall osgOcean architecture that can act as a framework for
all techniques very tricky, IMHO. (you get entangled in too many overall
scenegraph issues)


Re: [osg-users] osgOcean release

2009-05-06 Thread David Spilling

A nice piece of work.

In case it helps anyone, for FFTW on Windows, I used 3.2.1. I didn't bother
compiling it, but just did this:

which worked fine (even with the free Visual C++ 9.0 ! )

I had a couple of minor issues with addCullCallback - I guess this is
because we are using different OSG versions - but changing it to
setCullCallback seemed to work fine.


Re: [osg-users] osgOcean release

2009-05-06 Thread David Spilling

I implemented my ocean surface which is composed of Sum of Sinus method

Bear in mind that so long as your wave numbers  are integer subdivisions of
the tile size, the result from an FFT approach is the same as the result
from a sum of sinusoids approach, just higher performance.


Re: [osg-users] osgOcean release

2009-05-06 Thread David Spilling

...Error : When I open up the osgOceanExample there is some error in vertex
shader as you can see from the attached screenshot.

I had something similar - I think this is just because the shader constructor
can't find the underlying shaders; AFAIK the resource folder has to be
located in the same directory as the executable. Moving things around might
work for you.


Re: [osg-users] How to maximize and minimize the window of the osg program

2009-05-01 Thread David Spilling

The WindowSizeHandler (in osgViewer/ViewerEventHandlers) does exactly this.
Look at the toggleFullScreen method.

Better yet, just add a WindowSizeHandler to your viewer's event handler list:

viewer.addEventHandler(new osgViewer::WindowSizeHandler());

Hope that helps,


Re: [osg-users] Geometryconsidered in near+far plane auto computation

2009-04-28 Thread David Spilling
Hi J-S,

The problem when the skydome renders last is that it won't be blended
 correctly with transparent objects (they need to be rendered after all
 opaque objects, and sorted back to front).

Ah. I hadn't considered that in detail. (I wonder what my app's behaviour is
then? I don't have many transparent objects so probably wouldn't have
noticed if something was awry - I'll have to check).

For me, I will probably control the renderbins (if I'm not already doing
it), and render opaque objects, then the skydome, then transparent objects;
i.e. putting the skydome in the regular opaque bin. Most of my objects are opaque
so I get some benefit here.

Thanks for the caveat!


Re: [osg-users] Geometryconsidered in near+far plane auto computation

2009-04-26 Thread David Spilling

  After some brain-twisting, I did realize that even with z comparison off,
 OGL is
 probably rejecting the skydome because it's beyond the far clip plane. I've
 been trying to
 think of a way to fool this, but it seems like it is unavoidable.

That's exactly what I found (or even weirder, the skydome vertices would
clip, but the inter-vertex points wouldn't due to interpolation, so the dome
looked patchy).

I had to use the approach I posted to simultaneously:

1) Make sure the skydome didn't participate in near/far autocalculation
2) Make sure OSG didn't cull the skydome
3) Make sure that OGL would actually draw something
4) Allow the skydome to draw last.

In my app, I don't see a Camera with NESTED_RENDER as much of a per-frame
overhead - it's pretty much negligible with all the rest of the CPU activity
I'm engaging in - but I can appreciate that this might not be the case for
everyone.
Plus it had the advantage of being scenegraph agnostic - i.e. injection of
this as a (for example) .osg worked without having to appeal to a particular
application scenegraph structure in terms of PRE_RENDER, depth clears, etc.


Re: [osg-users] Geometry considered in near+far plane auto computation

2009-04-21 Thread David Spilling
J-S (and others),

You could look at doing this is the same way the depth partition node does
it, which is what I do:

I use a class based on an OSG camera with an overridden traverse method, that
forces the projection matrix to a particular z near and z far. Oh, and the
camera has setComputeNearFarMode(osg::CullSettings::DO_NOT_COMPUTE_NEAR_FAR)
in its constructor. The skydome is then a child of this camera.

You then set the z near and z far to be whatever you want (i.e. enveloping
your skydome radius). I typically have a skydome with a radius of 1

void CExtentsCamera::traverse(osg::NodeVisitor& nv)
{
    // If the scene hasn't been defined (i.e. we have no children at all)
    // then don't do anything
    if (_children.size() == 0) return;

    // If the visitor is not a cull visitor, then we are not interested in
    // intercepting it, so pass it directly onto the scene.
    osgUtil::CullVisitor* cv = dynamic_cast<osgUtil::CullVisitor*>(&nv);
    if (!cv)
    {
        osg::Camera::traverse(nv);
        return;
    }

    // Get a pointer to the cull visitor's projection matrix
    osg::Matrix* projection = cv->getProjectionMatrix();

    // NB : This might be possible to simplify and hence optimise (haven't
    // yet checked)
    double a = (*projection)(2,2);
    double b = (*projection)(3,2);
    double c = (*projection)(2,3);
    double d = (*projection)(3,3);
    double trans_near = (-_zNear*a + b) / (-_zNear*c + d);
    double trans_far  = ( -_zFar*a + b) / ( -_zFar*c + d);
    double ratio  = fabs(2.0/(trans_near - trans_far));
    double center = -0.5*(trans_near + trans_far);

    // Set the projection matrix
    projection->postMult(osg::Matrixd(1.0, 0.0, 0.0,          0.0,
                                      0.0, 1.0, 0.0,          0.0,
                                      0.0, 0.0, ratio,        0.0,
                                      0.0, 0.0, center*ratio, 1.0));

    osg::Camera::traverse(nv);
}

There's probably a better way of doing it, but it works fine for me.

I also do this on the camera's stateset

osg::Depth* depth = new osg::Depth;
stateSet->setAttributeAndModes( depth, osg::StateAttribute::ON );

so that you can render the sky last, and any expensive pixel shaders don't
get unnecessarily run.

Hope that helps,


Re: [osg-users] HDR Skybox

2009-03-31 Thread David Spilling

What image format are you actually loading?



Re: [osg-users] Problems shutting down Composite Viewer/View/Custom drawables

2009-03-27 Thread David Spilling

I have had similar issues (with crashes in releaseGLObjects when views get
destroyed) but can't actually recall what I did to fix them.

You could try calling viewer->releaseGLObjects() before you destroy the
view. (Previous posts seem to suggest that this might be the right thing to
do.)

Re: [osg-users] DoomLike manipulator

2009-03-19 Thread David Spilling

FYI, there was a posting of a (presumably similar) WASD type manipulator by
Viggo Løvli back in August 08 - search the archives for "How to write a
camera manipulator"...


Re: [osg-users] Strange setImage behaviour

2009-03-11 Thread David Spilling

Not that I want to hijack the thread, but a small (more OpenGL) question on
this area, as it has always confused me:

The internalTextureFormat is the pixel type when stored down on the
 GPU.  Typically it'd be GL_RGB, or just 3 (number of components) for
 RGB data.  Both the internalTextureFormat and pixelFormat are provided
 as sometimes you want the driver to change the data type on download
 or pack it differently, for instance the source image might be packed
 as GL_BGR.

So for internalTextureFormat, 3, GL_RGB and GL_RGB8 are all equivalent?

What conversion takes place in the driver if sourceFormat is (say) GL_RGB8
and the internalTextureFormat is GL_RGB32F?



Re: [osg-users] Strange setImage behaviour

2009-03-10 Thread David Spilling

For starters, you probably want GL_RGB8 (0x8051) and not GL_TEXTURE_2D
(0x0DE1) in your setImage call.

But in general it looks a bit odd to me, and I'm not sure what your
intention was. First you get the pointer to the texture's image, and then you
set it to something else. I imagine you just need to do something like:

void updateTexture( IplImage* img, ref_ptr<Node> geode )
{
    ref_ptr<StateSet> state = geode->getOrCreateStateSet();
    ref_ptr<Texture2D> tex = dynamic_cast<Texture2D*>(
        state->getTextureAttribute(0, StateAttribute::TEXTURE) );
    ref_ptr<Image> teximg = osgDB::readImageFile("test.jpg");
    tex->setImage( teximg.get() );
}

with some appropriate dirtying (although this assumes that Tex is non null,
so you might want to have some of your previous code to actually set up the
texture if it is null).

Hope that helps,


Re: [osg-users] Strange setImage behaviour

2009-03-10 Thread David Spilling
Okay, so something like this should work, I guess.

void updateTexture( IplImage* img, ref_ptr<Node> geode )
{
    ref_ptr<StateSet> state = geode->getOrCreateStateSet();
    ref_ptr<Texture2D> tex = dynamic_cast<Texture2D*>(
        state->getTextureAttribute(0, StateAttribute::TEXTURE) );
    ref_ptr<Image> teximg = osgDB::readImageFile("test.jpg");
    tex->setImage( teximg.get() );
}

Re: [osg-users] Float textures seem to be clamped in GLSL

2009-03-04 Thread David Spilling

My initial thought was that nowhere were you saying that the image was
floating point. Digging further, I realised that TransferFunction should be
doing it for you - I've never used this before - but this line (in
osg/TransferFunction1D.cpp) looks a little odd to me:

_image->setImage(numX, 1, 1, GL_RGBA, GL_RGBA, GL_FLOAT, (unsigned
char*)&_colors[0], osg::Image::NO_DELETE);

Shouldn't that be GL_RGBA32F_ARB, GL_RGBA, GL_FLOAT?

I guess also doing a texture1D->setInternalFormat(GL_RGBA32F_ARB) might
help, but I think (without looking at the code) that osg probably picks this
up from the image format anyway.

Hope that helps.


Re: [osg-users] osg::View NO_LIGHT bug?

2009-02-17 Thread David Spilling

osg::View's setLightingMode with NO_LIGHT as a parameter doesn't actually
turn any lights off (just look at the source in osg/View.cpp). If the
lightingMode is *not* NO_LIGHT, then it sets light 0 with the default 0.8
diffuse value etc. I presume this is by design, although I'm not sure why!

Actually, this is all a bit confusing. For example, SceneView sets a global
ambient light model that you have to suppress after the fact.

Similarly, I notice that osg/View has LightingMode as enum {NO_LIGHT,
HEADLIGHT, SKY_LIGHT}, whereas SceneView has enum Options {
NO_SCENEVIEW_LIGHT = 0x0, HEADLIGHT = 0x1, SKY_LIGHT = 0x2}, which looks a
little bit... random... to me if any default conversion between the enums is
done. However a quick trawl through SceneView looks okay (LightingMode in
SceneView is typedef'd to Options).


Re: [osg-users] Swapping Textutes for Thermal Signatures

2009-02-13 Thread David Spilling

You can also do it via shaders. Your model would have texture unit 0 =
diffuse texture, and tex unit 1 = thermal texture. In the application you
would set a uniform that declares which texture unit to use (e.g. uniform
int TexUnit). The shader could then select the texture based on the tex

Although this sort of technique has a higher learning cost to you (more
concepts to pick up) than scenegraph based methods, in the long term it's
good exposure given today's GPU evolution...


Re: [osg-users] osgPango (Stop Worrying Love The Bomb)

2009-02-06 Thread David Spilling

Thanks for that. I must admit I am a little bit confused between the various
things that have been mentioned for text rendering, and would appreciate a
one liner explanation of what the difference between osgPango and osgCairo
is. Plus I've seen other libraries mentioned in this context (Poppler?). I
guess my question is, in the end, is osgPango meant as a replacement for



Re: [osg-users] Load uniform sampler2D in shader from .osg file

2009-02-06 Thread David Spilling

Shouldn't this:

gl_FragColor = vec4(texture2D( baseMap, gl_TexCoord[0].st ).xyz, 0.0);

be this?

gl_FragColor = vec4(texture2D( baseMap, TexCoord.st ).xyz, 0.0);


Re: [osg-users] Anti-Grain and OpenSceneGraph

2009-01-28 Thread David Spilling
Jeremy, Kurt,

I care too!

I admit, I've been lurking on the font quality issues because in our apps I
have very tight control of the text size and positioning, so can tweak the
placement/resolution to get the right look. However developments in this
area would be extremely valuable to me in the long term. Typically though, I
have no resource to help develop it, so I will have to remain a lurker...


Re: [osg-users] Head mounted display for OSG...

2009-01-07 Thread David Spilling

We use these HMDs
experimentally, because they are cheap, and slightly surprisingly support
quad buffer stereo, which means they more or less work with OSG (on Nvidia
Quadros, at least) straight out of the box.

Unfortunately, most of our in-house OSG apps hijack the camera in one way or
another and assume that they are not being run stereo, so all goes a bit
awry. Work in progress...


Re: [osg-users] Help Rendering and Reading Floating Point Textures

2008-12-17 Thread David Spilling
Brian, JP,

The osgmultiplerendertargets example uses (by default) GL_FLOAT_RGBA32_NV,
rather than either GL_RGBA32F_ARB, or even GL_RGBA16F_ARB. Could this be a
card issue?

I would be interested to know whether this example works for you if you
change to GL_RGBA32F_ARB or GL_RGBA16F_ARB. I admit, I haven't tried it
myself yet - and now intend to - but certainly in my own app (on ATI), I
have had issues with GL_RGBA16F_ARB and GL_FLOAT in the past. Things might
be better now.


Re: [osg-users] Help Rendering and Reading Floating Point Textures

2008-12-16 Thread David Spilling

For some odd reason, a source type of GL_UNSIGNED_BYTE rather than _FLOAT
seems to work for me for a internal format of GL_RGBA16F_ARB, so you might
want to give that a go.


Re: [osg-users] Mesh/Triangulation Examples?

2008-12-09 Thread David Spilling

For the geometry, I do this, more or less:

osg::ref_ptr<osg::Geometry> geometry = new osg::Geometry();

 osg::ref_ptr<osg::Vec3Array> vertices = new osg::Vec3Array; // unsized
 array of vertices

 vertices->push_back(osg::Vec3(x,y,z)); // etc.

 osg::ref_ptr<osgUtil::DelaunayTriangulator> dt = new
 osgUtil::DelaunayTriangulator(vertices.get());
 dt->triangulate(); // Generate the triangles

and then add the colour array as per the osggeometry example.

For colouring it in by z-value, I also use a texgen to set up texture
coordinates; something like this

osg::StateSet* stateset = getOrCreateStateSet();

 osg::ref_ptr<osg::TexGen> texgen = new osg::TexGen;



The m_texture is a texture containing a coloured bar image, with which
your elevation will get shaded.

Hope that's a good pointer...


Re: [osg-users] Multipass Rendering objects that intersect given shapes/bounding volumes

2008-12-08 Thread David Spilling

 am I understanding correctly that
 what's primarily done is to use the Z-buffer to cut down on the amount
 of geometry that has to be lit?

From what I understand, that's not quite it. You render the scene with no
lighting, but enough info per pixel to sort out the lighting later (in one
lighting pass). It requires multirender targets, as you need things like the
normal, and (sometimes) the pixel depth, and material properties/material
index. It's a technique that is fairly orthogonal to the normal rendering
approach, but from what I've read seems particularly good for large numbers
of lights.

However rather than me trying to explain it (badly) I'll defer to the
various papers on it that you can find. There are also some good demos
around.

Re: [osg-users] Multipass Rendering objects that intersect given shapes/bounding volumes

2008-12-06 Thread David Spilling

Not that it really answers your direct question, but have you tried looking
at a deferred rendering approach? With that many lights I would have thought
the performance benefits would be good.


Re: [osg-users] Math problem relative to OSG

2008-12-04 Thread David Spilling
Hi Vincent,

The only thing I can offer is that you have to be careful when you check
your node position/rotation, and when you apply your manipulator update. For
example, if your order is eventTraversal and then updateTraversal, then your
manipulator may be one frame out (i.e. changing based on last frame's node
position, not this one). You might see this as jitter.


Re: [osg-users] Performance query : Drawing lots of instances of avertex array

2008-11-29 Thread David Spilling

Digging through the code now - looks exactly the kind of thing I was after,
so thanks!


[osg-users] Performance query : Drawing lots of instances of a vertex array

2008-11-28 Thread David Spilling
Dear All,

I have a geometry with a simple heightfield type vertex array; I'm also
using VBO. I want to repeat that VBO in quite a lot of places.

The heightfield can be considered as a terrain tile (e.g. 100m x 100m),
with which I want to tile a much larger area (e.g. 1km x 1km).

The method I have so far implemented is to have the following scenegraph:

         Group
       /   |   \
     G1   G2 ... Gn
       \   |   /
   Tile with VBO and vertex shader

where G1..Gn are simple groups which set a tile_offset uniform. The vertex
shader on the tile uses the tile_offset to modify the vertices for each
tile. This all works OK.

However, I am concerned that this is not the best route for high performance
when the tile count is quite high. e.g. is it better for G1...Gn to be
matrix transforms (presumably not, although an easy thing for me to test).
Is this a good example for the geometry instancing extensions?

So, does anybody have any good advice for doing this kind of thing in a
better way within OSG? Pointers or examples (or even hints) would be
appreciated.
Thanks in advance,


Re: [osg-users] Performance query : Drawing lots of instances of avertex array

2008-11-28 Thread David Spilling

If this was raw OpenGL, I'd be tempted to set up my heightfield as a display
list, and then change the modelview matrix for each call to the display
list. I'm not quite sure how to force similar behaviour in OSG other than to
set up the scenegraph with multiple PATs.

I'm working on a similar problem: rendering a large vector field as several
 arrows, all with different positions and orientations. If I multiparent
 the Geometry to several PAT nodes, the cull time cost is quite high.

I have the notion that cull times can be improved by not letting the cull
traverse go across the PATs. The parent of the PATs probably knows something
that could limit the set of PATs that actually needs to be traversed in the
draw. In my application that's certainly true - the top Group gets the
view/projection matrices, and since each tile is the same size, can decide
which of its children really need drawing without having to check the
bounding spheres of its many PAT children.

Unfortunately, my heightfield does sometimes change and so I haven't really
pursued the tradeoff between the cost of display list compiles and other
approaches.
I'm looking at adding support for ARB_draw_instanced to reduce both the cull
 and draw time

If you get anything working, I'd be delighted to test it in my context, time
permitting (by then I will have probably got the codepaths for the various
other techniques going).

osg-users mailing list

Re: [osg-users] Effects of locale setting

2008-11-24 Thread David Spilling

Only just caught this thread. I'm happy to update the OBJ plugin (reader
only, presumably) if you want to lose the sscanf, as I was only recently
looking at it anyway.

I assume that you want all sscanf(blah, "%f", &my_float); to be replaced by
sscanf(blah, "%s", my_char); my_float = atof(my_char);, more or less, or did
you want the whole thing done with std::string or something?

Was the original problem (top of the thread) a recent one, as I seem to have
missed it if it was...

And sorry to be dense, but is the issue also whether OBJ writers (modelling
programs) are locale specific, and how a user might choose the OSG .obj
plugin to respect locale or assume a default?

osg-users mailing list

Re: [osg-users] StateSetSwitchNode suggestion.

2008-11-19 Thread David Spilling

I guess I must be missing something, because I'm not sure why you can't just
use a combination of a switch node with several children, each of which is
parent to the (same) scenegraph. Then each child can have its own shaders
and state, which the switch selects between.

If you are trying to operate via nodemasks, you don't even need the switch,
presumably, just use a group with several children, each of which has a
shader, stateset, and separate nodemask.

Some combination of that and custom update or cull callbacks would seem to
already solve most problems...

(not that I'm trying to put you off contributing, obviously - the more the
merrier - I'm just trying to understand your motivation...)

osg-users mailing list

Re: [osg-users] Slave views

2008-11-19 Thread David Spilling

When compiling against SVN,
 the two are shown, but a crash happens when pressing 's' several times

I think (someone correct me if I'm wrong) that this is a known issue, and
something to do with the thread safety of the stats handler, and/or NVIDIA
drivers.
If the former is still the case, then running single-threaded might help.
Try searching the archives for crash and stats.

osg-users mailing list

Re: [osg-users] Shader Uniform Performance Question(s)

2008-11-19 Thread David Spilling

From my perspective :

How much overhead is there in having a uniform?

GLSL? Not much.

 Is there only a performance hit if the uniform changes values every
 frame? What if I change the Uniform in OSG to exactly the same value it
 already has - would there be a performance hit?

I regularly use uniforms that change every frame, and I can't say I've
noticed any performance penalty (in the context of an app that is doing
real work).

 I have uniforms that might not change values very often. Some are simply
 boolean flags. Can I have different shaders and somehow switch between them?

Yes. Some people use uniforms and in-shader dynamic branching to control the
operation of shaders. (Some people obviously have too much GPU power for
their own good ;) Me, I'm often stuck with older cards that can't support
dynamic branching very sensibly, and so this kind of shader kills me.)

An alternative is to have a switch node, whose children all have different
shaders on them, and which are all parents of your object. There's a
separate thread on this topic going on at this exact time with very similar
issues to this approach, bearing in mind that shaders form part of the
stateset.
Can I recompile the shader on-the-fly (i.e. defining these booleans using
#define)?

Bad idea in general, for performance reasons.

 In some cases, I have variables which can change within the shader, but I
 know these values when I create the scene graph so I currently use #define
 instead of passing them as uniforms (which will ever only have one
 value). Does this gain me much in performance?

In general, yes - it's certainly better for performance (the extent is GPU
dependent, obviously) and this solution is better for older hardware that
doesn't support dynamic branching.

 Also, is there a difference in performance in using four float uniforms
 versus setting a Vec4?

Off the top of my head, I'm not sure - this is probably driver dependent
(good GLSL compilers might pack this up automatically). You might try the
OpenGL / GLSL forums for this.

Hope that helps,

osg-users mailing list

Re: [osg-users] Transparency and lighting off compatible ?

2008-11-19 Thread David Spilling
Hi Vincent,

If you don't want lighting to affect your skysphere, you should turn it off.

skysphere->getOrCreateStateSet()->setMode( GL_LIGHTING,
osg::StateAttribute::OFF );

To make it transparent, you need to enable a blend function, as well as tell
OSG to put it in the transparent bin so that it plays nicely with the rest
of the scenegraph - something like:

skysphere->getOrCreateStateSet()->setMode( GL_BLEND, osg::StateAttribute::ON );
skysphere->getOrCreateStateSet()->setRenderingHint( osg::StateSet::TRANSPARENT_BIN );

(Might be wrong - this was from memory).

Hope that helps,

osg-users mailing list

Re: [osg-users] Shader Uniform Performance Question(s)

2008-11-19 Thread David Spilling

Sorry to be picky, but:

 Firstly, uniform variables are intended for rarely changing values,
 and attributes are used for frequently changing values.

Not quite. Lots of the fixed-function uniforms - which are deprecated
under GL3 -  potentially change every frame ( gl_ModelViewMatrix etc. ).

Attributes are per-vertex data, like position, normals, texcoords and so on.
Uniforms are the same across a set of vertices.

osg-users mailing list

[osg-users] Stereo : Meaning of field sequential, I-visor

2008-11-14 Thread David Spilling
Dear All,

I'm a bit new to stereo modes (so take pity on me), but are any of the OSG
supported stereo modes (QUAD_BUFFER, *_INTERLACE, *_SPLIT, *_EYE,
ANAGLYPHIC) the same thing as field sequential? I guess I know that the
interlace/split/anaglyphic ones are out, but I was not sure what quad_buffer
actually supported...

Has anybody tried to get these things (and related models) working with
OSG?

Thanks in advance,

osg-users mailing list

Re: [osg-users] Stereo : Meaning of field sequential, I-visor

2008-11-14 Thread David Spilling

Thanks for the very informative reply.

Unfortunately, what they do not tell you in their sales pitch is that you
 need at least one of the high-end Quadros (about 800 USD+ investment for
 cheapest one) to have this to work - the lower end stuff doesn't support
 stereo ...

We have it on a Quadro (Quadro FX 2500M); hence just running quad_buffer
should work?

osg-users mailing list

Re: [osg-users] Stereo : Meaning of field sequential, I-visor

2008-11-14 Thread David Spilling
Thanks for the help. That all worked, more or less out-of-the-box, using
QUAD_BUFFER. As suggested, we had to enable stereo in the OpenGL driver.

I was slightly misinformed about the actual hardware; what we are actually
using is this, which also uses a field sequential sort of stereo. Like I
said though, all works OK.

Out of curiosity, in a couple of our apps we set various features of the
views e.g. window sizes (via graphics context traits), projection matrix
resize policy and so on. For anaglyphic stereo mode, this was all fine, but
in quad buffer, we got lots of OpenGL warnings. Turning a big set of methods
off fixed them in the short term, but lost some functionality; in the long
term we'd like to fix this. Are there things that one shouldn't do in
quad_buffer that are OK in other stereo modes?

osg-users mailing list

Re: [osg-users] osg Collada Plugin

2008-11-12 Thread David Spilling

Are you still going via collada? Unfortunately I know nothing about the
Collada import route and how it handles lights defined in a model file.

Does the same thing happen with only one light? I suspect that OpenGL is
just (correctly) adding up all the contributions from the various lights you
have and consequently saturating the output.


2008/11/12 Steffen B. [EMAIL PROTECTED]


 I have turned the specular colour to 0 0 0 and it is a bit better.
 But I have another question: in 3ds Max I can regulate the light intensity
 with the multiplier parameter. Do you know how I can regulate the intensity
 with OSG?

 thank you for your efforts

osg-users mailing list

Re: [osg-users] Dynamically changing all textures in a scene

2008-11-12 Thread David Spilling
The osg Depictions thread is here :[EMAIL PROTECTED]/msg10685.html

Ok, switching every stateset would be what I am looking to do. Composing the
 texture in the shader would mean we lose the desired ability to use image

Agreed, but if your model had (for example) tex unit 0 = daylight view, tex
unit 1 = infrared view, you could set a uniform at top level which would be
propagated down into all your shaders, where they could use it to determine
which tex lookup to do. Of course, this might mean that you need a coherent
shader management policy for all your models and shaders...

osg-users mailing list

Re: [osg-users] Writing dds textures - they're bigger than the originals?

2008-11-06 Thread David Spilling
Hi Brett,

Shots in the dark:
1) might the incoming textures be DXT compressed, whereas the output is
uncompressed?
2) might the outgoing textures include the mipmaps? (DDS can include them)
(not sure you'd get 3x, though)

osg-users mailing list

Re: [osg-users] composite view and cameras question

2008-11-04 Thread David Spilling

You could have a look at the osgdepthpartition example, which does something
similar, although for different reasons - I think this was originally due to
precision issues for very large scenes, but its use of multiple z ranges
might help your problem.

osg-users mailing list

Re: [osg-users] Real size model how to??

2008-11-02 Thread David Spilling
Hi Orendavd,

What do you mean by it always shows the same size? What are you doing
exactly?

Firstly, if you are just using osgviewer to view your 3ds model, be aware
that it will place the camera such that the loaded model always looks like
it's the same size.

You need to load it up alongside another model whose scale you know is not
changing in order to confirm whether anything is actually changing.

osg-users mailing list

Re: [osg-users] Disable self shadows (ShadowMap)

2008-11-01 Thread David Spilling

I'm sure that Jean-Sebastien or Wojciech can answer better and more
correctly than me, but in the meantime...

Firstly, I notice that you are using OSG 2.6. The shadowing API is much more
versatile in the 2.7 versions, and allows you better control over what is
being applied, so I recommend you move to that if you can.

Secondly, in 2.6 at least, the only flag (castsShadow or receivesShadow)
that is honoured by the shadowedScene's subgraph is the castsShadow one.
Hence if things cast shadows they will also receive shadows.

However, in your example it is ultimately the default fragment shader
applied by the shadow map that is shading your cars, so one way around it
is to reset/force the fragment shader on your cars to one that doesn't
try to do the lookup in the shadow texture. You can't do this at the
shadowMap level (as it would then be applied to everything, including your
buildings), but you can do it to the cars individually. Create a shader -
you can use the GLSL in ShadowMap.cpp as a starting point - and then bind it
to your car node's stateset with the usual methods.
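A sketch of that binding (the pass-through GLSL here is a stand-in for whatever the real car shader should be, not the ShadowMap.cpp source; PROTECTED stops state inherited from above overriding it):

```cpp
#include <osg/Node>
#include <osg/Program>
#include <osg/Shader>
#include <osg/StateAttribute>

// Hypothetical minimal fragment shader with no shadow-texture lookup.
static const char* plainFragSource =
    "void main() { gl_FragColor = gl_Color; }\n";

void disableShadowLookup(osg::Node* carNode)
{
    osg::Program* program = new osg::Program;
    program->addShader(new osg::Shader(osg::Shader::FRAGMENT, plainFragSource));
    // PROTECTED so the shadowMap's program, applied higher in the graph,
    // cannot override this one on the cars.
    carNode->getOrCreateStateSet()->setAttributeAndModes(
        program, osg::StateAttribute::ON | osg::StateAttribute::PROTECTED);
}
```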

Hope that helps,

osg-users mailing list

Re: [osg-users] problem with image

2008-10-31 Thread David Spilling

Just on the off chance that your problem is really simple:

in my application I am applying a texture (. bmp) to a sphere,


 osg::Image* img = osgDB::readImageFile(rfi.TGA

You say you're applying a .bmp, but you're trying to load a .tga. Could that
be it?

osg-users mailing list

Re: [osg-users] choppiness on 2nd graphics card with 4 monitors

2008-10-24 Thread David Spilling

Just out of interest, do you get the same issue with just one monitor
connected to each graphics card? ( I'm interested because I _think_ I get
something similar in this configuration, but it might be a separate issue).
I have the same setup as you (dual 8800, Vista, etc.)

Are you running dwm/Aero?

osg-users mailing list

Re: [osg-users] Advice on interacting with osgShadow

2008-10-24 Thread David Spilling
Dear Wojciech, J-S,

 All classes belonging to ViewDependentShadow group derive shaders from
 StandardShadowMap. If you look at these shaders you will notice that they
 are split into main vertex and fragment shader and shadow vertex and
 fragment shader.

I saw this. Maybe I'm being daft, but presumably this only helps when you
have the SAME shader for the entire shadowed scene? I have a situation where
the shadowed objects have a variety of different shaders. If I'm missing
something obvious, my apologies.

LODs should be properly handled by ViewDependentshadows. Shadow camera uses
 the same viewpoint as main camera.  Shadow Camera ref frame is set to

Ah yes. Thanks for pointing that out.

osg-users mailing list

Re: [osg-users] Advice on interacting with osgShadow

2008-10-24 Thread David Spilling

Just grab them from StandardShadowMap via getShadowVertexShader()

Ah. I missed that. Perils of browsing source via the website, I guess...


osg-users mailing list

Re: [osg-users] The best way to make some object in a scene render after and infront everything else.

2008-10-23 Thread David Spilling

You can achieve this effect in several ways...


Another way is to use osg::Depth to force the z value of your overlaid stuff
to zero, hence ensuring it always passes the depth test.

osg-users mailing list

Re: [osg-users] Advice on interacting with osgShadow

2008-10-23 Thread David Spilling
Hi J-S, Wojciech,

Thanks for the help. I've got shadow maps working (I'm on 2.6.1) and when I
get round to it, I'll migrate up to the 2.7 ViewDependent stuff and see if I
can get the other techniques working as well.

 Now, an add-on library could be written that would help unify the art
 pipeline. Things like models, textures, mapping of textures to texture
 units, shader management, shader generation from chunks of shader code,
 effect management, etc. could be made much simpler and more user-friendly.
 But as I said, I don't think it's OSG's job to do this. OSG provides a
 framework, and this is a bit too domain-specific IMHO.

Regarding shader management, I think that it *would* be appropriate for OSG
to provide something here. (Obviously I realise that this is a large amount
of code - I'm arguing about the appropriateness, not actually demanding that
someone provide it!)

The reason is basically that as fixed function falls away, shaders become
more important, and the relationship between the scenegraph and the shaders
becomes much more like the relationship between the scenegraph and the
statesets.
For example, the 2.7 shadow library does a fantastic out-of-the-box job of
providing a sort of uber-shader that tries to handle a variety of effects
(various sorts of lights, multiple lights, diffuse texture etc.). However,
throw that at a model that already has shaders on it, and you end up having
to copy-and-paste the shadow shader into the models' shader. This gets more
complicated if the various shadowed objects have different shaders, and this
is the situation I have. I fear that bugfixes and so on to the complex
shaders in osgShadow will end up with my object shaders being out-of-step,
and therefore producing hard to pin-down errors in the result. (shader
debugging is awful).

Thinking aloud, I wonder if some sort of text-based accumulation of
functions that are passed down the scenegraph, with assembly and binding at
the level they are actually required, would be appropriate. For example, the
shadow shader could just be a stub e.g. a  float CalculateShadow()
function that objects that are shadowed need to call if they want.

A couple of specific things:

1) I admit I haven't looked properly at the 2.7 osgShadow, but does it
handle LOD nodes within objects?

2) Although not relevant to me right now, in case the depth-only vertex
shader gets implemented (at the moment it appears from the comments that
fixed function has been found to be faster), it might be useful to be able to
override osgShadow's depth-only vertex shader - the object might in some
cases be being dynamically transformed during shadowing (e.g. character
animation).
Anyway, just my 2p. Thanks (all involved) for all the work on the shadow
library.
osg-users mailing list

[osg-users] Performance penalty from non-contiguous texture units in multitextured model?

2008-10-22 Thread David Spilling
Dear All,

Probably more an OpenGL question, but is there a penalty (performance,
memory etc.) incurred from using non-contiguous texture units in a
multitextured model?

For example, instead of binding a diffuse texture in unit 0 and a
bump/normal/shadow map in unit 1, one could bind the diffuse to 0 and the
bump/normal/shadow map to (say) texture unit 7. It's assumed that there's a
shader which respects the right texture units.

I have tried to test this, but my test context hides any performance
difference, so was wondering if there was any theoretical difference, and
therefore reason not to do it.

Thanks in advance,

osg-users mailing list

Re: [osg-users] Opposing techniques for rendering a reflective floor

2008-10-21 Thread David Spilling
Hi Alex,

Note that doing (1) doesn't preclude RTT - e.g. you can RTT the whole
inverted scene to a texture and then generate appropriate texcoords on the
mirror, either by hand since your camera is fixed, or in a shader.

Plus I'm not sure why you think that (1) is quite a few additional render
passes. Obviously with RTT it's only one more pass - the inverted scene -
but without RTT, and with a stencil buffer, you only have to draw (a) the
mirror, with stencil buffer set, (b) the inverted scene respecting the
stencil buffer, (c) the rest of the scene without the mirror. Good control
of your renderbins makes this fairly easy.

Additional issues with (1) might include the fact that you have to manage
the lighting (i.e. you have to invert the scene lighting as well, otherwise
it looks pretty odd). Also, an unlikely but possible issue is that if you
have some special requirements for poly winding - e.g. models with special
shaders - note that if you just mirror the scene, all the winding inverts as
well.

FWIW, I would do either technique as RTT, as then effects like blur, or
mirror surface non-uniformity, or things like this become a lot easier (in
my experience).

Given that, I'm not sure I have a strong favorite. I guess I'd do (2) since
your camera(s) are fixed; you need a new camera to do RTT anyway so setting
it up inverted is easy. The only reason I might not is if I want to play
with complex camera settings later which might be a pain to implement e.g.
stereo, strange projections, etc.

Rendering performance should be near identical (assuming RTT for both).

Hope that helps,

osg-users mailing list

Re: [osg-users] Shader issue

2008-10-19 Thread David Spilling
Hiya Sajjad,

Perhaps an easy way is to have a (multi)switch node whose value is triggered
by the GUI event. The switch would have several children, each of which
would be a separate group node. Each group node would have your target model
as its only child.

Then you would load up a variety of shaders on the stateset of each group
node.

Not sure what you meant by a different material, though.

Hope that helps,

osg-users mailing list

Re: [osg-users] Strips on AnimatedTexturing on Terrain by GLSL

2008-10-17 Thread David Spilling

It's a well known gotcha with texture repeats.

Each vertex has increasing causticsCoord.x. Imagine the vertices with coord
0.0, 0.1, 0.2, 0.3 etc. The fragments in between any of these coords have
_interpolated_ texcoords i.e. a fragment halfway betwen 0.0 and 0.1 vertex
has a texcoord of 0.05.

For the end vertices, you have 0.8,0.9,1.0, 1.1. The fract function turns
this into 0.8, 0.9, 0.0, 0.1

Hence a fragment half way between 0.9 and 1.0 interpolates the texcoord
between 0.9 and 0.0, giving 0.45, not 0.95 as you would want. Between 0.9
and 1.0, then the actual texcoord used decreases rapidly from 0.9 to 0.0.

The artifact you see is all of your caustics texture, mirrored, and shoved
into the last vertex gap.

To get rid of it, try losing the fract instruction. There may also be
dependencies on your texture mode (REPEAT, MIRROR, CLAMP) that you might
need to play with depending on what your texture looks like.

Hope that helps,

osg-users mailing list

Re: [osg-users] Another case for extendable plugin loaders... Was Re:DDS textures flipped on flt files

2008-10-17 Thread David Spilling
Ah! I learn something every day...

Is there any system-wide check (other than by eye, at checkin) that makes
sure that all of the options are unique to each loader? e.g. there isn't a
dds_flip option in, say, the .ac3d loader?

osg-users mailing list

[osg-users] Advice on interacting with osgShadow

2008-10-15 Thread David Spilling
Dear All,

I'm just picking up osgShadow, and have a couple of questions that I would
appreciate some advice on. The intention is to use one of the ShadowMap-type
techniques.

Case1 : Shadow casters already have expensive fragment shaders

In the case where the objects that are casting shadows have expensive
fragment shaders, I would prefer to turn these shaders off so that we just
get a quick depth only pass. Fortunately my expensive shaders sit above the
object's scenegraph, and so I can envisage an approach in which my unshaded
object has two parent nodes (say A and B), which are both children of the
shadow node. A would have the castsShadow mask and a cheap shader, and B
would have the receivesShadow mask and the expensive shader.

Does this sound like an acceptable approach? I have read things about using
uniforms passed into the shader to control whether it is operating or not,
but the above seemed a little simpler. Does this already happen via the
shadow camera's use of GL_DEPTH?

Case2 : Shadow receivers mustn't be children of the shadow node.

For good architectural reasons, I cannot place one of my desired shadow
receiving items underneath the shadow node. Hence I would like access to the
shadow texture, so that the receiver can bind it explicitly. Would allowing
the ShadowTechnique classes to expose their RTT shadow texture so that
something else can also bind it be a problem in general? At the moment it's
protected. Is this something worth submitting or should I just
subclass/rewrite?

Case3 : Multitextured shadow receivers

I realise that multitextured items aren't really supported by the
out-of-the-box shadow techniques, but at the moment the handling of the
shadow texture units and base texture units seems a little clumsy, and
difficult to extend. Does anybody else have any experience of using shadows
with objects that are already multitextured (e.g. diffuse and normal
mapped)? It looks to me that if you have any other shaders knocking around
your scenegraph, you need to subclass ShadowTechnique to support your usage
model. Is that what people have done in the past?


osg-users mailing list

Re: [osg-users] Advice on interacting with osgShadow

2008-10-15 Thread David Spilling
Dear J-S,

Thanks for the help.

If you're using the ViewDependentShadow techniques, they already disable
 shaders while rendering the shadow pass.

Ah - I hadn't spotted that in the code yet. So no problems there then.

While I'm here, is there any reason why the shaded objects shouldn't do the
TexGen in a vertex shader?

I would subclass and just add a public getter for the texture since it's
 protected. Pretty simple.

Sounds like a good idea.

 I wonder what the good architectural reasons are, but let's not get into
 that discussion :-)

No, let's, as you might spot something I hadn't considered. Let's say I have a
node representing a mirror, which uses an RTT technique to draw reflections.
In a similar way to the shadows, the objects to be reflected are placed as
children of the mirror, in addition to their normal position in the SG.  The
mirror puts various transforms and clips on the reflection, so that the
reflected image is correct.

Now, the mirror is itself a shadowCaster, as are the objects it is trying to
reflect. If I put the objects and the mirror as children of the
shadowTechnique node, then the non-mirror objects cast shadows twice - once
in their normal orientation, and once in their reflected-child-of-mirror
orientation, which is wrong.

Thinking about it though, I suppose that setting shadowCast to false on an
intermediate node between the mirror and the reflected objects prevents the
cull traverse from ever finding them, so they don't cast shadows twice.
Sounds right? Although I'm still a bit uneasy about what the mirror might
make of the repeated cull traverse attempts... I suppose I had just better
try it out rather than blather on and on.


I agree with your comments.

soapbox mode
I think that the issue is a little bit more far ranging to manage in a
generic application way. For example, the set of incoming models I have to
draw unfortunately comes in with texture units all over the place. (e.g.
some have diffuse = 0, normal = 1, specular = 2, some have diffuse = 0,
specular = 1, no normal). This is in part because file formats that do
specify texture types (diffuse, normal, gloss etc.) like obj and 3ds, don't
specify which texture unit they should occupy. Openflight, in contrast,
specifies which textureUnit the incoming map actually is. Hence we end up
with a bit of a mishmash, thanks to not having an art path that is
historically well controlled, and receiving assets from various sources.

Therefore it is tricky for me to say that texture unit 0 = shadows and 1 =
diffuse, because it is hard to guarantee that in the general case. The
solution I'm likely to resort to is to say shadow = 7, and then if the model
comes in with anything in tex unit 7 then...well.. it has been warned. Can't
say that's happened often though. [Although one might be using tangentspace
generators to push normals into attributes 6,7,15, which could conceivably
get mangled with shadows's texgen (in the case of shadow=7) which means that
7 is a bad choice... and so on.] Not ideal.

I would prefer an OSG wide setting that would be user controlled that set up
some sort of relationship between default texture types and units (e.g.
0=DIFFUSE, 1=SHADOW, 2=NORMAL, 3=SPECULAR etc.), so that all the loaders
that cared would be consistent (e.g. 3ds, obj), and that the shadow map
could refer to, and that any shader set could read sampler2D values from,
etc. No idea where this would go, though.
/soapbox mode

Thanks again for the pointers.

osg-users mailing list

Re: [osg-users] Shaders, osg::notify and IP

2008-09-24 Thread David Spilling

At last! A genuine requirement for obfuscated C skills!

Time to start misusing the GLSL preprocessor...

osg-users mailing list

Re: [osg-users] Question about plane intersections

2008-09-24 Thread David Spilling

If you just want it visually in a map-style format, you can texture the hill
with a banded texture, generate the texture coordinates with a texgen node,
and then view it in ortho from the top.

osg-users mailing list

Re: [osg-users] OSG 2.6 API Documentation in HTML, HTML Help and PDF format

2008-09-18 Thread David Spilling

Your 1.2 docs have been a permanent shortcut on my desktop for several years
now, so many thanks for doing the same for 2.6!

osg-users mailing list

Re: [osg-users] Floating Point Texture Format

2008-09-18 Thread David Spilling
OSG also writes Radiance format (.hdr)

osg-users mailing list

Re: [osg-users] Which File Formats / Plugins support multi-texture?

2008-09-16 Thread David Spilling
Dear All,

Current observations:

1) The OSG 2.6 .obj loader loads two textures : a diffuse map, into texture
unit 0, and an opacity map into texture unit 1. The OBJ format supports a
variety of other texture maps (e.g bump, map_Ks, etc.). This
map-to-texture-unit correspondence is _hardcoded_ in the loader.

2) The 3DS loader has a bunch of potential map loads commented out
(ReaderWriter3DS.cpp, lines 828-842).

General question:

How should OSG cope with map-to-texture unit correspondence? For example, I
can modify the OBJ loader to support map_Ks, bump, etc. but the texture
units will still be hardcoded, and since I don't use an opacity map, the
original author's (Bob Kuehne) map_opacity change will break. A similar
question applied to the 3DS format; I can get it to load up other maps
(specular, opacity, bump etc.) but equally, end up hardcoding against
texture units.

Alternatively, the loader could increment texture units as it finds them in
the input file - this will work for OBJ; I'm not so familiar with 3DS for
this. Then it would be up to the shader to sort things out.

Even more alternatively, one could pass options into the loader that
dictated the correspondence.

Is there a general OSG wide recommended approach for this, or do people just
end up with their own personal customised loaders?

Advice appreciated,


(PS : shouldn't map_opacity be map_d in the obj loader?)
osg-users mailing list

Re: [osg-users] Drawables not drawn in debug build

2008-08-31 Thread David Spilling

I guess that you are using the prebuilt 3rd party binaries. If this is
right, note that these have been built with VS2005 and are probably
incompatible with VS2008.

Do all the examples all run OK both in debug and release?

osg-users mailing list

Re: [osg-users] Segfaults in osg::State::applyTextureAttributes when working with osgText::Text

2008-08-22 Thread David Spilling

There was a whole load of message traffic on this topic a while ago. From
what I remember, the upshot was that the freetype library wasn't thread
safe. I don't know whether it all got finally resolved or not; my advice
would be to check the archives.

osg-users mailing list

Re: [osg-users] sky model tracking the camera...

2008-07-30 Thread David Spilling
Look in the osghangglide example for MoveEarthySkyWithEyePointTransform;
you will need to add the z-coordinate transform as well (currently 0.0 in
the code).

osg-users mailing list

Re: [osg-users] A problem related to the ive file size and loading speed

2008-07-30 Thread David Spilling

Can't comment about your file size, but you could save yourself a step by
doing osgconv My3DSFile.3ds MyIVEFile.ive directly...

osg-users mailing list

[osg-users] Using SSE within OSG

2008-07-29 Thread David Spilling
Dear All,

There's a discussion going on at the moment over in osg-submissions, and it
has been raised that this ought to be opened up to the non-submissions
community for feedback. Note that the following is my reading of the issues,
and certainly doesn't represent the consensus view of the osg-submissions
crowd, so feel free to challenge what I'm saying!

Several people already use SSE instructions alongside OSG to
obtain speed improvements through parallelising math operations. The general
point that has been raised is that under-the-hood, OSG does quite a lot that
could benefit from the potential performance boost given by SSE operations.
Obvious targets include some of the Vec/Matrix routines, for example. SSE is
now sufficiently mainstream that the risk of processor incompatibility is
felt to be low.

*Question 1 : Where could the core OSG include SSE?*
Most people follow the sensible approach of profiling to determine their
bottlenecks, and then optimising particular methods in order to gain
speed-up. This would be a sensible approach to follow, as SSEing all methods
would probably be a waste of effort.  It would therefore be instructive
firstly to know if anybody is using SSE with OSG, and where. Secondly, for
those who have profiling data and know how much time they spend in
Vec/Matrix/whatever methods, it would be useful to know which methods the
community considered good targets for SSEing. Any other maths heavy
lifting going on? (e.g. Intersection testing? Delauney triangulation? etc.)

*Question 2 : How could the core OSG include SSE?*
SSE code benefits from aligned data.  Hence there are several ways in which
OSG could include SSE:

a) Provide an aligned Vec4f and aligned Matrix4f class, which support SSE
operations. This would appear (to me) to be the least intrusive.

b) Provide branching code within the existing Vec4/Matrix4 methods for
detecting whether data is aligned, and performing the correct operations.
This would appear to me to be the most user-transparent. Although it would
appear to be a performance hit, testing so far on some specific code would
support the argument that the speed gains from SSE outweigh the branch cost;
more testing needed, I guess.

c) Robert suggested that SSE enabled array operators (e.g. providing a
cross-product operator for Vec3Array) might be appropriate and provide the
best speed improvement for those who want it. Certainly using SSE on large
array type data sets is where one gains the most performance improvement.

This question includes the possibility of linking out to, or pulling source
code out of, an external optimised math library.

Any other suggestions?

*Question 3 : (possibly the biggest) Should the core OSG include SSE?*
There are several downsides to including SSE. Firstly, x-platform provision
of SSE may be tricky due to the way different compilers define aligned data,
and how SSE instructions are used within the code. I personally don't have
much experience here, so any feedback on x-plaform issues is useful.

Secondly, code readability drops, and the "use the source" argument may
become trickier when many might not know much SSE.

So - your opinion, experience and suggestions welcome!


Re: [osg-users] Using SSE within OSG

2008-07-29 Thread David Spilling

 may I suggest that you check the assembler code that the compilers create
 compiling the OSG code?

 ... g++ with -march=core2 -O3 (see man page for description
 of parameters) the compiler automatically uses SSE

I don't have much recent Linux/gcc experience, but can certainly attest that
the MS compilers don't do a good job of spotting SSE vectorisation
possibilities, even when you tell them to optimise with them (and this is
from reading the generated assembler). In MS you can insert SSE intrinsics,
which still allow the compiler to optimise the execution order and
memory/register usage e.g. based on cycle counts.

I understand (from other sources) that the Intel vectorising compilers are
much better at this, naturally.

Perhaps this is then all only a MS/Windows thing?


Re: [osg-users] Using SSE within OSG

2008-07-29 Thread David Spilling

 And please do not get me wrong. I do not want to stop your efforts to
 improve the performance of OSG; far from it!

Not necessarily my efforts - I'm just being the messenger...!

But putting assembler code into the
 project decreases the readability and serviceability of the code.


 it might be that it does not improve the speed at all.

I agree, and this is an oft quoted issue. Here, I think, only testing (and
experience) will help. For example, is it worth performing a single Vec3f
cross product in SSE? Probably not. But as a counter example, over on
osg-submissions (EDIT - and now here), one user (James) is getting large
performance gains from having SSE'd the invert_4x4 function.

I just want to suggest
 that you try to exhaust the possibility of modern compilers as much as
 possible. If you see any bottlenecks after that, it might make sense to
 include manual performance tuning.

I agree. This call-for-ideas was motivated by an understanding that several
people are pushing in the same direction, and it would be perhaps beneficial
to make use of this push.


Re: [osg-users] Using SSE within OSG

2008-07-29 Thread David Spilling

 I have to disagree, using VS 7 and up to VS 9.

Just to clarify - what are you disagreeing with? Do you find that MS
compilers will produce SSE vectorised code _without_ use of intrinsics or
raw __asm?


Re: [osg-users] [osg-submissions] Matrixf multiply Optimization

2008-07-27 Thread David Spilling
I think that this general question (of SSE integration) ought to be pushed
out onto the osg-users mailing list. For example, I can't see any reason why
all Vec4f and Matrix4f can't always be aligned anyway, although I realise
that my range of apps might be limited. Even Vec4d and Matrix4d might
benefit from SSE2, for example.

From my experience, SSE doesn't hurt performance. I agree with Robert that
the most benefit comes from array operations; using SSE to perform a single
vector x-product (i.e. horizontal operations) doesn't help _that_ much,
but it does help a bit. My main issue with going in the direction of array
operations is that I don't think we could offer sufficient operators to be
useful in the general case - people do all kinds of maths things specific to
their problem - but SSEing the simple operations where the maths is obvious,
e.g. James' attack on the Vec/Matrix libraries does seem to be appropriate.
Supporting the general SSE case with aligned vectors and things would be
good (e.g. the osgsharedarray example class is very useful to provide
aligned wrappers).


Re: [osg-users] [osg-submissions] Matrixf multiply Optimization

2008-07-27 Thread David Spilling
MS uses _aligned_malloc (and _aligned_free), and __declspec(align(16)).

I think gcc uses something like __attribute__((__aligned__(16))), but I'm
not sure whether that's OK for dynamic allocation.

Intel's MKL, and others, provide cross-platform aligned mallocs, so we might
be able to find something similar. Or just create a new Vec4f / Matrix4f
type with an overridden new operator.


Re: [osg-users] glPointSize no longer working for me?

2008-07-24 Thread David Spilling
I had something possibly similar a while ago - search the archives for "GLSL
Shaders and Points (repost)".

It might be related to what you are doing.


Re: [osg-users] About Changing Parental Nodes?

2008-07-24 Thread David Spilling
If I understand your problem correctly, the general approach would be to
compute the coordinate system matrix local to a group prior to a move (with
computeWorldToLocal) - call it A - then to move the node, recompute the
coordinate system in the new location - call this B - and apply the correct
matrix - i.e. inverse(B) * A -  to the immediate parent transform to get the
correct positioning. In this way, when moving the object around in the
scenegraph, it won't move in the world frame.

In this case, the yaw group doesn't have a parent transform, so the initial
matrix is the identity. The destination matrix will be the transform of
planetary's MatrixTransform node, so you will need to apply the inverse of
this matrix to your yaw specific MatrixTransform.

I can't help but think that you must be able to persuade your 3DS model to
be in a much more useful structure - all this programmatic model hacking
(although useful for understanding OSG) seems quite clumsy...



Re: [osg-users] Render to Texture

2008-07-16 Thread David Spilling

FYI, the HDR (Radiance format) plugin also supports writing 32Fs, which can
be viewed with a number of applications. I think I also saw a recent
submission that allowed the TIFF plugin to write floats, but I might be


Re: [osg-users] Problem setting a skydome

2008-07-16 Thread David Spilling

I presume that your skydome has some sort of camera centred transform over
it (as per osghangglide's example use); your code doesn't show it.

osg::ClearNode* clearNode = new osg::ClearNode;


This is odd. If your camera is the first thing to draw (implied by
PRE_RENDER) then something needs to be clearing the colour and depth buffer.
In any case, you can use camera's setClearMask method to control this
without needing a ClearNode. For example,
camera->setClearMask(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT) clears
everything, and setClearMask(0) clears nothing.

   osg::TexEnv* te = new osg::TexEnv;
stateset->setTextureAttributeAndModes(0, te, osg::StateAttribute::ON);

Slightly surprised to see this, but if your skydome needs it, then OK.

stateset->setMode( GL_CULL_FACE, osg::StateAttribute::ON );

osg::Depth* depth = new osg::Depth;
stateset->setAttributeAndModes(depth, osg::StateAttribute::ON );

Again, not wrong (as the depth testing is always passed) but not really
necessary, as you are ensuring that your skydome is drawn first. I would
tend to prevent depth writing with

osg::Depth* depth = new osg::Depth;
depth->setWriteMask(false); // don't bother writing the depth buffer
stateset->setAttributeAndModes(depth, osg::StateAttribute::ON );

and also disable depth testing with
stateSet->setMode(GL_DEPTH_TEST, osg::StateAttribute::OFF)

I looked at my code and noticed that I also do
camera->setCullingMode(osg::CullSettings::NO_CULLING), but can't remember at
the moment whether it's relevant.

Lastly, what OSG version are you using?

Hope that helps,


Re: [osg-users] Problem setting a skydome

2008-07-15 Thread David Spilling


  a class osg::Camera inherits from

Sorry - missed a step. Put a Camera in above your skydome.

A solution that comes to my mind is to use a pair of cameras, one rendering
 the skydome with the setting you said, DO_NOT_COMPUTE_NEAR_FAR, and the other
 rendering the rest of the scene.

Exactly. That's what I do (although I control a bunch of other stuff in the
camera, like projection matrix, in order to avoid the later issues).


Re: [osg-users] FindNodeVisitor Operation?

2008-07-15 Thread David Spilling
Hi Ümit,

osg::DOFTransform is a subclass of the more general osg::MatrixTransform.

If I'm reading the intention of the model right, you have 2 MatrixTransform
nodes - named *3DSPIVOTPOINT: Rotate* and *3DSPIVOTPOINT: Translate
pivotpoint to (world) origin* above some geometry *1_planetar*. (Although
your top level group has 6 other unlisted children as well).

If you want to move/rotate/translate 1_planetar, use the NodeVisitor to find
one of your two MatrixTransform nodes, and then set the transform's matrix
yourself (via setMatrix(osg::Matrix myMatrix)). You will need to fill in
the values of the matrix yourself based on what you want to do, but there
are many, many ways of doing this (makeRotate, makeTranslate, makeLookAt, etc.).

Alternatively, if you really want to use DOFTransform type methods, you
could dynamic_cast the found MatrixTransform to a DOFTransform.

Hope that helps,


Re: [osg-users] FindNodeVisitor Operation?

2008-07-15 Thread David Spilling

Firstly, do you need to add MatrixTransforms above all your geodes, or just
the ones that have them now?

You have a couple of strategies.

The first one is to modify your model so it has uniquely named
_MatrixTransforms_ above every geode. At the moment they are all called the
same thing, so you end up finding 8 nodes when you look for a particular
name. I don't know much about 3DS, so if it were up to me I would probably
do it by hand editing the OSG file. However, this clearly impacts your art
path (because updates to your model will always involve some hand editing).
There is probably a way of persuading 3DS to name transforms... anybody?

The second one is more programmatic. Your geodes are, at least, all uniquely
named. You can search for a geode, and then look for its parent. If its
parent is a MatrixTransform, then you have your node, e.g. (lots of checks
omitted):

   findNodeVisitor findNode("1_planetar");
   std::vector<osg::Node*> nodeList = findNode.getNodeList();
   osg::Node* node = nodeList[0]; // no check on size of list
   osg::MatrixTransform* tf =
       dynamic_cast<osg::MatrixTransform*>(node->getParent(0)); // no check on
       // whether there is more than one parent
   if (tf != NULL) // do stuff

This route is hard to generalise. For example, there's no check on whether
any geodes share a MatrixTransform as a parent - in which case strange
things will happen later. Also if the parent isn't a MatrixTransform, you'll
want to add one as a child to said parent, and move the geode across to be a
child of the MatrixTransform via addChild, removeChild etc., which is a bit fiddly.

It will come down to how 3DStudio works in the end, I think, and what you
can persuade it to output so that your approach can be generalised.


Re: [osg-users] Problem setting a skydome

2008-07-13 Thread David Spilling

Firstly, you need to prevent the CullVisitor from considering your skydome
in its auto near/far calculation. You can do this with
camera->setComputeNearFarMode(osg::CullSettings::DO_NOT_COMPUTE_NEAR_FAR).

You will also need to mimic being a long way away, most simply by drawing
first (e.g. via a pre-draw camera, or careful control of your render bins)
with depth checking/writing disabled.

If you have expensive shaders on your skydome, you might want to draw last
to z depth = 1 (via stateset->setAttributeAndModes(new
osg::Depth(osg::Depth::ALWAYS, 1.0, 1.0))). NB - you might need LEQUAL,
depending on how you cleared your z-buffer. Depending on what you are doing,
you might still also run into a problem in which your skydome is clipped, so
you'll need to control the projective camera; look at the DepthPartition
example if you need a way forward.

Hope that helps,


Re: [osg-users] get a Texel from an osg::Image

2008-07-11 Thread David Spilling

So long as you know that the image format is GL_RGBA8, and 2D, you can do
something like:

osg::Vec4 returnColour(int row, int col)
{
    unsigned char* pSrc = (unsigned char*) image->data(row, col);
    float r = (float) *pSrc++ / 255.0f;
    float g = (float) *pSrc++ / 255.0f;
    float b = (float) *pSrc++ / 255.0f;
    float a = (float) *pSrc++ / 255.0f;
    return osg::Vec4(r, g, b, a);
}

If you are not in control of the image's format then you should check
whether it is RGB or RGBA before trying to get alpha, and also whether it is
RGBA8 and not (for example) RGBA16F, or RGBA32F, or something slightly more exotic.

Hope that helps,


[osg-users] How to byte-align Vec4Array

2008-07-08 Thread David Spilling
Dear All,

Is there an obvious way of aligning the contents of the Vec4Array to 16 byte
boundaries? Can I also guarantee that each std::vector entry will be
contiguous in memory? i.e. I would like to make sure that array[0].x(),
array[1].x() etc. are all on consecutive 16 byte boundaries.

(I'm using MS VC++ 9, so would natively use __declspec(align(16)) but am not
sure how to get at the vector).

If not, can I declare a big array of floats, that is aligned as per
requirements, and pass it into a geometry as a vertex array directly,
bypassing use of Vec4Array (and Vec4 for that matter)?

I guess if neither of these works, then I'm down to subclassing Vec4Array...



Re: [osg-users] How to byte-align Vec4Array

2008-07-08 Thread David Spilling
Hi Gordon, Thibault,

Thanks for the replies regarding the contiguity of the memory in a
std::vector. That at least solves half of the problem.

 Use &yourvector[0] to get a float* pointer to the beginning of the array.

How do I define the vec4array so that yourvector[0] is absolutely aligned,
i.e. a multiple of (in my case) 16?

With a float array, on MS compilers, I would do __declspec(align(16)) float*
myarray = new float[4 * MY_ARRAY_SIZE]. From what I understand, and from the
MSDN documentation, this is the correct way to guarantee this. #pragma pack(16)
doesn't always do what you expect...

If I do __declspec(align(16)) Vec4Array* myarray = new Vec4Array, I'm
guaranteed that myarray is 16 byte aligned, but what about the contents of
the array? Especially if I do lots of push_back()s so that the std::vector
resizes and goes somewhere else. Even if I do __declspec(align(16))
Vec4Array* myarray = new Vec4Array(MY_ARRAY_SIZE) I don't think I'm
guaranteed 16 byte alignment of the first vector entry...

(I think the gcc equivalent to __declspec(align(X)) is __aligned__(X), by
the way).



Re: [osg-users] How to byte-align Vec4Array

2008-07-08 Thread David Spilling

I've been OSGing for long enough that perhaps I shouldn't be quite so
surprised, but I'm still always a bit amazed at the ready availability of answers like:

Q: I need to defroogle my impfusculator. Can I do this in OSG?
A: Yes - see examples/defroogleFusculator.cpp.

(Although perhaps it's an indication that I'm no longer quite so up to date
as I thought I was!)

Thanks a lot,


Re: [osg-users] how to disable zbuffer

2008-07-08 Thread David Spilling
If you just disable depth testing, or make the fragments always pass depth
testing via ALWAYS, you still don't get the effect that the object is always
visible, do you? It presumably will depend on its position in the scenegraph
and the relative order of drawing. That's why HUDs are often done in
post-render cameras...

If it's the stateset solution you want - rather than using a osg::Camera
with POST_RENDER - I think you might also want to force a draw to z=0, the
near clipping plane:

stateSet->setAttributeAndModes(new osg::Depth(osg::Depth::ALWAYS, 0.0,
0.0), osg::StateAttribute::ON);


[osg-users] Some yes/no questions about VBOs

2008-07-02 Thread David Spilling
Dear All,

I have a few VBO related questions; a few quick yes/no answers would be much
appreciated to stop me going down dead ends...

I attach a vertex array, texcoord array and normal array to a Drawable,
which is using VBOs. From the code, I can see that calling dirty() on any of
the arrays dirties the entire VBO.

1) I can't quite understand all of the BufferObject code, but just to check,
it looks like all three arrays are dumped into one large contiguous buffer
(i.e. one single bufferID). Is that correct?

If(1) is true, and it's just one large buffer:

2) Am I right in thinking that limiting the upload to one of the arrays
would involve extending BufferObject to use glBufferSubData, as it isn't
currently supported?

3) Can you set the stride of each buffer object (i.e. to support
interleaving the arrays)? I can't see anything like glVertexPointer's stride argument.

4) More generally, is there any (easy) way to attach several BufferObjects
to a Drawable, such that you could separately dirty() vertex, texcoord or
normal without having to upload the others?

If (1) is false, and each array has a different buffer object/ID,

5) Is there an obvious way that I've missed for dirtying just one of the
arrays in the VBO space?



Re: [osg-users] Some yes/no questions about VBOs

2008-07-02 Thread David Spilling

  2) Am I right in thinking that limiting the upload to one of the arrays
  would involve extending BufferObject to use glBufferSubData, as it isn't
  currently supported?

 It should already work in 2.4 onwards.

Fabulous!  I'm on 2.2 at the moment; I'll upgrade immediately!

Thanks for the help,

