[osg-users] shader in OSG

2021-01-29 Thread 杨光
I want to pass a two-dimensional array into a shader. Is this difficult 
to do? If it is not possible, in what form should this information be 
passed to the shader? Could this information be stored in an image as a 
texture and passed to the shader, and if so, how can the information in the 
texture be read out in the shader? Thank you very much.

-- 
You received this message because you are subscribed to the Google Groups 
"OpenSceneGraph Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to osg-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/osg-users/4e597cf0-6d40-457b-ba69-01bb7bd20673n%40googlegroups.com.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] Shader Program stops working after Changing Viewer GraphicsContext or Viewer

2018-05-21 Thread Eran Cohen
Hi,

I implemented a 'PausableViewer' to enable dynamically 'hiding' and 'showing' 
the window. I did it in the following way:
On pause, I set done to true, create a new GraphicsContext with the same traits 
as the previous GraphicsContext, stop the viewer threading, and switch them:

Code:
void pause()
{
    auto traits = new GraphicsContext::Traits(*getCamera()->getGraphicsContext()->getTraits());
    auto gc = GraphicsContext::create(traits);

    stopThreading();
    getCamera()->setGraphicsContext(gc);
    setDone(true);
}



On resume, I resume the threading and set done to false:

Code:
void resume()
{
    setDone(false);
    startThreading();
}




This works well, but when used with osgEarth's SkyNode as the scene data of 
the viewer, after the second context switch and onward I get these errors in a 
loop:

glValidateProgram  FAILED  ""  id=4  contextID=0
glValidateProgram  FAILED  "SimpleSky Atmosphere"  id=7  contextID=0
Warning: detected OpenGL error 'invalid operation' at RenderBin::draw(..)


I can also recreate the error using a regular Viewer, by closing a Viewer but 
keeping the scene data and giving it to a new Viewer:


Code:
ref_ptr<Group> root = new Group;
auto mapNode = new MapNode;
auto skyNode = SkyNode::create(mapNode);

skyNode->addChild(mapNode);
root->addChild(skyNode);

while (true)
{
    Viewer viewer;
    viewer.setCameraManipulator(new EarthManipulator);
    viewer.setSceneData(root);
    viewer.run();
}



I know that this problem might stem from the osgEarth side of things, and I've 
asked there, but I'm wondering if anyone might have a clue as to why this happens.
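One thing that may be worth trying in the loop above (a sketch, not a confirmed fix for this osgEarth case): osg::Program caches its compiled per-context objects keyed by contextID, so explicitly releasing the retained scene graph's GL objects between Viewer lifetimes avoids reusing state left over from a destroyed context.

```cpp
while (true)
{
    Viewer viewer;
    viewer.setCameraManipulator(new EarthManipulator);
    viewer.setSceneData(root);
    viewer.run();

    // discard stale per-context shader/program/texture state before the
    // next Viewer re-uses the same contextID
    root->releaseGLObjects();
}
```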

Cheers,
Eran

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=73696#73696







Re: [osg-users] Shader composition with multiple function injection

2018-05-13 Thread Robert Osfield
Hi Hartwig,



On 12 May 2018 at 22:59, Hartwig Wiesmann  wrote:
> In the example code one directional light computation can be injected. What 
> is the best solution if I like to inject 0 to N directional lights where N is 
> runtime dependent?
>
> I could do something like:
>
> Code:
>
> #ifdef LIGHTING0
> directionalLight( 0, gl_Normal.xyz, basecolor);
> #endif
> #ifdef LIGHTING1
> directionalLight( 1, gl_Normal.xyz, basecolor);
> #endif
> #ifdef LIGHTING2
> directionalLight( 2, gl_Normal.xyz, basecolor);
> #endif
> ...
>
> but this is not very elegant. Though I do not see any other possibility. Am I 
> missing a better solution?

You could use the above approach; this is kind of what the fixed-function
pipeline does with needing to enable/disable GL_LIGHT0, GL_LIGHT1,
GL_LIGHT2, etc.

Another approach is to pass in the number of active lights and have a for
loop iterate through the calls to directionalLight(i, gl_Normal.xyz,
basecolor). The number of lights could be a uniform or supplied by
#pragma(tic) shader composition.
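The loop variant might look like this in the example's vertex shader (a sketch; numLights is a hypothetical uniform name the application would set, e.g. with stateset->addUniform(new osg::Uniform("numLights", 2))):

```glsl
uniform int numLights;  // hypothetical uniform, set by the application

varying vec4 basecolor;

// provided by lighting.vert, as in the osgshadercomposition example
void directionalLight( int lightNum, vec3 normal, inout vec4 color );

void main(void)
{
    basecolor = gl_Color;

    // accumulate however many lights the application enabled
    for (int i = 0; i < numLights; ++i)
        directionalLight(i, gl_Normal.xyz, basecolor);

    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```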

Robert.


[osg-users] Shader composition with multiple function injection

2018-05-12 Thread Hartwig Wiesmann
Hi,

I had a look at the shader composition example. The main shader looks like this:

Code:

#pragma import_defines ( LIGHTING, TEXTURE_2D, VERTEX_FUNC(v) )

#ifdef LIGHTING
// forward declare lighting computation, provided by lighting.vert shader
void directionalLight( int lightNum, vec3 normal, inout vec4 color );
#endif

#ifdef TEXTURE_2D
varying vec2 texcoord;
#endif

#ifdef VERTEX_FUNC
uniform float osg_SimulationTime;
#endif

varying vec4 basecolor;

void main(void)
{
basecolor = gl_Color;

#ifdef LIGHTING
directionalLight( 0, gl_Normal.xyz, basecolor);
#endif

#ifdef TEXTURE_2D
// if we want texturing we need to pass on texture coords
texcoord = gl_MultiTexCoord0.xy;
#endif

#ifdef VERTEX_FUNC
gl_Position   = gl_ModelViewProjectionMatrix * VERTEX_FUNC(gl_Vertex);
#else
gl_Position   = gl_ModelViewProjectionMatrix * gl_Vertex;
#endif

}




In the example code, one directional light computation can be injected. What is 
the best solution if I'd like to inject 0 to N directional lights, where N is 
runtime dependent?

I could do something like:

Code:

#ifdef LIGHTING0
directionalLight( 0, gl_Normal.xyz, basecolor);
#endif
#ifdef LIGHTING1
directionalLight( 1, gl_Normal.xyz, basecolor);
#endif
#ifdef LIGHTING2
directionalLight( 2, gl_Normal.xyz, basecolor);
#endif
...




but this is not very elegant, though I do not see any other possibility. Am I 
missing a better solution?

Thank you!

Cheers,
Hartwig

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=73625#73625







Re: [osg-users] shader

2017-04-06 Thread michael kapelko
Hi.
Start with the OpenSceneGraph beginner's guide:
https://www.packtpub.com/game-development/openscenegraph-30-beginners-guide

2017-04-07 8:52 GMT+07:00 崔 献军 :

> I have read the OpenGL Red Book and learnt some theory from it. Next I want
> to learn more, such as shaders, OSG, etc.
> I was told shaders are important, but OSG has very little material about
> shaders, and I have no other friends to ask what to do next. So, who can give
> any advice on learning OSG?
>
>
> 崔献军


[osg-users] shader

2017-04-06 Thread 崔 献军
I have read the OpenGL Red Book and learnt some theory from it. Next I want to 
learn more, such as shaders, OSG, etc.
I was told shaders are important, but OSG has very little material about shaders, and I have no 
other friends to ask what to do next. So, who can give any advice on learning OSG?


崔献军


Re: [osg-users] Shader Update Latency?!

2017-02-11 Thread Johny Canes
Again,

Okay, this is definitely not a bug, but it seems quite a common pitfall, to say 
the least.

Instead of using viewer.frame(), which is a helper function that calls 
advance(), eventTraversal(), updateTraversal() and renderingTraversals(), use the 
following:


Code:
viewer.advance();

viewer.eventTraversal();
viewer.updateTraversal();

updateShaders();

viewer.renderingTraversals();



This fixed my case of choppy shader updates, caused by the view matrices being 
updated late.

Cheers,
Johny

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=70157#70157







Re: [osg-users] Shader Update Latency?!

2017-02-11 Thread Johny Canes
Hi,

This problem affects my shaders, since they all rely on the correct view 
matrices.


robertosfield wrote:
> 
> By default the osgViewer::Viewer/CompositeViewer runs the update
> traversal before the camera matrices are set, this is done as camera
> manipulators might be tracking movement of nodes in the scene which
> are update in the update traversal so has to be run second.
> 
> One thing you could do is set the camera view matrix prior to the
> updateTraversal() method is called, or do the update of your Uniforms
> explicitly after the updateTraversal().


How can I run a function after this updateTraversal that you speak of?

Thank you!

Cheers,
Johny

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=70156#70156







Re: [osg-users] Shader storage buffer object

2016-12-20 Thread Julien Valentin
Oops.
After some reading, it occurs to me that setting the binding point in the GLSL 
layout means it isn't mandatory to connect it through the Program.
So my problem is in my own implementation, and I'm left to debug my own stuff...

Sorry
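For context, a sketch of what setting the binding point in the GLSL layout looks like; with an explicit binding index in the shader, binding the buffer to that index with glBindBufferBase(GL_SHADER_STORAGE_BUFFER, index, buffer) is sufficient, and no per-program connection step is needed:

```glsl
// requires GLSL 4.30 / ARB_shader_storage_buffer_object
layout(std430, binding = 0) buffer ParticleData
{
    vec4 positions[];   // unsized array, sized by the bound buffer
};
```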




mp3butcher wrote:
> Errh, I don't understand...
> I've just required an SSBO (for the purpose of summing values among threads).
> But when I look at the code of SSBOBufferBinding and SSBO, there is nearly 
> nothing there to connect them with the program.
> 
> I don't know how the example makes use of SSBOs, and I can't make any 
> sense out of it.
> 
> I'm not familiar yet with SSBOs, but I believe that:
> - the underlying glBindBufferBase is not adapted to the SSBO scenario, so 
> SSBOBufferBinding is not a proper implementation
> - the SSBO buffer binding is related to osg::Program, like the uniform block index, and 
> so should be managed there
> 
> What do you think of it?


--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=69721#69721







Re: [osg-users] Shader storage buffer object

2016-12-20 Thread Julien Valentin
Errh, I don't understand...
I've just required an SSBO (for the purpose of summing values among threads).
But when I look at the code of SSBOBufferBinding and SSBO, there is nearly 
nothing there to connect them with the program.

I don't know how the example makes use of SSBOs, and I can't make any 
sense out of it.

I'm not familiar yet with SSBOs, but I believe that:
- the underlying glBindBufferBase is not adapted to the SSBO scenario, so 
SSBOBufferBinding is not a proper implementation
- the SSBO buffer binding is related to osg::Program, like the uniform block index, and 
so should be managed there

What do you think of it?

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=69720#69720







Re: [osg-users] Shader, OpenGL context and Qt

2016-09-20 Thread Valerian Merkling
Hi,

After a few more days of searching, I can restate my problem more simply:

I have a set of osg::Programs.

Each widget has its own contextID, and contextIDs can be re-used.

The first time I load the osg::Program "my_program1" on a view with a contextID 
of 1, it's ok.

If I then load "my_program1" on a view with another contextID, it's still ok.
If I load another osg::Program on a view with a contextID of 1, it's still ok.

But if I close/reopen views enough to get another view with contextID = 1, 
then loading "my_program1" won't work (the shaders are not used).


Does anyone know what is happening?

Thanks for your help!

Valerian.

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68670#68670







Re: [osg-users] Shader, OpenGL context and Qt

2016-09-16 Thread Valerian Merkling
Hi,

I'm still stuck with my shaders problem and I've run out of ideas, but I have new 
elements and a new question.

My shader works on the first view, so this code is ok (i.e. these two shaders 
work fine):


Code:

osg::ref_ptr<osg::Shader> vertex = new osg::Shader(osg::Shader::VERTEX, vertex_source);
osg::ref_ptr<osg::Shader> fragment = new osg::Shader(osg::Shader::FRAGMENT, fragment_source);
osg::ref_ptr<osg::Program> program = new osg::Program();
program->addShader(vertex);
program->addShader(fragment);





Then I tried to emulate the "close the view, open a new one and try again" case with this:


Code:

osg::ref_ptr<osg::Shader> vertex = new osg::Shader(osg::Shader::VERTEX, vertex_source);
osg::ref_ptr<osg::Shader> fragment = new osg::Shader(osg::Shader::FRAGMENT, fragment_source);
osg::ref_ptr<osg::Program> program = new osg::Program();
program->addShader(vertex);
program->addShader(fragment);

// drop the old program and recreate it, re-adding the same shader objects
program = new osg::Program();
program->addShader(vertex);
program->addShader(fragment);




And the shaders are now broken (the fixed-function pipeline is used instead, so I can 
see stuff but no shiny effects :( ).

Is this known and intended behaviour? Do I really have to load my shaders once 
and for all?

Thanks for your help!

Valerian.

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68629#68629







[osg-users] Shader, OpenGL context and Qt

2016-09-12 Thread Valerian Merkling
Hi,

I'm working on a GIS app based on Qt and OpenGL, and I'm replacing all OpenGL 
calls with OSG 3.4.0.

I can display multiple views. Each view is independent, held in a QGLWidget, has 
only one camera and its own scene graph, and uses new instances of osg::Program and 
osg::Shader (no osg objects shared between views).

Shaders work fine in the first views, but as soon as I close a view, the shaders 
are gone for the next opened views, although I use the same source code to 
init the shaders and a new osg::Program for them.

I was first searching the code for shader compilation logs, and I found this 
function: getGLProgramInfoLog(unsigned int contextId, std::string & log).

I have a few questions:

Do I have to use this function to get info about how the shader compilation 
went?

If yes, how do I get this contextID? I always thought that Qt was managing the 
OpenGL context for me, but I may be wrong?

Is that contextID a possible cause of my shader problem?
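For reference, OSG's contextID is independent of Qt's context handling: each osg::GraphicsContext owns an osg::State, and State::getContextID() returns the ID that per-context queries like the info-log function above expect. A sketch, assuming viewer and program refer to the view and osg::Program in question:

```cpp
osg::Camera* camera = viewer.getCamera();
osg::GraphicsContext* gc = camera->getGraphicsContext();
if (gc && gc->getState())
{
    // the ID OSG assigned to this context
    unsigned int contextID = gc->getState()->getContextID();

    std::string log;
    program->getGLProgramInfoLog(contextID, log);  // link log for that context
}
```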


Thank you!

Cheers,
Valerian

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68580#68580







Re: [osg-users] shader composition

2015-01-29 Thread Robert Osfield
Hi Nick,

On 28 January 2015 at 23:23, Trajce Nikolov NICK 
trajce.nikolov.n...@gmail.com wrote:


 I read the shader composition code and the used classes - which are really
 without any documentation :-). Can someone give a fast intro into this, the
 sample for example?


ShaderComposition in the core OSG is still experimental; the
osgshadercomposition example is the main place to learn about what is supported
so far.

However, I'm currently working on tackling the problem of shader management
from a different direction; if successful, it will avoid the need for the
previous ShaderComposition functionality for a range of usage models.

I'll discuss this new approach in a separate thread.

Robert.


Re: [osg-users] shader composition

2015-01-29 Thread Robert Milharcic

On 29.1.2015 0:23, Trajce Nikolov NICK wrote:

I read the shader composition code and the used classes - which are really
without any documentation:-). Can someone give a fast intro into this, the
sample for example?

Thanks a bunch as always


Hi Nick,

I once did some experimental coding on shader composition. The idea was 
to emulate all the FFP attributes with shaders and possibly extend the 
composition with custom shaders. Although the code is working and it 
produces optimal shaders with zero branching or defines, I have never 
had time to polish it up and send it in for review. I could post the 
sample I used for testing, if you are interested. The bad news is that 
it requires some osg core modifications to work properly. I can send 
those, too.


Robert Milharcic



Re: [osg-users] shader composition

2015-01-29 Thread Trajce Nikolov NICK
Thanks Robert, Robert,

I will wait for the new code then.

Nick

On Thu, Jan 29, 2015 at 11:54 AM, Robert Milharcic 
robert.milhar...@ib-caddy.si wrote:

 On 29.1.2015 0:23, Trajce Nikolov NICK wrote:

 I read the shader composition code and the used classes - which are really
 without any documentation:-). Can someone give a fast intro into this, the
 sample for example?

 Thanks a bunch as always


 Hi Nick,

 I once did some experimental coding on shader composition. The idea was to
 emulte all the FFP attributes with shaders and possibly extend the
 composition with custom shaders. Although the code is working and it
 produces optimal shaders with zero branching or defines, I have never had
 time to polish it up and send it for a review. I could post the sample I
 used for testing, if you are interested. The bad news is that it requires
 some osg core modifications to work properly. I can send those, too.

 Robert Milharcic





-- 
trajce nikolov nick


[osg-users] shader composition

2015-01-28 Thread Trajce Nikolov NICK
Hi Community,

I read the shader composition code and the used classes - which are really
without any documentation :-). Can someone give a fast intro into this, the
sample for example?

Thanks a bunch as always

Nick

-- 
trajce nikolov nick


Re: [osg-users] Shader storage buffer object

2014-12-10 Thread Robert Osfield
Hi Markus,

Thanks for the changes to work with svn/trunk.  I've been able to slot them
into svn/trunk pretty easily, everything compiles and runs on my system.  I
don't know yet whether I see exactly what I should be expecting - a group
of points that start in a cube then swirl outwards, particularly in the y
axis.

Changes are now checked into OpenSceneGraph svn/trunk and
OpenSceneGraph-Data svn/trunk.

Cheers,
Robert.

On 9 December 2014 at 19:33, Markus Hein mah...@frisurf.no wrote:

  Hello Robert,


  I have now added the glMemoryBarrier, glMapBufferRange and
 glBindBufferBase functions into GL2Extensions, along with the #define's.
 This is now checked into svn/trunk.

 thanks.

 I'm back from vacation and read this just now. OK, I have built
 today's trunk and want to send my SSBO changes for trunk osg-core. With
 these changes, my example app from the last zip package will run as on my 3.2.1
 mod (OSG_FILE_PATH must be set and the shaders installed there).


 This is in the changeset attached:


- adding missing 3dtexture include tag in GL2extensions.cpp to make it
compile under win32
- adding ShaderStorageBufferObjectBinding in StateAttribute enum
- adding minimal implementation for ShaderStorageBufferObject
- adding minimal implementation for ShaderStorageBufferObjectBinding

 together with your changes, it is possible to use SSBO in osg applications
 based on the latest trunk.

  Observations:

 zooming in/out with the trackball camera manipulator probably has some impact
 on glDispatchCompute() called by Program.cpp. I don't know why; it updates
 smoothly when the camera is inside the particle cloud, but flickering occurs
 if the camera is outside the particle cloud. I guess this is a syncing issue we
 need to handle with a proper glMemoryBarrier(), but I just want to make sure
 it doesn't come from some osg tree traversal setting.


  Todo:

 Research, research... find the proper use of glMemoryBarrier(), using
 the right bits in the bitmask; usually one wants to compute (read/write)
 and draw data in a VBO (read) without syncing issues.

 I will try to get trunk with these changes and continue with making an
 osgSSBO example, reduced to the basic stuff. Reloading shaders on-the-fly
 is a real timesaver, so I will try to leave this in the example code.


  regards, Markus




 Den 04.12.2014 18:17, skrev Robert Osfield:

 Hi Markus,

 On 4 December 2014 at 15:05, Markus Hein mah...@frisurf.no wrote:

  Really needed for SSBO/SSBB support is probably only 50% of the changes
 I have made to my osg-core sources. And it is mostly not changing existing
 osg, just new stuff, added to existing source. I only know about one
 changed osg function that is extended by a glMemoryBarrier() call., so this
 should be fairly safe to put into osg-core.


  I have now added the glMemoryBarrier, glMapBufferRange and
 glBindBufferBase functions into GL2Extensions, along with the #define's.
 This is now checked into svn/trunk.

  I'm now looking at moving the merging the GLBufferObject::Extensions
 settings into the GL2Extensions as well.

  It might be that we'd later want to separate the GL2Extensions into
 different blocks such as GL2Extentsions, GL3Extension, GL4Extensions.  I
 haven't yet decided on this.

 Robert.








Re: [osg-users] Shader storage buffer object

2014-12-10 Thread Markus Hein

Hello Robert,

Thanks for the changes to work with svn/trunk.  I've been able to slot 
them into svn/trunk pretty easily, everything compiles and runs on my 
system.  I don't know yet whether I see exactly what I should be 
expecting - a group of points that start in a cube then swirl 
outwards, particularly in the y axis.


Thanks, yes the example app is just moving a lot of particles around, 
nothing meaningful, based on some initial properties at startup; all 
per-frame computation is done on the GPU. It can easily be extended by adding 
some control data (via uniforms). Every time one of the 4 shader 
sources is changed in the editor, the new shader will be relinked and 
activated, so it is possible to try something without recompiling the app.


I will work on a better example app because I see that it still has 
some issues. A good demonstration of SSBO's advantage over CPU-based 
particle systems would be some computation-intensive task like 
collision detection of particles against other particles or against 
obstacles, bouncing between walls, maybe deformation etc.


SSBOs seem to me to be perfect for realizing particle systems where lots 
of particles live for a long time and need to be updated all the 
time. Usually this was a heavy task for the CPU; on the GPU it should go faster.


Unfortunately I have some flickering issues; I am trying to find out how to 
avoid them. I have not done any measurements of whether it runs faster 
using multiple SSBOs (one per property, for example), or if it runs 
best if all the data is put in one large SSBO (as in the current 
example). Probably it is better to use multiple shader storage buffers.


regards, Markus


Re: [osg-users] Shader storage buffer object

2014-12-04 Thread Robert Osfield
Hi Markus,

I have just done a quick first pass review against svn/trunk and there are
lots of changes between 3.2.1 and svn/trunk that result in so many
differences that it's difficult to spot all the extension specific
changes.  I will next look at the changes between your submission and 3.2.1
but thought I'd chip in a few suggestions first.

First up, this week I began work on a new scheme for managing OpenGL
extensions in the OSG to make it more streamlined and easier to
implement/maintain.  This was instigated by work to add new ColorMaski,
BlendFunci, BlendEquationi classes that wrap glColorMaski etc. that support
multiple render target usage.  The new approach cuts the number of lines of
code to manage extensions by around a 1/3rd so I'm keen to roll it out more
widely to the rest of the OSG.  This will be a bit by bit process.

Support for SSBO will get affected by these changes, and my inclination is
to add the various extensions required and classes in incrementally.

To help with my work could you port your work to svn/trunk and let me know
which bits you feel should be merged first/what would help this effort.

Once I've merged this support I'll tag the 3.3.3 dev release.

Robert.




On 3 December 2014 at 21:06, Markus Hein mah...@frisurf.no wrote:

 Hello Steven, Robert

  I'm actively working on this as well. Could you send me the zip file so
 that I can try it out?




 here are my osg-changes for OSG-3.2.1 and an example app to play with. This
 is just what I found able to build & run right now.
 The single threaded app is working on my laptop. Unfortunately, I had no
 luck to make a multithreaded approach working properly (compute thread ,
 renderthread sharing SSBO). The current design of osg core makes it a bit
 difficult. OpenGL 4.5 drivers promise to make life easier, maybe worth to
 wait for stable GL 4.5 driver releases and build on latest features?

 The single threaded osgSSBO example should work , but there can be
 numerical precision problems as soon as frame update is stalled or slow.
 I was a little bit confused where to place implementation for the new gl
 extensions, it is done in at different places, but this changeset is
 working for me.

 - some new extensions are basically supported, glMapBufferRange,
 glMemoryBarrier,etc. to play with SSBO and ComputeShaders

 To make it working:
 -replace the osg-3.2.1 source files with the modified files (adding SSBO,
 SSBB and support for some extensions)
 -build the example code as some osg-application, don't forget to set
 OSG_FILE_PATH so the shader files are found at the right location
 - play with SSBO and ComputeShaders , edit and save the shader sources
 on-the-fly, so they will be reloaded ...

 I hope you get it built and running .

 best regards, Markus





Re: [osg-users] Shader storage buffer object

2014-12-04 Thread Markus Hein


Hi Robert,

To help with my work could you port your work to svn/trunk and let me 
know which bits you feel should be merged first/what would help this 
effort.
yes, I see that you will need the changes based on the latest trunk, and not 
based on release 3.2.1. Unfortunately I can't start doing this before 
next week, but I will send my changes based on trunk if no other 
osg-user has a similar changeset and an example app ready right now.


Really needed for SSBO/SSBB support is probably only 50% of the changes 
I have made to my osg-core sources. And it is mostly not changing 
existing osg, just new stuff added to existing source. I only know 
about one changed osg function that is extended by a glMemoryBarrier() 
call, so this should be fairly safe to put into osg-core.


To help with my work could you port your work to svn/trunk and let me 
know which bits you feel should be merged first/what would help this 
effort.


I would suggest that a minimal changeset be put into trunk as 
soon as possible, so every user can run an example and take a look at 
what it is about. Later I would be happy if there were some discussions 
about which implementation strategies are best for future development 
combining OSG and all the new GL 4 stuff. I think it would be best 
to take advantage of the latest GL features in OSG-based applications, 
maybe opening up for shared GL objects and some multithreading, while 
still retaining the performance and principles of the OSG.



regards, Markus



Den 04.12.2014 10:21, skrev Robert Osfield:

Hi Markus,

I have just done a quick first pass review against svn/trunk and there 
are lots of changes between 3.2.1 and svn/trunk that result in so many 
differences that it's difficult to spot all the extension specific 
changes.  I will next look at the changes between your submission and 
3.2.1 but thought I'd chip in a few suggestions first.


First up, this week I began work on a new scheme for managing OpenGL 
extensions in the OSG to make it more streamlined and easier to 
implement/maintain.  This was instigated by work to add new 
ColorMaski, BlendFunci, BlendEquationi classes that wrap glColorMaski 
etc. that support multiple render target usage.  The new approach cuts 
the number of lines of code to manage extensions by around a 1/3rd so 
I'm keen to roll it out more widely to the rest of the OSG.  This will 
be a bit by bit process.


Support for SSBO will get affected by these changes, and my 
inclination is to add the various extensions required and classes in 
incrementally.


To help with my work could you port your work to svn/trunk and let me 
know which bits you feel should be merged first/what would help this 
effort.


Once I've merged this support I'll tag the 3.3.3 dev release.

Robert.




On 3 December 2014 at 21:06, Markus Hein mah...@frisurf.no wrote:


Hello Steven, Robert

I'm actively working on this as well. Could you send me the
zip file so that I can try it out?




here are my osg-changes for OSG-3.2.1 and a example app to play
with. This is just what I found , able to build  run right now.
The single threaded app is working on my laptop. Unfortunately, I
had no luck to make a multithreaded approach working properly
(compute thread , renderthread sharing SSBO). The current design
of osg core makes it a bit difficult. OpenGL 4.5 drivers promise
to make life easier, maybe worth to wait for stable GL 4.5 driver
releases and build on latest features?

The single threaded osgSSBO example should work , but there can be
numerical precision problems as soon as frame update is stalled or
slow.
I was a little bit confused where to place implementation for the
new gl extensions, it is done in at different places, but this
changeset is working for me.

- some new extensions are basically supported, glMapBufferRange,
glMemoryBarrier,etc. to play with SSBO and ComputeShaders

To make it working:
-replace the osg-3.2.1 source files with the modified files
(adding SSBO, SSBB and support for some extensions)
-build the example code as some osg-application, don't forget to
set OSG_FILE_PATH so the shader files are found at the right location
- play with SSBO and ComputeShaders , edit and save the shader
sources on-the-fly, so they will be reloaded ...

I hope you get it built and running .

best regards, Markus








Re: [osg-users] Shader storage buffer object

2014-12-04 Thread Robert Osfield
Hi Markus,

On 4 December 2014 at 15:05, Markus Hein mah...@frisurf.no wrote:

  Really needed for SSBO/SSBB support is probably only 50% of the changes I
 have made to my osg-core sources. And it is mostly not changing existing
 osg, just new stuff added to existing source. I only know about one
 changed osg function that is extended by a glMemoryBarrier() call, so this
 should be fairly safe to put into osg-core.


I have now added the glMemoryBarrier, glMapBufferRange and glBindBufferBase
functions into GL2Extensions, along with the #define's.  This is now
checked into svn/trunk.

I'm now looking at merging the GLBufferObject::Extensions
settings into GL2Extensions as well.

It might be that we'd later want to separate GL2Extensions into
different blocks such as GL2Extensions, GL3Extensions, GL4Extensions.  I
haven't yet decided on this.

Robert.


Re: [osg-users] Shader storage buffer object

2014-12-03 Thread Julien Valentin
Hello all,
Just out of curiosity, what concrete usage makes the SSBO mandatory?
In your scenario, particle advection doesn't require a R/W RAM buffer;
ping-pong feedback streams are designed for this (I haven't compared
performance, but it seems obvious).
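The ping-pong scheme mentioned above can be illustrated with a CPU-side stand-in (plain C++, no GL; the struct and its member names are illustrative only, not OSG or GL API): two buffers alternate as read source and write destination each step, so the advection kernel never reads and writes the same storage.

```cpp
#include <array>
#include <cstddef>
#include <vector>

// CPU-side stand-in for ping-pong particle advection. A GPU version
// would keep two VBO/transform-feedback buffers and swap which one is
// bound as source and destination each frame; here two std::vectors
// play those roles.
struct PingPongParticles {
    std::array<std::vector<float>, 2> pos; // pos[src] is read, pos[1-src] is written
    std::vector<float> vel;                // one velocity per particle
    int src = 0;                           // buffer read during the next step

    PingPongParticles(std::size_t n, float v)
        : pos{std::vector<float>(n, 0.0f), std::vector<float>(n, 0.0f)},
          vel(n, v) {}

    // One advection step: read pos[src], write pos[1-src], then swap.
    void step(float dt) {
        const int dst = 1 - src;
        for (std::size_t i = 0; i < vel.size(); ++i)
            pos[dst][i] = pos[src][i] + vel[i] * dt;
        src = dst; // the freshly written buffer becomes next step's source
    }

    float position(std::size_t i) const { return pos[src][i]; }
};
```

No read-modify-write of a single buffer is needed, which is Julien's point: a plain streamed/feedback scheme already covers this case without SSBOs.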

bye

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=61969#61969







Re: [osg-users] Shader storage buffer object

2014-12-03 Thread Markus Hein

Hello Julien,

Den 03.12.2014 22:27, skrev Julien Valentin:

Hello all,
Just out of curiosity, what concrete usage makes the SSBO mandatory?
In your scenario, particle advection doesn't require a R/W RAM buffer;
ping-pong feedback streams are designed for this (I haven't compared
performance, but it seems obvious).



hmm, transform feedback buffers can be used for this, somehow.

I think that SSBOs and the very latest GL features make sense if you
think about changing the app design/architecture:

avoiding expensive data transfers between host and device as much as
possible, and using the power of modern GPUs for high-throughput computing.

Maybe we should ask why some combination of OSG, compute shaders and SSBOs
could make sense? I think one major problem for the pure computing guys is
keeping an overview of, debugging, visualizing and tracking their data.


In my eyes, OSG seems to be perfect for handling complex scenes, but it
has - right now - not so good support for the very latest GL features
(my opinion).
I like the idea of combining data computation, data monitoring/debugging,
and data visualization in one solution.  Maybe OSG could be tuned to make it?



good night, Markus


Re: [osg-users] Shader storage buffer object

2014-12-03 Thread Julien Valentin

Markus Hein wrote:
 Hello Julien,
 hmm, transform feedback buffers can be used for this, somehow.
 
 I think that SSBOs and the very latest GL features make sense if you
 think about changing the app design/architecture.
 


Well, it's my opinion, but a lot of new GL4 features make me worry about
correct usage of the GPU. As we can do everything now (scattered writes,
inter-thread sync, R/W RAM), I think we should ask ourselves whether the
cost of using these features is worth it.

My guess about using an SSBO for particle management instead of a streamed
buffer is that there's a RAM access overcost... but I'm not a hardware man.

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=61971#61971







Re: [osg-users] Shader storage buffer object

2014-12-02 Thread Steven Powers
I'm actively working on this as well. Could you send me the zip file so that I 
can try it out?

Cheers,
Steven

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=61950#61950







Re: [osg-users] Shader storage buffer object

2014-11-20 Thread Robert Osfield
Hi Marcus,

On 19 November 2014 21:32, Markus Hein mah...@frisurf.no wrote:

  I'm still sitting here making an osgexample app for SSBOs, but
 basically it is working as expected under a modified osg-3.2.1. I have
 attached some screenshots, showing a number of 512x512 particles computed
 and rendered using SSBOs and compute shaders (using an nvidia Quadro K5000
 adapter, vertical sync at 60 Hz).

 Robert, I wonder how I should send my changeset, so we could compare it and
 get SSBO support into OSG soon? Easiest would be for me to send the zipped
 files I have added and changed; is this ok?


Just send a zipped file containing all the modified files, or a diff
generated by svn.



 Something that still gives me a headache is how we could get the computation
 traversal running stably in its own thread. Putting physics computation in
 the main loop can easily make it numerically unstable (user events like
 window resizing, low render rate etc.).  I did some tests with running it in
 two separate threads, so the computation thread was scheduled to run
 straight at 120 Hz and wasn't touched, while the render thread was scheduled
 to run at 30 Hz, and this way all the numerical stuff was predictable, even
 if the render thread was slow or stalled for a second. Does someone have a
 good idea how the computation traversal could be separated like this without
 changing too much osg-core code?


I am not familiar enough with compute shaders to suggest how to tackle the
implementation side of trying to thread it; how does OpenGL manage this?

In general, if you want to do threading with the OSG, then you'd create
either a subclass of OpenThreads::Thread or use an osg::OperationThread
and add custom osg::Operations that do the work you want to get done.
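One common way to get the predictable stepping Markus describes, without touching osg-core, is a fixed-timestep accumulator: the render loop runs at whatever rate it manages, while the compute/physics step always advances in exact fixed increments. A minimal single-threaded sketch (plain C++; the struct name and the placeholder for the compute traversal are mine, not OSG API):

```cpp
#include <cstddef>

// Fixed-timestep driver: however irregular the render frame times are,
// the simulation only ever advances in exact, fixed-size steps, so the
// numerics stay stable even when a frame stalls.
struct FixedStepSim {
    double step;            // fixed compute step, e.g. 1.0 / 120.0
    double accumulator = 0; // wall-clock time not yet simulated
    std::size_t steps = 0;  // total compute steps taken so far

    explicit FixedStepSim(double s) : step(s) {}

    // Call once per render frame with that frame's elapsed time.
    void frame(double dt) {
        accumulator += dt;
        while (accumulator >= step) {
            // the compute traversal / physics update would run here
            accumulator -= step;
            ++steps;
        }
    }
};
```

A stalled frame then simply produces several compute steps in a row rather than one oversized, numerically fragile step; the same accumulator idea also works when the compute loop lives in its own thread.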

Robert.


Re: [osg-users] Shader storage buffer object

2014-11-18 Thread Robert Osfield
Hi Sverre,

The OSG-3.2 predates the shader storage buffer object extension so it won't
be supported.

The OSG-trunk doesn't yet have support either, but it might be possible to
add it prior to creating the OSG-3.4 branch - and you'd be welcome to have a
bash.  Looking at the OpenGL extension specs:

https://www.opengl.org/registry/specs/ARB/shader_storage_buffer_object.txt

It looks like new enums will be needed, but most of the original buffer
object setup should work.  In the spec there is also use of
glMemoryBarrier, which the OSG doesn't yet wrap, so one might need to look
at adding this to GL2Extensions.

Robert.


On 17 November 2014 16:30, Sverre Aleksandersen 
sverre.aleksander...@gmail.com wrote:

 Hi,

 I'm wondering whether shader storage buffer objects are available in OSG.
 So far I've only looked at OSG 3.2.1. As far as I can tell this buffer
 object is not available in this version, please correct me if I'm wrong.

 Are there any plans to implement it? If not, I might be interested in
 giving it a go myself.
 As far as I can tell, changes would at least be necessary in GL2Extensions
 and maybe BufferObject.

 Thank you!

 Cheers,
 Sverre

 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=61656#61656








Re: [osg-users] Shader storage buffer object

2014-11-18 Thread Sverre Aleksandersen
All right, thanks.

I've added support for SSBO, but haven't finished glMemoryBarrier yet.

I've looked at this (
https://github.com/openscenegraph/osg/commit/160a7a5c674615a4119b6022e31d7e4cdcb84f7e)
commit which added support for compute shaders.
I don't really understand why the functionality is so split between
GL2Extensions and Texture. Why not add the new tokens and BindImageTexture
to GL2Extensions instead of Texture?
The reason I'm conflicted is that I'm not sure if I should add
MemoryBarrier to GL2Extensions, or if it should be placed in another
location.

Any advice or guidance would be greatly appreciated,
Sverre


On Tue, Nov 18, 2014 at 10:48 AM, Robert Osfield robert.osfi...@gmail.com
wrote:

 HI Sverre,

 The OSG-3.2 predates the shader storage buffer object extension so it
 won't be supported.

 The OSG-trunk doesn't yet have support either, but it might be possible to
 add prior to creating the OSG-3.4 branch - and you'd be welcome to have a
 bash.  Looking at the OpenGL extension specs:

 https://www.opengl.org/registry/specs/ARB/shader_storage_buffer_object.txt

 It looks like new enums will be needed, but most of the original buffer
 object setup should work.  In the spec there is also use of
 glMemoryBarrier, which the OSG doesn't yet wrap, so one might need to look
 at adding this to GL2Extensions.

 Robert.


 On 17 November 2014 16:30, Sverre Aleksandersen 
 sverre.aleksander...@gmail.com wrote:

 Hi,

 I'm wondering whether shader storage buffer objects are available in OSG.
 So far I've only looked at OSG 3.2.1. As far as I can tell this buffer
 object is not available in this version, please correct me if I'm wrong.

 Are there any plans to implement it? If not, I might be interested in
 giving it a go myself.
 As far as I can tell, changes would at least be necessary in
 GL2Extensions and maybe BufferObject.

 Thank you!

 Cheers,
 Sverre

 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=61656#61656












Re: [osg-users] Shader storage buffer object

2014-11-18 Thread Robert Osfield
Hi Sverre,

On 18 November 2014 12:37, Sverre Aleksandersen 
sverre.aleksander...@gmail.com wrote:

 I've added support for SSBO, but haven't finished glMemoryBarrier yet.

 I've looked at this (
 https://github.com/openscenegraph/osg/commit/160a7a5c674615a4119b6022e31d7e4cdcb84f7e)
 commit which added support for compute shaders.
 I don't really understand why the functionality is so split between
 GL2Extensions and Texture. Why not add the new tokens and BindImageTexture
 to GL2Extensions instead of Texture?
 The reason I'm conflicted is that I'm not sure if I should add
 MemoryBarrier to GL2Extensions, or if it should be placed in another
 location.


Differences like this are down to history and a bit of difference in
coding approach used by different major contributors.  Support for
extensions is an area that could do with a refactor, but this will have to
wait as I have plenty of other tasks to tackle.

In the case of MemoryBarrier it probably would fit best with
GL2Extensions.  The enums will belong in the headers most relevant to the
features associated with them, so likely the BufferObject header.

Any advice or guidance would be greatly appreciated,


 Any advice? For good long-term health, exercise 20 minutes a day - even
20 minutes of walking will do :-)

Robert.


Re: [osg-users] Shader storage buffer object

2014-11-18 Thread Sverre Aleksandersen
haha thanks I've already walked 30 minutes today :)

Thanks for the tip - will post here when and if I get something working

On Tue, Nov 18, 2014 at 2:51 PM, Robert Osfield robert.osfi...@gmail.com
wrote:

 Hi Sverre,

 On 18 November 2014 12:37, Sverre Aleksandersen 
 sverre.aleksander...@gmail.com wrote:

 I've added support for SSBO, but haven't finished glMemoryBarrier yet.

 I've looked at this (
 https://github.com/openscenegraph/osg/commit/160a7a5c674615a4119b6022e31d7e4cdcb84f7e)
 commit which added support for compute shaders.
 I don't really understand why the functionality is so split between
 GL2Extensions and Texture. Why not add the new tokens and BindImageTexture
 to GL2Extensions instead of Texture?
 The reason I'm conflicted is that I'm not sure if I should add
 MemoryBarrier to GL2Extensions, or if it should be placed in another
 location.


 Differences like this are down to history and a bit of difference in
 coding approach used by different major contributors.  Support for
 extensions is an area that could do with a refactor, but this will have to
 wait as I have plenty of other tasks to tackle.

 In the case of MemoryBarrier it probably would fit best with
 GL2Extensions.  The enums will belong in the headers most relevant to the
 features associated with them, so likely the BufferObject header.

 Any advice or guidance would be greatly appreciated,


  Any advice? For good long-term health, exercise 20 minutes a day - even
 20 minutes of walking will do :-)

 Robert.





Re: [osg-users] Shader storage buffer object

2014-11-18 Thread Markus Hein

Hi,

regarding SSBO and OSG, I will try to contribute by adding my changeset
later today or tomorrow for comparison, review and test.  From what I
have seen until now, it is important to implement and use
glMemoryBarrier() properly, avoiding sync issues when rendering data
stored in an SSBO computed by a compute shader.


I have got computation and rendering working without sync issues when
calling:

glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT | GL_VERTEX_ARRAY_BARRIER_BIT)

right before calling glBindBuffer() for the VBO (an inline func in the
BufferObject header file).
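The required ordering can be shown with stubbed calls (a recorder in plain C++; no real GL is invoked, and the strings below merely mirror the GL entry-point names): the barrier has to sit between the compute dispatch that writes the SSBO and the bind/draw that consumes it as vertex data.

```cpp
#include <string>
#include <vector>

// Stubbed call recorder illustrating the ordering described above: the
// memory barrier goes between the compute dispatch that writes the SSBO
// and the buffer bind / draw that reads it back as vertex data. No real
// GL here; the strings only mirror the GL entry-point names.
using CallLog = std::vector<std::string>;

void renderFrame(CallLog& log) {
    log.push_back("glDispatchCompute"); // compute shader writes the SSBO
    log.push_back("glMemoryBarrier");   // GL_SHADER_STORAGE_BARRIER_BIT |
                                        // GL_VERTEX_ARRAY_BARRIER_BIT
    log.push_back("glBindBuffer");      // bind the same storage as the VBO
    log.push_back("glDrawArrays");      // vertex fetch now sees the writes
}
```

Issuing the bind/draw without the barrier in between is exactly the kind of sync issue Markus mentions: the vertex fetch may observe stale buffer contents.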


Any advice? For good long-term health, exercise 20 minutes a day -
even 20 minutes of walking will do :-)


Working SSBOs would make OSG and the osg community look at least 5
years younger, I guess...



If we could get SSBOs and possibilities for heavy computing in OSG very
soon, it would be like Christmas.


regards, Markus







[osg-users] Shader storage buffer object

2014-11-17 Thread Sverre Aleksandersen
Hi,

I'm wondering whether shader storage buffer objects are available in OSG.
So far I've only looked at OSG 3.2.1. As far as I can tell this buffer object 
is not available in this version, please correct me if I'm wrong.

Are there any plans to implement it? If not, I might be interested in giving
it a go myself.
As far as I can tell, changes would at least be necessary in GL2Extensions and 
maybe BufferObject.

Thank you!

Cheers,
Sverre

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=61656#61656







Re: [osg-users] Shader Problems, AMD is Fine, NVIDIA is not

2014-06-17 Thread Sebastian Messerschmidt

Hi Fitz,



Hi,
It seems like it was part of the problem, but not all of it.
Intel HD4400 and my AMD cards still work fine, but the Quadro 600 seems to
be missing all light; it is much darker than it should be, as if not all
lights are used.
Furthermore, trying to get the shader to work on the GTX 660 Ti or 670 or
something from NVIDIA is still not working, or it's messed up (no bingo, but
still false light or all white).
In my experience the NVIDIA drivers are far better than the Intel or AMD
OpenGL drivers. But I agree, sometimes they seem to silently get strict,
which can cause a hefty amount of banging your head against the wall.
Once I chased a bug where an uninitialized vec3 outside a loop in a
blur shader caused similar artifacts.
The only suggestion I can give you here: try different driver versions,
check OpenGL errors, step back to a minimal example and debug what is
causing the error.

Furthermore, the command line is constantly giving:
warning: detected OpenGL error 'invalid enumerant' after
RenderBin::draw(...)
Could you attach gDEBugger or a similar tool to check what is causing
the error?



I should maybe mention that the working AMDs are running on Windows, while
the Quadro 600 runs on Linux. But the non-working GTX ones are running
Windows, too.
Does anyone have an idea or a suggestion? Or would someone at least be so
kind as to answer me and tell me that he has no idea? It is kind of
depressing posting questions and never getting any answers. It seems not
very noob friendly here to me...
Your question is very specific and seems to be rather shader/driver
related than an OSG question. People simply don't have that much time to
dive into a rather open-ended topic like this.
Next time it might help to provide a complete compiling example which
shows the problem, so people can test on the different platforms (which
you didn't mention, as far as I can see).


Cheers
Sebastian


Thank you!

Cheers,
Fitz

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=59763#59763









Re: [osg-users] Shader Problems, AMD is Fine, NVIDIA is not

2014-06-17 Thread Fitz Chivalrik
Hi SMesserschmidt,

many thanks for your answer!
It really helps a newbie like me to read the confirmation that it is not
related to his OSG code but to the shader. I was struggling with this for a
whole week and did not know where to look.
I did not initialize the vec4 color var in my method, which could be part of
the problem. After I did this, it still did not work on every machine, but
on some more. What finally was the answer was a piece of faulty code in the
shader, which had existed there since the beginning and which I overlooked
for a whole week. Shame on me, I don't know how I did that...
Anyway, these two lines were faulty:
color += gl_FrontMaterial.ambient *
gl_LightSource[lightIndex].ambient[lightIndex];
color += gl_BackMaterial.ambient *
gl_LightSource[lightIndex].ambient[lightIndex];
ambient is of course not an array!
I really don't know why my AMD drivers let that error pass...
Again, many thanks for taking the time to answer me!

Thank you!

Cheers,
Fitz

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=59790#59790







Re: [osg-users] Shader Problems, AMD is Fine, NVIDIA is not

2014-06-16 Thread Fitz Chivalrik
Hi,
It seems like it was part of the problem, but not all of it.
Intel HD4400 and my AMD cards still work fine, but the Quadro 600 seems to
be missing all light; it is much darker than it should be, as if not all
lights are used.
Furthermore, trying to get the shader to work on the GTX 660 Ti or 670 or
something from NVIDIA is still not working, or it's messed up (no bingo, but
still false light or all white).
Furthermore, the command line is constantly giving:
warning: detected OpenGL error 'invalid enumerant' after
RenderBin::draw(...)
I should maybe mention that the working AMDs are running on Windows, while
the Quadro 600 runs on Linux. But the non-working GTX ones are running
Windows, too.
Does anyone have an idea or a suggestion? Or would someone at least be so
kind as to answer me and tell me that he has no idea? It is kind of
depressing posting questions and never getting any answers. It seems not
very noob friendly here to me...

Thank you!

Cheers,
Fitz

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=59763#59763







[osg-users] Shader Problems, AMD is Fine, NVIDIA is not

2014-06-14 Thread Fitz Chivalrik
Hi,

currently I am working on an OSG project as part of my studies. I wrote a
toon shader and a fog shader (as a post-process), which work fine on my (and
others') AMD graphics cards. However, on almost every NVIDIA card (the only
one it worked on, very slowly, was a Quadro one; unfortunately I do not know
which one) the picture is totally messed up (look at the provided
screenshots below). I was sure that I was only using non-manufacturer-specific
shader code, so I am totally lost as to what could be the cause of the error.
I did though track it down to _maybe_, just maybe, the texture mapping of the
toonTex. If I remove this part of the shader, it looks a bit different, still
a mess, but the crowbar is fine (the rest is still color bingo).
Maybe the light code is wrong too? If I remove everything and only pass the
texture through, then it looks just fine. Of course, no lighting and so on,
but no color bingo, and the textures are fine.
Maybe one of you sees my mistake? Do I use platform-dependent code? Is my
texture mapping faulty? Or what else could be wrong? The GLSL version?
The TexSize for the toon texture is 128x128 (sampler2D toonTex).
I am using the osgFX Effect class for the cel shading; the Technique works
like this:
the first pass is my shader code, the second pass is copy/pasted from
osgFX::Cartoon for the outlines.
The output of this I render to texture, along with a render-to-texture depth
buffer, and these two textures are used in my fog shader for the fog
calculation, which is output on a screen-size quad.
The models are created and UV-mapped with Blender, then exported with the
osgexporter plugin, which is why I do not explicitly set the texture mode for
sampler2D texture0. As far as I understand, GLSL then just uses texture unit
0, which seems to work on both AMD and NVIDIA.
Some Screenies:
Working: Intel i5 third generation, AMD Radeon HD 7950.
Also working on: ATI Mobility HD3400 series (don't know which one).
Non-working: AMD Phenom II X4 850, NVIDIA GTX 560 Ti.
Also non-working on: NVIDIA GTX 660 Ti (video:
https://www.youtube.com/watch?v=1FvQ6tnGmXs)

Shader Code:
celShading.vert

Code:

#version 120
varying vec3 normalModelView;
varying vec4 vertexModelView;
void main()
{   
  gl_Position = ftransform();   
  normalModelView = gl_NormalMatrix * gl_Normal;
  vertexModelView = gl_ModelViewMatrix * gl_Vertex;
  gl_TexCoord[0] = gl_MultiTexCoord0;
}



celShading.frag

Code:

#version 120 
#define NUM_LIGHTS 4
uniform sampler2D texture0;
uniform sampler2D toonTex;
uniform float osg_FrameTime;
varying vec3 normalModelView;
varying vec4 vertexModelView;

vec4 calculateLightFromLightSource(int lightIndex, bool front){
vec3 lightDir;
vec3 eye = normalize(-vertexModelView.xyz);
vec4 curLightPos = gl_LightSource[lightIndex].position;
//curLightPos.z = sin(10*osg_FrameTime)*4+curLightPos.z;
//check if directional or point
if(curLightPos.w == 0)
lightDir = normalize(curLightPos.xyz);
else
lightDir = normalize(curLightPos.xyz - vertexModelView.xyz);

float dist = distance( gl_LightSource[lightIndex].position, 
vertexModelView );
float attenuation =  1.0 / 
(gl_LightSource[lightIndex].constantAttenuation
 + gl_LightSource[lightIndex].linearAttenuation 
* dist 
 + 
gl_LightSource[lightIndex].quadraticAttenuation * dist * dist);

float z = length(vertexModelView);
vec4 color;
vec4 toonColor;
if(front){
vec3 n = normalize(normalModelView);
float intensity = dot(n,lightDir); //NdotL, Lambert
vec3 reflected = normalize(reflect( -lightDir, n));
float specular = pow(max(dot(reflected, eye), 0.0), 
gl_FrontMaterial.shininess);
//Toon-Shading
//2D Toon 
http://www.cs.rpi.edu/~cutler/classes/advancedgraphics/S12/final_projects/hutchins_kim.pdf
float d = specular;
toonColor = texture2D(toonTex,vec2(intensity,d));

color += gl_FrontMaterial.ambient * 
gl_LightSource[lightIndex].ambient[lightIndex];
 //+gl_FrontMaterial.ambient * gl_LightModel.ambient
// +  gl_FrontLightModelProduct.sceneColor;
if(intensity > 0.0){
color += gl_FrontMaterial.diffuse * 
gl_LightSource[lightIndex].diffuse * intensity * attenuation ;

//-Phong Modell

color += gl_FrontMaterial.specular * 
gl_LightSource[lightIndex].specular * specular *attenuation ;
}
} else {//back
vec3 n = normalize(-normalModelView);
float intensity = dot(n,lightDir); //NdotL, Lambert

//Toon-Shading
vec3 reflected = 

Re: [osg-users] Shader Problems, AMD is Fine, NVIDIA is not

2014-06-14 Thread Fitz Chivalrik
Okay, after hours and hours of debugging and research, only pure luck gave
me the final hint:
non-uniform flow control and, as a result, undefined behavior.
Apparently one cannot safely access textures if the point of access in the
code is not in uniform flow control, e.g. after an if-statement, which was
exactly the situation in my shader code:
I accessed the toonTex inside the if(front) condition and, even worse, after
the condition block I accessed texture0. This access was not uniform and
therefore it was pure coincidence that it worked on my graphics card.
I changed my shader like this (now more unused vars are generated each
pass, but it works!):

Code:
#version 120 
#define NUM_LIGHTS 8
uniform sampler2D texture0;
uniform sampler2D toonTex;
uniform float osg_FrameTime;
uniform bool tex;
varying vec3 normalModelView;
varying vec4 vertexModelView;

vec4 calculateLightFromLightSource(int lightIndex, bool front){
vec3 lightDir;
vec3 eye = normalize(-vertexModelView.xyz);
vec4 curLightPos = gl_LightSource[lightIndex].position;
//curLightPos.z = sin(10*osg_FrameTime)*4+curLightPos.z;
lightDir = normalize(curLightPos.xyz - vertexModelView.xyz);

float dist = distance( gl_LightSource[lightIndex].position, 
vertexModelView );
float attenuation =  1.0 / 
(gl_LightSource[lightIndex].constantAttenuation
 + gl_LightSource[lightIndex].linearAttenuation 
* dist 
 + 
gl_LightSource[lightIndex].quadraticAttenuation * dist * dist);

float z = length(vertexModelView);
vec4 color;
vec3 n = normalize(normalModelView);
vec3 nBack = normalize(-normalModelView);
float intensity = dot(n,lightDir); //NdotL, Lambert
float intensityBack = dot(nBack,lightDir); //NdotL, Lambert
//-Phong Modell
vec3 reflected = normalize(reflect( -lightDir, n));
float specular = pow(max(dot(reflected, eye), 0.0), 
gl_FrontMaterial.shininess);
vec3 reflectedBack = normalize(reflect( -lightDir, nBack));
float specularBack = pow(max(dot(reflectedBack, eye), 0.0), 
gl_BackMaterial.shininess);
//Toon-Shading
//2D Toon 
http://www.cs.rpi.edu/~cutler/classes/advancedgraphics/S12/final_projects/hutchins_kim.pdf

vec4 toonColor = texture2D(toonTex,vec2(intensity,specular));
vec4 toonColorBack = 
texture2D(toonTex,vec2(intensityBack,specularBack));
if(front){  
color += gl_FrontMaterial.ambient * 
gl_LightSource[lightIndex].ambient[lightIndex];
if(intensity > 0.0){
color += gl_FrontMaterial.diffuse * 
gl_LightSource[lightIndex].diffuse * intensity * attenuation ;
color += gl_FrontMaterial.specular * 
gl_LightSource[lightIndex].specular * specular *attenuation ;
}
return color  * toonColor;
} else {//back  
color += gl_BackMaterial.ambient * 
gl_LightSource[lightIndex].ambient[lightIndex];
if(intensity > 0.0){
color += gl_BackMaterial.diffuse * 
gl_LightSource[lightIndex].diffuse * intensityBack * attenuation ;
color += gl_BackMaterial.specular * 
gl_LightSource[lightIndex].specular * specularBack *attenuation ;
}
return color  * toonColorBack;
}   
}

void main(void) {
vec4 color = vec4(0.0);
bool front = true;
//non-uniform-flow error correction
//happens sometimes (mostly on NVIDIA)
//see more here: 
http://www.opengl.org/wiki/GLSL_Sampler#Non-uniform_flow_control
//and here: 
http://gamedev.stackexchange.com/questions/32543/glsl-if-else-statement-unexpected-behaviour
vec4 texColor = texture2D(texture0,gl_TexCoord[0].xy);
if(!gl_FrontFacing)
front = false;
for(int i = 0; i < NUM_LIGHTS; i++){
color += calculateLightFromLightSource(i,front);
}
if(tex) 
gl_FragColor =color * texColor;
else
gl_FragColor = color;
  }



The final info I got from here:
http://gamedev.stackexchange.com/questions/32543/glsl-if-else-statement-unexpected-behaviour
and here:
http://www.opengl.org/wiki/GLSL_Sampler#Non-uniform_flow_control

Thank you!

Cheers,
Fitz

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=59738#59738







Re: [osg-users] Shader composer and position state

2013-09-05 Thread Robert Milharcic

On 4.9.2013 15:39, Robert Osfield wrote:

Hi Robert,

When implementing the osg::ShaderComposer functionality I had in mind the
ability to have a custom StateAttribute/ShaderAttribute::apply() compute
the view-dependent uniform state and apply this to the osg::State.  The
apply() method would get the modelview matrix from the osg::State and apply
this to the local uniform before passing it on to osg::State.


Might this work for you?


No, I don't think so. I'm basically trying to implement a special
uniform, or maybe a custom shader attribute, having global scope but
set locally at the node or stateset. The special uniform would then be
multiplied with the local modelview matrix and applied to all programs.
This is analogous to the position state handling already implemented for the
FFP (LightSource/Light for example).
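What "positioned" means here can be sketched with plain matrix math (self-contained C++ using the OpenGL column-major convention; the helper names are mine, not proposed OSG API): before upload, the world-space vec4 is multiplied by the node's model-view matrix, just as the FFP does for a light's GL_POSITION.

```cpp
#include <array>

// Column-major 4x4 times vec4: the transform a "positioned uniform"
// would receive before being handed to the shader programs.
using Vec4 = std::array<double, 4>;
using Mat4 = std::array<double, 16>; // element (row, col) at m[col * 4 + row]

Vec4 transform(const Mat4& m, const Vec4& v) {
    Vec4 r{0.0, 0.0, 0.0, 0.0};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            r[row] += m[col * 4 + row] * v[col];
    return r;
}

// A pure translation, standing in for a node's model-view matrix.
Mat4 translation(double tx, double ty, double tz) {
    Mat4 m{};                           // all zeros
    m[0] = m[5] = m[10] = m[15] = 1.0;  // identity diagonal
    m[12] = tx; m[13] = ty; m[14] = tz; // translation in the fourth column
    return m;
}
```

A point light at the local origin (w == 1) thus ends up at the node's position in eye space, while a directional light (w == 0) is only rotated, since the translation column is scaled by w.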


Best Regards
Robert Milharcic




[osg-users] Shader composer and position state

2013-09-04 Thread Robert Milharcic

Hi all,

My goal is to replace the shader generator with the shader composer, so I
spent a few days playing with it. It proved to be a very flexible and
powerful tool but unfortunately it lacks support for a key feature I would
consider necessary, and that is support for the position state.
Putting it differently, the shader composer is blind to position state
attributes (Light, ClipPlane, TexGen) stored within LightSource,
ClipNode and TexGenNode. I was able to work around the problem with cull
callbacks or with a custom cull visitor, where I'm able to identify a
positioned uniform, multiply it with the model-view matrix and then force
the shader composer to accept the uniform and the ShaderComponent attached
to the state attribute. Unfortunately, this workaround introduced some
unnecessary code complications and problems with uniform management, and
that is why I strongly feel that the solution should be integrated into the
OSG core.


I believe the implementation of such a feature should be a straightforward
task, and since I might be able to post the core solution @submissions
for review, I am asking you for advice on how the user interface should
look. I have a few ideas for now:


1. an easy-to-implement solution dedicated to LightSource, ClipNode and
TexGenNode. In addition to LightSource::setLight we could introduce two
new methods: LightSource::addPositionedUniform(name, VecX), whose uniforms
will be collected into the PositionalStateContainer and get special
treatment with the model-view matrix before being applied at RenderStage,
and LightSource::addUniform(osg::Uniform*) to set other non-positioned
uniforms that won't be multiplied but applied in the same way before any
other attribute.
2. a general solution by introducing two new types of uniforms:
osg::PositionedUniform and a global? scene? uniform. Both can be set at
any StateSet and will be applied at RenderStage before any other attribute.
3. both solutions: first the dedicated one, and after that the general
solution, which will replace the internal logic of LightSource and others,
leaving the interface as is.

4. ?

Best Regards
Robert Milharcic


Re: [osg-users] Shader composer and position state

2013-09-04 Thread Robert Osfield
Hi Robert,

When implementing the osg::ShaderComposer functionality I had in mind the
ability to have a custom StateAttribute/ShaderAttribute::apply() compute
the view-dependent uniform state and apply this to the osg::State.  The
apply() method would get the modelview matrix from the osg::State and apply
it to the local uniform before passing it on to the osg::State.


Might this work for you?

Robert.




[osg-users] shader composing related link

2013-08-05 Thread Sergey Kurdakov
Hi All,

just to share,

searching for shader composing, I stumbled across

https://github.com/tlorach/nvFX

https://developer.nvidia.com/sites/default/files/akamai/gamedev/docs/nvFX%20A%20New%20Shader-Effect%20Framework.pdf

it is open sourced, so it might be of some interest to look at.

Best regards
Sergey


Re: [osg-users] shader composing related link

2013-08-05 Thread Robert Milharcic

On 5.8.2013 15:23, Sergey Kurdakov wrote:

it is open sourced so maybe it might be of some interest to look at.

Best regards
Sergey
Very interesting indeed.  One more thing is needed though: an OSG 
implementation of the nvFX interfaces (currently implemented for GL, 
D3D, CUDA and OptiX). On the other hand, Wang Rui's effect compositor 
looks cleaner, more OSG-ish and maybe easier to extend... In any case, a 
decent effect framework is needed now more than ever.


Robert Milharcic


[osg-users] Shader reloading

2012-09-24 Thread Christian Rumpf
hello!

I'm trying to simulate noise effects in real time using OSG and GLSL.

I rendered a scene to texture to get a sampler2D. Mu and sigma for the noise 
simulation are static. My idea is to reload the fragment shader in a while 
loop with a random number as a uniform, because GLSL's noise() function 
doesn't seem to work on my computer. Therefore I wrote a function 
randomNumber(int from, int to) which returns a random int between from and to.


Code:

void loadNoiseShader(int width, int height, osg::Vec3d mue, osg::Vec3d sigma,
                     osg::Geode* geode)
{
    osg::ref_ptr<osg::Shader> vertShader =
        osgDB::readShaderFile(osg::Shader::VERTEX, "noise.vert");
    osg::ref_ptr<osg::Shader> fragShader =
        osgDB::readShaderFile(osg::Shader::FRAGMENT, "noise.frag");

    osg::ref_ptr<osg::Program> program = new osg::Program;
    program->addShader(vertShader.get());
    program->addShader(fragShader.get());

    osg::StateSet* ss = geode->getOrCreateStateSet();
    ss->setAttributeAndModes(program.get());

    osg::ref_ptr<osg::Uniform> texUniform = new osg::Uniform("RTScene", 0);
    osg::ref_ptr<osg::Uniform> windowSize = new osg::Uniform("screen",
        osg::Vec2(width, height));
    osg::ref_ptr<osg::Uniform> mueUniform = new osg::Uniform("mue", mue);
    osg::ref_ptr<osg::Uniform> sigmaUniform = new osg::Uniform("sigma",
        sigma);
    osg::ref_ptr<osg::Uniform> randomUniform = new osg::Uniform("random",
        randomNumber(-255, 255));
    ss->addUniform(texUniform.get());
    ss->addUniform(windowSize.get());
    ss->addUniform(sigmaUniform.get());
    ss->addUniform(mueUniform.get());
    ss->addUniform(randomUniform.get());
}

/*...*/

while(!viewer.done())
{
    loadNoiseShader(width, height, mu, sigma, geode.get());
    viewer.frame();
}




I thought that this would work, but unfortunately the executable crashes at 
startup. Can someone explain that to me, please?

lg Chris

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=50258#50258







Re: [osg-users] Shader reloading

2012-09-24 Thread Sergey Polischuk
Hi, Christian

Can't help you with the crash, but I think you'll find this useful:
1) The GLSL noise functions are not implemented in the hardware/drivers of 
most vendors, so don't even bother using them.
2) Why recreate the program and all that stuff every frame, when all you need 
is to change the randomUniform value, e.g. randomUniform->set(randomNumber(-255, 
255)), in the frame loop?
3) I think you have just chosen the wrong way to do this. The most common 
approach is to use one or several pre-generated noise textures and mix and 
match them in the shader with various texture coordinate sets (if you need 
some kind of animation, change the texcoords as a function of time or some 
uniform value).

Cheers,
Sergey.



Re: [osg-users] Shader lookup table method: does lookup texture have to be bound to a separate geometry?

2012-02-21 Thread Ethan Fahy
Jean-Sebastien and Sergey,

Thanks very much to you both.  Sergey, you were spot-on about the unsigned 
int vs int thing causing my OpenGL error, and Jean-Sebastien, you were 
correct about the texture clamping default.

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=45635#45635







Re: [osg-users] Shader lookup table method: does lookup texture have to be bound to a separate geometry?

2012-02-21 Thread Ethan Fahy
By the way, I had originally made TEXTURE_UNIT a const unsigned int 
because of the 10th post in this thread:
http://forum.openscenegraph.org/viewtopic.php?t=8273
where it was suggested by Jean-Sebastien.  Not sure if there is some 
disconnect that I am still not getting, but changing it to a regular int 
seems to have solved my problems, so I will just move forward using a 
regular int.

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=45637#45637







Re: [osg-users] Shader lookup table method: does lookup texture have to be bound to a separate geometry?

2012-02-21 Thread Sergey Polischuk
From the OpenGL docs:

glUniform1i and glUniform1iv are the only two functions
that may be used to load uniform variables defined as sampler
types. Loading samplers with any other function will result in a
GL_INVALID_OPERATION error.

And OSG uses these calls only for int uniforms or sampler uniforms; for 
unsigned ones it calls the glUniformui* functions.

Cheers.



Re: [osg-users] Shader lookup table method: does lookup texture have to be bound to a separate geometry?

2012-02-21 Thread Ethan Fahy
Thanks Sergey, that clears it up for me.

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=45643#45643







Re: [osg-users] Shader lookup table method: does lookup texture have to be bound to a separate geometry?

2012-02-21 Thread Jean-Sébastien Guay

Hi Ethan,


Thanks Sergey, that clears it up for me.


Yes, Sergey has it right, that post was written from memory and 
obviously I got it backwards. I always cast the variable to int, or 
explicitly create a uniform of type SAMPLER_2D which avoids this issue 
altogether but is more lines of code.


Sorry for the confusion,

J-S
--
__
Jean-Sebastien Guay  jean_...@videotron.ca
http://whitestar02.dyndns-web.com/


Re: [osg-users] Shader lookup table method: does lookup texture have to be bound to a separate geometry?

2012-02-21 Thread Ethan Fahy
Gotcha, thanks.  No need for apologies, I never would have got as far as I did 
to begin with if it weren't for that post.  Thanks again everyone for the 
assistance on this one, I have the LUT method working perfectly now with 
floating point values.



--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=45663#45663







Re: [osg-users] Shader lookup table method: does lookup texture have to be bound to a separate geometry?

2012-02-20 Thread Sergey Polischuk
Hi there,

I didn't read all the messages here, but if the problem is still there: your 
sampler uniform is initialized wrongly. You should use 
state->addUniform(new osg::Uniform("lut", int(TEXTURE_UNIT)));
(notice the int cast there), or you should create the uniform with an 
explicit type specification of osg::Uniform::INT, or declare TEXTURE_UNIT 
as int in the first place.

In your case, OSG detects the uniform type from the constructor argument, 
which is unsigned, while OpenGL uses INT uniforms for samplers; so OSG 
tries to set what OpenGL considers an INT uniform using the call that works 
only with unsigned uniforms, and there goes your error.

Cheers,
Sergey.


Re: [osg-users] Shader lookup table method: does lookup texture have to be bound to a separate geometry?

2012-02-17 Thread Ethan Fahy
Thanks for the reply Alex.  My setup is exactly as you describe, but I'm 
still having trouble.  I've taken the essential bits of my code to create a 
sample main() to illustrate exactly what I'm doing:


Code:

int main(int argc, char* argv[])
{
    //INITIAL SETUP
    //create root
    osg::ref_ptr<osg::Group> root = new osg::Group();
    //load osg-dem generated ive terrain complex
    osg::ref_ptr<osg::Node> terrainNode = osgDB::readNodeFile("terrain.ive");
    //attach terrain node to root
    root->addChild(terrainNode);
    //create stateSet from terrainNode
    osg::StateSet *state = terrainNode->getOrCreateStateSet();
    //create program
    osg::Program *program = new osg::Program;
    //create shaders
    osg::Shader *vertObj = new osg::Shader( osg::Shader::VERTEX );
    osg::Shader *fragObj = new osg::Shader( osg::Shader::FRAGMENT );
    //add shaders to program
    program->addShader( fragObj );
    program->addShader( vertObj );
    //load shader src files into shaders
    vertObj->loadShaderSourceFromFile( "shader.vert" );
    fragObj->loadShaderSourceFromFile( "shader.frag" );

    //CREATE LOOKUP TABLE AND IMAGE
    //allocate memory for 4 by 4 lookup table
    int height=4;
    int width=4;
    const int size = width*height*4;//*4 for rgba channels
    unsigned char* data = (unsigned char*)calloc(size, sizeof(unsigned char));
    //store arbitrary value of unsigned char 101 in each rgba channel in a
    //flattened 1D data array
    int dataIndex;
    for( int i=0 ; i < height ; i++ ){
        for( int j=0 ; j < width ; j++ ){
            dataIndex = i*width*4 + j*4;
            data[dataIndex] = 101;//red
            data[dataIndex+1] = 101;//green
            data[dataIndex+2] = 101;//blue
            data[dataIndex+3] = 101;//alpha
        }
    }
    //create image
    osg::ref_ptr<osg::Image> image = new osg::Image;
    image->setOrigin(osg::Image::BOTTOM_LEFT);
    image->setImage(width, height, 1, GL_RGBA, GL_RGBA, GL_UNSIGNED_BYTE,
        (unsigned char*)data, osg::Image::NO_DELETE);

    //create texture2D and add image to it
    osg::ref_ptr<osg::Texture2D> lutTexture = new osg::Texture2D;
    lutTexture->setTextureSize(width, height);//unsure if this is needed or
        //if it's inherited from the setImage function below
    lutTexture->setInternalFormat(GL_RGBA);//unsure if this is needed or if
        //it's inherited from the setImage function below
    lutTexture->setFilter(osg::Texture::MIN_FILTER, osg::Texture::NEAREST);
    lutTexture->setFilter(osg::Texture::MAG_FILTER, osg::Texture::NEAREST);
    lutTexture->setImage(image);

    //assign texture to hardcoded texture unit 1 so that it can be accessed
    //in the shader
    const unsigned int TEXTURE_UNIT = 1;
    state->setTextureAttributeAndModes(TEXTURE_UNIT, lutTexture,
        osg::StateAttribute::ON);
    state->addUniform(new osg::Uniform("lut", TEXTURE_UNIT));

    //attach the program to the stateSet
    state->setAttributeAndModes(program, osg::StateAttribute::ON);

    //open an osgviewer to see the root
    osgViewer::Viewer viewer;
    viewer.setSceneData(root);
    viewer.setLightingMode(osg::View::NO_LIGHT);
    viewer.setCameraManipulator(new osgGA::TrackballManipulator);
    viewer.home();
    return viewer.run();
}




and here are the vert and frag shaders:
shader.vert

Code:

void main(void)
{
gl_TexCoord[0] = gl_MultiTexCoord0;   
gl_TexCoord[1] = gl_MultiTexCoord1; //unsure if this is needed?
gl_FrontColor = gl_Color; 
gl_Position = ftransform(); 
}



shader.frag

Code:

uniform sampler2D lut;
uniform sampler2D baseTexture;
void main(void)
{
    //get color of the terrain's texture
    vec4 color = texture2D(baseTexture, gl_TexCoord[0].st);
    //find index values from data in the rgb channels of the terrain's
    //texture (left out the complicated equations since they aren't
    //relevant, but I've tested that their values are between 0.0-1.0)
    float index1 = /* some indexing logic here */;
    float index2 = /* some more indexing logic here */;
    //look up new color values from the lookup texture based on the indices
    vec2 lutCoord = vec2(index1, index2);
    gl_FragColor = texture2D(lut, lutCoord);
}




So, if I create an image that has all rgba values set to unsigned char 101, 
I would expect that when I look up values in that texture they should always 
be equal to rgba=(101,101,101,101), as long as my lutCoords are both between 
0.0-1.0.  However, when I actually run this code, the terrain texture is not 
all the same color and I get this error on the command line:
Warning: detected OpenGL error 'invalid operation' at After Renderer::compile
By adding/removing things 

Re: [osg-users] Shader lookup table method: does lookup texture have to be bound to a separate geometry?

2012-02-17 Thread Alex Pecoraro
I think the internal format of your osg::Image should be set to GL_RGBA8, not 
GL_RGBA, which is a pixel format, not an internal format. I find the OpenGL 2.1 
documentation very confusing on the use of internal format and pixel format, 
but I think the 3.3 and 4.2 documentation is much easier to understand, so 
check it out to verify what I'm saying.

http://www.opengl.org/sdk/docs/

Also, I think that instead of setting the internal format, your osg::Texture2D 
lut should set its internal format mode to osg::Texture::USE_IMAGE_DATA_FORMAT 
(actually that is the default, but when you call setInternalFormat it changes 
the mode to osg::Texture::USE_USER_DEFINED_FORMAT). So I think just removing 
the call to setInternalFormat on the texture will suffice.

Change this:

Code:

image->setImage(width, height, 1, GL_RGBA, GL_RGBA, GL_UNSIGNED_BYTE,
    (unsigned char*)data, osg::Image::NO_DELETE);




to this:

Code:

image->setImage(width, height, 1, GL_RGBA8, GL_RGBA, GL_UNSIGNED_BYTE,
    (unsigned char*)data, osg::Image::NO_DELETE);




and this:

Code:

lutTexture->setInternalFormat(GL_RGBA);




to this:

Code:

lutTexture->setInternalFormatMode(osg::Texture::USE_IMAGE_DATA_FORMAT);




Alex

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=45542#45542







Re: [osg-users] Shader lookup table method: does lookup texture have to be bound to a separate geometry?

2012-02-17 Thread Ethan Fahy
Thanks for the reply Alex.  I tried those changes (I think I've tried just 
about every pixel/internal format combination I could think of) and the 
behavior of my program hasn't changed, unfortunately.  




--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=45543#45543







Re: [osg-users] Shader lookup table method: does lookup texture have to be bound to a separate geometry?

2012-02-17 Thread Alex Pecoraro
I honestly don't know why it doesn't work. Your code looks correct to me. You 
might try removing the shader and setting your lut to use texture unit 0 on 
your terrain node. Then you can at least verify that the texture is created 
correctly by viewing it on the terrain using the fixed-function pipeline.

Alex

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=45547#45547







Re: [osg-users] Shader lookup table method: does lookup texture have to be bound to a separate geometry?

2012-02-17 Thread Ethan Fahy
I tried setting the texture that I created to texture unit 0 and found that 
the resultant object in osgviewer was unchanged from its original texture.  
I'm not sure if this means that I have created my texture incorrectly or if 
the texture attachment process is not correct.




--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=45548#45548







Re: [osg-users] Shader lookup table method: does lookup texture have to be bound to a separate geometry?

2012-02-17 Thread Jean-Sébastien Guay

Hello Ethan,


Is a border being added to my texture somehow?  If so then this could explain 
why my shader is pulling non-uniform values from my uniform lookup table 
texture...


Set the wrap mode to CLAMP_TO_EDGE instead of CLAMP_TO_BORDER (which is 
the default, and the default border color is black).


It always seemed counterintuitive to me that OpenGL's default wrap mode 
is clamp to border, as this is a relic of times past and no one sets an 
appropriate border color anyways... Most of the time, you want either 
clamp to edge or repeat mode...


Hope this helps,

J-S
--
__
Jean-Sebastien Guay  jean_...@videotron.ca
http://whitestar02.dyndns-web.com/


[osg-users] Shader lookup table method: does lookup texture have to be bound to a separate geometry?

2012-02-16 Thread Ethan Fahy
Hi all,

I'm having some difficulty setting up a GLSL lookup table shader with my OSG 
program and am hoping for some help.  I've looked at every OpenGL, GLSL, OSG 
and other forum posting I could find online and think I am pretty close to 
getting things working.  I could put a ton of code here, but I think I'll 
just ask one question and provide more code if need be:

Do I have to attach my lookup table (stored in an osg::Texture2D) to a simple 
rectangular geometry in order to be able to perform a GLSL texture2D lookup 
on it?  Currently I am putting my lookup texture in texture coordinate 1 of 
the object I am trying to apply my shader to.

The reason I ask is that whenever I try to pull a value out of my lookup 
texture using a vec2 with values from 0.0-1.0 in my shader and perform any 
sort of operation using that value, the OSG command line spits out:

Warning: detected OpenGL error 'invalid operation' at After Renderer::compile

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=45515#45515







Re: [osg-users] Shader lookup table method: does lookup texture have to be bound to a separate geometry?

2012-02-16 Thread Alex Pecoraro
Hi Ethan,

All you should need to do is attach your osg::Texture to the osg::StateSet 
that you are attaching your osg::Program to, and then also add an 
osg::Uniform to the StateSet for the texture sampler that your shader 
program uses to access the texture:


Code:
osg::ref_ptr<osg::Geometry> spGeometry = new osg::Geometry();
osg::ref_ptr<osg::Texture2D> spTexture = new osg::Texture2D();

osg::StateSet* pStateSet = spGeometry->getOrCreateStateSet();
pStateSet->setTextureAttributeAndModes(0, spTexture.get(), 
osg::StateAttribute::ON);

// bind the sampler to texture unit zero
osg::Uniform* pSampler = new osg::Uniform("TextureSampler", 0);
pStateSet->addUniform(pSampler);
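
On the GLSL side, the sampler uniform bound above is all that is needed to read the lookup table; no extra geometry is required. A minimal fragment-shader sketch (the uniform name matches the C++ snippet above; which coordinate you use to address the table is up to you):

```glsl
uniform sampler2D TextureSampler;

void main()
{
    // Any vec2 in [0,1]x[0,1] addresses the lookup table; here we use
    // the interpolated texture coordinate from unit 0.
    vec2 lookupCoord = gl_TexCoord[0].st;
    vec4 value = texture2D(TextureSampler, lookupCoord);
    gl_FragColor = value;
}
```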



Thank you!

Cheers,
Alex

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=45521#45521







Re: [osg-users] Shader that can calculate pixel depth in meters

2012-01-04 Thread Ethan Fahy
Thanks GMan for the reply and the source code, much appreciated.  I was able to 
tweak your shaders for my needs and I think I am off to a good start.  However 
I'm still not sure exactly what coordinate system I'm dealing with.  In my 
case, I have an osg terrain loaded in geocentric coordinates with various 
trees and other objects placed on the terrain.  I will then set my camera 
position.  When I visualize the scene, I have to apply a shader that is able to 
tell me, in meters, the distance between the camera and whatever object is 
being viewed in each pixel.  For example, for a given camera angle, I may have 
a leaf of a tree in the upper left pixel of the frame.  I need to be able to 
get the distance in meters from the camera to that leaf so that I can apply 
some scientific equations and recolor that pixel.  This is important because I 
have a modified osg viewer that reads in the red channel of the pixels from the 
framebuffer and does further calculations based on the value
  of that red channel.  Most GLSL examples are geared at creating visual 
effects, whereas I am encoding real scientific data in the color bands and then 
decoding them at various steps so I can't just scale values according to what 
looks best.  Any advice on how I might get the distance in meters (assuming 
that every osg object in my scene has been loaded in already in meters)?

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=44591#44591







Re: [osg-users] Shader that can calculate pixel depth in meters

2012-01-04 Thread Paul Martz
Your issue is a relatively straightforward linear algebra problem. If you didn't 
pick up this skill set in grade school or a university-level course, it's 
unlikely someone will teach you linear algebra on this mailing list. (Not trying to 
sound like a know-it-all; it's simply a fact that software developers attempting 
to code 3D graphics should have a strong background in linear algebra.)


The OpenGL red book contains a good overview of OpenGL's coordinate spaces, plus 
there are many good linear algebra books that focus on 3D graphics. You could 
also hire a consultant to code the solution for you.


Does the distance computation have to run on the GPU? For example, you could 
read back the depth buffer and then do the distance computation on the CPU. CPU 
code to do this already exists; see SceneView::projectWindowIntoObject(), in 
SceneView.cpp at line 1481 as one example. (Note this back-transforms to object 
space, not world space, so modify it if necessary to suit your situation.)


If the code has to run on the GPU, certainly you can use the SceneView function 
as inspiration.

   -Paul


On 1/4/2012 12:41 PM, Ethan Fahy wrote:

Thanks GMan for the reply and the source code, much appreciated.  I was able to 
tweak your shaders for my needs and I think I am off to a good start.  However 
I'm still not sure exactly what coordinate system I'm dealing with.  In my 
case, I have an osg terrain loaded in in geocentric coordinates with various 
trees and other objects placed on the terrain.  I will then set my camera 
position.  When I visualize he scene, I have to apply a shader that is able to 
tell me, in meters, the distance between the camera and whatever object is 
being viewed in each pixel.  For example, for a given camera angle, I may have 
a leaf of a tree in the upper left pixel of the frame.  I need to be able to 
get the distance in meters from the camera to that leaf so that I can apply 
some scientific equations and recolor that pixel.  This is important because I 
have a modified osg viewer that reads in the red channel of the pixels from the 
framebuffer and does further calculations based on the value of that red 
channel.  Most GLSL examples are geared at creating visual 
effects, whereas I am encoding real scientific data in the color bands and then 
decoding them at various steps so I can't just scale values according to what 
looks best.  Any advice on how I might get the distance in meters (assuming 
that every osg object in my scene has been loaded in already in meters)?






Re: [osg-users] Shader that can calculate pixel depth in meters

2012-01-03 Thread Michael Guerrero
Hi Ethan, I did this a little while back using information from that page as 
well.  I'm guessing "real depth" here just means that it's in whatever units 
you've modeled your world in.  For instance, if the near plane is at 1.0, what 
are the units of 1.0?
My purpose was to make sure that clouds fade out before they hit the far plane 
so that it isn't obvious that my clouds are just a flat plane.
 
Here are my shaders:

.vert

Code:
varying float eyeDistance;

//This vertex shader is meant to perform the per vertex operations of per pixel 
lighting
//using a single directional light source.
void main()
{
   //Pass the texture coordinate on through.
   gl_TexCoord[0] = gl_MultiTexCoord0;   
   gl_FrontColor = gl_Color;

   eyeDistance = -(gl_ModelViewMatrix * gl_Vertex).z;

   //Compute the final vertex position in clip space.
   gl_Position = ftransform(); 
}


.frag

Code:
uniform sampler2D baseTexture;

varying float eyeDistance;

void main(void)
{  
   vec4 alphaColor = texture2D(baseTexture, gl_TexCoord[0].st);
   
   vec4 color = gl_Fog.color;
   color.a = gl_Color.a * alphaColor.a;

   float A = gl_ProjectionMatrix[2].z;
   float B = gl_ProjectionMatrix[3].z;  
   float zFar  =   B / (1.0 + A);
   float zNear = - B / (1.0 - A);

   A  = -(zFar + zNear) / (zFar - zNear);
   B  = -2.0 * zFar * zNear / (zFar - zNear);
   
// scale eyeDistance to a value in [0, 1]  
   float normDepth = (eyeDistance - zNear) / (zFar - zNear);  
   
   // Start fading out at 70% of the way there
   normDepth = max(normDepth - 0.7, 0.0) / 0.3;
   normDepth = clamp(normDepth, 0.0, 1.0);   

   gl_FragColor = vec4(color.rgb, (1.0 - normDepth) * color.a);//color;
}




--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=44565#44565







[osg-users] Shader that can calculate pixel depth in meters

2011-12-30 Thread Ethan Fahy
I need to write a GLSL shader that will be able to calculate the distance 
between the camera and the object being rendered by each pixel.  The catch is 
that I need those distances to be linear and they need to be in meters, 
reflecting the real geometry of my scene.  There are a number of guides out 
there describing the GLSL gl_FragCoord.z as giving the pixel depth in some 
non-linear coordinate eye-space, but as I am not a graphics programmer by trade 
I'm finding these guides very confusing.  This one looks like my best bet:
http://olivers.posterous.com/linear-depth-in-glsl-for-real
but I'm still not sure what the author means by "real depth".  Any advice on 
this problem?  Thanks!

-Ethan

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=44503#44503







[osg-users] Shader question

2011-12-22 Thread Werner Modenbach
Hi all,

I'm using LightSpacePerspectiveShadowMapCB in my project and I am quite 
satisfied by the results.

Now I'd like to extend the features of the shaders by bump mapping and 
hopefully displacement mapping.
So I have to add some additional code to the shaders coming from 
SoftShadowMap.
Has someone done this already? Maybe they can share the code. Otherwise I have 
to invest some time to make it run. But I'm not very familiar with all the 
coordinate spaces used in the shaders.

Thanks in advance for any help.

- Werner -


[osg-users] Shader and attributes binding

2011-11-12 Thread Aurelien Albert
Hi,

I'm writing some shaders to use with OSG and some use extra vertex attribute 
(such as tangent for a bump shader)

I think the OpenGL way of vertex attribute binding is :

- compile and link the shader program
- get the attribute location by calling glGetAttribLocationARB(program, 
attributeName)
- bind the data buffer to this location

So, this way, you always use a free attribute location

In OSG, it seems that I have to do :

- compile and link the shader program
- bind the data buffer to a chosen location
- set the attribute location by calling 
program->addBindAttribLocation(attributeName, location)


What is the correct way to do this ?
How can I be sure to not overwrite an attribute location in OSG (for example, 
attribute locations 0, 2, 3, and 8 are often reserved for vertex, normals, 
colors, and texture coordinates by video drivers) ?

Thank you!

Cheers,
Aurelien
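
For illustration, a hedged sketch of the OSG-side binding (the attribute name "tangent" and location 6 are arbitrary example choices, not anything fixed by OSG):

```cpp
#include <osg/Geometry>
#include <osg/Program>

// Bind a custom per-vertex attribute for use in a shader.
// Locations 0 (vertex), 2 (normals), 3 (colors) and 8+ (texture
// coordinates) are commonly claimed by drivers for the fixed-function
// aliases, so a "spare" slot such as 6 or 7 is a typical choice.
void bindTangents(osg::Geometry* geom, osg::Program* program,
                  osg::Vec3Array* tangents)
{
    const unsigned int location = 6;                      // assumed free
    program->addBindAttribLocation("tangent", location);  // before linking
    geom->setVertexAttribArray(location, tangents);
    geom->setVertexAttribBinding(location, osg::Geometry::BIND_PER_VERTEX);
}
```

The binding must be set before the program is linked, which OSG does lazily on first use, so calling addBindAttribLocation() at setup time is sufficient.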

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=43842#43842







Re: [osg-users] Shader Composition

2011-09-14 Thread Robert Osfield
Hi Bo,

ShaderComposer isn't complete, and should be viewed as experimental,
but it's able to do a few composition tasks.  The osgshadercomposition
example is a test bed for this feature, unfortunately I've been
swamped with other work so haven't been able to complete the work on
ShaderComposition functionality.

Robert.

On Tue, Sep 13, 2011 at 8:47 PM, Bo Jiang jb4...@gmail.com wrote:
 Hi All,

 I have a node, and I want to shade it using different effect, i.e. I have 
 several groups of shaders (vertex+geometry+fragment), and each group 
 implements one effect.

 Now I want to combine all the effects together when display the node. I do 
 not know how to achieve it. I searched the forum and it seems that the 
 osg::ShaderComposer class is not finished yet? Or shall I need to use the 
 two-passes effect like class in osgFX?

 Thank you!

 Cheers,
 Bo

 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=42723#42723





 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org



[osg-users] Shader Composition

2011-09-13 Thread Bo Jiang
Hi All,

I have a node, and I want to shade it using different effect, i.e. I have 
several groups of shaders (vertex+geometry+fragment), and each group implements 
one effect.

Now I want to combine all the effects together when displaying the node. I do not 
know how to achieve it. I searched the forum and it seems that the 
osg::ShaderComposer class is not finished yet? Or do I need to use a 
two-pass effect class like those in osgFX?

Thank you!

Cheers,
Bo

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=42723#42723







Re: [osg-users] Shader Update Latency?!

2011-01-12 Thread Robert Osfield
Hi Guo,

On Wed, Jan 12, 2011 at 7:04 AM, Guo Chow guo.c...@gmail.com wrote:
 I encounter a similar latency problem when I try to update a uniform using
 camera's view matrix in the uniform's callback. Since this uniform is needed
 to compute only once per frame, I decide to compute it on CPU before it's
 submitted to GPU.

 It seems that when the uniform's callback is called, the camera has not been
 updated yet, right?

 Currently I solve this problem by updating the uniform in a PreDrawCallback of
 the camera. But is this a graceful way to achieve it?

By default the osgViewer::Viewer/CompositeViewer runs the update
traversal before the camera matrices are set; this is done because camera
manipulators might be tracking the movement of nodes in the scene which
are updated in the update traversal, so the camera update has to run second.

One thing you could do is set the camera view matrix before the
updateTraversal() method is called, or do the update of your Uniforms
explicitly after the updateTraversal().

Robert.
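
As a sketch of the second suggestion, one can run the frame loop by hand and refresh camera-dependent uniforms between the update and rendering traversals (viewer and viewUniform are assumed to already exist; this is one possible structure, not the only one):

```cpp
// Manual frame loop: update camera-dependent uniforms after the
// update traversal but before rendering, so they see this frame's
// view matrix rather than last frame's.
while (!viewer.done())
{
    viewer.advance();
    viewer.eventTraversal();
    viewer.updateTraversal();

    // The camera manipulator has now set this frame's matrices.
    osg::Matrixd view = viewer.getCamera()->getViewMatrix();
    viewUniform->set(osg::Matrixf(view));   // viewUniform: osg::Uniform*

    viewer.renderingTraversals();
}
```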


Re: [osg-users] Shader Update Latency?!

2011-01-11 Thread Guo Chow
Robert Osfield robert.osfi...@... writes:

 
 Hi Thorsten,
 
 By default the OSG computes the near/far planes on the fly during the
 cull traversal on every single frame.  You can disable this.
 Alternatively you could just use the gl_ProjectionMatrix directly on
 the GPU to get the near/far planes - this is how I'd do it, far more
 flexible and never needs any additional uniforms or callbacks.
 
 Robert.
 
 On Wed, Dec 1, 2010 at 6:15 PM, Thorsten Roth
 thorsten.r...@... wrote:
  Hi,
 
  I currently have a problem with a shader update callback I do not
  understand. I have a vertex and fragment shader which calculate linear 
depth
  in [0,1] for me, also respecting dynamic clipping planes. To achieve this, 
I
  pass zNear and zFar as uniform parameters to the shader. To have them
  updated, I have the following callback methods (zFar is looking
  accordingly):
 
  class UpdateShaderZNear: public osg::Uniform::Callback {
  public:
         virtual void operator()(osg::Uniform* uniform, osg::NodeVisitor* nv)
  {
                 double x, zNear;
         viewer->getCamera()->getProjectionMatrixAsPerspective(x,x,zNear,x);
                 uniform->set((float)zNear);
         }
  };
 
  Now when I move my camera towards and away from the object, it seems like
  the shader update is one frame (or so) too late, as I get values that do
  not correspond to the [0,1]-normalization and the problem disappears as 
soon
  as the camera stops.
 
  Is there any reason for that/does anyone have an idea what I'm doing wrong?
  If more information or code is necessary, just tell me 
 
  -Thorsten
  ___
  osg-users mailing list
  osg-us...@...
  http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
 
 

Hi Robert,

I encounter a similar latency problem when I try to update a uniform using 
camera's view matrix in the uniform's callback. Since this uniform is needed 
to compute only once per frame, I decide to compute it on CPU before it's 
submitted to GPU. 

It seems that when the uniform's callback is called, the camera has not been 
updated yet, right?

Currently I solve this problem by updating the uniform in a PreDrawCallback of 
the camera. But is this a graceful way to achieve it?

Thanks in advance.

Guo
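
For reference, the pre-draw approach described above would look roughly like the following sketch (the uniform being updated is a hypothetical view-matrix uniform):

```cpp
#include <osg/Camera>
#include <osg/Uniform>

// Update a view-matrix uniform just before the camera draws, so it is
// guaranteed to see the matrices actually used for this frame.
class ViewMatrixUpdater : public osg::Camera::DrawCallback
{
public:
    ViewMatrixUpdater(osg::Uniform* uniform) : _uniform(uniform) {}

    virtual void operator()(osg::RenderInfo& renderInfo) const
    {
        const osg::Camera* camera = renderInfo.getCurrentCamera();
        _uniform->set(osg::Matrixf(camera->getViewMatrix()));
    }

private:
    osg::ref_ptr<osg::Uniform> _uniform;
};

// Installed with: camera->setPreDrawCallback(new ViewMatrixUpdater(uniform));
```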



[osg-users] Shader problem on nvidia card

2010-12-22 Thread Aitor Ardanza
Hi,

I have a problem when I apply a shader to an object in OSG. I tried it on two 
machines. One with an Intel G41 graphics card, which does not give me any 
problem, and the other has an NVIDIA GTX 480, which gives me the problem. When 
OSG tries to compile the shader, it reports the following errors:

FRAGMENT glCompileShader  FAILED
VERTEX glCompileShader  FAILED
glLinkProgram  FAILED
Error: In Texture::Extensions::setupGLExtensions(..) OpenGL version test 
failed, requires valid graphics context.

Vertex program:

Code:

uniform mat4 boneMatrices[2];

attribute vec4 weights;
attribute vec4 matrixIndices;
attribute float numBones;

varying vec2 Texcoord;
varying vec3 ViewDirection;
varying vec3 LightDirection;
varying vec3 Normal;

void main( void )
{
vec4 normal  = vec4( gl_Normal.xyz, 0.0 );
vec4 tempPosition= vec4( 0.0, 0.0, 0.0, 0.0 );
vec4 tempNormal  = vec4( 0.0, 0.0, 0.0, 0.0 );
for(int i = 0; i < int(numBones); i++)
{
// Apply influence of bone i
tempPosition += vec4((boneMatrices[int(matrixIndices[i])] * 
gl_Vertex).xyz,1.0) * weights[i];

// Transform normal by bone i
tempNormal += (boneMatrices[int(matrixIndices[i])] * normal) * 
weights[i];
}

gl_Position = gl_ModelViewProjectionMatrix * tempPosition;
Texcoord= gl_MultiTexCoord0.xy;

vec4 fvObjectPosition =  gl_ModelViewMatrix * gl_Vertex;
   
ViewDirection  = normalize(-fvObjectPosition.xyz);
LightDirection = normalize(gl_LightSource[0].position.xyz - 
fvObjectPosition.xyz);
Normal = normalize(gl_NormalMatrix * tempNormal.xyz);
}



Fragment program:

Code:

uniform sampler2D baseMap;

varying vec3 ViewDirection;
varying vec3 LightDirection;
varying vec3 Normal;
varying vec2 Texcoord;

void main( void )
{
float NdotL = max(dot(Normal, LightDirection), 0.0);
vec4 diffuse = NdotL * gl_FrontMaterial.diffuse * gl_LightSource[0].diffuse;

vec4 ambient = gl_FrontMaterial.ambient * gl_LightSource[0].ambient;
vec4 globalAmbient = gl_LightModel.ambient * gl_FrontMaterial.ambient;

vec3  fvReflection = normalize( ( ( 2.0 * Normal ) * NdotL ) - 
LightDirection ); 
float fRDotV   = max( 0.0, dot( fvReflection, ViewDirection ) );
vec4 specular = gl_FrontMaterial.specular * gl_LightSource[0].specular * 
pow(fRDotV,200.0);

vec4 color =  diffuse + globalAmbient + ambient + specular;

gl_FragColor = texture2D( baseMap, Texcoord ) * color;
}



I fail to understand what may be the problem... any idea?

Thank you!

Cheers,
Aitor

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=35085#35085







Re: [osg-users] Shader problem on nvidia card

2010-12-22 Thread Robert Osfield
Hi Aitor,

The warning you are getting, "OpenGL version test failed, requires
valid graphics context.", suggests that you are trying to do rendering
from a thread that doesn't have a graphics context current.

I know nothing about how you are setting up your graphics context or
how you manage your frame loop, so I can offer little advice beyond
suggesting that you try the same scene graph with a standard OSG
example that is known to set up graphics contexts correctly and then
see how you can fix your application to match this.

Robert.

On Wed, Dec 22, 2010 at 2:56 PM, Aitor Ardanza aitoralt...@terra.es wrote:
 Hi,

 I have a problem when I apply a shader to an object in osg. I tried it on two 
 machines. One with an intel G41 graphics card, which does not give me any 
 problem, and the other is a NVidia GTX 480, which gives me the problem. When 
 osg try to compile the shader, skip the following error:

 FRAGMENT glCompileShader  FAILED
 VERTEX glCompileShader  FAILED
 glLinkProgram  FAILED
 Error: In Texture::Extensions::setupGLExtensions(..) OpenGL version test 
 failed, requires valid graphics context.

 Vertex program:

 Code:

 uniform mat4 boneMatrices[2];

 attribute vec4 weights;
 attribute vec4 matrixIndices;
 attribute float numBones;

 varying vec2 Texcoord;
 varying vec3 ViewDirection;
 varying vec3 LightDirection;
 varying vec3 Normal;

 void main( void )
 {
    vec4 normal      = vec4( gl_Normal.xyz, 0.0 );
    vec4 tempPosition    = vec4( 0.0, 0.0, 0.0, 0.0 );
    vec4 tempNormal  = vec4( 0.0, 0.0, 0.0, 0.0 );
    for(int i = 0; i < int(numBones); i++)
    {
        // Apply influence of bone i
        tempPosition += vec4((boneMatrices[int(matrixIndices[i])] * 
 gl_Vertex).xyz,1.0) * weights[i];

        // Transform normal by bone i
        tempNormal += (boneMatrices[int(matrixIndices[i])] * normal) * 
 weights[i];
    }

    gl_Position = gl_ModelViewProjectionMatrix * tempPosition;
    Texcoord    = gl_MultiTexCoord0.xy;

    vec4 fvObjectPosition =  gl_ModelViewMatrix * gl_Vertex;

    ViewDirection  = normalize(-fvObjectPosition.xyz);
    LightDirection = normalize(gl_LightSource[0].position.xyz - 
 fvObjectPosition.xyz);
    Normal         = normalize(gl_NormalMatrix * tempNormal.xyz);
 }



 Fragment program:

 Code:

 uniform sampler2D baseMap;

 varying vec3 ViewDirection;
 varying vec3 LightDirection;
 varying vec3 Normal;
 varying vec2 Texcoord;

 void main( void )
 {
    float NdotL = max(dot(Normal, LightDirection), 0.0);
    vec4 diffuse = NdotL * gl_FrontMaterial.diffuse * 
 gl_LightSource[0].diffuse;

    vec4 ambient = gl_FrontMaterial.ambient * gl_LightSource[0].ambient;
    vec4 globalAmbient = gl_LightModel.ambient * gl_FrontMaterial.ambient;

    vec3  fvReflection     = normalize( ( ( 2.0 * Normal ) * NdotL ) - 
 LightDirection );
    float fRDotV           = max( 0.0, dot( fvReflection, ViewDirection ) );
    vec4 specular = gl_FrontMaterial.specular * gl_LightSource[0].specular * 
 pow(fRDotV,200.0);

    vec4 color =  diffuse + globalAmbient + ambient + specular;

    gl_FragColor = texture2D( baseMap, Texcoord ) * color;
 }



 I fail to understand what may be the problem... any idea?

 Thank you!

 Cheers,
 Aitor

 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=35085#35085





 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org



[osg-users] Shader Update Latency?!

2010-12-01 Thread Thorsten Roth

Hi,

I currently have a problem with a shader update callback I do not 
understand. I have a vertex and fragment shader which calculate linear 
depth in [0,1] for me, also respecting dynamic clipping planes. To 
achieve this, I pass zNear and zFar as uniform parameters to the shader. 
To have them updated, I have the following callback methods (zFar is 
looking accordingly):


class UpdateShaderZNear: public osg::Uniform::Callback {
public:
virtual void operator()(osg::Uniform* uniform, osg::NodeVisitor* nv) {
double x, zNear;
viewer->getCamera()->getProjectionMatrixAsPerspective(x,x,zNear,x);
uniform->set((float)zNear);
}
};

Now when I move my camera towards and away from the object, it seems 
like the shader update is one frame (or so) too late, as I get values 
that do not correspond to the [0,1]-normalization and the problem 
disappears as soon as the camera stops.


Is there any reason for that/does anyone have an idea what I'm doing 
wrong? If more information or code is necessary, just tell me :-)


-Thorsten


Re: [osg-users] Shader Update Latency?!

2010-12-01 Thread Tim Moore
Have you set the data variance of the Uniform object -- and the containing
StateSet object -- to Object::DYNAMIC?

Tim

On Wed, Dec 1, 2010 at 7:15 PM, Thorsten Roth thorsten.r...@alsvartr.de wrote:

 Hi,

 I currently have a problem with a shader update callback I do not
 understand. I have a vertex and fragment shader which calculate linear depth
 in [0,1] for me, also respecting dynamic clipping planes. To achieve this, I
 pass zNear and zFar as uniform parameters to the shader. To have them
 updated, I have the following callback methods (zFar is looking
 accordingly):

 class UpdateShaderZNear: public osg::Uniform::Callback {
 public:
virtual void operator()(osg::Uniform* uniform, osg::NodeVisitor* nv)
 {
double x, zNear;
 viewer->getCamera()->getProjectionMatrixAsPerspective(x,x,zNear,x);
 uniform->set((float)zNear);
}
 };

 Now when I move my camera towards and away from the object, it seems like
 the shader update is one frame (or so) too late, as I get values that do
 not correspond to the [0,1]-normalization and the problem disappears as soon
 as the camera stops.

 Is there any reason for that/does anyone have an idea what I'm doing wrong?
 If more information or code is necessary, just tell me :-)

 -Thorsten
 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org



Re: [osg-users] Shader Update Latency?!

2010-12-01 Thread Thorsten Roth

I have actually tried it now, but it made no difference :-(

-Thorsten

On 01.12.2010 19:22, Tim Moore wrote:

Have you set the data variance of the Uniform object -- and the
containing StateSet object -- to Object::DYNAMIC?

Tim

On Wed, Dec 1, 2010 at 7:15 PM, Thorsten Roth thorsten.r...@alsvartr.de
mailto:thorsten.r...@alsvartr.de wrote:

Hi,

I currently have a problem with a shader update callback I do not
understand. I have a vertex and fragment shader which calculate
linear depth in [0,1] for me, also respecting dynamic clipping
planes. To achieve this, I pass zNear and zFar as uniform parameters
to the shader. To have them updated, I have the following callback
methods (zFar is looking accordingly):

class UpdateShaderZNear: public osg::Uniform::Callback {
public:
virtual void operator()(osg::Uniform* uniform,
osg::NodeVisitor* nv) {
double x, zNear;

  viewer->getCamera()->getProjectionMatrixAsPerspective(x,x,zNear,x);
uniform->set((float)zNear);
}
};

Now when I move my camera towards and away from the object, it seems
like the shader update is one frame (or so) too late, as I get
values that do not correspond to the [0,1]-normalization and the
problem disappears as soon as the camera stops.

Is there any reason for that/does anyone have an idea what I'm doing
wrong? If more information or code is necessary, just tell me :-)

-Thorsten
___
osg-users mailing list
osg-users@lists.openscenegraph.org
mailto:osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org






Re: [osg-users] Shader Update Latency?!

2010-12-01 Thread Robert Osfield
Hi Thorsten,

By default the OSG computes the near/far planes on the fly during the
cull traversal on every single frame.  You can disable this.
Alternatively you could just use the gl_ProjectionMatrix directly on
the GPU to get the near/far planes - this is how I'd do it, far more
flexible and never needs any additional uniforms or callbacks.

Robert.

On Wed, Dec 1, 2010 at 6:15 PM, Thorsten Roth thorsten.r...@alsvartr.de wrote:
 Hi,

 I currently have a problem with a shader update callback I do not
 understand. I have a vertex and fragment shader which calculate linear depth
 in [0,1] for me, also respecting dynamic clipping planes. To achieve this, I
 pass zNear and zFar as uniform parameters to the shader. To have them
 updated, I have the following callback methods (zFar is looking
 accordingly):

 class UpdateShaderZNear: public osg::Uniform::Callback {
 public:
        virtual void operator()(osg::Uniform* uniform, osg::NodeVisitor* nv)
 {
                double x, zNear;
        viewer->getCamera()->getProjectionMatrixAsPerspective(x,x,zNear,x);
                uniform->set((float)zNear);
        }
 };

 Now when I move my camera towards and away from the object, it seems like
 the shader update is one frame (or so) too late, as I get values that do
 not correspond to the [0,1]-normalization and the problem disappears as soon
 as the camera stops.

 Is there any reason for that/does anyone have an idea what I'm doing wrong?
 If more information or code is necessary, just tell me :-)

 -Thorsten
 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org



Re: [osg-users] Shader Update Latency?!

2010-12-01 Thread Thorsten Roth

Hi Robert,

thank you for this information. I did not know that I could do this, as 
I'm an absolute newbie concerning shader stuff and was happy that I just 
got it to work somehow ;)


I will try the approach with gl_ProjectionMatrix tomorrow, thank you :)

-Thorsten

On 01.12.2010 20:51, Robert Osfield wrote:

Hi Thorsten,

By default the OSG computes the near/far planes on the fly during the
cull traversal on every single frame.  You can disable this.
Alternatively you could just use the gl_ProjectionMatrix directly on
the GPU to get the near/far planes - this is how I'd do it, far more
flexible and never needs any additional uniforms or callbacks.

Robert.

On Wed, Dec 1, 2010 at 6:15 PM, Thorsten Roth thorsten.r...@alsvartr.de wrote:

Hi,

I currently have a problem with a shader update callback I do not
understand. I have a vertex and fragment shader which calculate linear depth
in [0,1] for me, also respecting dynamic clipping planes. To achieve this, I
pass zNear and zFar as uniform parameters to the shader. To have them
updated, I have the following callback methods (zFar is looking
accordingly):

class UpdateShaderZNear: public osg::Uniform::Callback {
public:
virtual void operator()(osg::Uniform* uniform, osg::NodeVisitor* nv)
{
double x, zNear;
viewer->getCamera()->getProjectionMatrixAsPerspective(x,x,zNear,x);
uniform->set((float)zNear);
}
};

Now when I move my camera towards and away from the object, it seems like
the shader update is one frame (or so) too late, as I get values that do
not correspond to the [0,1]-normalization and the problem disappears as soon
as the camera stops.

Is there any reason for that/does anyone have an idea what I'm doing wrong?
If more information or code is necessary, just tell me :-)

-Thorsten
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org




Re: [osg-users] Shader composition, OpenGL modes and custom modes

2010-07-02 Thread Robert Osfield
Hi All,

I've started doing a preliminary implementation the  of the design I
discussed in previous emails.  I've introduced ShaderComponent and now
StateAttribute has a optional ShaderComponent.  I've also fleshed
out a bit more of ShaderComposer with some very simple wiring in
osg::State to invoke this.  I've also checked in the beginnings of a
osgshadercomposition example which will be the initial test bed for
the functionality as it develops.

Whilst fleshing out the ShaderComposer API I created the ShaderMain
and ShaderAssembly classes that I previously proposed but quickly
found that I was creating container classes that weren't really needed -
a simple std::map was more than enough to do the job they were
meant to fulfill.  At least that's my current impression from creating
the required Program and main Shader caches in ShaderComposer.  With
putting together the ShaderComposer and implementing the required
caches I was able to distill down the shader composition step to just
one method:

typedef std::vector<const osg::Shader*> Shaders;
virtual osg::Shader* composeMain(const Shaders& shaders);

This single method will take a list of Shaders of the same
Shader::Type (VERTEX, GEOMETRY or FRAGMENT) that have been sourced
from the active ShaderComponents, and then create a main shader by
assembling all the code injection details held on each of the shaders.
 It could be that one of the Shaders themselves is a main, in which
case composeMain() should just return null, as there isn't any
need to create a main for it.

I haven't started adding the required code injection API into
osg::Shader yet, or implemented the full ShaderComponent and mode
tracking that will be needed in osg::State so I'm still a couple of
working days away from having basic shader composition working.
However, progress is encouraging, the implementation of the required
classes/methods is hanging together quite naturally so far.

Robert.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Shader composition, OpenGL modes and custom modes

2010-07-01 Thread Robert Osfield
Hi All,

I've been thinking about the various classes and relationships that
we'll need to support both the front end API for shader composition,
and the backend implementation of it, and my current thoughts are:

1) The ShaderSet I've been discussing over my last few posts should be renamed,
my current preferred name is ShaderComponent.

2) Rather than ShaderComponent (was ShaderSet) have the details of how to
inject code into the shader main for each of the vertex, geometry
and fragment
mains, I now feel it would be more manageable to move injection support into
   osg::Shader.

   This would mean that ShaderComponent would then just have a list of
one or more
   osg::Shader.  These Shaders would then be grouped into ones that affect the
   vertex, geometry and fragment programs.

   The osg::Shader class would have some new API for setting up the inject code,
   this could be empty/inactive in a default constructed osg::Shader,
so wouldn't
   affect how we use osg::Shader/osg::Program right now.  It's only when shader
   composition comes into play will these extra fields be queried.

3) StateAttribute would have a ShaderComponent that implements the
   shader functionality, this ShaderComponent would typically be shared between
   the same type of StateAttribute.  The StateAttribute attribute
would also provide
   osg::Uniform that pass in the values to the associated ShaderComponent, these
   osg::Uniform will be applied by the existing
StateAttribute::apply(..) method.

   This is the approach I've been discussing before (save for the ShaderSet rename).

4) osg::State will maintain a current list of enabled ShaderComponent's, this
list of pointers will form a key to search for the appropriate osg::Program
to apply to achieve that functionality.  The way that the osg::Program will
be wrapped up and cached is within a ShaderAssembly.   osg::State
would internally manage the creation of ShaderAssembly, and cache of these
and apply the osg::Program they contain.

Lazy state updating in osg::State will seek to minimize the times that
the state is changed between ShaderAssembly.  When the set/list of enabled
ShaderComponents changes, the appropriate ShaderAssembly for this set
of ShaderComponents is looked up in the ShaderAssembly cache.  If
no appropriate ShaderAssembly is found then the ShaderComposer is
invoked to create a new ShaderAssembly which is then cached and made
current.

 5) A ShaderAssembly is an internal implementation class so not something
 a user would normally worry about, only the ShaderComposer or
 subclasses from it would need to know about it.

 A ShaderAssembly has the final osg::Program that is applied to OpenGL,
 this osg::Program is composed on the osg::Shader's provided by the
 osg::ShaderComponent, and also an automatically created osg::Shader
 main for each of the vertex, geometry and fragment parts osg::Program.

 The automatically generated shader mains are wrapped up in a ShaderMain
 class that has a list of osg::Shader that contribute to it, these
osg::Shader
 are pulled in from the ShaderComponent's that are associated with the
 ShaderAssembly.  The individual osg::Shader that are assigned to a ShaderMain
 provide the code injection details that enable the ShaderComposer to create
 the final main() code that gets placed in the ShaderMain's automatically
 generated osg::Shader.

 The ShaderAssembly contains a ShaderMain for each of the vertex, geometry
 and fragment programs, pulling together all the Shaders, both those provided
 by the ShaderComponent and the automatically generated ones in the three
 ShaderMain, to create the final osg::Program.

 It will be possible to share ShaderMain between multiple ShaderAssembly, and
 this will be desirable as often we will just enable/disable a
mode that affects
 only the vertex shaders parts, or just the fragment shader parts,
so if we are
 able to share then we only need create a new ShaderMain for the part that
 changes, the rest can be reused from a cache of ShaderMain (that will be
 provided by osg::ShaderComposer).

 Like ShaderAssembly the ShaderMain is an implementation detail that most
 end users need not use directly or worry about.  It's only osg::State and
 osg::ShaderComposer (or subclasses from it) that will directly
deal with them.

 6) ShaderComposer will manage a cache of ShaderAssembly, and access to
 this cache and the automatic creation of new ShaderAssembly when a
 new combination of enabled ShaderComponent is requested.  When
 a new ShaderAssembly is created the ShaderComposer queries the
 ShaderComponent to work out what ShaderMain it needs to create, and where
 possible to pull these in from a cache of ShaderMain.

 osg::State has a ShaderComposer, and will defer most of the shader
 composition functionality to it.  Users will be able 

Re: [osg-users] Shader composition, OpenGL modes and custom modes

2010-07-01 Thread Christiansen, Brad
Hi,

Sounding great so far. A lot to digest! I have a couple of questions.

One thing I am not clear on is how the rules for injecting a Shader into a 
ShaderMain would work. This is an area I have had difficulty with in my own 
shader assembly approach. For example, how will it be possible to change the 
order in which shaders are applied? As a contrived example, I might want to use 
the material color of a fragment as the input to my lighting shader, and then 
blend my textures with the result. Alternatively, I might want to first blend 
all my textures, then use this color as the input to the lighting shader. How 
would I specify these kinds of rules?

Another area I am not clear on is how (if at all) we will be able to avoid 
doing the same calculations many times for a single vertex or fragment. Say, 
for example, we need the ec position of a vertex as an input to several of our 
vertex Shaders. Would each shader have to recalculate this value independently? 
This shouldn't be a major hit to performance in most cases but it is obviously 
less efficient than just doing the calculation once. Then again, maybe the 
compiler can figure this out for us.

Cheers,

Brad

-Original Message-
From: osg-users-boun...@lists.openscenegraph.org 
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of Robert Osfield
Sent: Thursday, 1 July 2010 4:30 PM
To: OpenSceneGraph Users
Subject: Re: [osg-users] Shader composition, OpenGL modes and custom modes

Hi All,

I've been thinking about the various classes and relationships that
we'll need to support both the front end API for shader composition,
and the backend implementation of it, and my current thoughts are:

1) The ShaderSet I've been discussing over my last few post should be renamed,
my current preferred name is ShaderComponent.

2) Rather than ShaderComponent (was ShaderSet) have the details of how to
inject code into the shader main for each of the vertex, geometry
and fragment
mains, I now feel it would be more managable to move injection support into
   osg::Shader.

   This would mean that ShaderComponent would then just have a list of
one or more
   osg::Shader.  These Shaders would then be grouped into ones that affect the
   vertex, geometry and fragment programs.

   The osg::Shader class would have some new API for setting up the inject code,
   this could be empty/inactive in a default constructed osg::Shader,
so wouldn't
   affect how we use osg::Shader/osg::Program right now.  It's only when shader
   composition comes into play will these extra fields be queried.

3) StateAttribute would have a ShaderComponent that implements the
   shader functionality, this ShaderComponent would typically be shared between
   the same type of StateAttribute.  The StateAttribute attribute
would also provide
   osg::Uniform that pass in the values to the associated ShaderComponent, these
   osg::Unfirom will be applied by the existing
StateAttribute::apply(..) method.

   This is approach I've been discussing before (save for the ShaderSet rename.)

4) osg::State will maintain a current list of enabled ShaderComponent's, this
list of pointers will form a key to search for the appropriate osg::Program
to apply to achieve that functionality.  The way that the osg::Program will
be wrapped up and cached is within a ShaderAssembly.   osg::State
would internally manage the creation of ShaderAssembly, and cache of these
and apply the osg::Program they contain.

Lazy state updating in osg::State will seek to minimize the times that
the state is changed between ShaderAssembly.  When the set/list of enabled
ShaderComponent's changes a the appropriate ShaderAssumbly for this set
of ShaderComponent is then lookup in the ShaderAssembly cache.  If
non apporpriate ShaderAssembly is found then the ShaderComposer is
invoked to create a new ShaderAssembly which is then cached and made
current.

 5) A ShaderAssembly is an internal implementation class so not something
 a user would normally worry about, only the ShaderComposer or
 subclasses from it would need to know about it.

 A ShaderAssembly has the final osg::Program that is applied to OpenGL,
 this osg::Program is composed on the osg::Shader's provided by the
 osg::ShaderComponent, and also an automatically created osg::Shader
 main for each of the vertex, geometry and fragment parts osg::Program.

 The automatically generated shader mains are wrapped up in a ShaderMain
 class that has a list of osg::Shader that contribute to it, these
osg::Shader
 are pulled in from the ShaderComponent's that are associated with the
 ShaderAssembly.  The individual osg::Shader that assigned to a ShaderMain
 provide the code injection details that enable the ShaderComposer to create
 the final main() code that gets placed in the ShaderMain's automatically
 generated osg::Shader.

 The ShaderAssembly contains

Re: [osg-users] Shader composition, OpenGL modes and custom modes

2010-07-01 Thread Robert Osfield
Hi Brad,

On Thu, Jul 1, 2010 at 11:18 AM, Christiansen, Brad
brad.christian...@thalesgroup.com.au wrote:
 One thing I am not clear on is how the rules for injecting a Shader into a 
 ShaderMain would work. This is an area I have had difficulty with in my own 
 shader assembly approach. For example, how will it be possible to change the 
 order in which shaders are applied? As a contrived example, I might want to 
 use the material color of a fragment as the input to my lighting shader, and 
 then blend my textures with the result. Alternatively, I might want to first 
 blend all my textures, then use this color as the input to the lighting 
 shader. How would I specify these kind of rules?


These are all very good questions, and ones I'm aware we'll
need to formalize and solve.  Just how, I don't have an answer to yet.
The reason why I hadn't discussed these particular topics is that
without a clear direction on them yet I don't have too much to say.

Addressing these issues is something I decided to leave till later to
fully resolve.  I'm happy to do this as the high level mechanics,
class design, and implementation are things that we have to
address as well - but these I've been able to scope out and find a
design that I feel has promise, so I'm more confident about tackling
implementation on them.  I'm also now quite comfortable that the high
level stuff is unlikely to change much with changes to the lower level
details of the shader main generation, so we needn't worry too much
about the rules right away - as long as we can get something basic
working w.r.t. main generation we'll be able to thrash out and
get working all the high level details.

Once the high level stuff is in place and working we'll have the
framework in place to start testing various approaches to shader main
generation, and we'll also have learnt more about the problem domain
along the way, and also we can dump from our brains all the
complexities of the high level stuff.  I'm wary of complexity
overload, so being able to tackle this task in stages like this is
something I feel much more comfortable with.

 Another area I am not clear on is how (if at all) we will be able to avoid 
 doing the same calculations many times for a single vertex or fragment. Say, 
 for example, we need the ec position of a vertex as an input to several of 
 our vertex Shaders. Would each shader have to recalculate this value 
 independently? This shouldn't be a major hit to performance in most cases but 
 it is obviously less efficient than just doing the calculation once. Then 
 again, maybe the compiler can figure this out for us.

It might be possible to have the shader generation parse the injected
code to look for variables set up that are repeated in other injected
code and use only the first instance.  Using the same variables and
code fragments for this would be required but should be possible with
a little discipline and awareness of other shaders.  Like the ordering
issue I'm happy to not worry about this for now.

Robert.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Shader composition, OpenGL modes and custom modes

2010-06-30 Thread Robert Osfield
Hi Roland,

On Tue, Jun 29, 2010 at 10:22 PM, Roland Smeenk roland.sme...@tno.nl wrote:
 1) Provide a way of encapsulating all the details required to
 contribute a single chunk of functionality that will be combined with other 
 active ShaderSet to provide final composed osg::Program.

 Isn't this what I solved with the ShaderMode class?

I believe ShaderMode and ShaderSet are pretty close in concept; they
fit a similar role and functionality granularity.  Implementation and
usage wise they differ, but they have lots of similarity, i.e. they
are both big cats, but one's a tiger and one's a lion.


 Or is it that you are trying to separate the shader mode associated with a 
 StateAttribute from the part that is responsible for adding of the piece of 
 shader functionality.

My current thought is that the various StateAttribute subclasses
would provide a standard ShaderSet instance that implements their
functionality, but also make it possible to override this standard
ShaderSet with custom ones.  For instance we might have a standard
implementation of osg::Light, but we also want to enable users to have
their own implementation, by just providing their own
MyCustomPerPixelLightShaderSet.

There might also be cases where an osg::StateAttribute subclass might
dynamically decide which ShaderSet to use depending upon how it's
configured - this would be if we wanted to reduce the number of
uniforms by having more pre-configured shaders.

 In my contribution I implemented a single ShaderMode that implements all 
 fixed function lighting possibilities. For new types of lighting it seems 
 more logical to indeed make a more fine grained breakup of ShaderModes.

 However there's a (minor) advantage to this one size fits all lighting code 
 currently.  To enable or disable lighting you simply can override a single 
 mode. In the case where there are multiple lighting modes (directional, spot 
 etc.) overall disabling of lighting needs to be done per lighting type. It 
 might be needed that multiple ShaderSets implementing the same aspect/feature 
 can be switched on or off all together with a single override. This will also 
 be needed for instance in shadow map creation where in the first pass you 
 would like to render geometry into a shadowmap as cheaply as possible and 
 therefore without lighting.
 Perhaps multiple ShaderSets implement the same aspect/feature in the final 
 program that need to be disabled or enabled collectively. Again in the case 
 of shadowmap creation you would like to only render geometry, no texturing, 
 no lighting, no fog, but you do want to take into account skinned mesh 
 deformation, wind deflection etc.

I don't yet have any concrete plan on how to manage the case where
multiple GL enums control particular features.  It might be that it'll
be best to map some of the controls to uniforms that enable one to
toggle features on/off such as lighting, but it might also be that
it could be best to just pass all the modes on the StateAttribute when
asking for the appropriate ShaderSet to use and let it decide what to
use.

This type of issue I expect to fall out of experience with
implementing various fixed function functionality such as Lighting.


 3) The ShaderSet would contain list osg::Shader objects, these osg::Shader 
 objects would target which ever part of the overall program that the 
 ShaderSet effects - be in multiple osg::Shader for vertex programs, one of 
 vertex program one for fragment, or any combination.


 I guess you are more targeting shader linking instead of the code generation 
 that I implemented.

Yes, I believe preferring Shader linking over Shader code generation
will lead to better performance, more reusability and easier
management.

One of my plans will be to come up with a ShaderSet file format that
will be a subset of use the osgDB serializers to provide
.osgt/.osgb/.osg support.  This file format would contain the existing
serialization for osg::Shader objects, and add the small part of code
injection that we'll need.

I also want to be able to map this ShaderSet configuration to a C++
source file as well so we can prepare our ShaderSet just be editing a
configuration file and do quick runtime testing, then when we're happy
with the shaders setup we'd encode this ShaderSet into a .cpp which
then provides the default implementation for a particular
StateAttribute subclass.  I took this approach with the shaders in
osgVolume, being able to edit a file and then re-run the application
was really effective at cutting the time it took to perfect the
shaders.

 The individual osg::Shader objects could also be shared by multiple 
 ShaderSet where appropriate.

 It could, but is this likely to happen?

We'll find out once we start implementing the fixed function pipeline
mapping, but since sharing osg::Shader is already supported in
osg::Program providing this capability will effectively be free ;-)

 5) It might well be that some StateAttribute have unifoms but 

Re: [osg-users] Shader composition, OpenGL modes and custom modes

2010-06-30 Thread Adrian Egli OpenSceneGraph (3D)
Hi Robert, hi others,

I am currently working on image processing pipelines based on GLSL
shaders. Currently we have many different special effects (shaders),
but all of them work encapsulated. Maybe we should also think about
another topic while reworking the shader composition idea. I reviewed
the osgvirtualprogram example, and I am not sure, but I believe this
example presents the whole idea of the new shaders with
VirtualProgram. What I am wondering is whether there is also a concept
for how we can use different shaders in a pipeline. For example (I am
working hard on Screen Space Ambient Occlusion), it's all working
based on osgFX, but the pipeline needs different FBOs (RTT steps) in a
pipeline:
(1) Render the whole scene with depth/normal rendering (RTT 1)
(2) Process RTT 1 with the Ambient Occlusion filter
(3) Render shadows into RTT 2 (for example Parallel Split Shadow Map
with 4 maps == 4 renderings)
(4) ...
(x) Bring it all together, and so on

In some cases we have the same scene (objects to render); then we can
use VirtualProgram, right? But how can we get the pipeline controlled?
It would be nice to get rendering pipeline functionality in the shader:
render into GPU memory, store it (an array), bring it back in a last
shader and blend it, and we will have the final rendering. Would this
be possible, or is this too complex to get OSG support for controlling it?

/adrian


2010/6/30 Robert Osfield robert.osfi...@gmail.com:
 Hi Roland,

 On Tue, Jun 29, 2010 at 10:22 PM, Roland Smeenk roland.sme...@tno.nl wrote:
 1) Provide a way of encapsulating all the details required to
 contribute a single chunk of functionality that will be combined with other 
 active ShaderSet to provide final composed osg::Program.

 Isn't this what I solved with the ShaderMode class?

 I believe ShaderMode and ShaderSet are pretty close in concept, the
 fit a similar role and functionality granularity.  Implementation and
 usage wise they differ, but they have lots of similarity, i.e. they
 are both big cats, but ones a tiger and ones a lion.


 Or is it that you are trying to separate the shader mode associated with a 
 StateAttribute from the part that is responsible for adding of the piece of 
 shader functionality.

 My currently thought is that the various StateAttribute subclasses
 would provide a standard ShaderSet instance that implements their
 functionality, but also make it possible to override this standard
 ShaderSet with custom ones.  For instance we might have a standard
 implementation of osg::Light, but we also want to enable users to have
 their own implementation, by just providing their own
 MyCustomPerPixelLightShaderSet.

 There might also be cases where a osg::StateAttribute subclass might
 dynamically decide which ShaderSet to use depending upon how it's
 configured - this would be if we wanted to reduce the number of
 uniforms by having more pre-configured shaders.

 In my contribution I implemented a single ShaderMode that implements all 
 fixed function lighting possibilities. For new types of lighting it seems 
 more logical to indeed make a more fine grained breakup of ShaderModes.

 However there's a (minor) advantage to this one size fits all lighting 
 code currently.  To enable or disable lighting you simply can override a 
 single mode. In the case where there are multiple lighting modes 
 (directional, spot etc.) overall disabling of lighting needs to be done per 
 lighting type. It might be needed that multiple ShaderSets implementing the 
 same aspect/feature can be switched on or off all together with a single 
 override. This will also be needed for instance in shadow map creation where 
 in the first pass you would like to render geometry into a shadowmap as 
 cheaply as possible and therefore without lighting.
 Perhaps multiple ShaderSets implement a the same aspect/feature in the final 
 program that need to be disabled or enabled collectively. Again in the case 
 of shadowmap creation you would like to only render geometry, no texturing, 
 no lighting, no fog, but you do want to take into account skinned mesh 
 deformation, wind deflection etc.

 I don't yet have any concrete plan on how to manage the case where
 multiple GL enums control particular features.  It might be that it'll
 be best to map some of the controls to uniforms that enable one to
 toggle features on/off such as lighting, but it might also be that we
 it could be best to just pass the all modes on the StateAttribute when
 asking for the appropriate ShaderSet to use and let it decide what to
 use.

 This type of issue I expect to fall out of experience with
 implementing various fixed function functionality such as Lighting.


 3) The ShaderSet would contain list osg::Shader objects, these osg::Shader 
 objects would target which ever part of the overall program that the 
 ShaderSet effects - be in multiple osg::Shader for vertex programs, one of 
 vertex program one for fragment, or any combination.


 I guess you are more targeting shader 

Re: [osg-users] Shader composition, OpenGL modes and custom modes

2010-06-30 Thread Robert Osfield
Hi Adrian,

I don't think we need worry about high level algorithms with shader
composition, they should all just work fine.  My expectation is that
only the fine grained management of state needs to be tackled with shader
composition; the coarse grained RenderBin/RenderStage multi-pass
techniques will not be affected.

Shader composition should offer new opportunities for the high level
techniques though, as now you'll be able to break up the shaders and
managed them more flexibly rather than just being wrapped up into a
single osg::Program as we have right now.  How multi-pass techniques
will be able to take advantage of shader composition will be tested
once we look at porting osgShadow, osgVolume, osgFX and osgParticle
across from osg::Program to shader composition.

Right now I think it's best to just focus on the API and the low level
implementation details of shader composition, once this is in place
and working we can open out the testing and debate further.

Robert.

On Wed, Jun 30, 2010 at 7:57 PM, Adrian Egli OpenSceneGraph (3D)
3dh...@gmail.com wrote:
 Hi Robert, hi others,

 I am currently working on image processing pipelines based on GLSL
 shaders. Currently we have many different special effects (shaders)
 but all of them work encapsulated. May we should also think on other
 topic while overwork the shader composite idea. i review the
 osgvirtualprogram example, and i am not sure, but believe - in this
 example the whole idea is presented of the new shaders with
 virtualprogram. what i am wondering is there also a concept how we can
 use different shaders in a pipeline. for example (i am hard working on
 Screen Space Ambient Occlusion) and it's all working based on osgFX.
 But the pipeline needs different FBO (RTT steps) in a pipeline.
 (1) Render the whole scene with depth/normal rendering (RTT 1)
 (2) Process the RTT 1 with Ambient Occlusion Filter
 (3) Render Shadow into RTT 2 (for example Parallel Split Shadow Map
 with 4 Maps == 4 Renderings)
 (4) 
 (x) Bring all together and so on

 In some case we have the same scene (object 2 render) then we can use
 virtual program, right? but how can we get the pipeline controled, it
 would be nice getting a rendering pipeline functionality in the shader
 - render into gpu memory and store it (array) and bring it in last
 shader back and blend it. We will have final rendering. Would this be
 possible or is this to complex to get osg support in controling it?

 /adrian


 2010/6/30 Robert Osfield robert.osfi...@gmail.com:
 Hi Roland,

 On Tue, Jun 29, 2010 at 10:22 PM, Roland Smeenk roland.sme...@tno.nl wrote:
 1) Provide a way of encapsulating all the details required to
 contribute a single chunk of functionality that will be combined with 
 other active ShaderSet to provide final composed osg::Program.

 Isn't this what I solved with the ShaderMode class?

 I believe ShaderMode and ShaderSet are pretty close in concept, the
 fit a similar role and functionality granularity.  Implementation and
 usage wise they differ, but they have lots of similarity, i.e. they
 are both big cats, but ones a tiger and ones a lion.


 Or is it that you are trying to separate the shader mode associated with a 
 StateAttribute from the part that is responsible for adding of the piece of 
 shader functionality.

 My currently thought is that the various StateAttribute subclasses
 would provide a standard ShaderSet instance that implements their
 functionality, but also make it possible to override this standard
 ShaderSet with custom ones.  For instance we might have a standard
 implementation of osg::Light, but we also want to enable users to have
 their own implementation, by just providing their own
 MyCustomPerPixelLightShaderSet.

 There might also be cases where a osg::StateAttribute subclass might
 dynamically decide which ShaderSet to use depending upon how it's
 configured - this would be if we wanted to reduce the number of
 uniforms by having more pre-configured shaders.

 In my contribution I implemented a single ShaderMode that implements all 
 fixed function lighting possibilities. For new types of lighting it seems 
 more logical to indeed make a more fine grained breakup of ShaderModes.

 However there's a (minor) advantage to this one size fits all lighting 
 code currently.  To enable or disable lighting you simply can override a 
 single mode. In the case where there are multiple lighting modes 
 (directional, spot etc.) overall disabling of lighting needs to be done per 
 lighting type. It might be needed that multiple ShaderSets implementing the 
 same aspect/feature can be switched on or off all together with a single 
 override. This will also be needed for instance in shadow map creation 
 where in the first pass you would like to render geometry into a shadowmap 
 as cheaply as possible and therefore without lighting.
 Perhaps multiple ShaderSets implement a the same aspect/feature in the 
 final program that need to be disabled or 

Re: [osg-users] Shader composition, OpenGL modes and custom modes

2010-06-30 Thread Christiansen, Brad
Hi,

I would just like to comment on this:

 Ooo I hadn't thought about providing a mechanism for computing vertex 
 positions on the CPU, but this would be very very useful for the quandary of 
 using shaders to modify the vertex positions dynamically.
 The mechanism would need to know the uniform state and the ShaderSet that 
 are relevant, so the StateAttribute might be the place to couple this.  Or 
 perhaps the ??IntersectionVisitor etc. could accumulate StateSet's and pick 
 out the StateAttribute that effect the vertex position.

I have recently run into this issue when doing billboarding within a vertex 
shader. To still allow picking I have had to write a custom routine that 
operates on only this section of my scene and mask it out of the general 
picking mechanism. This is quite a painful and time consuming process. I 
believe this issue will become very common as more work is moved into vertex 
and geometry shaders. As such I would be very keen to see this sort of 
functionality incorporated into the core. There is also some danger in doing so 
though, as this could make picking operations much more expensive. After all, I 
moved the billboard calculation from the CPU to the GPU for a reason :  )

Cheers,

Brad

-Original Message-
From: osg-users-boun...@lists.openscenegraph.org 
[mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of Robert Osfield
Sent: Wednesday, 30 June 2010 5:05 PM
To: osg-users@lists.openscenegraph.org
Subject: Re: [osg-users] Shader composition, OpenGL modes and custom modes

Hi Roland,

On Tue, Jun 29, 2010 at 10:22 PM, Roland Smeenk roland.sme...@tno.nl wrote:
 1) Provide a way of encapsulating all the details required to
 contribute a single chunk of functionality that will be combined with other 
 active ShaderSet to provide final composed osg::Program.

 Isn't this what I solved with the ShaderMode class?

I believe ShaderMode and ShaderSet are pretty close in concept, the
fit a similar role and functionality granularity.  Implementation and
usage wise they differ, but they have lots of similarity, i.e. they
are both big cats, but ones a tiger and ones a lion.


 Or is it that you are trying to separate the shader mode associated with a 
 StateAttribute from the part that is responsible for adding of the piece of 
 shader functionality.

My current thought is that the various StateAttribute subclasses
would provide a standard ShaderSet instance that implements their
functionality, but also make it possible to override this standard
ShaderSet with custom ones.  For instance we might have a standard
implementation of osg::Light, but we also want to enable users to have
their own implementation, by just providing their own
MyCustomPerPixelLightShaderSet.

There might also be cases where a osg::StateAttribute subclass might
dynamically decide which ShaderSet to use depending upon how it's
configured - this would be if we wanted to reduce the number of
uniforms by having more pre-configured shaders.

 In my contribution I implemented a single ShaderMode that implements all 
 fixed function lighting possibilities. For new types of lighting it seems 
 more logical to indeed make a more fine grained breakup of ShaderModes.

 However there's a (minor) advantage to this one size fits all lighting code 
 currently.  To enable or disable lighting you simply can override a single 
 mode. In the case where there are multiple lighting modes (directional, spot 
 etc.) overall disabling of lighting needs to be done per lighting type. It 
 might be needed that multiple ShaderSets implementing the same aspect/feature 
 can be switched on or off all together with a single override. This will also 
 be needed for instance in shadow map creation where in the first pass you 
 would like to render geometry into a shadowmap as cheaply as possible and 
 therefore without lighting.
 Perhaps multiple ShaderSets implement the same aspect/feature in the final 
 program that need to be disabled or enabled collectively. Again in the case 
 of shadowmap creation you would like to only render geometry, no texturing, 
 no lighting, no fog, but you do want to take into account skinned mesh 
 deformation, wind deflection etc.

I don't yet have any concrete plan on how to manage the case where
multiple GL enums control particular features.  It might be that it'll
be best to map some of the controls to uniforms that enable one to
toggle features on/off, such as lighting, but it might also be that
it'll be best to just pass all the modes of the StateAttribute when
asking for the appropriate ShaderSet to use and let it decide what to
use.

This type of issue I expect to fall out of experience with
implementing various fixed function functionality such as Lighting.


 3) The ShaderSet would contain a list of osg::Shader objects; these osg::Shader 
 objects would target whichever part of the overall program the 
 ShaderSet affects - be it multiple osg::Shader for vertex programs

Re: [osg-users] Shader composition, OpenGL modes and custom modes

2010-06-29 Thread Mathias Fröhlich

Hi Robert,

On Monday 28 June 2010, Robert Osfield wrote:
 Yes, even if we tweak the API a bit in StateSet any new elements will be
 sorted by virtue of being in StateSet.  Whatever we do, StateSets will
 remain the container for all things state in the scene graph, and the
 basis of the coarse-grained state sorting and lazy state updating
 that the OSG provides by default.
Ok.

  So in effect you will know the final shader combination for a render leaf
  at cull time.
  Right?

 Yes, once you've traversed down the graph to the osg::Drawable leaves
 you know all the state inheritance and can compute the final program
 and uniform combination that will be required.
Ok.

  Wouldn't it be better to match the final shader programs at cull time
  instead of draw time?

 The only reason there might be an advantage is if your cull
 traversal was more CPU limited than the draw.  In general I would
 expect the cost of selecting the appropriate programs during draw to
 be very low relative to the cost of dispatching the data itself.
Yes.

 The only part that might be expensive is the composition of the shader
 mains and then compiling these main shaders, but again the cost of
 composing the main shaders is likely to be much lower than the OpenGL
 side of compiling them, and you have to do the compiling in the draw
 traversal anyway, so the saving again is likely to be pretty small, if
 detectable at all.
True, compiling is expensive.
But that is exactly what should be done just once, when a new shader 
combination appears. The final program should just be stored and reused once 
it is compiled *and* linked.

I am sure that compiling a shader is expensive. But I am also sure that 
linking is *not* free.
I expect that drivers will again do some optimizations once linking happens. 
For example there are GPUs out there which do not have jump instructions. In 
this case, linking must at least do inlining again.
Also there are plenty of optimization opportunities that only become available 
when the whole program is known. So, I expect that at link time the shader 
optimizer is started again and further improves the shader code.

So, what I think is that we need to minimize shader compilation as well as 
shader *linking* as much as possible. If the implementation you have in mind 
really does this, then fine.
If you intend to relink on almost every StateAttribute::compose(osg::State) 
call, I expect we will run into problems.

 Doing more work in the cull will mean we'll have to pass more data to
 draw which itself may result in a performance overhead as the extra
 data structures will need to be dynamically allocated/destroyed.
Sure, malloc is to be avoided for real time.
Anyway, I am thinking more about something that computes a (probably scalar) 
key or pointer to an already existing object that identifies the final 
shader program combination at cull stage and sets that key into the 
render leaf.
On apply, this key is used to bind the OpenGL shader object containing the 
already finished shader program. Maybe this key could even be the cached 
final shader program itself.
So, no per-draw dynamic memory management with this kind of approach.
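Mathias's idea above — computing a key at cull time and reusing an already linked program at draw time — can be sketched, independent of OSG, as a cache keyed by the set of enabled shader modes.  The names here (ProgramCache, ModeMask, LinkedProgram) are illustrative stand-ins, not OSG API; the point is only that the expensive compose/link path runs once per mode combination, and the cull stage can hand the draw stage a plain pointer:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <unordered_map>

// Illustrative stand-ins, not OSG types.
using ModeMask = std::uint64_t;              // one bit per enabled shader mode
struct LinkedProgram { std::string name; };  // pretend GL program object

class ProgramCache {
public:
    // Called during cull: returns the cached program for this mode
    // combination, composing/linking (expensively) only on first request.
    const LinkedProgram* getOrLink(ModeMask modes) {
        auto it = cache_.find(modes);
        if (it != cache_.end()) return &it->second;   // cheap path, every frame
        ++linkCount_;                                 // expensive path, once per combination
        auto res = cache_.emplace(modes, LinkedProgram{compose(modes)});
        return &res.first->second;
    }
    int linkCount() const { return linkCount_; }
private:
    static std::string compose(ModeMask m) { return "program:" + std::to_string(m); }
    std::unordered_map<ModeMask, LinkedProgram> cache_;
    int linkCount_ = 0;
};
```

A render leaf would store the returned pointer as its "key", so the draw traversal only binds an already finished program and never allocates.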

 Right I want to get things working, the API and the backend are going
 to be new and complex enough that I want to take the route of least
 resistance and not worry to much about small possible optimization.
 Moving more work into cull might be an optimization that would be
 worth doing, but then it could just as easily be worse for
 performance.
Ok.

The only thing that might then turn out problematic is the 
StateAttribute::compose(State) call, which directly operates on the State 
object. Once this is in place, backward compatibility means it can never go 
away. In effect this will make a switch to a 
StateAttribute::compose(CullTimeShaderComposer) call impossible...
That is the reason I have been thinking about the current proposal for such a 
long time. Maybe the State should provide such a 'ShaderComposition' object as 
a member and let the state attribute work on this member of the State?
Then this could be moved more easily if it turns out to be critical?

So, again the critical thing from my point of view:
If you can make sure that we never relink during draw once no new shader 
combination appears, I believe we are fine.

Greetings

Mathias

-- 
Dr. Mathias Fröhlich, science + computing ag, Software Solutions
Hagellocher Weg 71-75, D-72070 Tuebingen, Germany
Phone: +49 7071 9457-268, Fax: +49 7071 9457-511
-- 
Vorstand/Board of Management:
Dr. Bernd Finkbeiner, Dr. Roland Niemeier, 
Dr. Arno Steitz, Dr. Ingrid Zech
Vorsitzender des Aufsichtsrats/
Chairman of the Supervisory Board:
Michel Lepert
Sitz/Registered Office: Tuebingen
Registergericht/Registration Court: Stuttgart
Registernummer/Commercial Register No.: HRB 382196 


___
osg-users mailing list
osg-users@lists.openscenegraph.org

Re: [osg-users] Shader composition, OpenGL modes and custom modes

2010-06-29 Thread Robert Osfield
Hi Mathias,

2010/6/29 Mathias Fröhlich m.froehl...@science-computing.de:
 So, what I think is that we need to minimize shader compilation as well as shader
 *linking* as much as possible. If the implementation you have in mind really
 does this, then fine.  If you intend to relink on almost every
 StateAttribute::compose(osg::State) call, I expect we will run into problems.

I think you've misunderstood my intentions.  My plan has been to
cache the osg::Program and main shaders once they are generated;
Roland's implementation also takes this route.  Calls to
StateAttribute::compose(osg::State) or its equivalent would only be
done when a new shader combination is required.


 The only thing that might then turn out problematic is the
 StateAttribute::compose(State) call, which directly operates on the State
 object. Once this is in place, backward compatibility means it can never go
 away. In effect this will make a switch to a
 StateAttribute::compose(CullTimeShaderComposer) call impossible...

I don't think a CullTimeShaderComposer is a good idea, but a subclass
of ShaderComposer is fine, so I don't think there is a tight
coupling issue at all.  We just need to get the design right so we can
subclass from ShaderComposer as well as write our own sets of
shaders.

 That is the reason I have been thinking about the current proposal for such a
 long time. Maybe the State should provide such a 'ShaderComposition' object as
 a member and let the state attribute work on this member of the State?

As of the end of last week osg::State already has ShaderComposer*
get/setShaderComposer methods :-)

ShaderComposer is just a placeholder right now.

 Then this could be moved easier if this turns out to be critical?

I think we'll need to be pretty easy going with what the final design
is until we've played with actually implementing the code.  It's pretty
rare for a design to work perfectly once you start trying to solve
real problems with it.  For me the real test will be how well we can
implement the fixed function pipeline within shader composition - if
this works neatly and conveniently then we'll be well on track.

 So, again the critical thing from my point of view:
 If you can make sure that we never relink during draw once no new shader
 combination appears, I believe we are fine.

I believe we're all in agreement here.  This is certainly what I'm
aiming for, all the designs I'm working on use a program cache as a
fundamental part of the design, with state changes minimized.  I don't
see it all as much different to what we do now with creating texture
objects, VBOs and display lists, it's just we'll be operating at a
slightly higher level, it should all be the same principle - the OSG
worries about doing things efficiently, and the scene graph developer
just concentrates on composing the scene graph and state classes to
get the result they want.

Robert.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Shader composition, OpenGL modes and custom modes

2010-06-29 Thread Robert Osfield
I've not been at the computer full time for the last couple of days so
haven't been able to push the shader composition implementation on
yet.  Design wise I've been doing bits here and there.  My latest
thought is that we probably need to encapsulate the components that
will go to make up shader composition as a single class/object.  This
class would contain a list of required shaders, the list of shader
modes, and the shader main code injection rules.  What to call this
class is a problem... perhaps ShaderSet?  ShaderComponent?
ShaderChunk?  Thoughts?

For my own design doodling I've run with the name ShaderSet for this
class, so will follow this in the rest of this email; if we go for
something different later then so be it.  The role of the ShaderSet
would be:

  1) Provide a way of encapsulating all the details required to contribute a
 single chunk of functionality that will be combined with other active
 ShaderSets to provide the final composed osg::Program.

  2) Provide a convenient mapping of a single ShaderSet representing a single
 implementation that can be shared by multiple osg::StateAttribute objects;
 for instance all osg::Light objects would share a single ShaderSet for all
 directional lights, another ShaderSet for all positional lights etc.  If
 different state attribute objects share the same ShaderSet then it will be
 easy to test whether shaders are the same simply by testing the pointer
 value; we needn't look into the contents.

  This approach works well for the OSG right now with how we manage
  osg::StateSet or the StateAttribute within them, so I'm pretty comfortable
  that this approach will be useful when doing shader composition and keeping
  the costs of lazy state updating down.

  3) The ShaderSet would contain a list of osg::Shader objects; these
 osg::Shader objects would target whichever part of the overall program the
 ShaderSet affects - be it multiple osg::Shader for vertex programs, one for
 vertex and one for fragment, or any combination.  The individual osg::Shader
 objects could also be shared by multiple ShaderSets where appropriate.

   4) The StateAttribute would have a ShaderSet which defines all the shader
  setup required; this ShaderSet would typically be shared by all instances of
  that type of StateAttribute.   The StateAttribute would also have zero or
  more uniforms that pass in values to the shaders in the ShaderSet; these
  uniforms wouldn't typically be shared, but would be solely owned by each
  separate StateAttribute object.

  The StateAttribute's ShaderSet would only ever be queried if the shader
  modes associated with that ShaderSet, and hence StateAttribute, are enabled,
  and no cached osg::Program for the current configuration of enabled shader
  modes is available.

   5) It might well be that some StateAttributes have uniforms but no
  ShaderSet; it might also be appropriate for some StateAttributes to have no
  uniforms and a ShaderSet.  It might even be appropriate for a single
  StateAttribute to have multiple ShaderSets depending upon what mode it is in
  - for instance osg::Light might be implemented in such a way that you have
  one ShaderSet for directional lights, one ShaderSet for positional lights,
  another for spot lights, another for per pixel lighting etc., and select the
  appropriate one depending upon the setup of the osg::Light.

   6) Given osg::StateSet already has support for managing osg::Uniform
  directly, perhaps we might even want to make it possible to assign
  ShaderSets directly alongside them rather than nesting them within
  StateAttributes.  A pure shader composition approach could do this, but for
  seamless support of existing fixed function StateAttribute based scene
  graphs we won't be able to rely on just this.  However, having two ways of
  doing the same thing might just lead to confusion... so it's something I
  think we just need to implement and test to see how we get on.
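As a rough illustration of points 2 and 4 above — one ShaderSet shared by all attributes of a type, with uniforms owned per instance — here is a minimal sketch.  The class shapes (Shader, Uniform, ShaderSet, LightAttribute) are hypothetical stand-ins, not the actual OSG design:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Hypothetical stand-ins for osg::Shader / osg::Uniform.
struct Shader  { std::string source; };
struct Uniform { std::string name; float value; };

// Point 1/3: a ShaderSet bundles the shaders for one chunk of functionality.
struct ShaderSet {
    std::vector<std::shared_ptr<Shader>> shaders;
};

// Point 4: the attribute shares one ShaderSet per type, but owns its uniforms.
class LightAttribute {
public:
    explicit LightAttribute(float intensity)
        : uniforms_{{"lightIntensity", intensity}} {}

    // All directional lights return the same shared instance, so testing
    // whether two attributes use the same implementation is just a pointer
    // comparison (point 2) - no need to look into the contents.
    static std::shared_ptr<ShaderSet> directionalShaderSet() {
        static auto set = std::make_shared<ShaderSet>();
        return set;
    }

    const std::vector<Uniform>& uniforms() const { return uniforms_; }

private:
    std::vector<Uniform> uniforms_;  // per-instance, never shared
};
```

Two LightAttribute objects then compare equal on implementation (same ShaderSet pointer) while carrying different uniform values.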

Anyway, these are my latest thoughts.  I will be out for much of
today, but later in the week I should get more time to start
implementing some code to start experimenting with basic shader
composition.

Cheers,
Robert.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Shader composition, OpenGL modes and custom modes

2010-06-29 Thread Mathias Fröhlich

Hi Robert,

On Tuesday 29 June 2010, Robert Osfield wrote:
 I believe we're all in agreement here.  This is certainly what I'm
 aiming for, all the designs I'm working on use a program cache as a
 fundamental part of the design, with state changes minimized.  I don't
 see it all as much different to what we do now with creating texture
 objects, VBOs and display lists, it's just we'll be operating at a
 slightly higher level, it should all be the same principle - the OSG
 worries about doing things efficiently, and the scene graph developer
 just concentrates on composing the scene graph and state classes to
 get the result they want.
Fine then.

Greetings

Mathias

-- 
Dr. Mathias Fröhlich, science + computing ag, Software Solutions
Hagellocher Weg 71-75, D-72070 Tuebingen, Germany
Phone: +49 7071 9457-268, Fax: +49 7071 9457-511


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Shader composition, OpenGL modes and custom modes

2010-06-29 Thread Bruce Wheaton
 
Very excited about the new features Robert, sounds like you're really cracking 
on.

I had two thoughts I hope you'll consider integrating:

One is that GLSL is quite poor for initialization. The main problem is that 
if you enable a shader with, say, 8 uniforms, there wasn't, until very 
recently, a way to initialize them, and the stateset enable calls generally 
take a single parameter. So along with the definition of a new possible 
shader mode, and the attributes/uniforms it needs, we will probably need a way 
to initialize any uniforms that the shader needs but the user may not 
specifically set. Default values, in essence, but also just non-random values.
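One way to read Bruce's point: a shader mode's definition could carry default values that are applied for any uniform the user did not explicitly set.  A minimal sketch, with hypothetical names (applyDefaults, UniformValues) and uniforms reduced to floats for simplicity:

```cpp
#include <cassert>
#include <map>
#include <string>

// A mode ships defaults for every uniform its shaders reference; enabling
// the mode fills in any value the user has not explicitly provided, so no
// uniform is ever left with a random/undefined value.
using UniformValues = std::map<std::string, float>;

UniformValues applyDefaults(const UniformValues& userSet,
                            const UniformValues& modeDefaults) {
    UniformValues result = modeDefaults;  // start from the non-random defaults
    for (const auto& kv : userSet)
        result[kv.first] = kv.second;     // explicitly set user values win
    return result;
}
```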

Second is that, although your point about trying linking on the fly and then 
seeing what happens is a good one, I'm worried that different 
GPUs, drivers, OSs etc. have very different costs for this (any of which could 
be too much in some circumstances).

I was thinking that we could maybe build in the ability to use a conditional in 
each shader as a matter of course. That way the program object can be left as 
is, and sorted correctly, and the conditional uniform set instead of 
re-linking. The comparable mechanism to me is the 'dynamic' setting of nodes, 
or the 'useDisplayLists' setting.

So for example, instead of just:

vec4 applyBlur (vec4 inPixel)
{
return inPixel;
}

or 

vec4 applyBlur (vec4 inPixel)
{
 vec4 outPixel;
// do costly blur
return outPixel;
}

the non-placeholder version might have:

uniform bool blurEnabled;

vec4 applyBlur (vec4 inPixel)
{
if (!blurEnabled)
return inPixel;

 vec4 outPixel;
// do costly blur
return outPixel;
}

I hope you see the advantages - far more shader sharing and less compilation in 
a complex situation.

Bruce
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Shader composition, OpenGL modes and custom modes

2010-06-29 Thread Roland Smeenk
Hi Robert and others,

unfortunately I am experiencing a stressful time at work which does not 
really allow me to contribute to this thread as much as I would like. I am 
trying to keep up with the posts, but not all posts are clear to me.


robertosfield wrote:
 
 1) Provide a way of encapsulating all the details required to
 contribute a single chunk of functionality that will be combined with other 
 active ShaderSet to provide final composed osg::Program.
 


Isn't this what I solved with the ShaderMode class? 
Or are you trying to separate the shader mode associated with a 
StateAttribute from the part that is responsible for adding the piece of 
shader functionality?


robertosfield wrote:
 
 2) Provide a convenient mapping of a single ShaderSet representing a single 
 implementation that can be shared by multiple osg::StateAttribute objects, 
 for instance all osg::Light objects would share a single
 ShaderSet for all directional lights, another ShaderSet for all positional 
 lights etc.  If different state attribute objects share the same ShaderSet 
 then it will be easy to test whether shaders are the same simply by 
 testing the pointer value; we needn't look into the contents.
 


In my contribution I implemented a single ShaderMode that implements all fixed 
function lighting possibilities. For new types of lighting it seems more 
logical to indeed make a more fine grained breakup of ShaderModes.

However there's a (minor) advantage to this one-size-fits-all lighting code 
currently.  To enable or disable lighting you can simply override a single 
mode. In the case where there are multiple lighting modes (directional, spot 
etc.) overall disabling of lighting needs to be done per lighting type. It 
might be needed that multiple ShaderSets implementing the same aspect/feature 
can be switched on or off all together with a single override. This will also 
be needed for instance in shadow map creation, where in the first pass you would 
like to render geometry into a shadow map as cheaply as possible and therefore 
without lighting.
Perhaps multiple ShaderSets implement the same aspect/feature in the final 
program and need to be disabled or enabled collectively. Again in the case of 
shadow map creation you would like to only render geometry - no texturing, no 
lighting, no fog - but you do want to take into account skinned mesh 
deformation, wind deflection etc.


robertosfield wrote:
 
 3) The ShaderSet would contain list osg::Shader objects, these osg::Shader 
 objects would target which ever part of the overall program that the 
 ShaderSet affects - be it multiple osg::Shader for vertex programs, one for 
 vertex and one for fragment, or any combination.  
 


I guess you are more targeting shader linking instead of the code generation 
that I implemented.


robertosfield wrote:
 
 The individual osg::Shader objects could also be shared by multiple ShaderSet 
 where appropriate.
 


It could, but is this likely to happen?


robertosfield wrote:
 
 5) It might well be that some StateAttributes have uniforms but no ShaderSet,
 


I see no logical use case for this. What use is a uniform if there's no 
ShaderSet using it?


robertosfield wrote:
 
  it might also be appropriate for some StateAttribute to have no uniforms and 
 a ShaderSet.  
 


Like the normalize in the fixed function pipeline?


robertosfield wrote:
 
 It might even be appropriate for a single
 StateAttribute to have multiple ShaderSet depending upon what mode it is in - 
 for instance osg::Light might be implemented in such a way that you have one 
 ShaderSet for a directional light, and one ShaderSet for position light, 
 another for spot lights, another for per pixel lighting etc, and select the 
 appropriate one depending upon the set up of the osg::Light.
 


OK, understandable if you are targeting shader linking and taking into account 
what I wrote above.

Some other notes:

Like I discussed before, I still would like to emphasize the need to be able to 
tune performance by balancing shader program size against the number of program 
switches. There is no optimal solution without knowing the actual usage 
scenario, and therefore a user or internal OSG balancing system should be able 
to adapt this balance to improve rendering performance.

A thing not solved in my ShaderMode based implementation is support for the 
CPU side calculations needed by the simulation or renderer. This includes 
intersection testing and bounding box calculation. When implementing a new 
piece of shader functionality (for instance wind deflection) it typically is 
not hard to implement the same on the CPU side. If there existed a mechanism to 
make the intersection visitor walk through the same ShaderMode tree (or linked 
ShaderSets) it could be made possible to actually intersect a tree swaying in 
the wind.
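Roland's point here could look like this in practice: the deformation is written once as a plain function, the vertex shader implements the same formula, and a CPU-side intersection test applies the function before testing, so it sees the same deformed geometry the GPU renders.  A purely hypothetical sketch (the deflection formula is made up for illustration):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Wind deflection written once on the CPU; the vertex shader would implement
// the same formula in GLSL, so an intersection visitor that runs each vertex
// through this function intersects the swaying tree, not the static one.
Vec3 windDeflect(const Vec3& v, float windStrength, float time) {
    // sway increases with height above the ground; illustrative only
    float sway = windStrength * v.z * std::sin(time);
    return {v.x + sway, v.y, v.z};
}
```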

To clearify the aspect/feature override part mentioned above here's a different 
use case I have in mind. In a 

Re: [osg-users] Shader composition, OpenGL modes and custom modes

2010-06-28 Thread Mathias Fröhlich

Hi Robert,

On Saturday 26 June 2010, Robert Osfield wrote:
 The plan I have wouldn't affect state sorting, and lazy state updating
 of the composed shader programs would be utilized, and caching of the
 composed shader programs would be used once it has been composed.
 Only when you change modes will you need to go look for a new composed
 shader program, and only when you can't find an already cached one for
 the current set of enabled modes will you have to compose an new
 shader program.   At least this is what I have in, and believe it'll
 be essential.
Yep, I believe too that this is important to essential!

So, ok, we will get state sorting for the fixed function pipeline state 
attributes because these are traditionally sorted in the state graph.
And the additional non fixed function shader snippets will be sorted 
because they are such state attributes too.
Right?

So in effect you will know the final shader combination for a render leaf at 
cull time.
Right?

Wouldn't it be better to match the final shader programs at cull time instead 
of draw time?
... which would only be possible if the state attribute's compose call did not 
work on the State object itself, but on some to-be-defined object that is 
used by the cull visitor to combine/collect/map/hash the shaders. Once we 
emit a render leaf to the render graph, the final shader program is attached 
to the render leaf and just used during draw.
Sure, if a new combination appears it needs to be compiled/linked first.

What is missing in my picture is the backward compatibility. I am not sure how 
this fits here.

So the final key point is that I would move the caching of the shaders away 
from the State object to some new object that is held in the cull visitor or 
somewhere there.
The render leafs would then just reference the finally compiled and linked 
OpenGL shader objects. The State would only need to avoid reloading the same 
final shader program twice in a row.

My two cents ...

Greetings

Mathias

-- 
Dr. Mathias Fröhlich, science + computing ag, Software Solutions
Hagellocher Weg 71-75, D-72070 Tuebingen, Germany
Phone: +49 7071 9457-268, Fax: +49 7071 9457-511


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Shader composition, OpenGL modes and custom modes

2010-06-28 Thread Robert Osfield
Hi Tim,

On Sat, Jun 26, 2010 at 6:05 PM, Tim Moore timoor...@gmail.com wrote:
 OK, so the shader generation or fixed function emulation provides more
 shaders that might be linked or combined by shader composition, and the
 shader candidate functions are controlled by existing OSG state
 attributes?

Our traditional StateAttributes like osg::TexGen or osg::Light etc.
will all provide shaders, uniforms and the rules for injecting their
use into the mains of the vertex, geometry and fragment programs.
This will be exactly the same mechanism as when we provide our
own custom shaders, uniforms and rules when utilizing shader
composition.  At least this is what I'm gunning for in my design work.

The existing StateAttribute subclasses will just be convenience
wrappers for a block of functionality.  You might use them entirely,
or ignore them completely and use your own subclasses or
ShaderAttribute objects, or mix and match.  It will be possible to
override the shaders and rules of the existing StateAttribute classes
as well.


 It's also worth mentioning that all our plugins target the fixed
 function StateAttributes; right now under GL3/GL4/GLES2 you have to
 do your own mapping from these StateAttributes to your own custom
 shaders.

 Right; I'm not personally doing core OpenGL 3.3 or 4.0 programming, but my
 impression was that the compatibility profile was going to be with us for a
 long, long time. Are a significant number of OSG users working with the core
 profiles?

Currently when you build the OSG specifically against GL3 it does so
with the assumption that you don't use the compatibility layer, it
does it using all shaders.  I would like to get to the point when even
using GL2 we can run with using all shaders as well so the transition
between the GL2, GL3, GL4 and GLES2 is pretty seamless.

 Many users are effectively stuck at GL1.1 + hundreds of extensions,
 including ones that implement the capabilities of OpenGL 3.3 and 4.0.

That's fine, right now for the end user it doesn't make much
difference whether you build with GL1.1 + extensions or use GL2/GL3
headers directly, so I don't see any change here.  The shader
composition functionality is really just a new way of arranging shaders
more conveniently; the actual OSG code passing the uniforms,
shaders and programs to OpenGL shouldn't need to be touched, save for a
few tweaks for flexibility and performance if needed.   The new shader
composition support will naturally support GL1.1 + extensions.

 I can't argue with that, just wondering if the fixed function emulation via
 shaders is vital problem to solve at the moment.

The fixed function emulation is vital for OSG-3.0, but for the first
steps of shader composition it is not required.   My current plan is:

 1) Come up with a preliminary design approach that looks like it'll support
fixed function emulation and custom shader composition in a way that
minimizes API changes and maximizes the ease with which different
implementations can be mixed.

 2) Come up with a concrete but modest design (a subset of 1) for doing
shader composition using user designed shaders, uniforms and rules for
injecting code into the vertex, geometry and fragment mains.

 3) Implement 2, test and debug it, and once the basics work start testing
more complex arrangements of shader composition, such that these complex
arrangements start to approach what we'll need to do with fixed function
emulation.

 4) At this stage we'd be going back to the preliminary design of 1) but
using the knowledge gained in stages 2 & 3 to shape it into something that
is practical.  Start implementing fixed function emulation bit by bit,
refactoring the design as we go to make sure that new and old functionality
all mesh.

 5) En masse, get the community to start implementing the fixed function
emulation for all the remaining StateAttributes in the core OSG so we can
load all our models and visualize them correctly whether using old fixed
function code paths or the new shader composition.

 6) Modify the shader code in osgVolume, osgParticle, osgShadow and osgFX
so that they all use shader composition rather than raw osg::Program.

 7) Test, debug, optimize.

 8) Make the 3.0 release.

--

Right now I'm at stage 1, and this week plan to dive into stage 2 and
perhaps even stage 3.

Robert.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Shader composition, OpenGL modes and custom modes

2010-06-28 Thread Robert Osfield
Hi Mathias,

2010/6/28 Mathias Fröhlich m.froehl...@science-computing.de:
 So, ok, we will get state sorting for the fixed function pipeline state
 attributes because these are traditionally sorted in the state graph.
 And the additional non fixed function shader snippets will be sorted
 because they are such state attributes too.
 Right?

Yes, even if we tweak the API a bit in StateSet any new elements will be
sorted by virtue of being in StateSet.  Whatever we do, StateSets will
remain the container for all things state in the scene graph, and the
basis of the coarse-grained state sorting and lazy state updating
that the OSG provides by default.

 So in effect you will know the final shader combination for a render leaf at
 cull time.
 Right?

Yes, once you've traversed down the graph to the osg::Drawable leaves
you know all the state inheritance and can compute the final program
and uniform combination that will be required.

 Wouldn't it be better to match the final shaders programs at cull time instead
 of draw time?

The only reason there might be an advantage is if your draw traversal
was more CPU limited than the cull.  In general I would expect the cost
of selecting the appropriate programs during draw to be very low relative
to the cost of dispatching the data itself.

The only part that might be expensive is the composition of the shader
mains and then compiling these main shaders, but again the cost of
composing the main shaders is likely to be much lower than the OpenGL
side of compiling them, and you have to do the compiling in the draw
traversal anyway, so the saving is likely to be pretty small, if
detectable at all.

Doing more work in the cull will mean we'll have to pass more data to
draw which itself may result in a performance overhead as the extra
data structures will need to be dynamically allocated/destroyed.

Right now I want to get things working; the API and the backend are going
to be new and complex enough that I want to take the route of least
resistance and not worry too much about small possible optimizations.
Moving more work into cull might be an optimization worth doing, but it
could just as easily be worse for performance.

A more clear cut optimization would be to have a NodeVisitor that we
can run on the scene graph after it's loaded to work out the likely
shaders that will be combined, and from this which shader mains we'll
need to compose during rendering.  From this we could populate the cache
and precompile both the user/fixed function emulation shaders and the
newly created shader mains.  The draw traversal would then just select
from this cache.
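The collect-and-precompile pass described above can be sketched in plain C++. The `Node` type, integer mode identifiers and function names below are hypothetical stand-ins for the osg::Node/StateSet machinery, not OSG API:

```cpp
#include <set>
#include <vector>

// A sketch of the warm-up visitor idea: walk a loaded graph once,
// accumulating the inherited set of enabled modes at each node, and
// record every distinct combination so the corresponding shader mains
// can be composed and compiled before the first frame.
struct Node {
    std::set<int> enabledModes;          // modes this node's state enables
    std::vector<const Node*> children;
};

using ModeSet = std::set<int>;

void collectCombinations(const Node& node, ModeSet inherited,
                         std::set<ModeSet>& combinations) {
    // Child state inherits and extends parent state, as in OSG.
    inherited.insert(node.enabledModes.begin(), node.enabledModes.end());
    combinations.insert(inherited);      // one entry per distinct combination
    for (const Node* child : node.children)
        collectCombinations(*child, inherited, combinations);
}
```

The resulting set of combinations is exactly the set of cache keys the draw traversal would later look up.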

 ... which would only be possible if the state attributes compose call did
 not work on the State object itself, but on some to-be-defined object that
 is used by the cull visitor to combine/collect/map/hash the shaders.  Once
 we emit a render leaf to the render graph, the final shader program is
 attached to the render leaf and just used during the draw call.
 Sure, if a new combination appears, it needs to be compiled/linked first.

Well, once we have a working shader composition API and implementation
checked in, you can start experimenting with different back-end
implementations to your heart's content; if you find something useful
then you'll be able to provide nice benchmarks to illustrate its
value :-)

Robert.


Re: [osg-users] Shader composition, OpenGL modes and custom modes

2010-06-26 Thread Robert Osfield
Hi Mathias,

2010/6/25 Mathias Fröhlich m.froehl...@science-computing.de:
 With that proposal - especially the StateAttribute::compose call working
 directly on the osg::State - I conclude that we need to relink during draw:
 * we will probably lose state sorting for the shader combinations
 * shaders need to be at least relinked on every state change, which will
 have some runtime (drawtime) overhead.

The plan I have wouldn't affect state sorting; lazy state updating of
the composed shader programs would be utilized, and caching of a composed
shader program would be used once it has been composed.  Only when you
change modes will you need to go look for a new composed shader program,
and only when you can't find an already cached one for the current set
of enabled modes will you have to compose a new shader program.  At
least this is what I have in mind, and I believe it'll be essential.
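The lazy lookup described above - only composing a new program when a never-before-seen mode combination appears - can be sketched roughly as follows. `ComposedProgram`, the integer modes and the snippet function names are illustrative placeholders, not OSG types:

```cpp
#include <map>
#include <set>
#include <string>

using ModeSet = std::set<int>;

// Illustrative stand-in for a composed (and eventually compiled) program.
struct ComposedProgram {
    std::string mainSource;
};

int g_composeCount = 0;  // how many times we actually had to compose

// Compose a shader main that calls one snippet function per enabled mode.
ComposedProgram compose(const ModeSet& modes) {
    ++g_composeCount;
    std::string src = "void main() {\n";
    for (int m : modes)
        src += "    mode_" + std::to_string(m) + "_snippet();\n";
    src += "}\n";
    return ComposedProgram{src};
}

// Only a cache miss (an unseen mode combination) pays the composition
// cost; every later lookup for the same combination is a map find.
const ComposedProgram& getOrCompose(const ModeSet& modes) {
    static std::map<ModeSet, ComposedProgram> cache;
    auto it = cache.find(modes);
    if (it == cache.end())
        it = cache.emplace(modes, compose(modes)).first;
    return it->second;
}
```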

The OSG's existing state sorting and lazy state management for modes
and attributes will all work as is - we shouldn't need to modify it at
all - so this will all happen automatically during cull as it builds up
the rendering graphs and state graphs in the rendering backend.  The
only thing that will change is what happens internally in osg::State
during the draw traversal; the rest of the draw will actually be
identical.

I'm expecting all the hard work over the next few weeks to happen in
osg::State, in the new shader composer classes and eventually in the
various subclasses of osg::StateAttribute to provide the shader
versions of the fixed function pipeline.  The rest of the OSG is
likely to remain pretty well untouched, and hopefully this will apply
to end user applications as well - unless they want to leverage the
new features that is :-)

Robert.


Re: [osg-users] Shader composition, OpenGL modes and custom modes

2010-06-26 Thread Robert Osfield
Hi Johannes,

I don't believe VirtualProgram is sufficiently sophisticated for our
purposes, though I agree about linking existing osg::Shader objects
rather than composing them on the fly like in Roland's implementation.

However we still have to inject code into the main for each of the
vertex, geometry and fragment programs; the alternative would be to have
massive main functions that contain all the possible variations and
then use uniforms to switch between the different parts.  I don't
believe the latter approach is at all suitable for performance or
practicality.  Right now I think keeping the bulk of the code in
osg::Shader functions, and then composing the mains on demand and
caching these main osg::Shader objects, will be both the most flexible
and the best performing solution.
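The "composing the mains on demand" idea might look something like the sketch below, where each attribute contributes a hypothetical GLSL function definition plus the one-line call to inject into a generated main(). None of these names are real OSG API:

```cpp
#include <string>
#include <vector>

// Hypothetical per-attribute shader contribution: a GLSL function
// definition plus the call to inject into main().
struct ShaderSnippet {
    std::string definition;  // e.g. "vec4 apply_fog(vec4 c) { ... }"
    std::string invocation;  // e.g. "color = apply_fog(color);"
};

// Compose a fragment "main" on demand: emit every snippet's definition,
// then a main() that invokes each snippet's injected line in order.
std::string composeFragmentMain(const std::vector<ShaderSnippet>& snippets) {
    std::string src;
    for (const ShaderSnippet& s : snippets)
        src += s.definition + "\n";
    src += "void main() {\n"
           "    vec4 color = vec4(1.0);\n";
    for (const ShaderSnippet& s : snippets)
        src += "    " + s.invocation + "\n";
    src += "    gl_FragColor = color;\n"
           "}\n";
    return src;
}
```

In a real backend the definitions would stay in separately compiled osg::Shader objects and only the generated main() would need composing and caching.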

One could look at it as taking the best bits of Roland's approach and
the best bits of VirtualProgram, and then mixing in a means of extending
the existing fixed function StateAttributes in a way that provides pretty
seamless support for shader composition.  At least that's my hope...

Robert.

On Fri, Jun 25, 2010 at 1:14 PM, Johannes Maier ver...@gmx.de wrote:
 Hi,
 as I mentioned somewhere else, we are also thinking about a shader 
 composition concept.
 After some consideration we came to the conclusion that shader linking is 
 probably the most time-effective approach.
 So we would go with Wojtek and use his VirtualProgram - extending and 
 implementing it.
 In combination with a clean shader implementation of the FFP you can 
 activate/deactivate FFP-functionality by linking an empty dummy-function or a 
 fully implemented function.
 These FFP functionalities could also be used in your own shader code - just
 use the (i.e. osg_fog) function in your own code.
 OSG will link it when fog is activated and unlink it (read: link the empty
 function) when fog is deactivated.
 The linking itself should be much faster than generating some shadercode on 
 runtime (even with caching).
 Using empty functions as not-active dummies may sound like overhead, but
 calling an (empty) function didn't cost any measurable time when we tested
 it.
 Linking/unlinking and updating the needed uniforms could be implemented in
 osg::State, i.e. through the mentioned shader-compositor interface.
 ...

 Cheers,
 Johannes
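The empty-dummy-function idea above can be illustrated with a small sketch: the same function name (osg_fog, as in the example above) resolves at link time either to a real implementation or to a no-op pass-through, so user shader code can call it unconditionally. The GLSL strings and the selector below are illustrative only; fogColor/fogFactor stand in for uniforms the real snippet would declare:

```cpp
#include <string>

// Two alternative definitions of the same hypothetical osg_fog function:
// a real implementation and an empty pass-through dummy.
const char* kFogEnabledSrc =
    "vec4 osg_fog(vec4 color) { return mix(color, fogColor, fogFactor); }\n";
const char* kFogDisabledSrc =
    "vec4 osg_fog(vec4 color) { return color; }\n";  // empty dummy

// At (re)link time the state machinery would pick one definition, so
// user shaders calling osg_fog() never need to change.
std::string selectFogShader(bool fogEnabled) {
    return fogEnabled ? kFogEnabledSrc : kFogDisabledSrc;
}
```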

 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=29409#29409








Re: [osg-users] Shader composition, OpenGL modes and custom modes

2010-06-26 Thread Robert Osfield
Hi Tim,

On Fri, Jun 25, 2010 at 8:37 AM, Tim Moore timoor...@gmail.com wrote:
 Is it fair to say that the major interest in shader generation is in porting
 existing OSG apps to OpenGL ES 2.0, which doesn't have any fixed pipeline?

I wouldn't agree with this statement.  It's not about GLES 2.0 and
it's not about shader generation relating to just fixed function
pipeline.

First up, the phrase "shader generation" is possibly not too helpful
in the way that Wojtek phrased it - i.e. relating to just fixed
function mapping.  I believe it's better to think of shader
composition as mainly linking shaders with a tiny amount of generation
of shader mains, something that applies as much to the fixed
function pipeline mapping as it does to shader composition which is
entirely user created.

It's also not just about GLES 2.0, as OpenGL 3.x and OpenGL 4.x are
also like OpenGL ES 2.0: once we drop the fixed function support that
OpenGL provides, we have to provide our own equivalent support or
require application developers to roll their own shaders for
everything, even just for viewing something as simple as cow.osg.

It's also worth mentioning that all our plugins target the fixed
function StateAttributes; right now under GL3/GL4/GLES2 you have to
do your own mapping from these StateAttributes to your own custom
shaders.

I think right now you can go it alone with an all-shader scene graph,
building all your own scene graphs accordingly and ignoring all the
NodeKits and Plugins that the OSG provides, but that really won't be
productive for most applications.  Until we have a credible way of
mapping fixed function StateAttributes and associated modes to shaders,
most application developers will have to stick with GL2.

I also believe that once we get going with shader composition we'll be
mixing and matching lots of different StateAttributes - some provided by
the core OSG, some custom ones from applications or 3rd party
libraries - and sometimes just overriding the standard shaders provided
by the built-ins.  If we continue to think about the fixed function
pipeline vs the shader pipeline as two separate things, then I
think we've lost something really important in our understanding of the
problem domain, both right now and going forward.

Robert.


Re: [osg-users] Shader composition, OpenGL modes and custom modes

2010-06-26 Thread Tim Moore
On Sat, Jun 26, 2010 at 12:30 PM, Robert Osfield
robert.osfi...@gmail.comwrote:

 Hi Tim,

 On Fri, Jun 25, 2010 at 8:37 AM, Tim Moore timoor...@gmail.com wrote:
  Is it fair to say that the major interest in shader generation is in
 porting
  existing OSG apps to OpenGL ES 2.0, which doesn't have any fixed
 pipeline?

 I wouldn't agree with this statement.  It's not about GLES 2.0 and
 it's not about shader generation relating to just fixed function
 pipeline.

 First up, the phrase "shader generation" is possibly not too helpful
 in the way that Wojtek phrased it - i.e. relating to just fixed
 function mapping.  I believe it's better to think of shader
 composition as mainly linking shaders with a tiny amount of generation
 of shader mains, something that applies as much to the fixed
 function pipeline mapping as it does to shader composition which is
 entirely user created.

OK, so the shader generation or fixed function emulation provides more
shaders that might be linked or combined by shader composition, and the
shader candidate functions are controlled by existing OSG state
attributes?


 It's also not just about GLES 2.0, as OpenGL 3.x and OpenGL 4.x are
 also like OpenGL ES 2.0: once we drop the fixed function support that
 OpenGL provides, we have to provide our own equivalent support or
 require application developers to roll their own shaders for
 everything, even just for viewing something as simple as cow.osg.



 It's also worth mentioning that all our plugins target the fixed
 function StateAttributes; right now under GL3/GL4/GLES2 you have to
 do your own mapping from these StateAttributes to your own custom
 shaders.

 Right; I'm not personally doing core OpenGL 3.3 or 4.0 programming, but my
impression was that the compatibility profile was going to be with us for a
long, long time. Are a significant number of OSG users working with the core
profiles?


 I think right now you can go it alone with an all-shader scene graph,
 building all your own scene graphs accordingly and ignoring all the
 NodeKits and Plugins that the OSG provides, but that really won't be
 productive for most applications.  Until we have a credible way of
 mapping fixed function StateAttributes and associated modes to shaders,
 most application developers will have to stick with GL2.

 Many users are effectively stuck at GL1.1 + hundreds of extensions,
including ones that implement the capabilities of OpenGL 3.3 and 4.0.

 I also believe that once we get going with shader composition we'll be
 mixing and matching lots of different StateAttributes - some provided by
 the core OSG, some custom ones from applications or 3rd party
 libraries - and sometimes just overriding the standard shaders provided
 by the built-ins.  If we continue to think about the fixed function
 pipeline vs the shader pipeline as two separate things, then I
 think we've lost something really important in our understanding of the
 problem domain, both right now and going forward.

I can't argue with that; just wondering if fixed function emulation via
shaders is a vital problem to solve at the moment.

Tim


Re: [osg-users] Shader composition, OpenGL modes and custom modes

2010-06-25 Thread Tim Moore
On Fri, Jun 25, 2010 at 12:25 AM, Wojciech Lewandowski 
lewandow...@ai.com.pl wrote:

 Hi Guys,

 We all seem to agree that the topic actually consists of two subtopics:
 ShaderGeneration and ShaderComposition. ShaderGeneration would be mostly
 used to emulate the fixed pipeline and replace GL Attributes/Modes with
 generated uniforms and shader pieces. ShaderComposition would allow mixing
 and linking of various shader pieces gathered during the cull traversal.
 ShaderGeneration seems to be the current focus of the discussion;
 ShaderComposition seems to be of a lower priority. I have the feeling that
 these two subjects could be addressed together, but they could also be
 addressed independently. ShaderComposition is needed to implement
 ShaderGeneration but not the opposite. So wouldn't it be better to first
 implement a flexible ShaderCompositor that at first would only work with
 the programmable pipeline in OSG, and later build ShaderGenerator on top
 of ShaderCompositor?

 ...
Is it fair to say that the major interest in shader generation is in porting
existing OSG apps to OpenGL ES 2.0, which doesn't have any fixed pipeline?

Tim


Re: [osg-users] Shader composition, OpenGL modes and custom modes

2010-06-25 Thread Robert Osfield
Hi Wojtek,

I don't really see a strong separation between emulation of the fixed
function pipeline and shader composition.  My current view is that
we'll have StateAttributes that provide batches of uniforms, any
shaders that are required, and guidance on how to inject support for
them into the vertex/geometry/fragment main shaders.  The fact that a
StateAttribute might come from a fixed function pipeline background
won't make any difference to the actual shader composition.  At least
this is what I'm gunning for.

I would also like to make it quite easy to find out what uniforms
various StateAttribute subclasses use and what Shaders they provide.
I really don't want us to end up with black boxes that users either
use without understanding, or ignore completely because they aren't sure
what they do.  The situation you describe is really what I'm trying to
avoid - I want the community to be able to freely mix and match where
they find each component useful.

For high level techniques such as deferred shading I see it as natural
that you'd happily mix the two.  The first pass, rendering the scene to
depth and colour buffers, could happily use traditional
StateAttributes without any custom shaders if you don't need them, but
then in the second pass you'd obviously bring into play all your custom
shaders.

There are also high level techniques like overlays and shadows where
eye linear texgen is used; here you'd want to leverage the OSG's core
support for eye linear texgen, otherwise you'll end up having to code
lots of your own solutions.

Robert.

On Thu, Jun 24, 2010 at 11:25 PM, Wojciech Lewandowski
lewandow...@ai.com.pl wrote:
 Hi Guys,

 We all seem to agree that the topic actually consists of two subtopics:
 ShaderGeneration and ShaderComposition. ShaderGeneration would be mostly
 used to emulate the fixed pipeline and replace GL Attributes/Modes with
 generated uniforms and shader pieces. ShaderComposition would allow mixing
 and linking of various shader pieces gathered during the cull traversal.
 ShaderGeneration seems to be the current focus of the discussion;
 ShaderComposition seems to be of a lower priority. I have the feeling that
 these two subjects could be addressed together, but they could also be
 addressed independently. ShaderComposition is needed to implement
 ShaderGeneration but not the opposite. So wouldn't it be better to first
 implement a flexible ShaderCompositor that at first would only work with
 the programmable pipeline in OSG, and later build ShaderGenerator on top
 of ShaderCompositor?

 I believe there are a number of developers interested only in the
 programmable pipeline.  They would not mind writing shader chunks and
 using their uniforms to attain effects that the fixed pipeline would also
 give them. They would do this to have clear and consistent shader code.
 And often they want to make the transition to the fully programmable
 pipeline as fast as possible because they feel that's how the 3D future
 will look.

 I am ready to make such a switch.  I only need a working
 ShaderComposition. Personally I'm not interested in fixed pipeline
 emulation, and where possible I will try to program the shader pipeline
 myself. Call me short-sighted, but I am afraid that trying to replace
 programmers with shader generators will make this coding more
 complicated.

 Once ShaderComposition becomes available I am not going to go back to the
 FFP and will avoid StateAttributes that exist only in the fixed pipeline.
 I will thus indirectly avoid using the ShaderGeneration feature as well.
 In my opinion many programmable pipeline concepts are easier to understand
 than the stuff in the fixed pipeline (take TexEnv or TexMat for example)
 and I would always prefer to have direct control over the uniforms &
 shaders necessary to implement a certain feature.  With ShaderGeneration
 I will never be sure what shader code was produced and how uniforms were
 attached.

 That's my opinion. But I am taking the risk of presenting it because I
 think there are others who share this view and would prefer to make the
 transition to the fully programmable pipeline as quickly as possible.
 There are a number of algorithms where fixed pipeline state simply
 doesn't fit - deferred shading for a huge number of lights, for example.
 It's better to implement them in a fully programmable environment than to
 mix fixed and programmable approaches.

 I know there are other users and lots of existing OSG code which rely on
 the fixed pipeline. I understand that this existing code will require
 ShaderGeneration, but I really feel that the discussion is focusing on the
 second step when the first one has not been made yet.

 Cheers,
 Wojtek Lewandowski
 PS. I won't be able to respond for next 3 days.



 --
 From: Robert Osfield robert.osfi...@gmail.com
 Sent: Thursday, June 24, 2010 4:51 PM
 To: osg-users@lists.openscenegraph.org
 Subject: Re: [osg-users] Shader composition, OpenGL modes and custom modes

 Hi All,

 As another step towards experimenting
