Hi Robert,
> I feel the approach won't scale well as one is hardwiring support for
> osg::Material into RenderLeaf and State, and if we take this approach
> when the times comes to support all the rest of the fixed function pipeline
> we'll end up with huge mess of code
I agree with that; I was not really comfortable with this implementation
for exactly this reason.
> they would all provide their own local uniform
I hadn't thought about this, and I'll try to make a new submission based on that idea.
About your comments on state attributes:
As an experienced application developer, I can say that it is really
important to let the developer keep control over what is executed. For
example, all "automatic shader generation" mechanisms must be optional: the
developer must be able to disable them if needed. Some use cases are specific and
cannot be covered by any generic / automatic code generation system.
My approach to StateAttributes and "advanced" OpenGL is minimal: provide
only the data access, so that no data is lost, but do not provide the
functionality. The main idea behind this is the following: when you develop
an OpenGL-based application, you have to choose an OpenGL version.
1/ You choose a version with fixed-function pipeline support; you can then use it,
and it is very suitable for a lot of applications.
2/ You choose an "advanced" version because you want or need it. In this
case, without the fixed pipeline, you have to develop your own shaders. This is more
work but allows more advanced rendering, and you were warned when you made
the choice to work with an "advanced" OpenGL version. This is mandatory for
specific applications (for example, a data visualization application may need a
special shading system to display data correctly).
3/ You want "advanced" OpenGL, but with "simple'n'standard" shading (for
example, on mobile hardware with no fixed pipeline at all). This case could be
covered by implementing a default "simple'n'standard" shader system in
OSG.
But the developer must be able to disable the "simple'n'standard" shader
system to get a "naked" pipeline and cover the second use case. For example,
last year I worked on an application which displays 3D data, but not
"geometric models": it displays simulation results (still based on a triangle
mesh), with annotations and visual helpers. In this application, we
wrote our own shading system, without using any light source or any
standard material or color, but only "scientific" data which
were computed in CUDA and sent as a texture to OpenGL. The shader then
translates these values into colors using different methods (LUT, tone mapping...).
Even the GUI components were not lit with a light source, but drawn as simple
lines, circles and squares.
So I think that the best long-term strategy is to integrate two
functionalities into OSG:
- make all the deprecated state attributes accessible from shaders through
"osg_***" uniforms, and maybe select which uniforms to use with a set of flags
(see the sketch after this list)
=> this ensures that all the data contained in a 3D model file are accessible to the
render pipeline
- create a "simple'n'standard" shading system which uses the "osg_***" uniforms
and which can be easily enabled/disabled
=> this ensures fixed pipeline emulation on modern hardware
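To make the first point concrete, something like this could be done (just a sketch of
my own, not existing OSG code; the uniform names are only an illustration of the
proposed naming convention):
Code:
#include <osg/Material>
#include <osg/StateSet>
#include <osg/Uniform>

// Copy the front-face material data into "osg_***" uniforms so that a
// user-written shader can read them. The uniform names are illustrative.
void addMaterialUniforms(osg::StateSet* stateset, const osg::Material* material)
{
    stateset->addUniform(new osg::Uniform("osg_FrontMaterialAmbient",
                                          material->getAmbient(osg::Material::FRONT)));
    stateset->addUniform(new osg::Uniform("osg_FrontMaterialDiffuse",
                                          material->getDiffuse(osg::Material::FRONT)));
    stateset->addUniform(new osg::Uniform("osg_FrontMaterialSpecular",
                                          material->getSpecular(osg::Material::FRONT)));
    stateset->addUniform(new osg::Uniform("osg_FrontMaterialShininess",
                                          material->getShininess(osg::Material::FRONT)));
}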
About your comments on shader composition:
I have worked with osgEarth, and I think their VirtualProgram system is really
great.
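For reference, VirtualProgram usage looks roughly like this (written from memory,
so the header and exact signatures may differ between osgEarth versions):
Code:
#include <osgEarth/VirtualProgram>
#include <osg/StateSet>

// Inject a fragment "colouring" function into the program that osgEarth
// assembles; osgEarth identifies the entry point by its function name.
static const char* colorizeSource =
    "void my_colorize(inout vec4 color)\n"
    "{\n"
    "    color.rgb *= vec3(1.0, 0.8, 0.8);\n"
    "}\n";

void installColorize(osg::StateSet* stateset)
{
    osgEarth::VirtualProgram* vp = osgEarth::VirtualProgram::getOrCreate(stateset);
    vp->setFunction("my_colorize", colorizeSource,
                    osgEarth::ShaderComp::LOCATION_FRAGMENT_COLORING);
}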
I would like to have a system based on this idea and on the idea of shader
composition, something like a "VirtualShader" state attribute which acts as
follows:
- multiple VirtualShader attributes can be added to a StateSet
- every time a VirtualShader is applied, a program is generated as
follows:
1/ make a list of all traversed VirtualShaders, one list for each type
(vertex/fragment/geometry...)
2/ prune every list by shader name and override value: there can be only one
"foo" vertex VirtualShader, one "bar" fragment VirtualShader, etc.
3/ compile and link all shaders
4/ no automatic code generation/injection
With this system you can write a "generic shader" which is bound at the top of
the scene graph, something like:
Fragment VirtualShader named "main":
Code:
out vec4 FragColor;               // shader output

vec4 getFragColor();              // declaration of a function which returns the
                                  // fragment color: from a vertex attribute, a texture...
vec4 computeLighting(vec4 color); // declaration of a function which computes the
                                  // lighting of a fragment

void main()
{
    vec4 color = getFragColor();
    vec4 lightedColor = computeLighting(color);
    FragColor = lightedColor;
}
Fragment VirtualShader named "getFragColor" :
Code:
vec4 getFragColor() // implementation of a default frag color getter
{
return vec4(1.0, 1.0, 1.0, 1.0); // return white
}
Fragment VirtualShader named "computeLighting" :
Code:
vec4 computeLighting(in vec4 color) // implementation of a default
lighting equation
{
return doSomePhongStuff; // return the result of a lighting equation :
Phong, Cook-Torrance, a mesuread BRDF...
}
And then, on a node which uses a texture, simply add this shader:
Fragment VirtualShader named "getFragColor":
Code:
in vec2 textureCoords;        // varying written by the vertex shader
uniform sampler2D diffuseMap; // sampler for the node's texture (name is illustrative)

vec4 getFragColor() // override the default frag color getter
{
    return texture2D(diffuseMap, textureCoords); // return the texture color
}
On a node which should be rendered using another lighting equation:
Fragment VirtualShader named "computeLighting":
Code:
vec4 computeLighting(in vec4 color) // override the default lighting equation
{
    return doSomeOtherStuff(color);
}
And of course, if you need to modify the "main" structure, just add a
VirtualShader named "main".
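From the C++ side, usage could look roughly like this (only a sketch: the
VirtualShader class and its constructor are hypothetical since the attribute does not
exist yet, and how several attributes of the same type coexist on one StateSet, e.g.
via the attribute "member" mechanism, is left open):
Code:
#include <osg/Group>
#include <osg/Shader>
#include <osg/StateSet>

// mainSource, whiteSource, phongSource and textureSource hold the GLSL
// snippets shown above; root and texturedNode are nodes of the scene graph.

// Top of the scene graph: the generic "main", plus default implementations.
osg::StateSet* rootState = root->getOrCreateStateSet();
rootState->setAttribute(new VirtualShader(osg::Shader::FRAGMENT, "main", mainSource));
rootState->setAttribute(new VirtualShader(osg::Shader::FRAGMENT, "getFragColor", whiteSource));
rootState->setAttribute(new VirtualShader(osg::Shader::FRAGMENT, "computeLighting", phongSource));

// Textured node: only the "getFragColor" chunk is replaced; "main" and
// "computeLighting" are inherited from the root during traversal.
osg::StateSet* texturedState = texturedNode->getOrCreateStateSet();
texturedState->setAttribute(new VirtualShader(osg::Shader::FRAGMENT, "getFragColor", textureSource));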
With this:
- you can separate the main rendering logic (a "main" shader at the top of the scene
graph) from the data getters (each node may have a different getFragColor()
method, with texture sampling, a constant value, osg_Material use, or anything
else...)
- your code is never modified: the code you write is what is compiled, what is
linked and what is executed
- no source code merging
- every shader is compiled separately: it's easier to debug, and it may allow
sharing more resources between nodes (I don't know if a shader instance can be
linked into different programs)
And, last but not least, if you need anything other than standard
rendering, you just have to create your own specific "main" shader, which can be
really different from any classic shader.
=> If you need to render only a sub-graph using different equations (to separate
the GUI and the scene, for example), you can have common data access logic (a
set of "get***()" methods) and different "main" shaders at the top of the
different branches of the scene graph.
Aurelien