Hello Wojtek,
I have noticed issues with shader compilation on the 256-series (and
later) drivers on Windows 7. Some fragment shaders using gl_LightSource
fields were generating internal compiler errors. In fact, I also posted
a bug report to NVidia. These errors were normally reported by OSG, with
the compilation log showing the assembly Cg output that was causing
trouble for the compiler.
You don't see any compilation errors, even with
OSG_NOTIFY_LEVEL=DEBUG_INFO?
I have attached the bug report I posted to NVidia; you can check if it
could be related.
We have put off updating our simulators' drivers for this reason. Our
lighting shaders failed to compile with that same error ("binding in
multiple relative-addressed arrays"). I looked it up last night to try
and find a workaround, and found one.
If you simply unroll the loop that iterates over lights, it compiles
fine. So what you can do is:
// Don't forget the backslash at the end of each line except the
// last in the define. Also, don't put // comments inside the macro
// body itself: the trailing backslash would splice the following
// source line into the comment. Put whatever code was in your loop
// body here, using gl_LightSource[i], for example:
#define APPLY_LIGHT(i) \
    ambient += attenuation * gl_LightSource[i].ambient;
APPLY_LIGHT(0)
APPLY_LIGHT(1)
APPLY_LIGHT(2)
APPLY_LIGHT(3)
// Up to however many lights you want to support
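For what it's worth, here is how the unrolled version looks in a complete (if simplified) fragment shader. The attenuation value and the per-light math here are placeholders, not the actual code from our shaders, so adapt the macro body to whatever your loop did:

```glsl
// Minimal sketch of the unrolled workaround. The macro body contains
// real statements only -- no // comments, since the trailing backslash
// would splice the next line into the comment on a C-style preprocessor.
#define APPLY_LIGHT(i) \
    ambient += attenuation * gl_LightSource[i].ambient; \
    diffuse += attenuation * gl_LightSource[i].diffuse;

void main()
{
    float attenuation = 1.0;    // placeholder: compute per-light in practice
    vec4 ambient = vec4(0.0);
    vec4 diffuse = vec4(0.0);

    // Each expansion indexes gl_LightSource with a compile-time
    // constant, so the compiler never sees a relative-addressed access.
    APPLY_LIGHT(0)
    APPLY_LIGHT(1)
    APPLY_LIGHT(2)
    APPLY_LIGHT(3)

    gl_FragColor = ambient + diffuse;
}
```

Note that each statement in the macro body ends with a semicolon, so the APPLY_LIGHT(i) lines don't need one.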
Sure, it's just a workaround; I think the behavior you reported to
NVidia is still a bug they should fix, but at least this allows us to
keep working even if some clients or users update their drivers. I
always hate telling people "don't update your drivers to a version newer
than x.y", because that points to an incompatibility in our own
software. Also, I may forget to tell them that it's OK to update once
the driver bug is fixed, and then they may run into other issues in the
future because they're on old drivers.
Also, I don't have any other loops currently in my code, so I can't say
if this same compiler error might affect looping over other variables.
See this thread on OpenGL.org for some discussion of this issue; it's
where I got the inspiration for the #define above.
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=281429
Someone mentioned in that thread: "In the GLSL spec, if you use some of
the built in array types (like gl_ClipDistance) you need to either
declare the size of the array or make the source code so that the GLSL
compiler can figure out the maximum index you access." It might be
possible to remove the above workaround by doing what he suggests, but I
haven't yet found out how. What I've tried ("uniform
gl_LightSourceParameters gl_LightSource[gl_MaxLights];" or "uniform
gl_LightSourceParameters gl_LightSource[8];" at the top of the shader)
didn't change anything.
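For reference, this is the sort of redeclaration I tried at the top of the fragment shader (only one of the two declarations at a time, of course), and it made no difference with these drivers:

```glsl
// Attempted redeclaration of the built-in array with an explicit size,
// per the GLSL spec suggestion quoted above. Neither form helped with
// the 256-series drivers in my tests:
uniform gl_LightSourceParameters gl_LightSource[gl_MaxLights];

// or, with a literal size instead:
// uniform gl_LightSourceParameters gl_LightSource[8];
```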
I really hope nVidia fixes the bug...
Hope this helps,
J-S
--
______________________________________________________
Jean-Sebastien Guay jean-sebastien.g...@cm-labs.com
http://www.cm-labs.com/
http://whitestar02.webhop.org/
_______________________________________________
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org