Hi Paul,

I think we want to allow for apps that don't specify vertices from host code, and forcing the app to call Geometry::setVertexPointer in such a case would obfuscate the code. If an app wants to use only generic vertex attributes, OSG needs to change to allow this.

Sure, but it can allow that while keeping setVertexPointer (or some other name that better suits flexible vertex formats) as the default usage, to simplify code for users who just want to get rendering done quickly and easily.

This type of usage (Geometry without a vertex pointer) introduces a new issue, which I alluded to in my BOF presentation: how does OSG cull a Geometry that has no vertices, and therefore no bounding box? setInitialBounds could be one solution; disabling culling on the Geometry is another.

I think if the user is creating vertices in a geometry shader or some other way that doesn't include any vertices on the CPU, then the user should think of some way of specifying their own tight bounds so that OSG can cull that geometry efficiently. There is already a ComputeBoundsCallback the user can use for that, in addition to setInitialBounds (if the bounds never change) and disabling culling, as you said. I think that's just a case where the user knows more than OSG does, so the user needs to take control and tell OSG what he/she wants.

Once again, graphics programming without a safety net.

On one hand, I have clients with large OSG-based apps, and the last thing they want is some kind of disruptive change in OSG. To serve them, I think we really do need to continue to support the old FFP state in GL3.

Yep.

On the other hand, there's a reason why OpenGL ditched the old FFP feature set: Apps that use *all* the features simultaneously are rare, and testing these features was an O(n^n) problem. Eliminating the features allows for equivalent functionality in much simpler app-specific shaders. If OSG tries to move forward with the old FFP feature set, we will be attempting to do what hardware vendors like ATI and NVIDIA have given up on. As more features are added to OSG, the problem will only get worse.

Yep.

(We already have such feature mixing issues in OSG. Ever try to use osgText with osgShadow? Both nodekits think they own texture unit 0... I've intended to make the texture unit a configurable option for some time now, but haven't gotten to it.)

Well, in the past I've placed text objects outside the osgShadow::ShadowedScene, or put osgText::Text under a Geode containing only such objects and set an empty program on that Geode. That last one isn't really an option anymore, but to accommodate GL3, osgText::Text will probably have to have its own shader anyway, so that should work...

But I'm sidestepping your point, which is valid. It's an n^n problem, because what if I want text to cast a shadow? What if I want text that casts a shadow under the ocean? (I recently had trouble mixing osgOcean and osgShadow in the same way, so I can extrapolate).

That's a problem with shaders in general. Even if we make OSG use shader fragments, how will we test that every fragment interacts well with every other fragment? We'll only really know at runtime (and visually, to boot) - we'd need to test all combinations and look for rendering problems. If adding a certain fragment introduces subtle shading errors, we might not even notice.

I'm currently very interested in researching ways to write apps that use OSG today in such a way that they avoid OSG functionality that is currently implemented on GL 1/2. The idea being, if you write your app this way, then any future changes to OSG aimed at supporting GL3 will be inconsequential.

It's an interesting thought, but it may end up being a lot of work (maybe even as much as writing the app in straight GL3) if OSG ends up supporting GL3 in a manner that's transparent (compared to OSG/GL2). That's why I'd like Robert to enter this discussion so that we can figure out a clear direction for OSG sooner rather than later - it would remove much speculation and let us future-proof our code in a way we can be sure won't be wasted effort.

Still, your visitor is interesting; extending it would be good, and testing it by having OSG create a GL3 context (perhaps even GL 3.1, so that it errors out on deprecated functionality) would be even better.

I agree, interesting discussion.

J-S
--
______________________________________________________
Jean-Sebastien Guay    [email protected]
                               http://www.cm-labs.com/
                        http://whitestar02.webhop.org/
_______________________________________________
osg-users mailing list
[email protected]
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
