Hi Robert,

Robert Bragg wrote:
I'd be rather worried if your GL driver is causing a hardware flush for
calling glGet*? Broadly speaking a GL driver will maintain a large
state machine e.g. using a plain C struct {} and perhaps maintain some
dirty flags for various members. If you glEnable something then the dirty
flag could be set and the value updated (no HW flush here), and if you
just glGet something that should simply read some particular struct
member. When the driver comes to do the next frame it uses the dirty
flags to determine what state changes need to be issued to the HW and
continues to emit the geometry + kick the render etc.
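For illustration, a minimal sketch of that kind of driver-side caching (imaginary code I'm making up for the sake of the explanation, not taken from any real driver):

#include <stdbool.h>

/* Imaginary driver-side state cache: the glEnable/glGet analogues only
 * touch a plain C struct, and only the per-frame flush walks the
 * dirty flags to program the hardware. */
typedef struct {
  bool blend_enabled;
  bool depth_test_enabled;
  unsigned dirty_flags;               /* one bit per piece of state */
} DriverState;

#define DIRTY_BLEND      (1u << 0)
#define DIRTY_DEPTH_TEST (1u << 1)

static DriverState drv_state;

/* glEnable (GL_BLEND) analogue: update the struct, no HW flush */
static void
drv_enable_blend (void)
{
  if (!drv_state.blend_enabled)
    {
      drv_state.blend_enabled = true;
      drv_state.dirty_flags |= DIRTY_BLEND;
    }
}

/* glGetBooleanv (GL_BLEND, ...) analogue: a plain struct read */
static bool
drv_get_blend (void)
{
  return drv_state.blend_enabled;
}

/* Called when the next frame is kicked: emit only the dirty state */
static void
drv_flush_state_to_hw (void)
{
  if (drv_state.dirty_flags & DIRTY_BLEND)
    { /* write blend registers / emit a command-stream packet here */ }
  if (drv_state.dirty_flags & DIRTY_DEPTH_TEST)
    { /* write depth-test registers here */ }
  drv_state.dirty_flags = 0;
}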

  
Yes I agree actually, as I said to Neil.
Certainly there are pros and cons. I think typically the GL functions
would have marginally greater overheads in the single-threaded use case
(most of the time for Clutter) since GL implicitly has to do a thread-local
storage lookup for each GL call, and possibly take a lock. That
used to be quite expensive, though I guess these days with NPTL it might
not be such an issue. Also I wouldn't be so hopeful that all GL/GLES
drivers are "good" yet, sadly. Certainly a number of smaller GPU vendors
creating GLES-capable hardware are less likely to have very well
optimised drivers.
  
Yes, and PVR is an example of what you describe.
Currently our cache supports a different interface from the
glEnable/glDisable approach of OpenGL. We currently have cogl_enable()
(the name is a bit misleading because it's not synonymous with glEnable),
which takes a complete bitmask of the state we want enabled and
determines the appropriate glEnable/glDisable calls to make. I.e. because
Cogl's state management requirements have been quite simple so far, it's
been convenient for us to be able to set up all our enables in one go via
a single bitmask.
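For readers not familiar with the pattern, here is a rough sketch of what a single-bitmask enable function can look like (the flag names and cache variable are my own illustrative assumptions, not the actual Cogl internals):

#include <GL/gl.h>

/* Illustrative only: the flag names and the cache variable below are
 * assumptions, not the real Cogl internals. */
typedef enum {
  EXAMPLE_ENABLE_BLEND        = 1 << 0,
  EXAMPLE_ENABLE_TEXTURE_2D   = 1 << 1,
  EXAMPLE_ENABLE_VERTEX_ARRAY = 1 << 2
} ExampleEnableFlags;

static unsigned long example_enable_cache = 0;

/* Takes the *complete* set of flags you want enabled; anything not in
 * the mask gets disabled.  Only differences against the cached mask
 * turn into actual glEnable/glDisable calls. */
static void
example_enable (unsigned long flags)
{
  unsigned long changed = flags ^ example_enable_cache;

  if (changed & EXAMPLE_ENABLE_BLEND)
    {
      if (flags & EXAMPLE_ENABLE_BLEND)
        glEnable (GL_BLEND);
      else
        glDisable (GL_BLEND);
    }

  if (changed & EXAMPLE_ENABLE_TEXTURE_2D)
    {
      if (flags & EXAMPLE_ENABLE_TEXTURE_2D)
        glEnable (GL_TEXTURE_2D);
      else
        glDisable (GL_TEXTURE_2D);
    }

  /* ...the client-state arrays would follow the same pattern via
   * glEnableClientState/glDisableClientState... */

  example_enable_cache = flags;
}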

  
Besides that, when it comes to client state and generic vertex attributes, cogl uses a very proprietary and hidden mapping for the vertex, texcoord and color attributes. A developer of a custom actor that makes native GL calls can't guess that mapping, which may lead to collisions when a new vertex attribute (even something as common as normals) is needed for a fancy shader.
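To make the collision risk concrete (the attribute index and function below are hypothetical, and this assumes a GLES 2.0 / GL 2.0 style generic attribute API):

#include <GLES2/gl2.h>

/* Hypothetical custom-actor code: it picks a generic attribute index
 * for its normals, but has no way to know whether that index is
 * already claimed by the toolkit's hidden vertex/texcoord/color
 * mapping, so the two can collide. */
#define MY_NORMAL_ATTRIB 3          /* a guess; may already be in use */

static void
my_actor_upload_normals (const float *normals)
{
  glEnableVertexAttribArray (MY_NORMAL_ATTRIB);
  glVertexAttribPointer (MY_NORMAL_ATTRIB, 3, GL_FLOAT, GL_FALSE,
                         0, normals);
  /* ...draw, then... */
  glDisableVertexAttribArray (MY_NORMAL_ATTRIB);
}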
Currently cogl_enable is purposely not public :-)
It is not a scalable interface at all; for example, we still haven't determined
how to manage multiple texture units with this approach. Basically it's
not possible to flatten all the GL enums that can be enabled/disabled
etc. into one bitmask.
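A concrete example of why a flat bitmask stops working (this sketch assumes GL 1.3+ headers for glActiveTexture):

#include <GL/gl.h>

/* GL_TEXTURE_2D is enabled/disabled per texture *unit* (selected with
 * glActiveTexture), so a single "texturing on/off" bit in a mask
 * cannot describe this state. */
static void
example_multitexture_enables (void)
{
  glActiveTexture (GL_TEXTURE0);
  glEnable (GL_TEXTURE_2D);         /* unit 0: textured */

  glActiveTexture (GL_TEXTURE1);
  glDisable (GL_TEXTURE_2D);        /* unit 1: untextured */
}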
  
I see. As a reference to another question I posted yesterday: is that what is delaying the promotion of multitexture support in the next Clutter release?
Brainstorming this a bit with others, we have the following proposal:
1) We remove cogl_enable. (Internally it only manages 4 flags a.t.m.)
2) We expose new cogl_enable and cogl_disable functions to be used like
glEnable/glDisable that will sit on top of an internal cache.
3) We expose a new cogl_flush_cache func that commits the cache to GL.
4) We expose a new cogl_dirty_cache func that lets you invalidate Cogl's
cache.
5) Internally, we re-work code in terms of these new funcs in place of the
current cogl_enable.

This should support a complete break out into GL (even into code you
can't easily modify) since you just need to call cogl_dirty_cache to
invalidate everything.
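For what it's worth, here is one possible shape such an API could take; this is only my reading of the proposal, it tracks just GL_BLEND to keep the sketch short, and none of it is actual Cogl code:

#include <stdbool.h>
#include <GL/gl.h>

/* One possible shape for the proposed API (not actual Cogl code). */
typedef struct {
  bool blend_requested;   /* what the caller asked for */
  bool blend_in_gl;       /* what we believe GL currently has */
  bool gl_state_known;    /* false after example_cogl_dirty_cache () */
} ExampleCache;

static ExampleCache cache = { false, false, false };

static void
example_cogl_enable (GLenum cap)
{
  if (cap == GL_BLEND)
    cache.blend_requested = true;
}

static void
example_cogl_disable (GLenum cap)
{
  if (cap == GL_BLEND)
    cache.blend_requested = false;
}

/* Commit the cached state to GL, touching GL only where needed. */
static void
example_cogl_flush_cache (void)
{
  if (!cache.gl_state_known || cache.blend_in_gl != cache.blend_requested)
    {
      if (cache.blend_requested)
        glEnable (GL_BLEND);
      else
        glDisable (GL_BLEND);
      cache.blend_in_gl = cache.blend_requested;
      cache.gl_state_known = true;
    }
}

/* Tell the cache its idea of GL state can no longer be trusted,
 * e.g. after raw glEnable/glDisable calls from a custom actor. */
static void
example_cogl_dirty_cache (void)
{
  cache.gl_state_known = false;
}

A custom actor that breaks out into raw GL would then just call the dirty function afterwards, and the next flush re-issues whatever state is needed.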

Do you think that could help your use case?

  
I love the idea, especially the cogl_dirty_cache...

Best regards,
-- 
Michael Boccara
Graphtech
Herzliya, Israel

