> Ah, I was hoping to avoid glGets, but you had mentioned here (or maybe it
> was elsewhere) that on Mac OS X, glGets are not always synchronization
> points.  Do you have any details on that last point?


Basically:  For context-centric state (viewport, matrix mode, currently bound 
FBOs, shaders, textures, etc.), a glGet doesn't need to sync because the GL 
state machine already knows what you did last to that state -- the only way it 
wouldn't know is if you were using the context in a thread-unsafe manner, in 
which case all bets are off anyway.  The state isn't submitted to the GPU 
until you actually draw something, and even then the context keeps its own 
copy around, so there are exactly zero circumstances where this kind of state 
needs to be queried from an asynchronously operating subsystem (i.e., the GPU, 
or another thread doing work).
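
To make that concrete, here's a tiny sketch (my own illustration -- the names 
are made up, nothing here is Apple API) of the same idea in app code: since a 
CPU-side copy of context-centric state always exists, you can keep your own 
shadow copy and never even ask GL:

    /* Hypothetical shadow-state sketch.  The GL state machine keeps a
     * CPU-side copy of context-centric state; you can do the same and
     * skip the glGet entirely. */
    #include <OpenGL/gl.h>

    typedef struct {
        GLint viewport[4];   /* the last glViewport() we issued */
    } ShadowState;

    static ShadowState g_shadow;

    static void set_viewport(GLint x, GLint y, GLsizei w, GLsizei h)
    {
        glViewport(x, y, w, h);     /* tell GL... */
        g_shadow.viewport[0] = x;   /* ...and remember it ourselves, so */
        g_shadow.viewport[1] = y;   /* glGetIntegerv(GL_VIEWPORT, ...)  */
        g_shadow.viewport[2] = w;   /* is never needed to read it back. */
        g_shadow.viewport[3] = h;
    }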

Context-shared state (texture state, for example) /can/ be a sync point, 
because two contexts on two separate threads can theoretically muck about with 
the same texture object.  In reality, though, if someone's doing that 
(manipulating the same object in two contexts concurrently -- manipulating, 
not just using) they're asking for explosions anyway.  Thanks to the miracle 
of flush/bind semantics, shared resource state is probably cached per-context 
and updated (synchronously, when necessary) during binds.  Since someone's 
probably going to draw or otherwise operate with your texture on the GPU 
(otherwise it's not particularly useful), that bind-sync cost is going to be 
paid anyway.  I'm working under the assumption that unless another context has 
flushed the texture object, subsequent binds do a cheap check to decide 
whether they need an expensive sync update -- the same pattern can be seen in 
glBegin(), where the first call is expensive because it validates all the 
state changes you've made, but subsequent calls are cheap until you mutate the 
state again.  flush/bind can probably be used as a mechanism to signal 
mutation in much the same way.  (You'd want to take this to the GL list to 
know for sure; I'm sure I'm glossing over details.)  [Note that this might be 
problematic: if on thread A you interrogate the texture and change some state 
on it, then another context on thread B mutates it and flushes it, and someone 
on thread A then binds it and renders, they'll get the texture without 
whatever changes you set on it.  Hmm...]
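
Roughly, the contract looks like this (a sketch under the assumptions above 
-- the shared texture, its 256x256 size, and the split of work between 
threads are all invented for illustration; GL does not serialize the two 
threads for you):

    #include <OpenGL/gl.h>

    /* Thread A (context A): mutate the shared texture, then flush so
     * sibling contexts are allowed to observe the change. */
    void thread_a_update(GLuint tex, const void *pixels)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 256, 256,
                        GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels);
        glFlush();   /* "publish" the mutation to the other contexts */
    }

    /* Thread B (context B, sharing with A): the next bind is where
     * this context pays the cheap-check-or-expensive-sync-update
     * cost described above. */
    void thread_b_draw(GLuint tex)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        /* ... draw with it ... */
    }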

The other sync case is when you're using multithreaded contexts.  Those are 
fairly uncommon, but in those cases you can think of every GL call as doing a 
dispatch_async() under the hood onto some other serial queue.  Because of 
that, the context you hold doesn't actually have the state you just set; a 
glGet has to wait for everything queued before it to "catch up" 
(synchronize) on the other queue.  At worst, your glGet blocks until roughly 
that same amount of work gets done to bring the GL thread up to date, and 
then the two threads share some info.  The cost of the locking and the 
context switches adds overhead, which is why this can still be slower than 
non-multithreaded contexts.
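
If you want that mental model in code, it's something like this (purely 
illustrative -- this is how it behaves, not how the engine is implemented; 
on Mac OS X you'd actually opt in with CGLEnable(ctx, kCGLCEMPEngine) rather 
than build it yourself):

    /* Mental-model sketch of a multithreaded context, using GCD. */
    #include <dispatch/dispatch.h>
    #include <OpenGL/gl.h>

    /* Stand-in for the GL worker thread; created once, e.g. with
     * dispatch_queue_create("gl.worker", DISPATCH_QUEUE_SERIAL). */
    static dispatch_queue_t gl_queue;

    void my_glViewport(GLint x, GLint y, GLsizei w, GLsizei h)
    {
        /* State-setting calls return immediately... */
        dispatch_async(gl_queue, ^{ glViewport(x, y, w, h); });
    }

    void my_glGetViewport(GLint out[4])
    {
        /* ...but a glGet must drain everything queued ahead of it;
         * that drain is exactly the "catch up" cost, plus the lock
         * and context-switch overhead of the round trip. */
        dispatch_sync(gl_queue, ^{ glGetIntegerv(GL_VIEWPORT, out); });
    }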

Fire up QC under totally pedestrian circumstances and look at how many glGets 
it does for you under the hood (using OpenGL Profiler, for example).  Dodging 
a couple of your own isn't something you should focus on unless profiling 
indicates it's actually a problem.

--
Christopher Wright
christopher_wri...@apple.com


