Brian Paul wrote:
> Keith Whitwell wrote:
>> Brian Paul wrote:
>>> Kristian Høgsberg wrote:
>>>> Hi,
>>>>
>>>> I have this branch with DRI interface changes that I've been
>>>> threatening to merge on several occasions:
>>>>
>>>>   http://cgit.freedesktop.org/~krh/mesa/log/?h=dri2
>>>>
>>>> I've just rebased to today's mesa and it's ready to merge.  Ian
>>>> reviewed the changes a while back and gave his OK, and from what we
>>>> discussed at XDS2007, I believe the changes there are compatible with
>>>> the Gallium plans.
>>>>
>>>> What's been keeping me from merging this is that it breaks the DRI
>>>> interface.  I wanted to make sure that the new interface will work for
>>>> redirected direct rendering and GLXPixmaps and GLXPbuffers, which I
>>>> now know that it does.  The branch above doesn't include these
>>>> changes yet, it still uses the sarea and the old shared, static back
>>>> buffer setup.  This is all isolated to the createNewScreen entry
>>>> point, though, and my plan is to introduce a new createNewScreen entry
>>>> point that enables all the TTM features.  This new entry point can
>>>> co-exist with the old entry point, and a driver should be able to
>>>> support one or the other and probably also both at the same time.
>>>>
>>>> The AIGLX and libGL loaders will look for the new entry point when
>>>> initializing the driver, if they have a new enough DRI/DRM available.
>>>> If the loader has an old style DRI/DRM available, it will look for the
>>>> old entry point.
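
[For concreteness, the loader-side probe described above might look
roughly like the sketch below.  The symbol names are illustrative only,
not the real versioned DRI entry points, and driver_handle is assumed to
come from an earlier dlopen() of the driver .so:

    #include <dlfcn.h>

    extern void *driver_handle;   /* assumed: dlopen()ed DRI driver */

    /* Prefer the new entry point when a new enough DRI/DRM is
     * available; otherwise fall back to the old one.  A driver that
     * exports both can serve either kind of loader. */
    static void *
    pick_create_new_screen(int have_new_dri_drm)
    {
        void *entry = NULL;

        if (have_new_dri_drm)
            entry = dlsym(driver_handle, "__dri2CreateNewScreen");
        if (!entry)
            entry = dlsym(driver_handle, "__driCreateNewScreen");
        return entry;
    }
]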
>>>>
>>>> I'll wait a day or so to let people chime in, but if I don't hear any
>>>> "stop the press" type of comments, I'll merge it tomorrow.
>>>
>>> This is basically what's described in the DRI2 wiki at 
>>> http://wiki.x.org/wiki/DRI2, right?
>>>
>>> The first thing that grabs my attention is the fact that front color 
>>> buffers are allocated by the X server but back/depth/stencil/etc 
>>> buffers are allocated by the app/DRI client.
>>>
>>> If two GLX clients render to the same double-buffered GLX window, 
>>> each is going to have a different/private back color buffer, right?  
>>> That doesn't really obey the GLX spec.  The renderbuffers which 
>>> compose a GLX drawable should be accessible/shared by any number of 
>>> separate GLX clients (like an X window is shared by multiple X clients).
>>
>> I guess I want to know what this really means in practice.
>>
>> Suppose 2 clients render to the same backbuffer in a race starting at 
>> time=0, doing something straightforward like (clear, draw, 
>> swapbuffers), there's nothing in the spec that says to me that they 
>> actually have to have been rendering to the same surface in memory, 
>> because the serialization could just be (clear-a, draw-a, swap-a, 
>> clear-b, draw-b, swap-b) so that potentially only one client's 
>> rendering ends up visible.
>>
>> So I would say that at least between a fullscreen clear and either 
>> swap-buffers or some appropriate flush (glXWaitGL ??), we can treat 
>> the rendering operations as atomic and have a lot of flexibility in 
>> terms of how we schedule actual rendering and whether we actually 
>> share a buffer or not.    Note that swapbuffers is as good as a clear 
>> from this perspective as it can leave the backbuffer in an undefined 
>> state.
> 
> On the other hand, a pair of purposely-written programs could clear-a, 
> draw-a, draw-b, swap-b and the results should be coherent.  That's how I 
> read the spec.

Yes, but only if there was a flush-a in there somewhere, and even then 
you'd need something to ensure that it didn't come out like:
        
draw-b, swap-b, clear-a, draw-a

I think you'd have to do

A:  clear, draw, glFlush, send a signal to B
B:  wait for A's signal, draw, swapbuffers

Even without me trying to play with the GLX semantics, it's pretty 
normal for A and B to buffer up a whole frame's worth of rendering in a 
single DMA buffer, and not to fire that buffer until you get a flush. 
Without the flushes and explicit signals, any ordering is possible as 
long as it respects the local orderings inside A and B.
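
Concretely, the A/B handshake above might look something like the sketch
below, for two processes that have both already made the same window
current.  The FIFO path and the draw_scene_*() calls are just stand-ins:

    #include <fcntl.h>
    #include <unistd.h>
    #include <GL/gl.h>
    #include <GL/glx.h>

    extern Display *dpy;            /* assumed: open X display        */
    extern GLXDrawable win;         /* assumed: the shared GLX window */
    extern void draw_scene_a(void); /* assumed: A's rendering         */
    extern void draw_scene_b(void); /* assumed: B's rendering         */

    #define SYNC_FIFO "/tmp/glx-sync"   /* pre-created with mkfifo(1) */

    static void run_as_a(void)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        draw_scene_a();
        glFlush();                       /* fire A's DMA buffer */

        int fd = open(SYNC_FIFO, O_WRONLY);  /* blocks until B opens */
        write(fd, "x", 1);                   /* the explicit signal  */
        close(fd);
    }

    static void run_as_b(void)
    {
        char c;
        int fd = open(SYNC_FIFO, O_RDONLY);
        read(fd, &c, 1);                 /* wait for A's signal */
        close(fd);

        draw_scene_b();
        glXSwapBuffers(dpy, win);        /* present the combined frame */
    }

Even that only guarantees A's commands have been submitted; a paranoid A
would use glFinish() (or glXWaitGL()) before signalling, to know they
have actually completed.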



> 
>> I'm not just splitting hairs for no good reason - the ability for the 
>> 3d driver to know the size of the window it is rendering to while it 
>> is emitting commands, and to know that it won't change size until it 
>> is ready for it to change, is really crucial to building a solid driver.
> 
> Agreed.
> 
> 
>> The trouble with sharing a backbuffer is what to do about the 
>> situation where two clients end up with different ideas about what 
>> size the buffer should be.
>>
>> So, if it is necessary to share backbuffers, then what I'm saying is 
>> that it's also necessary to dig into the real details of the spec and 
>> figure out how to avoid having the drivers being forced to change the 
>> size of their backbuffer halfway through rendering a frame.
>>
>> I see a few options:
>>     0) The old DRI semantics - buffers change shape whenever they feel 
>> like it, drivers are buggy, window resizes cause mis-rendered frames.
>>
>>     1) The current truly private backbuffer semantics - clean drivers 
>> but some deviation from GLX specs - maybe less deviation than we 
>> actually think.
>>
>>     2) Alternate semantics where the X server allocates the buffers 
>> but drivers just throw away frames when they find the buffer has 
>> changed shape at the end of rendering.  I'm sure this would be 
>> nonconformant; at any rate it seems nasty.  (The i915 swz driver is 
>> forced to do this.)
>>
>>     3) Share buffers with a reference counting scheme.  When a client 
>> notices the buffer needs a resize, do the resize and adjust refcounts 
>> - other clients continue with the older buffer.  What happens when a 
>> client on the older buffer calls swapbuffers?  I'm sure we can figure 
>> out what the correct behaviour should be.
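
[A minimal sketch of what option 3) could look like.  The struct layout
and the bm_alloc()/bm_free() buffer-manager calls are hypothetical
stand-ins, not an existing Mesa/DRI interface:

    #include <stdlib.h>

    extern void *bm_alloc(size_t size);   /* hypothetical buffer manager */
    extern void  bm_free(void *storage);

    struct shared_backbuffer {
        int   refcount;
        int   width, height;
        void *storage;
    };

    /* A client that notices the drawable has been resized drops its
     * reference to the old buffer and allocates one at the new size.
     * Clients still mid-frame keep their reference and keep rendering
     * to the old buffer until they next look.  NB: the refcount
     * updates would need locking across clients in real life. */
    static struct shared_backbuffer *
    backbuffer_resize(struct shared_backbuffer *old, int w, int h)
    {
        if (old->width == w && old->height == h)
            return old;                   /* nothing to do */

        if (--old->refcount == 0) {       /* last user: really free it */
            bm_free(old->storage);
            free(old);
        }

        struct shared_backbuffer *nb = malloc(sizeof(*nb));
        nb->refcount = 1;
        nb->width    = w;
        nb->height   = h;
        nb->storage  = bm_alloc((size_t)w * h * 4);  /* assume 32bpp */
        return nb;
    }

The open question stays open: what swapbuffers should do for a client
still holding the old buffer is deliberately left out.]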
> 
> I don't know the answers to this either.
> 
> There are probably very few, if any, GLX programs in existence that rely 
> on this behaviour.  Chromium is one program that renders into 
> other apps' windows (the Render SPU's render_to_app_window option).  But in 
> that case, the owner of the window no longer renders into the window 
> once Chromium takes over.  I think the "VirtualGL" utility might rely on 
> reading other clients' GLX buffers, but I'd have to take a closer look.
> 
> I'm just finishing up a new GLX test app that accesses another client's 
> GLX window.  It's a buffer "snooper" similar to a utility that SGI had 
> on their IRIX systems.  Basically, it lets you view another app's 
> window's back/stencil/z buffers.  I used it a lot back in the day and 
> always missed it when I left IRIX.  I'll check it in soon.

> I could probably quickly whip up another test that does coordinated 
> rendering into one window from two processes.  I could run it with 
> NVIDIA's driver and see what happens there.

Agreed, but NVIDIA's behaviour (whatever it may be) is also not the only 
option under the spec -- though if they break the spec that would be 
interesting to know.  I suspect they respect it when the window size 
doesn't change, but I wonder what happens when two processes are 
accessing a window which is subject to resize.

Maybe we're examining the wrong spec here.  My concerns are all about 
what happens when the window changes size -- what does X tell us about 
the contents of a window under those circumstances?  Does the GLX spec 
actually specify *anything* about this situation???

Keith

