Brian Paul wrote:
> Kristian Høgsberg wrote:
>> Hi,
>>
>> I have this branch with DRI interface changes that I've been
>> threatening to merge on several occasions:
>>
>> http://cgit.freedesktop.org/~krh/mesa/log/?h=dri2
>>
>> I've just rebased to today's mesa and it's ready to merge. Ian
>> reviewed the changes a while back and gave his ok, and from what we
>> discussed at XDS2007, I believe the changes there are compatible with
>> the Gallium plans.
>>
>> What's been keeping me from merging this is that it breaks the DRI
>> interface. I wanted to make sure that the new interface will work for
>> redirected direct rendering and GLXPixmaps and GLXPbuffers, which I
>> now know that it does. The branch above doesn't include these
>> changes yet; it still uses the sarea and the old shared, static back
>> buffer setup. This is all isolated to the createNewScreen entry
>> point, though, and my plan is to introduce a new createNewScreen entry
>> point that enables all the TTM features. This new entry point can
>> co-exist with the old entry point, and a driver should be able to
>> support one or the other and probably also both at the same time.
>>
>> The AIGLX and libGL loaders will look for the new entry point when
>> initializing the driver, if they have a new enough DRI/DRM available.
>> If the loader has an old-style DRI/DRM available, it will look for the
>> old entry point.
>>
>> I'll wait a day or so to let people chime in, but if I don't hear any
>> "stop the press" type of comments, I'll merge it tomorrow.
>
> This is basically what's described in the DRI2 wiki at
> http://wiki.x.org/wiki/DRI2, right?
>
> The first thing that grabs my attention is the fact that front color
> buffers are allocated by the X server but back/depth/stencil/etc. buffers
> are allocated by the app/DRI client.
>
> If two GLX clients render to the same double-buffered GLX window, each
> is going to have a different/private back color buffer, right? That
> doesn't really obey the GLX spec. The renderbuffers which compose a GLX
> drawable should be accessible/shared by any number of separate GLX
> clients (like an X window is shared by multiple X clients).
I guess I want to know what this really means in practice. Suppose two
clients render to the same backbuffer in a race starting at time=0,
doing something straightforward like (clear, draw, swapbuffers). There's
nothing in the spec that says to me that they actually have to have been
rendering to the same surface in memory, because the serialization could
just be (clear-a, draw-a, swap-a, clear-b, draw-b, swap-b), so that
potentially only one client's rendering ends up visible.

So I would say that, at least between a fullscreen clear and either
swapbuffers or some appropriate flush (glXWaitGL??), we can treat the
rendering operations as atomic and have a lot of flexibility in terms of
how we schedule actual rendering and whether we actually share a buffer
or not. Note that swapbuffers is as good as a clear from this
perspective, as it can leave the backbuffer in an undefined state.

I'm not just splitting hairs for no good reason - the ability for the 3d
driver to know the size of the window it is rendering to while it is
emitting commands, and to know that it won't change size until the
driver is ready for it to, is really crucial to building a solid driver.
The trouble with sharing a backbuffer is what to do about the situation
where two clients end up with different ideas about what size the buffer
should be.

So, if it is necessary to share backbuffers, then what I'm saying is
that it's also necessary to dig into the real details of the spec and
figure out how to avoid having the drivers be forced to change the size
of their backbuffer halfway through rendering a frame. I see a few
options:

0) The old DRI semantics - buffers change shape whenever they feel like
it, drivers are buggy, window resizes cause mis-rendered frames.

1) The current truly private backbuffer semantics - clean drivers but
some deviation from the GLX spec - maybe less deviation than we actually
think.
2) Alternate semantics where the X server allocates the buffers, but
drivers just throw away frames when they find the buffer has changed
shape at the end of rendering. I'm sure this would be nonconformant; at
any rate it seems nasty. (The i915 swz driver is forced to do this.)

3) Share buffers with a reference-counting scheme. When a client notices
the buffer needs a resize, do the resize and adjust refcounts; other
clients continue with the older buffer. What happens when a client on
the older buffer calls swapbuffers? I'm sure we can figure out what the
correct behaviour should be. Etc.

All of these are superficial approaches. My belief is that if we really
make an attempt to understand the sharing semantics encoded in the GLX
spec, and interpret that in terms of the allowable ordering of rendering
operations of separate clients, a favorable implementation is possible.

Kristian - I apologize that I only ever look at this briefly & under
duress... I'm off to read the spec properly now.

Keith

-- 
_______________________________________________
Dri-devel mailing list
Dri-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/dri-devel