Kristian Høgsberg wrote:
> On 10/11/07, Keith Whitwell <[EMAIL PROTECTED]> wrote:
>> Brian Paul wrote:
> ...
>>> If two GLX clients render to the same double-buffered GLX window, each
>>> is going to have a different/private back color buffer, right? That
>>> doesn't really obey the GLX spec. The renderbuffers which compose a GLX
>>> drawable should be accessible/shared by any number of separate GLX
>>> clients (like an X window is shared by multiple X clients).
>>
>> I guess I want to know what this really means in practice.
>>
>> Suppose 2 clients render to the same backbuffer in a race starting at
>> time=0, doing something straightforward like (clear, draw,
>> swapbuffers) -- there's nothing in the spec that says to me that they
>> actually have to have been rendering to the same surface in memory,
>> because the serialization could just be (clear-a, draw-a, swap-a,
>> clear-b, draw-b, swap-b) so that potentially only one client's
>> rendering ends up visible.
>
> I've read the GLX specification a number of times to try to figure
> this out. It is very vague, but the only way I can make sense of
> multiple clients rendering to the same drawable is if they coordinate
> between them somehow. Maybe the scenegraph is split between several
> processes: one client draws the backdrop, then passes a token to
> another process which then draws the player characters, and then a
> third draws a head-up display, calls glXSwapBuffers() and passes the
> token back to the first process. Or maybe they render in parallel,
> but to different areas of the drawable, synchronize when they're all
> done, and then one does glXSwapBuffers() and they start over on the
> next frame.
>
> ...
>> So, if it is necessary to share backbuffers, then what I'm saying is
>> that it's also necessary to dig into the real details of the spec and
>> figure out how to avoid having the drivers being forced to change the
>> size of their backbuffer halfway through rendering a frame.
>
> This is a bigger issue to figure out than the shared buffer one. I
> know you're looking to reduce the number of changing factors during
> rendering (clip rects, buffer sizes and locations), but the driver
> needs to be able to pick up new buffers in a few more places than just
> swap buffers. But I think we agree that we can add that polling in a
> couple of places in the driver (before starting a new batch buffer, on
> flush, and maybe other places) and it should work.
Yes, there are a few places, but they are very few. Basically I think
it is possible to cut a rendering stream up into chunks which are
effectively atomic. Drivers do this all the time anyway, just by
building a DMA buffer that is then submitted atomically to hardware
for processing. It isn't too hard to figure out where the boundaries
of these regions are: if we think about a driver with effectively
infinite DMA space, then such a driver only flushes when required to
satisfy requirements placed on it by the spec.

I also believe that the only sane time to check the size of the
destination drawable is when the driver is *entering* such an atomic
region (let's call it a scene). Swapbuffers terminates a scene; it
doesn't really start the next one -- that doesn't happen until actual
rendering starts. I would even say that fullscreen clears don't start
a scene, but that's another story...

The things that terminate a scene are:
  - swapbuffers
  - readpixels and similar
  - maybe glFlush() -- though I'm sometimes naughty and no-op it for
    backbuffer rendering.

Basically, any API-generated event that implies a flush. Internally
generated events, like running out of some resource and having to fire
buffers to recover, generally don't count. (A rough sketch of what
this scene-entry check could look like follows at the end of this
message.)

>> I see a few options:
>> 0) The old DRI semantics - buffers change shape whenever they feel
>> like it, drivers are buggy, window resizes cause mis-rendered frames.
>>
>> 1) The current truly private backbuffer semantics - clean drivers but
>> some deviation from the GLX spec - maybe less deviation than we
>> actually think.
>>
>> 2) Alternate semantics where the X server allocates the buffers but
>> drivers just throw away frames when they find the buffer has changed
>> shape at the end of rendering. I'm sure this would be nonconformant;
>> at any rate it seems nasty. (The i915 swz driver is forced to do
>> this.)
>>
>> 3) Share buffers with a reference counting scheme. When a client
>> notices the buffer needs a resize, do the resize and adjust refcounts
>> - other clients continue with the older buffer. What happens when a
>> client on the older buffer calls swapbuffers -- I'm sure we can
>> figure out what the correct behaviour should be.
>
> 3) sounds like the best solution and it's basically what I'm
> proposing. For the first implementation (pre-Gallium), I'm looking to
> just reuse the existing getDrawableInfo polling for detecting whether
> new buffers are available. It won't be more or less broken than the
> current SAREA scheme. When Gallium starts to land, we can fine-tune
> the polling to a few select points in the driver.
>
> The DRI driver interface changes I'm proposing here should not be
> affected by these issues, though. Detecting that the buffers changed
> and allocating and attaching new ones is entirely between the DRI
> driver and the DRM. When we're ready to add the TTM functionality to
> a driver, we add the new createNewScreen entry point I mentioned and
> that's all we need to change. So, in other words, I believe we can
> move forward with this merge while we figure out the semantics of the
> resizing-while-rendering case.

OK, sounds good Kristian.

Keith
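A minimal sketch in C of the scene-entry size check described above.
Every name here (struct dri_drawable, struct dri_context, enter_scene,
end_scene) is invented for illustration and is not part of the actual
DRI or DRM interfaces under discussion; the idea is only that the
drawable size is latched once when a scene begins (e.g. just before
starting a new batch buffer) and not consulted again until the scene is
flushed.

    /* Hypothetical illustration only: these names are made up and do
     * not come from the real DRI driver interface being discussed. */
    #include <stdbool.h>

    struct dri_drawable {
        int width, height;           /* updated by the window system */
    };

    struct dri_context {
        struct dri_drawable *draw;
        int scene_width, scene_height;   /* size latched at scene entry */
        bool in_scene;
    };

    /* Called before the first rendering command after a swap/flush,
     * i.e. when a new "scene" (atomic chunk of the rendering stream)
     * begins.  This is also where polling for new buffers would
     * happen, e.g. before starting a new batch buffer. */
    static void enter_scene(struct dri_context *ctx)
    {
        if (ctx->in_scene)
            return;                  /* buffer size stays fixed mid-scene */

        ctx->scene_width  = ctx->draw->width;
        ctx->scene_height = ctx->draw->height;
        ctx->in_scene     = true;
    }

    /* Called on swapbuffers, readpixels and similar, maybe glFlush():
     * any API-generated event that implies a flush ends the scene.
     * Internal flushes (running out of DMA space) would not call this. */
    static void end_scene(struct dri_context *ctx)
    {
        ctx->in_scene = false;       /* next enter_scene() re-reads size */
    }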
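Similarly, a minimal sketch of the reference-counting scheme in option
3 above, ignoring the cross-process sharing and locking details that a
real implementation in the X server / DRM would need. The names
(struct backbuffer, buffer_create, buffer_unref, pick_up_resize) are
hypothetical; the point is only that a client which notices a resize
switches to a fresh buffer and drops its reference, while clients still
mid-frame keep rendering to the older buffer.

    /* Hypothetical sketch of option 3: shared backbuffers with
     * refcounts.  No locking or cross-process plumbing is shown. */
    #include <stdlib.h>

    struct backbuffer {
        int refcount;
        int width, height;
        void *pixels;
    };

    static struct backbuffer *buffer_create(int width, int height)
    {
        struct backbuffer *buf = calloc(1, sizeof(*buf));
        if (!buf)
            return NULL;
        buf->refcount = 1;
        buf->width = width;
        buf->height = height;
        buf->pixels = calloc((size_t)width * height, 4);  /* 32 bpp */
        if (!buf->pixels) {
            free(buf);
            return NULL;
        }
        return buf;
    }

    static void buffer_unref(struct backbuffer *buf)
    {
        if (--buf->refcount == 0) {
            free(buf->pixels);
            free(buf);
        }
    }

    /* A client that detects a resize at scene entry allocates a buffer
     * of the new size and drops its reference to the old one; other
     * clients keep their references and continue with the older buffer
     * until they next check. */
    static struct backbuffer *
    pick_up_resize(struct backbuffer *old, int new_width, int new_height)
    {
        struct backbuffer *fresh = buffer_create(new_width, new_height);
        if (!fresh)
            return old;              /* keep the old buffer on failure */
        buffer_unref(old);
        return fresh;
    }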