Re: [rfc] cache flush avoidance..
Dave Airlie wrote:

Since Poulsbo is CMA, to avoid the SMP IPI issue, it should be possible to enclose the whole reloc fixup within a spinlock and use kmap_atomic, which should be faster than kmap. Since preemption is also disabled within a spinlock, we can guarantee that a batchbuffer write followed by a clflush executes on the same processor, so there is no need for an IPI, and the clflush can follow immediately after a write. We've used this technique in psb_mmu.c, although we're using preempt_disable() / preempt_enable() to collect per-processor clflushes. So, basically, something like the following should be a fast IPI-free way to do this:

spin_lock();
while (more_relocs_to_do) {
    kmap_atomic(dst_buffer); // Reuse old map if same page
    apply_reloc();
    clflush(newly_written_address);
    kunmap_atomic(dst_buffer);
}
spin_unlock();

So this should work fine if every cacheline portion of the buffer to relocate contains a relocation, so that the snoop logic invalidates that cacheline on the other processors. But if you have very sparse relocations I could see something like:

CPU0 writes relocation bo initially - one page with no relocations in cache
- schedule -
CPU1 enters kernel preempt section, and starts relocating, never hitting that page
CPU1 clflushes
GPU never sees the one page with no relocs.

Now maybe I'm missing something, but I'm not sure how to protect against that.

Dave.

Hmm, OK, so the whole BO is cacheable? I was assuming a situation where the kmap_atomic() page was the only (potentially) cached mapping of that page. (Potentially, since the already uncached logical address can be used if the page is not in high memory.) It will create a mapping type conflict between the user space map and the kmap_atomic map, however, but I'm not sure how serious that is, at least not on x86.

/Thomas

- This SF.net email is sponsored by: Splunk Inc. Still grepping through log files to find problems? Stop. Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now http://get.splunk.com/ -- ___ Dri-devel mailing list Dri-devel@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/dri-devel
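Thomas's spinlock/kmap_atomic loop can be illustrated in user-space C with the kernel primitives stubbed out. This is a minimal sketch, not the real DRM code: kmap_atomic, kunmap_atomic, and clflush are replaced by hypothetical stand-ins (in the kernel the loop would sit between spin_lock() and spin_unlock(), keeping the write and the flush on one CPU), and the reloc layout is invented for the example.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* A hypothetical relocation entry; the real layout lives in the DRM. */
struct reloc {
    size_t offset;   /* byte offset into the batch buffer */
    uint32_t value;  /* presumed GPU address to patch in */
};

/* Stub standing in for the clflush instruction; counts flushes so the
 * "flush immediately after each write" property can be checked. */
static int flushed;
static void clflush_stub(const void *addr) { (void)addr; flushed++; }

/* Stubs for kmap_atomic/kunmap_atomic: in user space the buffer is
 * already mapped, so these are identity operations. */
static void *kmap_atomic_stub(void *page) { return page; }
static void kunmap_atomic_stub(void *addr) { (void)addr; }

/* Apply all relocations, flushing each written location immediately.
 * In the kernel this whole loop would run under a spinlock so that
 * preemption is off and write + clflush stay on the same processor. */
static int apply_relocs(uint8_t *batch, const struct reloc *r, int n)
{
    int i;
    for (i = 0; i < n; i++) {
        uint8_t *dst = kmap_atomic_stub(batch);   /* reuse map if same page */
        memcpy(dst + r[i].offset, &r[i].value, sizeof(r[i].value));
        clflush_stub(dst + r[i].offset);          /* flush before moving on */
        kunmap_atomic_stub(dst);
    }
    return i;
}
```

Note that this sketch shows the per-write flush; it does not address Dave's sparse-relocation concern, where cachelines containing no relocation are never flushed at all.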
Merging DRI interface changes
Hi, I have this branch with DRI interface changes that I've been threatening to merge on several occasions: http://cgit.freedesktop.org/~krh/mesa/log/?h=dri2 I've just rebased to today's mesa and it's ready to merge. Ian reviewed the changes a while back and gave his OK, and from what we discussed at XDS2007, I believe the changes there are compatible with the Gallium plans. What's been keeping me from merging this is that it breaks the DRI interface. I wanted to make sure that the new interface will work for redirected direct rendering and GLXPixmaps and GLXPbuffers, which I now know that it does.

The branch above doesn't include these changes yet; it still uses the sarea and the old shared, static back buffer setup. This is all isolated to the createNewScreen entry point, though, and my plan is to introduce a new createNewScreen entry point that enables all the TTM features. This new entry point can co-exist with the old entry point, and a driver should be able to support one or the other and probably also both at the same time. The AIGLX and libGL loaders will look for the new entry point when initializing the driver, if they have a new enough DRI/DRM available. If the loader has an old style DRI/DRM available, it will look for the old entry point.

I'll wait a day or so to let people chime in, but if I don't hear any stop-the-press type of comments, I'll merge it tomorrow. cheers, Kristian
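The loader fallback Kristian describes (prefer the new TTM-capable createNewScreen entry point when the DRI/DRM is new enough, otherwise use the old one) can be sketched roughly as below. The struct and function names are hypothetical stand-ins, not the real DRI loader API; dummy driver entry points are included so the selection logic is self-contained.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical table of entry points a DRI driver might export.
 * A driver may provide one or the other, or both at the same time. */
typedef void *(*create_screen_fn)(int screen_num);

struct dri_driver_exports {
    create_screen_fn create_new_screen_ttm;    /* new-style entry point */
    create_screen_fn create_new_screen_legacy; /* old sarea-based one   */
};

/* Loader side: prefer the new entry point, but only when this loader's
 * DRI/DRM is new enough to support it; otherwise fall back to the old
 * entry point. Returns NULL if no usable entry point exists. */
static create_screen_fn
select_entry_point(const struct dri_driver_exports *drv, int drm_is_new)
{
    if (drm_is_new && drv->create_new_screen_ttm)
        return drv->create_new_screen_ttm;
    return drv->create_new_screen_legacy;
}

/* Dummy entry points for illustration only. */
static void *new_screen(int n) { (void)n; return (void *)1; }
static void *old_screen(int n) { (void)n; return (void *)2; }
```

In the real loaders the lookup would be done by probing the driver's shared object for the new symbol at initialization time rather than through a struct like this.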
[Bug 8292] i915: texture crossbar
http://bugs.freedesktop.org/show_bug.cgi?id=8292 --- Comment #9 from [EMAIL PROTECTED] 2007-10-11 13:19 PST --- How do I use Mesa master? I am on an FC6 box, and am relatively new to the Linux experience. -- Configure bugmail: http://bugs.freedesktop.org/userprefs.cgi?tab=email --- You are receiving this mail because: --- You are the assignee for the bug, or are watching the assignee.
Re: Merging DRI interface changes
Brian Paul wrote: Kristian Høgsberg wrote: Hi, I have this branch with DRI interface changes that I've been threatening to merge on several occasions: http://cgit.freedesktop.org/~krh/mesa/log/?h=dri2 I've just rebased to today's mesa and it's ready to merge. Ian reviewed the changes a while back and gave his OK, and from what we discussed at XDS2007, I believe the changes there are compatible with the Gallium plans. What's been keeping me from merging this is that it breaks the DRI interface. I wanted to make sure that the new interface will work for redirected direct rendering and GLXPixmaps and GLXPbuffers, which I now know that it does. The branch above doesn't include these changes yet; it still uses the sarea and the old shared, static back buffer setup. This is all isolated to the createNewScreen entry point, though, and my plan is to introduce a new createNewScreen entry point that enables all the TTM features. This new entry point can co-exist with the old entry point, and a driver should be able to support one or the other and probably also both at the same time. The AIGLX and libGL loaders will look for the new entry point when initializing the driver, if they have a new enough DRI/DRM available. If the loader has an old style DRI/DRM available, it will look for the old entry point. I'll wait a day or so to let people chime in, but if I don't hear any stop-the-press type of comments, I'll merge it tomorrow.

This is basically what's described in the DRI2 wiki at http://wiki.x.org/wiki/DRI2, right? The first thing that grabs my attention is the fact that front color buffers are allocated by the X server but back/depth/stencil/etc buffers are allocated by the app/DRI client. If two GLX clients render to the same double-buffered GLX window, each is going to have a different/private back color buffer, right? That doesn't really obey the GLX spec.
The renderbuffers which compose a GLX drawable should be accessible/shared by any number of separate GLX clients (like an X window is shared by multiple X clients).

I guess I want to know what this really means in practice. Suppose 2 clients render to the same backbuffer in a race starting at time=0, doing something straightforward like (clear, draw, swapbuffers). There's nothing in the spec that says to me that they actually have to have been rendering to the same surface in memory, because the serialization could just be (clear-a, draw-a, swap-a, clear-b, draw-b, swap-b), so that potentially only one client's rendering ends up visible. So I would say that at least between a fullscreen clear and either swap-buffers or some appropriate flush (glXWaitGL ??), we can treat the rendering operations as atomic and have a lot of flexibility in terms of how we schedule actual rendering and whether we actually share a buffer or not. Note that swapbuffers is as good as a clear from this perspective, as it can leave the backbuffer in an undefined state.

I'm not just splitting hairs for no good reason - the ability for the 3d driver to know the size of the window it is rendering to while it is emitting commands, and to know that it won't change size until it is ready for it to, is really crucial to building a solid driver. The trouble with sharing a backbuffer is what to do about the situation where two clients end up with different ideas about what size the buffer should be. So, if it is necessary to share backbuffers, then what I'm saying is that it's also necessary to dig into the real details of the spec and figure out how to avoid having the drivers being forced to change the size of their backbuffer halfway through rendering a frame. I see a few options:

0) The old DRI semantics - buffers change shape whenever they feel like it, drivers are buggy, window resizes cause mis-rendered frames.
1) The current truly private backbuffer semantics - clean drivers but some deviation from GLX specs - maybe less deviation than we actually think.

2) Alternate semantics where the X server allocates the buffers but drivers just throw away frames when they find the buffer has changed shape at the end of rendering. I'm sure this would be nonconformant; at any rate it seems nasty. (The i915 swz driver is forced to do this.)

3) Share buffers with a reference counting scheme. When a client notices the buffer needs a resize, do the resize and adjust refcounts - other clients continue with the older buffer. What happens when a client on the older buffer calls swapbuffers -- I'm sure we can figure out what the correct behaviour should be.

etc. All of these are superficial approaches. My belief is that if we really make an attempt to understand the sharing semantics encoded in the GLX spec, and interpret that in the terms of allowable ordering of rendering operations of separate clients, a favorable implementation is possible. Kristian - I apologize that I
Re: Merging DRI interface changes
On 10/11/07, Brian Paul [EMAIL PROTECTED] wrote: Kristian Høgsberg wrote: Hi, I have this branch with DRI interface changes that I've been threatening to merge on several occasions: http://cgit.freedesktop.org/~krh/mesa/log/?h=dri2 I've just rebased to today's mesa and it's ready to merge. Ian reviewed the changes a while back and gave his OK, and from what we discussed at XDS2007, I believe the changes there are compatible with the Gallium plans. What's been keeping me from merging this is that it breaks the DRI interface. I wanted to make sure that the new interface will work for redirected direct rendering and GLXPixmaps and GLXPbuffers, which I now know that it does. The branch above doesn't include these changes yet; it still uses the sarea and the old shared, static back buffer setup. This is all isolated to the createNewScreen entry point, though, and my plan is to introduce a new createNewScreen entry point that enables all the TTM features. This new entry point can co-exist with the old entry point, and a driver should be able to support one or the other and probably also both at the same time. The AIGLX and libGL loaders will look for the new entry point when initializing the driver, if they have a new enough DRI/DRM available. If the loader has an old style DRI/DRM available, it will look for the old entry point. I'll wait a day or so to let people chime in, but if I don't hear any stop-the-press type of comments, I'll merge it tomorrow. This is basically what's described in the DRI2 wiki at http://wiki.x.org/wiki/DRI2, right?

It's a step towards it. The changes I'd like to merge now don't pull in any memory manager integration, but they do introduce the DRI breakage required to move to GLX 1.4. The reason that I'm proposing to merge this now is that I'm fairly sure that we can get everything else (DRM, X server, and DDX drivers) pulled together before the next Mesa release is up. In other words, we only break it this one time.
The first thing that grabs my attention is the fact that front color buffers are allocated by the X server but back/depth/stencil/etc buffers are allocated by the app/DRI client. If two GLX clients render to the same double-buffered GLX window, each is going to have a different/private back color buffer, right? That doesn't really obey the GLX spec. The renderbuffers which compose a GLX drawable should be accessible/shared by any number of separate GLX clients (like an X window is shared by multiple X clients). [Actually, re-reading the wiki part about serial numbers, it sounds like a GLX drawable's renderbuffers will be shared. Maybe that could be clarified?]

Yes, this use is considered in the design. A GLX drawable (window, pixmap or pbuffer) has an associated drm_drawable_t in the DRM. When the DRI driver wants to render to a drawable, it asks the X server for the drm_drawable_t for the X drawable and then asks the DRM (using an ioctl I will add later) about the buffers currently associated with the drm_drawable_t. If the driver gets the buffers it needs, it can just create the render buffers and then start rendering. This is typically the case when some other client is rendering to the drawable and has already set up the buffers. Of course, that other client may not have set up all the buffers the client needs (maybe it doesn't use a depth buffer), or maybe the client is the first to render to the drawable, in which case the client must allocate and attach the missing buffers. The serial number mechanism is necessary to prevent two clients from racing to attach buffers. Suppose two clients start rendering at the same time and both find that no buffers have yet been attached. They will both go and allocate the set they need and try to attach them. The buffers that overlap (suppose they both allocate a back buffer) will be set twice.
The serial number lets the kernel know that they are both trying to set the back buffer for the same instance of the attached front buffer. Only one buffer can be attached for each increment of the serial number, and thus the kernel can let one of the clients know that the buffer it proposed wasn't set.

Prior to DRI private back buffers we pretty much got this behaviour automatically (though, software-allocated accum buffers, for example, were not properly sharable). Yup, the shared back buffers design made this easy. With private back buffers it's a little more tricky, since we need one place that tracks the mapping between a drawable and the attached ancillary buffers. The prototype I demoed at XDS used the DRI module in the server for this, but we decided to move it to DRM as described above.

Suppose all the renderbuffers which compose a GLX drawable were allocated and resized by the X server. The DRI clients would just have to poll or check the drawable size when appropriate, but they wouldn't have to allocate them. I don't know
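The serial-number race resolution Kristian describes can be sketched as follows. This is a guess at the shape of the mechanism, not the actual DRM ioctl: all names are illustrative, and buffer handles are plain ints. The key property is that only one buffer can be attached per attachment point per serial, so the loser of a race is told its proposed buffer wasn't set and can discard it and re-query.

```c
#include <assert.h>

/* Hypothetical attachment points for the ancillary buffers. */
#define ATTACH_BACK  0
#define ATTACH_DEPTH 1
#define NUM_ATTACH   2

/* Sketch of the per-drawable state the DRM would track. */
struct drm_drawable_sketch {
    unsigned serial;              /* bumped when the front buffer changes */
    int      buffers[NUM_ATTACH]; /* 0 = nothing attached in that slot */
};

/* Returns 1 if the proposed buffer was attached, 0 if another client got
 * there first (or the serial is stale) and the caller must re-query the
 * drawable's buffers and throw its own allocation away. */
static int attach_buffer(struct drm_drawable_sketch *d,
                         unsigned serial, int attachment, int buf_handle)
{
    if (serial != d->serial)
        return 0;   /* stale: the front buffer instance has changed */
    if (d->buffers[attachment] != 0)
        return 0;   /* lost the race: a buffer is already attached */
    d->buffers[attachment] = buf_handle;
    return 1;
}
```

When two clients both allocate a back buffer and call this, exactly one attach succeeds; the other client re-queries and finds the winner's buffer already in place.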
Re: Merging DRI interface changes
On Thu, Oct 11, 2007 at 10:35:28PM +0100, Keith Whitwell wrote: | Suppose 2 clients render to the same backbuffer... The (rare) cases in which I've seen this used, the clients are aware of one another, and restrict their rendering to non-overlapping portions of the drawable. A master client is responsible for swap and clear. I believe the intent of the spec was to allow CPU-bound apps to make use of multiple processors. Rendering to a single drawable, rather than multiple drawables, allowed swap to be synchronized. I recall discussions about ways to coordinate multiple command streams so that rendering to overlapping areas of the drawable could be handled effectively, but I don't remember any apps that used such methods. Allen
Re: Merging DRI interface changes
Keith Whitwell wrote: Brian Paul wrote: Kristian Høgsberg wrote: Hi, I have this branch with DRI interface changes that I've been threatening to merge on several occasions: http://cgit.freedesktop.org/~krh/mesa/log/?h=dri2 I've just rebased to today's mesa and it's ready to merge. Ian reviewed the changes a while back and gave his OK, and from what we discussed at XDS2007, I believe the changes there are compatible with the Gallium plans. What's been keeping me from merging this is that it breaks the DRI interface. I wanted to make sure that the new interface will work for redirected direct rendering and GLXPixmaps and GLXPbuffers, which I now know that it does. The branch above doesn't include these changes yet; it still uses the sarea and the old shared, static back buffer setup. This is all isolated to the createNewScreen entry point, though, and my plan is to introduce a new createNewScreen entry point that enables all the TTM features. This new entry point can co-exist with the old entry point, and a driver should be able to support one or the other and probably also both at the same time. The AIGLX and libGL loaders will look for the new entry point when initializing the driver, if they have a new enough DRI/DRM available. If the loader has an old style DRI/DRM available, it will look for the old entry point. I'll wait a day or so to let people chime in, but if I don't hear any stop-the-press type of comments, I'll merge it tomorrow. This is basically what's described in the DRI2 wiki at http://wiki.x.org/wiki/DRI2, right? The first thing that grabs my attention is the fact that front color buffers are allocated by the X server but back/depth/stencil/etc buffers are allocated by the app/DRI client. If two GLX clients render to the same double-buffered GLX window, each is going to have a different/private back color buffer, right? That doesn't really obey the GLX spec.
The renderbuffers which compose a GLX drawable should be accessible/shared by any number of separate GLX clients (like an X window is shared by multiple X clients). I guess I want to know what this really means in practice. Suppose 2 clients render to the same backbuffer in a race starting at time=0, doing something straightforward like (clear, draw, swapbuffers). There's nothing in the spec that says to me that they actually have to have been rendering to the same surface in memory, because the serialization could just be (clear-a, draw-a, swap-a, clear-b, draw-b, swap-b), so that potentially only one client's rendering ends up visible. So I would say that at least between a fullscreen clear and either swap-buffers or some appropriate flush (glXWaitGL ??), we can treat the rendering operations as atomic and have a lot of flexibility in terms of how we schedule actual rendering and whether we actually share a buffer or not. Note that swapbuffers is as good as a clear from this perspective, as it can leave the backbuffer in an undefined state.

On the other hand, a pair of purposely-written programs could clear-a, draw-a, draw-b, swap-b and the results should be coherent. That's how I read the spec.

I'm not just splitting hairs for no good reason - the ability for the 3d driver to know the size of the window it is rendering to while it is emitting commands, and to know that it won't change size until it is ready for it to, is really crucial to building a solid driver.

Agreed.

The trouble with sharing a backbuffer is what to do about the situation where two clients end up with different ideas about what size the buffer should be. So, if it is necessary to share backbuffers, then what I'm saying is that it's also necessary to dig into the real details of the spec and figure out how to avoid having the drivers being forced to change the size of their backbuffer halfway through rendering a frame.
I see a few options:

0) The old DRI semantics - buffers change shape whenever they feel like it, drivers are buggy, window resizes cause mis-rendered frames.

1) The current truly private backbuffer semantics - clean drivers but some deviation from GLX specs - maybe less deviation than we actually think.

2) Alternate semantics where the X server allocates the buffers but drivers just throw away frames when they find the buffer has changed shape at the end of rendering. I'm sure this would be nonconformant; at any rate it seems nasty. (The i915 swz driver is forced to do this.)

3) Share buffers with a reference counting scheme. When a client notices the buffer needs a resize, do the resize and adjust refcounts - other clients continue with the older buffer. What happens when a client on the older buffer calls swapbuffers -- I'm sure we can figure out what the correct behaviour should be.

I don't know the answers to this either. There are probably very few, if any, GLX programs in existence
Re: Merging DRI interface changes
On 10/11/07, Keith Whitwell [EMAIL PROTECTED] wrote: Brian Paul wrote: ... If two GLX clients render to the same double-buffered GLX window, each is going to have a different/private back color buffer, right? That doesn't really obey the GLX spec. The renderbuffers which compose a GLX drawable should be accessible/shared by any number of separate GLX clients (like an X window is shared by multiple X clients). I guess I want to know what this really means in practice. Suppose 2 clients render to the same backbuffer in a race starting at time=0, doing something straightforward like (clear, draw, swapbuffers) there's nothing in the spec that says to me that they actually have to have been rendering to the same surface in memory, because the serialization could just be (clear-a, draw-a, swap-a, clear-b, draw-b, swap-b) so that potentially only one client's rendering ends up visible. I've read the GLX specification a number of times to try to figure this out. It is very vague, but the only way I can make sense of multiple clients rendering to the same drawable is if they coordinate between them somehow. Maybe the scenegraph is split between several processes: one client draws the backdrop, then passes a token to another process which then draws the player characters, and then a third draws a head up display, calls glXSwapBuffers() and passes the token back to the first process. Or maybe they render in parallel, but to different areas of the drawable, synchronize when they're all done and then one does glXSwapBuffers() and they start over on the next frame. ... So, if it is necessary to share backbuffers, then what I'm saying is that it's also necessary to dig into the real details of the spec and figure out how to avoid having the drivers being forced to change the size of their backbuffer halfway through rendering a frame. This is a bigger issue to figure out than the shared buffer one. 
I know you're looking to reduce the number of changing factors during rendering (clip rects, buffer sizes and locations), but the driver needs to be able to pick up new buffers in a few more places than just swap buffers. But I think we agree that we can add that polling in a couple of places in the driver (before starting a new batch buffer, on flush, and maybe other places) and it should work.

I see a few options:

0) The old DRI semantics - buffers change shape whenever they feel like it, drivers are buggy, window resizes cause mis-rendered frames.

1) The current truly private backbuffer semantics - clean drivers but some deviation from GLX specs - maybe less deviation than we actually think.

2) Alternate semantics where the X server allocates the buffers but drivers just throw away frames when they find the buffer has changed shape at the end of rendering. I'm sure this would be nonconformant, at any rate it seems nasty. (i915 swz driver is forced to do this).

3) Share buffers with a reference counting scheme. When a client notices the buffer needs a resize, do the resize and adjust refcounts - other clients continue with the older buffer. What happens when a client on the older buffer calls swapbuffers -- I'm sure we can figure out what the correct behaviour should be.

3) Sounds like the best solution, and it's basically what I'm proposing. For the first implementation (pre-gallium), I'm looking to just reuse the existing getDrawableInfo polling for detecting whether new buffers are available. It won't be more or less broken than the current SAREA scheme. When gallium starts to land, we can fine-tune the polling to a few select points in the driver. The DRI driver interface changes I'm proposing here should not be affected by these issues, though. Detecting that the buffers changed and allocating and attaching new ones is entirely between the DRI driver and the DRM.
When we're ready to add the TTM functionality to a driver we add the new createNewScreen entry point I mentioned and that's all we need to change. So, in other words, I believe we can move forward with this merge while we figure out the semantics of the resizing-while-rendering case. Kristian
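The polling scheme discussed above - pick up new buffers only at safe points such as the start of a new batch buffer or a flush, rather than mid-scene - might look something like this sketch. The stamp field and all names are assumptions for illustration; the real mechanism would query the DRM for the drawable's current buffers rather than copy a width and height.

```c
#include <assert.h>

/* Hypothetical per-drawable state published by the server/DRM. */
struct drawable_state {
    unsigned stamp;   /* bumped whenever the attached buffers change */
    int w, h;
};

/* Hypothetical driver-side view of the drawable. */
struct driver_context {
    unsigned last_stamp;  /* stamp at the last time we picked up buffers */
    int w, h;
};

/* Called only at atomic-region boundaries (new batch buffer, flush,
 * swapbuffers), never in the middle of emitting a frame. Returns 1 if
 * new buffers were picked up, 0 if nothing changed. */
static int poll_drawable(struct driver_context *ctx,
                         const struct drawable_state *dw)
{
    if (ctx->last_stamp == dw->stamp)
        return 0;             /* cheap common case: keep rendering */
    ctx->w = dw->w;           /* stand-in for re-querying the DRM for */
    ctx->h = dw->h;           /* the current set of attached buffers  */
    ctx->last_stamp = dw->stamp;
    return 1;
}
```

Because the check is a single integer compare, it is cheap enough to run before every batch; the expensive re-query only happens when a resize actually occurred.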
Re: Merging DRI interface changes
Allen Akin wrote: On Thu, Oct 11, 2007 at 10:35:28PM +0100, Keith Whitwell wrote: | Suppose 2 clients render to the same backbuffer... The (rare) cases in which I've seen this used, the clients are aware of one another, and restrict their rendering to non-overlapping portions of the drawable. A master client is responsible for swap and clear. I believe the intent of the spec was to allow CPU-bound apps to make use of multiple processors. Rendering to a single drawable, rather than multiple drawables, allowed swap to be synchronized. I recall discussions about ways to coordinate multiple command streams so that rendering to overlapping areas of the drawable could be handled effectively, but I don't remember any apps that used such methods. Allen

Allen, Just to clarify, would things look a bit like this:

Master:
  clear,
  glFlush,
  signal slaves somehow

Slave0..n:
  wait for signal,
  don't clear, just draw triangles
  glFlush
  signal master

Master:
  wait for all slaves
  glXSwapBuffers

This is fairly sensible and clearly requires a shared buffer. It's also quite a controlled situation that sidesteps some of the questions about what happens when two clients are issuing swapbuffers willy-nilly on the same drawable at the same time as the user is frantically resizing it... Keith
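Keith's master/slave sequence can be modelled with threads standing in for the cooperating GLX clients. This is purely illustrative: pthread barriers replace whatever signalling the real apps used, the "rendering" is just writes to disjoint array slots of one shared backbuffer, and "swap" is a flag, but the ordering - master clears, all slaves draw, master swaps only after every slave has finished - is the one sketched above.

```c
#define _POSIX_C_SOURCE 200112L
#include <assert.h>
#include <pthread.h>

#define NUM_SLAVES 3

static pthread_barrier_t start_draw, done_draw;
static int backbuffer[NUM_SLAVES];   /* one disjoint region per slave */
static int swapped;

/* Slave client: wait for the master's clear, draw into its own
 * non-overlapping region only, then signal completion. */
static void *slave(void *arg)
{
    int id = *(int *)arg;
    pthread_barrier_wait(&start_draw);  /* "wait for signal" */
    backbuffer[id] = id + 1;            /* "draw triangles" */
    pthread_barrier_wait(&done_draw);   /* "signal master" */
    return NULL;
}

/* Master client: clear, release the slaves, wait for them all,
 * then swap. Returns the summed backbuffer contents. */
static int run_frame(void)
{
    pthread_t t[NUM_SLAVES];
    int ids[NUM_SLAVES], i, sum = 0;

    pthread_barrier_init(&start_draw, NULL, NUM_SLAVES + 1);
    pthread_barrier_init(&done_draw, NULL, NUM_SLAVES + 1);
    for (i = 0; i < NUM_SLAVES; i++) {
        ids[i] = i;
        pthread_create(&t[i], NULL, slave, &ids[i]);
    }
    for (i = 0; i < NUM_SLAVES; i++)
        backbuffer[i] = 0;               /* master: "clear" */
    pthread_barrier_wait(&start_draw);   /* "signal slaves somehow" */
    pthread_barrier_wait(&done_draw);    /* "wait for all slaves" */
    swapped = 1;                         /* "glXSwapBuffers" stand-in */
    for (i = 0; i < NUM_SLAVES; i++)
        pthread_join(t[i], NULL);
    pthread_barrier_destroy(&start_draw);
    pthread_barrier_destroy(&done_draw);
    for (i = 0; i < NUM_SLAVES; i++)
        sum += backbuffer[i];
    return sum;
}
```

Because the slaves touch disjoint regions and only draw between the two barriers, no slave's rendering can race the clear or be lost by the swap, which is exactly why this pattern needs a genuinely shared backbuffer.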
Re: Merging DRI interface changes
Kristian Høgsberg wrote: On 10/11/07, Keith Whitwell [EMAIL PROTECTED] wrote: Brian Paul wrote: ... If two GLX clients render to the same double-buffered GLX window, each is going to have a different/private back color buffer, right? That doesn't really obey the GLX spec. The renderbuffers which compose a GLX drawable should be accessible/shared by any number of separate GLX clients (like an X window is shared by multiple X clients). I guess I want to know what this really means in practice. Suppose 2 clients render to the same backbuffer in a race starting at time=0, doing something straightforward like (clear, draw, swapbuffers) there's nothing in the spec that says to me that they actually have to have been rendering to the same surface in memory, because the serialization could just be (clear-a, draw-a, swap-a, clear-b, draw-b, swap-b) so that potentially only one client's rendering ends up visible. I've read the GLX specification a number of times to try to figure this out. It is very vague, but the only way I can make sense of multiple clients rendering to the same drawable is if they coordinate between them somehow. Maybe the scenegraph is split between several processes: one client draws the backdrop, then passes a token to another process which then draws the player characters, and then a third draws a head up display, calls glXSwapBuffers() and passes the token back to the first process. Or maybe they render in parallel, but to different areas of the drawable, synchronize when they're all done and then one does glXSwapBuffers() and they start over on the next frame. ... So, if it is necessary to share backbuffers, then what I'm saying is that it's also necessary to dig into the real details of the spec and figure out how to avoid having the drivers being forced to change the size of their backbuffer halfway through rendering a frame. This is a bigger issue to figure out than the shared buffer one. 
I know you're looking to reduce the number of changing factors during rendering (clip rects, buffer sizes and locations), but the driver needs to be able to pick up new buffers in a few more places than just swap buffers. But I think we agree that we can add that polling in a couple of places in the driver (before starting a new batch buffer, on flush, and maybe other places) and it should work.

Yes, there are a few places, but they are very few. Basically I think it is possible to cut a rendering stream up into chunks which are effectively atomic. Drivers do this all the time anyway - just by building a dma buffer that is then submitted atomically to hardware for processing. It isn't too hard to figure out where the boundaries of these regions are - if we think about a driver with effectively infinite dma space, then such a driver only flushes when required to satisfy requirements placed on it by the spec. I also believe that the only sane time to check the size of the destination drawable is when the driver is *entering* such an atomic region (let's call it a scene). Swapbuffers terminates a scene, it doesn't really start the next one - that doesn't happen until actual rendering starts. I would even say that fullscreen clears don't start a scene, but that's another story...

The things that terminate a scene are:
- swapbuffers
- readpixels and similar
- maybe glFlush() - though I'm sometimes naughty and no-op it for backbuffer rendering.

Basically any API-generated event that implies a flush. Internally generated events, like running out of some resource and having to fire buffers to recover, generally don't count.

I see a few options: 0) The old DRI semantics - buffers change shape whenever they feel like it, drivers are buggy, window resizes cause mis-rendered frames. 1) The current truly private backbuffer semantics - clean drivers but some deviation from GLX specs - maybe less deviation than we actually think.
2) Alternate semantics where the X server allocates the buffers but drivers just throw away frames when they find the buffer has changed shape at the end of rendering. I'm sure this would be nonconformant, at any rate it seems nasty. (i915 swz driver is forced to do this). 3) Share buffers with a reference counting scheme. When a client notices the buffer needs a resize, do the resize and adjust refcounts - other clients continue with the older buffer. What happens when a client on the older buffer calls swapbuffers -- I'm sure we can figure out what the correct behaviour should be.

3) Sounds like the best solution, and it's basically what I'm proposing. For the first implementation (pre-gallium), I'm looking to just reuse the existing getDrawableInfo polling for detecting whether new buffers are available. It won't be more or less broken than the current SAREA scheme. When gallium starts to land, we can fine-tune the polling
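Option 3's reference-counting scheme might be sketched like this. The structures are hypothetical and the sketch ignores locking, which a real multi-client implementation would need: each buffer "generation" carries a refcount, a client that notices a resize swaps in a fresh generation, and clients mid-frame keep rendering to (and eventually release) the old one.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical shared backbuffer generation. */
struct shared_buffer {
    int w, h;
    int refcount;
};

/* Latest generation; new clients pick this up. */
static struct shared_buffer *current;

/* A client starting a frame takes a reference on the current generation
 * and renders to that buffer for the whole frame, even if a resize
 * happens meanwhile. */
static struct shared_buffer *acquire(void)
{
    current->refcount++;
    return current;
}

/* Drop a reference at swapbuffers; returns 1 when this was the last
 * reference, meaning an old generation can now be freed. */
static int release(struct shared_buffer *b)
{
    return --b->refcount == 0;
}

/* A client that sees the drawable at a new size installs a fresh
 * generation; clients still holding references continue on the old one. */
static void resize(struct shared_buffer *newbuf, int w, int h)
{
    newbuf->w = w;
    newbuf->h = h;
    newbuf->refcount = 0;
    current = newbuf;
}
```

This is the part the thread leaves open: what a swapbuffers on the *old* generation should display is a policy question the sketch deliberately does not answer.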
Re: Merging DRI interface changes
On Fri, Oct 12, 2007 at 12:08:09AM +0100, Keith Whitwell wrote:
| Just to clarify, would things look a bit like this:
|
| Master:
| clear,
| glFlush,
| signal slaves somehow
|
| Slave0..n:
| wait for signal,
| don't clear, just draw triangles
| glFlush
| signal master
|
| Master:
| wait for all slaves
| glXSwapBuffers

Yes, more or less. As I look at it now, I wonder if the master really did a clear, or if the slaves simply drew background polygons over their respective regions. It's also possible that the swap guarantees a flush for commands queued by all the slaves, but I'm unsure of that without checking the spec.

| This is fairly sensible and clearly requires a shared buffer. It's also
| quite a controlled situation that sidesteps some of the questions about
| what happens when two clients are issuing swapbuffers willy-nilly on the
| same drawable at the same time as the user is frantically resizing it...

Right. Allen
cooperative rendering
I've checked in a new GLX test for rendering into one GLX window by two processes. See the comments in progs/xdemos/corender.c for instructions.

Two interlocking tori are drawn: the first process draws a red one, the second process draws a blue one.

I'm getting mixed results. With an old GeForce3-series card it seems to work perfectly. With a GeForce 7300 card there are rendering glitches: the blue torus isn't depth-buffered correctly. But if I insert an extra glClear(GL_DEPTH_BUFFER_BIT) it works, except for an occasional glitch. Pretty weird - the second clear shouldn't do what it does. See the attached image.

In both cases, rapid window resizes cause lots of window flickering, but no noticeable distortion of the rendering otherwise.

-Brian

inline: corender.png
Re: Merging DRI interface changes
On Thu, 2007-10-11 at 23:39 +0100, Keith Whitwell wrote:
> Maybe we're examining the wrong spec here. My concerns are all about
> what happens when the window changes size -- what does X tell us about
> the contents of a window under those circumstances? Does the GLX spec
> actually specify *anything* about this situation???

As Brian said, X knows exactly when the window changes size, and the contents of the window at resize are well specified by the protocol. As X requests are always atomic, and executed as some interleaving of the request streams from all of the clients, there are no partial resize states to deal with. Clients can always tell whether drawing occurred before or after a resize, as the resize events include the serial number of the most recently executed client request, indicating where in the client's request stream the resize occurred.

Making the resize asynchronous is a huge feature, as it means applications often end up repainting less than once per resize as you reshape the window with the window manager.

It sounds like the DRM needs to have an event queue for the X server to deliver resize events into that is outside the X protocol (and hence not subject to the whims of the application). I suspect the DRI extension will need a new request that causes the X server to post events to the DRM module.

Windows always contain their background in areas where expose events are delivered (again, the request serialization means this is always well defined in time). Backgrounds can consist of a single pixel value or an image to be tiled into the window, or they can be left as garbage (background None). This latter mode is often used to avoid flashing on the screen, but the actual contents of the window are not defined by the core protocol to be the parent contents in all cases. The Composite extension stands on its head to make the parent contents visible, though, so I suppose we have now defined these contents as the parent contents in all cases.
-- [EMAIL PROTECTED]
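The serial-number argument above reduces to a simple comparison: every X event carries the serial of the most recently executed request, so a client can classify its own drawing requests relative to a resize. A minimal sketch, assuming the resize serial comes from something like `XConfigureEvent.serial` (the function name is illustrative, and 32-bit serial wraparound is ignored here):

```c
#include <assert.h>

/* Requests are executed in serial order, so any request issued at or
 * before the serial the resize event reports was drawn into the
 * old-size window; anything after it was drawn into the new one. */
static int drew_before_resize(unsigned long draw_serial,
                              unsigned long resize_serial)
{
    return draw_serial <= resize_serial;
}
```

This is the property that makes the asynchronous resize tolerable: the client never needs a round trip to know which size its drawing applied to.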
Re: Merging DRI interface changes
On Fri, 2007-10-12 at 00:19 +0100, Keith Whitwell wrote:
> Basically any API-generated event that implies a flush. Internally
> generated events, like running out of some resource and having to fire
> buffers to recover, generally don't count.

If I understand this, then the only time you'll check for window resize is just before the next drawing occurs after one of these events. That makes a huge amount of sense to me, and limits polling to once per scene instead of once per batchbuffer. And we do all of this polling through the DRM, which would allow things other than the X server to send resize events for non-X buffers.

-- [EMAIL PROTECTED]
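The "poll once per scene" idea amounts to a one-bit state machine: flush-type events arm a flag, and only the first draw after an armed flag actually polls. A hedged sketch, where `polls` counts what would be real getDrawableInfo round trips and the struct/function names are invented for illustration:

```c
#include <assert.h>

/* Track whether the current scene has already checked for a resize;
 * only the first draw after an API-generated flush event re-polls. */
struct scene_state {
    int need_poll;  /* armed by flush-type events, cleared by a poll */
    int polls;      /* how many times we actually hit the server     */
};

/* glFlush, SwapBuffers, and similar API-generated events arm the
 * flag; internal events (resource exhaustion flushes) would not. */
static void on_flush(struct scene_state *s)
{
    s->need_poll = 1;
}

/* Called before every draw; the poll itself happens at most once
 * per scene, however many batchbuffers the scene emits. */
static void before_draw(struct scene_state *s)
{
    if (s->need_poll) {
        s->polls++;         /* the getDrawableInfo check goes here */
        s->need_poll = 0;
    }
}
```

However many draws follow, the server is only consulted once per flush boundary, which is exactly the once-per-scene instead of once-per-batchbuffer behaviour described above.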