On Tue, 2008-08-19 at 12:50 -0400, Kristian Høgsberg wrote:
> On Tue, Aug 19, 2008 at 6:57 AM, Michel Dänzer <[EMAIL PROTECTED]> wrote:
> > On Mon, 2008-08-18 at 15:30 -0400, Kristian Høgsberg wrote:
> >>
> >> I have pushed the DRI2 update to the dri2proto, mesa, xserver, and
> >> xf86-video-intel trees in ~krh. It's on the master branch in those
> >> repos.
> >
> > I don't see anything younger than 5 months in your xf86-video-intel
> > repo.
>
> Ah, forgot to push that one. Should be there now.
Yes, thanks.

> >> The way this works now is that when ctx->Driver.Viewport is called
> >> (and thus at least when binding a drawable to a context), the DRI
> >> driver calls back to the loader, which then calls into the DRI2
> >> module to get the buffers associated with the drawable. The DRI2
> >> module in turn calls into the DDX driver to allocate these and
> >> sends them back to the DRI driver, which updates the renderbuffers
> >> to use the given buffers.
> >
> > So after binding a drawable to a context, the buffer information will
> > only be updated when the app calls glViewport()? Any idea if this
> > scheme will be suitable for other APIs like GLES or OpenVG?
>
> Yes.
>
> GLES has the glViewport entrypoint with the same semantics.
>
> OpenVG doesn't seem to have a way to communicate the surface size to
> the library, but it relies on EGL for the surface and context
> management. However, the EGL API is "open ended", in that it relies
> on implementation-dependent types, specifically NativeWindowType,
> where we can add API to update the window size. For example, the
> NativeWindowType could be a DRIWindow in the MESA/DRI EGL
> implementation that you create and pass to eglCreateWindowSurface(),
> and applications are required to call DRIWindowSetSize whenever the
> underlying window changes size.

So EGL applications would need Mesa/DRI(2) specific code to work
correctly wrt window resizes? (A sketch of what that could look like
follows at the end of this exchange.)

> >> When glXSwapBuffers is called, the loader calls into the DRI driver
> >> to finish the frame (this part is missing currently) and then calls
> >> into the DRI2 module to actually do the back buffer to front buffer
> >> copy. The DRI2 module again implements this using a hook into the
> >> DDX driver. The code right now just does a generic CopyArea, and
> >> then flushes the batch buffer. SwapBuffers needs to be a round trip
> >> so that the swap buffer commands are emitted before the DRI driver
> >> proceeds to render the next frame.
> >
> > Making SwapBuffers a round trip isn't ideal for sync to vblank
> > (especially considering potentially using triple buffering, but see
> > below).
>
> Are you thinking that the DRI client will do the wait-for-vblank and
> then post the swap buffer request? That's clearly not feasible, but
> my thinking was that the waiting will be done in the X server, thus
> the flags argument to DRI2SwapBuffers.

BTW, I don't think flags will cut it for synchronization purposes. The
interface would probably need to be something like the
DRM_I915_VBLANK_SWAP ioctl interface, i.e. specify the CRTC to
synchronize to and a target sequence number, and return the sequence
number when the swap is expected to take effect. (Of course, there
would need to be a way to specify 'just do it ASAP' to satisfy the
glxgears framerate fetishists ;) A possible wire format is sketched
below.

> What I think we should do here is to use a dedicated command queue for
> the front buffer and block that on vblank using a hw wait instruction.
> This lets us just fire and forget swap buffer commands, since any
> cliprect changes and associated window contents blits end up in the
> front buffer queue after the buffer swap commands. It does mean that
> all other rendering to the front buffer (very little, most toolkits
> are double buffered) gets delayed a bit (at most a frame), but hey,
> that's a feature, and rendering *throughput* isn't affected. Toolkits
> double buffering to a pixmap and double buffered GLX apps can run
> unhindered.

Except when there are dependencies between frontbuffer and other
rendering? Anyway, this isn't my concern. It's that the client should
not have to block any earlier than when it needs to render to a buffer
with a pending buffer swap. Now you guys probably know better than me
how best to achieve that with the X protocol, but I don't think the
current SwapBuffers interface cuts it, does it? Or if I'm
misunderstanding, what exactly does 'round trip' refer to? :)
> > Also, it looks like currently every glXCopySubBufferMESA() call is a
> > roundtrip as well, which might incur a noticeable hit with compiz.
>
> Yeah, that's a problem, but the glXCopySubBufferMESA() API doesn't let
> us do much better - maybe we need glXCopyBufferRegionMESA()?

So long as the DRI2 SwapBuffers interface isn't synchronous, it should
be fine I think.

> > About triple buffering, AFAICT this scheme makes that impossible, as
> > well as implementing buffer swaps by just flipping the front and back
> > buffers, because the clients won't know the mapping from API buffers
> > to memory buffers changed.
>
> One thing I had in mind was that DRI2SwapBuffers could return the new
> color buffer, or maybe it should just return the full new set of
> buffers always. This lets us do page flips for fullscreen
> non-redirected windows and for redirected windows (just swap the
> pixmaps).

Yeah, something like that, but how will other clients using the same
drawable (e.g. direct rendering GLX_EXT_texture_from_pixmap) get
notified of the change?

> > Have you considered any other schemes, e.g. some kind of event
> > triggered when a buffer swap actually takes effect, and which
> > includes information about the new mapping from API buffers to
> > memory buffers? Or is the idea to just leave any advanced
> > SwapBuffers schemes to the drivers?
>
> Right, the problem with triple buffering is that once we schedule a
> swap, we don't know when the previous swap is finished and we can
> start rendering again. Is it actually different from the regular
> double buffer case though? You still need to block the client, which
> we can just do by delaying the reply from DRI2SwapBuffers. In the
> triple buffering case you just have an extra buffer and you're
> blocking on the previous buffer swap instead of the current one.
>
> Do we need to split DRI2SwapBuffers into two requests? One async
> request that schedules the swap and one that blocks the client and
> waits for the swap to complete and returns the new buffers? This
> would allow clients to do non-blocking swaps, then load and decode
> the next frame of video, say, and only after that block on the swap
> to complete...

Right, something like that would be better I think. The question is,
what exactly should the latter synchronize to? Waiting for the swap to
be emitted to the hardware isn't sufficient for subsequent software
rendering, but waiting for the swap to be executed by the hardware is
excessive for subsequent hardware rendering. Maybe SwapBuffers could
just return some sort of implementation-specific fence handle that can
be used for either kind of synchronization. (A sketch of that split
follows below.)
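Here is a compilable sketch of how the two-request split discussed
above could look from the client side, with the fence-handle idea
folded into the blocking half. DRI2ScheduleSwap, DRI2WaitSwap and
DRI2Fence are hypothetical names; the stubs stand in for the actual
protocol round trips.

  #include <stdio.h>

  typedef unsigned int DRI2Fence; /* implementation-specific handle */

  /* Stub for the async half: in reality this sends the swap request
   * without waiting for a reply. */
  static void DRI2ScheduleSwap(unsigned int drawable)
  {
      printf("scheduled swap for drawable 0x%x\n", drawable);
  }

  /* Stub for the blocking half: in reality this waits until the client
   * may render again, and returns the (possibly flipped) buffer set
   * plus a fence for finer-grained synchronization. */
  static DRI2Fence DRI2WaitSwap(unsigned int drawable)
  {
      printf("swap complete for drawable 0x%x\n", drawable);
      return 1; /* pretend fence handle */
  }

  static void decode_next_video_frame(void) { /* CPU-side work */ }

  int main(void)
  {
      unsigned int drawable = 0x400021;
      DRI2Fence fence;

      DRI2ScheduleSwap(drawable);  /* fire and forget, no reply needed */
      decode_next_video_frame();   /* overlap CPU work with the swap */

      fence = DRI2WaitSwap(drawable); /* block only when we must render */

      /* Hardware rendering could be queued now; software rendering
       * would first wait on 'fence' until the swap has executed. */
      (void) fence;
      return 0;
  }

The fence lets each client pick the synchronization point it actually
needs: hardware rendering can be queued as soon as DRI2WaitSwap
returns, while software rendering waits on the fence first.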
-- 
Earthling Michel Dänzer           |          http://tungstengraphics.com
Libre software enthusiast         |          Debian, X and DRI developer