Hi all,

I've been working on the DRI2 implementation recently and I'm starting
to get a little confused, so I figured I'd throw a couple of questions
out to the list.  First off, I wrote up this summary shortly after XD

    http://wiki.x.org/wiki/DRI2

which upon re-reading is still pretty much up to date with what I'm
trying to do.  The buzzword summary from the page has

    * Lockless
    * Always private back buffers
    * No clip rects in DRI driver
    * No DDX driver part
    * Minimal X server part
    * Swap buffer and clip rects in kernel
    * No SAREA

I've implemented the DRI2 xserver module
(http://cgit.freedesktop.org/~krh/xserver/log/?h=dri2) and the new drm
ioctls that it uses
(http://cgit.freedesktop.org/~krh/drm/log/?h=dri2).  I did the DDX
part for the intel driver, and DRI2 initialization consists of calling
drmOpen (this is now up to the DDX driver), initializing the memory
manager, using it to allocate buffers, and then calling
DRI2ScreenInit(), passing in pScreen and the file descriptor.
Basically, all of i830_dri.c isn't used in this mode.

It's all delightfully simple, but I'm starting to reconsider whether
the "lockless" bullet point is realistic.   Note, the drawable lock is
gone, we always render to private back buffers and do swap buffers in
the kernel, so I'm "only" concerned with the DRI lock here.  The idea
is that since we have the memory manager and the super-ioctl and the X
server now can push cliprects into the kernel in one atomic operation,
we would be able to get rid of the DRI lock.  My overall question here
is: is that feasible?

I'm trying to figure out how context switches actually work... the DRI
lock is overloaded as context switcher, and there is code in the
kernel to call out to a chipset specific context switch routine when
the DRI lock is taken... but only ffb uses it... So I'm guessing the
way context switches work today is that the DRI driver grabs the lock
and after potentially updating the cliprects through X protocol, it
emits all the state it depends on to the card.  Is the state emission
done by just writing out a bunch of registers?  Is this how the X
server works too, except XAA/EXA acceleration doesn't depend on a lot
of state and thus the DDX driver can emit everything for each
operation?

How would this work if we didn't have a lock?  You can't emit the
state and then start rendering without a lock to keep the state in
place...  If the kernel doesn't restore any state, what's the point of
the drm_context_t we pass to the kernel in drmLock?  Should the kernel
know how to restore state (this ties in to the email from jglisse on
state tracking in drm and all the gallium jazz, I guess)?  How do we
identify state to the kernel, and how do we pass it in via the
super-ioctl?  Can we add a list of registers to be written and the
values?  I talked to Dave about it and we agreed that adding a
drm_context_t to the super-ioctl would work, but now I'm thinking that
if the kernel doesn't track any state, it won't really work.  Maybe
cross-client state sharing isn't useful for performance, as Keith and
Roland argue, but if the kernel doesn't restore state when it sees a
super-ioctl coming from a different context, who does?

Sorry for the question-blitz, and I'm sure some of those sound a bit
naive, as I'm not too familiar with the lower levels of many drivers.
But if we're planning to move away from the DRI lock in the near
future, we need to figure out what to do here.  Or maybe we're not getting
rid of the DRI lock anytime soon, but that would be a shame given that
we've got everything else lined up.

cheers,
Kristian

--
_______________________________________________
Dri-devel mailing list
Dri-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/dri-devel
