In general, the problem with the superioctl returning 'fail' is that the client
then has to go back in time and figure out what the state preamble would have
been at the start of the batchbuffer.  Of course, the easiest way to do this is
to actually precompute the preamble at batchbuffer start time and store it in
case the superioctl fails -- in which case, why not pass it to the kernel along
with the rest of the batchbuffer and have the kernel decide whether or not to
play it?
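A minimal sketch of that idea, with entirely hypothetical names (`fake_super_submit`, `fake_kernel_submit` are illustrative, not a real DRM ABI): the client hands the precomputed preamble to the kernel alongside the batch, and the kernel plays it only when the submitting context is not the one currently live on the hardware.

```c
/* Hypothetical sketch -- names are illustrative, not a real DRM ABI.
 * The client precomputes the state preamble at batchbuffer start time
 * and passes it with the batch; the kernel decides whether to play it. */
#include <stdint.h>

struct fake_super_submit {
    uint64_t batch;          /* GPU address of the batchbuffer */
    uint32_t batch_len;
    uint64_t preamble;       /* precomputed state-restore batch */
    uint32_t preamble_len;   /* 0 if the client didn't provide one */
    uint32_t context;        /* submitting context's handle */
};

/* Mock of the kernel side: play the preamble only when the submitting
 * context is not the one currently live on the hardware. */
static uint32_t fake_hw_context; /* context last active on the hardware */

static int fake_kernel_submit(const struct fake_super_submit *s,
                              int *played_preamble)
{
    *played_preamble = 0;
    if (fake_hw_context != s->context) {
        if (!s->preamble_len)
            return -1;          /* no preamble: fail back to userspace */
        *played_preamble = 1;   /* kernel plays the preamble itself */
        fake_hw_context = s->context;
    }
    /* ... queue s->batch to the ring here ... */
    return 0;
}
```

The point of the sketch is that the failure path disappears for clients that always supply a preamble: the kernel only falls back to an error when it has no preamble to play.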

Keith

----- Original Message ----
From: Kristian Høgsberg <[EMAIL PROTECTED]>
To: Keith Packard <[EMAIL PROTECTED]>
Cc: Jerome Glisse <[EMAIL PROTECTED]>; Dave Airlie <[EMAIL PROTECTED]>; 
[email protected]; Keith Whitwell <[EMAIL PROTECTED]>
Sent: Tuesday, November 27, 2007 8:44:48 PM
Subject: Re: DRI2 and lock-less operation


On Nov 27, 2007 2:30 PM, Keith Packard <[EMAIL PROTECTED]> wrote:
...
> >   In both cases, the kernel will need to
> > know how to activate a given context, and the context handle should be
> > part of the super ioctl arguments.
>
> We needn't expose the contexts to user-space, just provide a virtual
> consistent device and manage contexts in the kernel. We could add the
> ability to manage contexts from user space for cases where that makes
> sense (like, perhaps, in the X server where a context per client may be
> useful).

Oh, right, we don't need one per GLContext, just one per DRI client; mesa
handles switching between GL contexts.  What about multithreaded
rendering sharing the same drm fd?

> > I imagine one optimization you could do with a fixed number of contexts
> > is to assume that losing the context will be very rare, and just fail
> > the super-ioctl when it happens, and then expect user space to
> > resubmit with a state emission preamble.  In fact it may work well for
> > single context hardware...
>
> I recall having the same discussion in the past; having the superioctl
> fail so that the client needn't constantly compute the full state
> restore on every submission may be a performance win for some hardware.
> All that this requires is a flag to the kernel that says 'this
> submission reinitializes the hardware', and an error return that says
> 'lost context'.

Exactly.

> > But the super-ioctl is chipset specific and we can decide on the
> > details there on a chipset-to-chipset basis.  If you have input on how
> > the super-ioctl for intel should look to support lockless
> > operation for current and future intel chipsets, I'd love to hear it.
> > And of course we can version our way out of trouble as a last resort.
>
> We should do the lockless and context stuff at the same time; doing
> re-submit would be a bunch of work in the driver that would be wasted.

Is it that bad?  We will still need the resubmit code for older
chipsets that don't have hardware context support.  The drivers
already have the code to emit state in case of context loss; that code
just needs to be repurposed to generate a batch buffer to prepend to
the rendering code.  And the rendering code doesn't need to change
when resubmitting.  Will that work?
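The resubmit path for context-less hardware could be sketched like this, with a mock kernel and made-up names (`FAKE_ELOSTCONTEXT`, `fake_submit` are stand-ins, not a real DRM interface): try the cheap submission first, and only on a 'lost context' error resubmit with the precomputed state-emission preamble prepended.

```c
/* Sketch of the resubmit path, assuming a hypothetical error code and
 * submit function; not a real DRM interface. */
#include <stddef.h>

#define FAKE_ELOSTCONTEXT 1   /* stand-in for a "lost context" error */

/* Mock kernel: fails the first submission after a context loss unless
 * the caller marked the submission as reinitializing the hardware. */
static int fake_context_lost = 1;

static int fake_submit(const void *batch, size_t len, int reinit)
{
    if (fake_context_lost && !reinit)
        return -FAKE_ELOSTCONTEXT;
    fake_context_lost = 0;
    return 0;
}

/* Driver side: try the cheap submission first; on "lost context",
 * resubmit with the state preamble prepended (reduced here to setting
 * the reinit flag -- the rendering code itself is unchanged). */
static int submit_with_retry(const void *batch, size_t len)
{
    int ret = fake_submit(batch, len, 0);
    if (ret == -FAKE_ELOSTCONTEXT)
        ret = fake_submit(batch, len, 1);  /* preamble + batch */
    return ret;
}
```

Note the batch buffer passed on retry is byte-for-byte the one that failed; only the preamble in front of it is new, which is what keeps the resubmit path cheap.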

> Right now, we're just trying to get 965 running with ttm; once that's
> limping along, figuring out how to make it reasonable will be the next
> task, and hardware context support is clearly a big part of that.

Yeah - I'm trying to limit the scope of DRI2 so that we can have
redirected direct rendering and GLX 1.4 in the tree sooner rather than
later (before the end of the year).  Maybe the best way to do that is
to keep the lock around for now and punt on the lock-less super-ioctl
if that really blocks on hardware context support.  How far back are
hardware contexts supported?

Kristian



