On Mon, Apr 20, 2009 at 02:19:19PM +1000, Peter Hutterer wrote:
> On Sun, Apr 19, 2009 at 09:07:29PM -0700, Aaron Plattner wrote:
> > Please don't.  Or at the very least, make this optional.  There are a
> > number of reasons why the graphics driver would get stuck for long
> > periods of time including ridiculous client requests (e.g. huge
> > convolution filters), graphics cards hitting thermal slowdown
> > thresholds, graphics "hardware" that's actually a simulation library,
> > the server being stopped in a debugger, etc.
> 
> Fair enough. Is there any way to "reliably" detect the difference?
> I don't think making this optional is particularly useful, so dropping it is
> probably better.

Why not just make the timeout _really_ long, e.g. about 15 sec? You could
make that cheap by just logging the timestamp of the event last popped
out of the queue and comparing it to the event you're trying to enqueue
(or rather, to the earliest timestamp of all the events still in the queue).
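
Something roughly like the below, as a sketch -- the names (EventQueue,
last_dequeue_ms, check_queue_stalled, etc.) are made up for illustration,
not actual server code:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define STALL_THRESHOLD_MS (15 * 1000)   /* the ~15 s suggested above */

typedef struct {
    uint32_t last_dequeue_ms;   /* timestamp of the event last popped */
    /* ... the actual ring buffer of pending events ... */
} EventQueue;

/* Record the timestamp whenever the main thread pops an event. */
static void
note_event_dequeued(EventQueue *q, uint32_t event_ms)
{
    q->last_dequeue_ms = event_ms;
}

/* Called when enqueuing a new event: if nothing has been popped for a
 * long time while events keep arriving, the server is probably stuck
 * (or stopped on purpose, e.g. in a debugger). */
static void
check_queue_stalled(const EventQueue *q, uint32_t new_event_ms)
{
    if (q->last_dequeue_ms != 0 &&
        new_event_ms - q->last_dequeue_ms > STALL_THRESHOLD_MS)
        fprintf(stderr, "input queue stalled for %" PRIu32 " ms\n",
                new_event_ms - q->last_dequeue_ms);
}

It's just a comparison done at enqueue time, so it doesn't need an
extra timer or wakeup.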

I guess you'd still need to make it command-line configurable for the
gdb case, but yeah.
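
E.g. something along these lines ("-stalltimeout" is a made-up option
name, not an existing flag), where passing 0 turns the check off
entirely for the debugger case:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Default matches the ~15 s suggestion above; 0 disables the check. */
static uint32_t stall_timeout_ms = 15 * 1000;

/* Parse a hypothetical "-stalltimeout <seconds>" from the server's
 * command line. */
static void
parse_stall_timeout(int argc, char *argv[])
{
    for (int i = 1; i + 1 < argc; i++) {
        if (strcmp(argv[i], "-stalltimeout") == 0)
            stall_timeout_ms = (uint32_t) (atoi(argv[i + 1]) * 1000);
    }
}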

Cheers,
Daniel
