The DRI drawable spinlock

2005-04-22 Thread Thomas Hellstrom
Hi!
Does anybody have a clear understanding of the drawable spinlock?
From my reading of the code in the X server and dri_utilities.c it is meant 
to be used to stop anyone but the context holding the lock from touching the 
DRI drawables in a way that would change their timestamp.

The X server has a very inefficient way of checking whether a client 
died while holding the drawable spinlock. It waits for 10 seconds and 
then grabs it by force.

Also the usage in dri_util.c is beyond my understanding. Basically, to 
lock and validate drawable info, the following happens:

get_heavyweight_lock;
while drawable_stamps_mismatch {
  release_heavyweight_lock;
  get_drawable_spinlock;
   //In dri_util.c
  do_some_minor_processing_that_can_be_done_elsewhere;
  release_drawable_spinlock;
  call_X_to_update_drawable_info;
  get_drawable_spinlock;
  //In driver.
  release_drawable_spinlock;
}
Basically no driver seems to be using it for anything, except possibly 
the gamma driver, which I figure is outdated anyway?

I have found some use for it in XvMC clients: To run the scaling engine 
to render to a drawable without holding the heavyweight lock for 
prolonged periods, but I strongly dislike the idea of freezing the X 
server for 10 secs if the XvMC client accidentally dies.

Proposed changes:
1) Could we replace the locking value (which now seems to be 1) with 
the context number | _DRM_LOCK_HELD? That way the DRM can detect when 
the drawable lock is held by a killed client and release it.
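For 1), a minimal C sketch of the idea, using C11 atomics to stand in for the SAREA lock word; DRM_LOCK_HELD here is an illustrative constant mirroring drm.h's _DRM_LOCK_HELD bit, and all function names are hypothetical:

```c
#include <stdatomic.h>

/* Illustrative; the real bit is _DRM_LOCK_HELD in drm.h. */
#define DRM_LOCK_HELD 0x80000000u

/* Store the owning context's number alongside the "held" bit instead of
 * the bare value 1, so a cleanup path can tell WHICH context holds the
 * drawable lock and release it if that client has died. */
static atomic_uint drawable_lock;

/* Try to take the lock for context `ctx`; returns nonzero on success. */
static int drawable_lock_try(unsigned int ctx)
{
    unsigned int expected = 0;
    return atomic_compare_exchange_strong(&drawable_lock, &expected,
                                          ctx | DRM_LOCK_HELD);
}

/* Which context holds the lock (0 if free) - this is what lets the DRM
 * release the lock on behalf of a killed client. */
static unsigned int drawable_lock_owner(void)
{
    unsigned int v = atomic_load(&drawable_lock);
    return (v & DRM_LOCK_HELD) ? (v & ~DRM_LOCK_HELD) : 0;
}

/* Release only if `ctx` is still the owner. */
static void drawable_lock_release(unsigned int ctx)
{
    unsigned int expected = ctx | DRM_LOCK_HELD;
    atomic_compare_exchange_strong(&drawable_lock, &expected, 0);
}
```

On client death, the DRM could compare the stored context number against the dead client's contexts and clear the word, instead of the X server waiting 10 seconds and grabbing it by force.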

2) Could we replace the drawable_spinlock with a futex-like lock 
similar to what's used in the via DRM to reserve certain chip functions? 
The purpose would be to sched_yield() if the lock is contended, with an 
immediate wakeup when the lock is released. This would not be backwards 
binary compatible with other drivers, but it seems no up-to-date drivers 
are using this lock anyway.
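A sketch of what 2) might look like; this shows only the yield-on-contention part (a real futex would sleep in the kernel and get an explicit wakeup on release), and the names are illustrative, not the actual via DRM API:

```c
#include <sched.h>
#include <stdatomic.h>

/* Illustrative lock word; via's real locks live in its sarea. */
static atomic_int fn_lock;

static void fn_lock_acquire(void)
{
    int expected = 0;
    /* On contention, yield the CPU instead of spinning hot; a futex
     * version would FUTEX_WAIT here and be woken on release. */
    while (!atomic_compare_exchange_weak(&fn_lock, &expected, 1)) {
        expected = 0;
        sched_yield();
    }
}

static void fn_lock_release(void)
{
    atomic_store(&fn_lock, 0);
    /* A futex version would FUTEX_WAKE a waiter here. */
}
```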

/Thomas


---
SF email is sponsored by - The IT Product Guide
Read honest & candid reviews on hundreds of IT Products from real users.
Discover which products truly live up to the hype. Start reading now.
http://ads.osdn.com/?ad_id=6595&alloc_id=14396&op=click
--
___
Dri-devel mailing list
Dri-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/dri-devel


Re: The DRI drawable spinlock

2005-04-22 Thread Keith Whitwell
It's a horrible hack, basically.  I used to understand it but now no 
longer want to.  I think the original DRI documentation will have some 
information on it.

I think the original motivation was that for the gamma (and actually lots of 
subsequent drivers) you end up encoding the window position and 
cliprects in the command stream, and this lock somehow protected those 
command streams from being submitted to hardware with out-of-date window 
positions.

In fact if you go right back, the DRM supported multiple software 
command streams, where commands from the X server could be prioritized 
ahead of commands from 3d clients, so you could get the window-move 
blits being submitted to hardware ahead of 3d drawing commands to the 
old window position.

This created all sorts of amazing problems which have subsequently been 
avoided by simply submitting all commands to hardware in the order they 
are generated.

Right now, no driver uses the spinlock directly or explicitly, but they 
all use it when requesting their cliprects from the X server.

So basically the sequence goes:
	
	Client gets dri lock
	Client sees timestamp changed
	Client wants to talk to the X server to get new cliprects, but because 
client is holding the lock, the X server is unable to respond to the 
client.  So the client needs to release the lock to let the X server reply.
		-- But! (some set of circumstances might apply here), so there's a 
need for this funny other lock, that the client grabs before it releases 
the main dri lock, to stop the cliprects getting changed again in the 
meantime.

I don't see why you couldn't just do something like
get lock
while (timestamp mismatch) {
release lock
request new cliprects and timestamp
get lock
}
Note that this is the contended case only.  What's the worst that could 
happen - somebody's whizzing windows around and our 3d client sits in 
this loop for the duration.  Note that the loop includes X server 
communication so it's not going to suck up the cpu or anything drastic.
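The loop above could be sketched like this; get_lock()/release_lock() stand in for the heavyweight DRM lock, and the X server round trip is simulated by copying the server's published stamp (all names illustrative):

```c
/* Simulated state standing in for the SAREA drawable stamp and the
 * client's last-seen copy; names are illustrative. */
static int sarea_stamp = 3;   /* what the server has published */
static int my_stamp    = 0;   /* what this client last saw */

static void get_lock(void)     { /* drmGetLock() in a real client */ }
static void release_lock(void) { /* drmUnlock() */ }

/* Stands in for the DRI protocol round trip that fetches fresh
 * cliprects together with the stamp that matches them. */
static void request_cliprects_and_stamp(void)
{
    my_stamp = sarea_stamp;
}

static void validate_drawable(void)
{
    get_lock();
    while (my_stamp != sarea_stamp) {
        /* Must drop the lock: the X server can't answer the cliprect
         * request while this client holds it. */
        release_lock();
        request_cliprects_and_stamp();
        get_lock();
        /* Re-check: the stamp may have moved again before we re-locked. */
    }
    /* Lock held and cliprects current: safe to emit rendering here. */
    release_lock();
}
```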

In some respects it's like the code in the radeon DDX driver which uses 
a software path to perform depth buffer copying on window moves - well 
intentioned perhaps, but not actually useful or desirable.

Keith
Thomas Hellstrom wrote:
Hi!
Does anybody have a clear understanding of the drawable spinlock?
From my reading of the code in the X server and dri_utilities.c it is meant 
to be used to stop anyone but the context holding the lock from touching the 
DRI drawables in a way that would change their timestamp.

The X server has a very inefficient way of checking whether a client 
died while holding the drawable spinlock. It waits for 10 seconds and 
then grabs it by force.

Also the usage in dri_util.c is beyond my understanding. Basically, to 
lock and validate drawable info, the following happens:

get_heavyweight_lock;
while drawable_stamps_mismatch {
  release_heavyweight_lock;
  get_drawable_spinlock;
   //In dri_util.c
  do_some_minor_processing_that_can_be_done_elsewhere;
  release_drawable_spinlock;
  call_X_to_update_drawable_info;
  get_drawable_spinlock;
  //In driver.
  release_drawable_spinlock;
}
Basically no driver seems to be using it for anything, except possibly 
the gamma driver, which I figure is outdated anyway?

I have found some use for it in XvMC clients: To run the scaling engine 
to render to a drawable without holding the heavyweight lock for 
prolonged periods, but I strongly dislike the idea of freezing the X 
server for 10 secs if the XvMC client accidentally dies.

Proposed changes:
1) Could we replace the locking value (which now seems to be 1) with 
the context number | _DRM_LOCK_HELD? That way the DRM can detect when 
the drawable lock is held by a killed client and release it.

2) Could we replace the drawable_spinlock with a futex-like lock 
similar to what's used in the via DRM to reserve certain chip functions? 
The purpose would be to sched_yield() if the lock is contended, with an 
immediate wakeup when the lock is released. This would not be backwards 
binary compatible with other drivers, but it seems no up-to-date drivers 
are using this lock anyway.

/Thomas




[Bug 1707] r200 Radeon driver and Wings 3D

2005-04-22 Thread bugzilla-daemon
Please do not reply to this email: if you want to comment on the bug, go to
the URL shown below and enter your comments there.

https://bugs.freedesktop.org/show_bug.cgi?id=1707
--- Additional Comments From [EMAIL PROTECTED]  2005-04-22 02:52 ---
I don't suppose any of the r200 developers have any info about
R200_VF_PRIM_3VRT_POINTS and/or R200_VF_PRIM_3VRT_LINES so that I can fix this
for r300 without too much trouble?
--   
Configure bugmail: https://bugs.freedesktop.org/userprefs.cgi?tab=email 
 
--- You are receiving this mail because: ---
You are the assignee for the bug, or are watching the assignee.




Re: The DRI drawable spinlock

2005-04-22 Thread Thomas Hellström
Hi!
Keith Whitwell wrote:
get lock
while (timestamp mismatch) {
release lock
request new cliprects and timestamp
get lock
}
Note that this is the contended case only.  What's the worst that could 
happen - somebody's whizzing windows around and our 3d client sits in 
this loop for the duration.  Note that the loop includes X server 
communication so it's not going to suck up the cpu or anything drastic.
This is basically what I'm doing right now. The problem is how the code 
continues:

   get lock
   while (timestamp mismatch) {
   release lock
   request new cliprects and timestamp
   get lock
   }
   wait_for_device()
   render_to_scale_buffer()
   wait_for_device()
   render_to_back_buffer()
   wait_for_device()
   blit_to_screen()
   release_lock()
And, to avoid holding the lock while waiting for the device, since that 
blocks use of the decoder while I'm doing scaling operations, I'd like to

mark_scaling_device_busy()
get_drawable_lock()
get lock
while (timestamp mismatch) {
   release lock
   release_drawable_lock()
   request new cliprects and timestamp
   get_drawable_lock
   get lock
 }
release_lock()
wait_for_device()
get_lock()
render_to_scale_buffer()
release_lock()
wait_for_device()
get_lock()
render_to_back_buffer()
release_lock()
wait_for_device()
get_lock()
blit_to_screen()
release_lock()
mark_scaling_device_free()
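The interleaving above, as a small self-contained simulation; the only point it demonstrates is that every wait_for_device() happens with the heavyweight lock dropped, so other engines (e.g. the decoder) can submit DMA meanwhile (all names and the counting are illustrative):

```c
static int lock_held;          /* simulated heavyweight DRM lock */
static int waits_unlocked;     /* waits performed without the lock */

static void get_lock(void)     { lock_held = 1; }
static void release_lock(void) { lock_held = 0; }

/* Placeholder for waiting on the scaling engine; the whole point of the
 * interleaving is that the heavyweight lock is NOT held here. */
static void wait_for_device(void)
{
    if (!lock_held)
        waits_unlocked++;
}

static void render_step(void) { /* emit commands; needs the lock */ }

/* The scale-buffer / back-buffer / blit sequence from the mail,
 * with the lock dropped around each device wait. */
static void scaled_blit(void)
{
    for (int i = 0; i < 3; i++) {
        release_lock();
        wait_for_device();
        get_lock();
        render_step();
    }
    release_lock();
}
```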
/Thomas



Re: The DRI drawable spinlock

2005-04-22 Thread Keith Whitwell
Thomas Hellström wrote:
Hi!
Keith Whitwell wrote:
get lock
while (timestamp mismatch) {
release lock
request new cliprects and timestamp
get lock
}
Note that this is the contended case only.  What's the worst that could 
happen - somebody's whizzing windows around and our 3d client sits in 
this loop for the duration.  Note that the loop includes X server 
communication so it's not going to suck up the cpu or anything drastic.

This is basically what I'm doing right now. The problem is how the code 
continues:

   get lock
   while (timestamp mismatch) {
   release lock
   request new cliprects and timestamp
   get lock
   }
   wait_for_device()
   render_to_scale_buffer()
   wait_for_device()
   render_to_back_buffer()
   wait_for_device()
   blit_to_screen()
   release_lock()
And, to avoid holding the lock while waiting for the device, since that 
blocks use of the decoder while I'm doing scaling operations, I'd like to

mark_scaling_device_busy()
get_drawable_lock()
get lock
while (timestamp mismatch) {
   release lock
   release_drawable_lock()
   request new cliprects and timestamp
   get_drawable_lock
   get lock
 }
release_lock()
wait_for_device()
get_lock()
render_to_scale_buffer()
release_lock()
wait_for_device()
get_lock()
render_to_back_buffer()
release_lock()
wait_for_device()
get_lock()
blit_to_screen()
release_lock()
mark_scaling_device_free()
And then release_drawable_lock()?
What semantics are you hoping for from the drawable lock in your 
scenario above?  Just that the cliprects won't change while it is held?

Keith


Re: The DRI drawable spinlock

2005-04-22 Thread Thomas Hellström
Keith Whitwell wrote:
Thomas Hellström wrote:
Hi!
Keith Whitwell wrote:
get lock
while (timestamp mismatch) {
release lock
request new cliprects and timestamp
get lock
}
Note that this is the contended case only.  What's the worst that could 
happen - somebody's whizzing windows around and our 3d client sits in 
this loop for the duration.  Note that the loop includes X server 
communication so it's not going to suck up the cpu or anything drastic.

This is basically what I'm doing right now. The problem is how the 
code continues:

   get lock
   while (timestamp mismatch) {
   release lock
   request new cliprects and timestamp
   get lock
   }
   wait_for_device()
   render_to_scale_buffer()
   wait_for_device()
   render_to_back_buffer()
   wait_for_device()
   blit_to_screen()
   release_lock()
And, to avoid holding the lock while waiting for the device, since 
that blocks use of the decoder while I'm doing scaling operations, 
I'd like to

mark_scaling_device_busy()
get_drawable_lock()
get lock
while (timestamp mismatch) {
   release lock
   release_drawable_lock()
   request new cliprects and timestamp
   get_drawable_lock
   get lock
 }
release_lock()
wait_for_device()
get_lock()
render_to_scale_buffer()
release_lock()
wait_for_device()
get_lock()
render_to_back_buffer()
release_lock()
wait_for_device()
get_lock()
blit_to_screen()
release_lock()
mark_scaling_device_free()

And then release_drawable_lock()?
What semantics are you hoping for from the drawable lock in your 
scenario above?  Just that the cliprects won't change while it is held?
Exactly on both points, except the drawable_lock would have to be 
released before mark_scaling_device_free() to avoid deadlocks.

Keith




Re: The DRI drawable spinlock

2005-04-22 Thread Keith Whitwell
Thomas Hellstrom wrote:
Hi!
Does anybody have a clear understanding of the drawable spinlock?
From my reading of the code in the X server and dri_utilities.c it is meant 
to be used to stop anyone but the context holding the lock from touching the 
DRI drawables in a way that would change their timestamp.

The X server has a very inefficient way of checking whether a client 
died while holding the drawable spinlock. It waits for 10 seconds and 
then grabs it by force.

Also the usage in dri_util.c is beyond my understanding. Basically, to 
lock and validate drawable info, the following happens:

get_heavyweight_lock;
while drawable_stamps_mismatch {
  release_heavyweight_lock;
  get_drawable_spinlock;
   //In dri_util.c
  do_some_minor_processing_that_can_be_done_elsewhere;
  release_drawable_spinlock;
  call_X_to_update_drawable_info;
  get_drawable_spinlock;
  //In driver.
  release_drawable_spinlock;
}
Basically no driver seems to be using it for anything, except possibly 
the gamma driver, which I figure is outdated anyway?

I have found some use for it in XvMC clients: To run the scaling engine 
to render to a drawable without holding the heavyweight lock for 
prolonged periods, but I strongly dislike the idea of freezing the X 
server for 10 secs if the XvMC client accidentally dies.

Proposed changes:
1) Could we replace the locking value (which now seems to be 1) with 
the context number | _DRM_LOCK_HELD? That way the DRM can detect when 
the drawable lock is held by a killed client and release it.
This seems like a reasonable thing to do now.  Note that the X server 
could also be the one responsible for freeing the lock, which might be 
cleaner if the DRM is currently unaware of this beast.

2) Could we replace the drawable_spinlock with a futex-like lock 
similar to what's used in the via DRM to reserve certain chip functions? 
The purpose would be to sched_yield() if the lock is contended, with an 
immediate wakeup when the lock is released. This would not be backwards 
binary compatible with other drivers, but it seems no up-to-date drivers 
are using this lock anyway.
Maybe this is something to put off to be part of Ian's forthcoming 
binary-compatibility breakages for X.org 6.7?

Keith


Re: The DRI drawable spinlock

2005-04-22 Thread Keith Whitwell
Thomas Hellström wrote:
Keith Whitwell wrote:
Thomas Hellström wrote:
Hi!
Keith Whitwell wrote:
get lock
while (timestamp mismatch) {
release lock
request new cliprects and timestamp
get lock
}
Note that this is the contended case only.  What's the worst that could 
happen - somebody's whizzing windows around and our 3d client sits in 
this loop for the duration.  Note that the loop includes X server 
communication so it's not going to suck up the cpu or anything drastic.


This is basically what I'm doing right now. The problem is how the 
code continues:

   get lock
   while (timestamp mismatch) {
   release lock
   request new cliprects and timestamp
   get lock
   }
   wait_for_device()
   render_to_scale_buffer()
   wait_for_device()
   render_to_back_buffer()
   wait_for_device()
   blit_to_screen()
   release_lock()
And, to avoid holding the lock while waiting for the device, since 
that blocks use of the decoder while I'm doing scaling operations, 
I'd like to

mark_scaling_device_busy()
get_drawable_lock()
get lock
while (timestamp mismatch) {
   release lock
   release_drawable_lock()
   request new cliprects and timestamp
   get_drawable_lock
   get lock
 }
release_lock()
wait_for_device()
get_lock()
render_to_scale_buffer()
release_lock()
wait_for_device()
get_lock()
render_to_back_buffer()
release_lock()
wait_for_device()
get_lock()
blit_to_screen()
release_lock()
mark_scaling_device_free()

And then release_drawable_lock()?
What semantics are you hoping for from the drawable lock in your 
scenario above?  Just that the cliprects won't change while it is held?

Exactly on both points, except the drawable_lock would have to be 
released before mark_scaling_device_free() to avoid deadlocks.

So a few more questions:
1) Why (exactly) is keeping the cliprects from changing a concern?  What 
happens if they change between steps above?

2) Could the DDX driver blit the contents of these additional buffers 
(scale, back) at the same time it blits the frontbuffer so that the 
window change just works?

3) I don't think the drawable lock is a pretty thing; is it worth 
keeping it around for this?  Would some black areas or incomplete video 
frames during window moves be so bad?  Note that the next version of 
this hardware might have a proper command stream that just allows you to 
submit all those operations to hardware in a single go, and not have to 
do the waiting in the driver...

Keith


Re: The DRI drawable spinlock

2005-04-22 Thread Thomas Hellström
Keith Whitwell wrote:
Thomas Hellström wrote:
Keith Whitwell wrote:
Thomas Hellström wrote:
Hi!
Keith Whitwell wrote:
get lock
while (timestamp mismatch) {
release lock
request new cliprects and timestamp
get lock
}
Note that this is the contended case only.  What's the worst that could 
happen - somebody's whizzing windows around and our 3d client sits 
in this loop for the duration.  Note that the loop includes X 
server communication so it's not going to suck up the cpu or 
anything drastic.


This is basically what I'm doing right now. The problem is how the 
code continues:

   get lock
   while (timestamp mismatch) {
   release lock
   request new cliprects and timestamp
   get lock
   }
   wait_for_device()
   render_to_scale_buffer()
   wait_for_device()
   render_to_back_buffer()
   wait_for_device()
   blit_to_screen()
   release_lock()
And, to avoid holding the lock while waiting for the device, since 
that blocks use of the decoder while I'm doing scaling operations, 
I'd like to

mark_scaling_device_busy()
get_drawable_lock()
get lock
while (timestamp mismatch) {
   release lock
   release_drawable_lock()
   request new cliprects and timestamp
   get_drawable_lock
   get lock
 }
release_lock()
wait_for_device()
get_lock()
render_to_scale_buffer()
release_lock()
wait_for_device()
get_lock()
render_to_back_buffer()
release_lock()
wait_for_device()
get_lock()
blit_to_screen()
release_lock()
mark_scaling_device_free()


And then release_drawable_lock()?
What semantics are you hoping for from the drawable lock in your 
scenario above?  Just that the cliprects won't change while it is held?

Exactly on both points, except the drawable_lock would have to be 
released before mark_scaling_device_free() to avoid deadlocks.

So a few more questions:
1) Why (exactly) is keeping the cliprects from changing a concern?  
What happens if they change between steps above?
Not much, really. It all boils down to what to do if the per-drawable 
back buffer mismatches the drawable. In the simplest case one would 
simply skip the blit, which might even be better than attempting to 
match the old back buffer to a new drawable size. Problems really only 
occur if/when the drawable is resized. But since skipping the blit is a 
simple solution that still works, if not perfectly, I'm still 
considering it.

2) Could the DDX driver blit the contents of these additional buffers 
(scale, back) at the same time it blits the frontbuffer so that the 
window change just works?

You mean the front blitting during window moves? In this case it doesn't 
really apply, since the per-drawable back buffer would still be valid. 
Resizing would be the only operation causing problems.

3) I don't think the drawable lock is a pretty thing; is it worth 
keeping it around for this?  Would some black areas or incomplete 
video frames during window moves be so bad?  Note that the next 
version of this hardware might have a proper command stream that just 
allows you to submit all those operations to hardware in a single go, 
and not have to do the waiting in the driver...

I can see your point, but on the other hand if the command stream were 
that smart, it would in effect only implement a continuously held 
heavyweight lock, blocking all DMA submissions to the MPEG decoder and 
2D / 3D engine while the scaling engine is working, which is exactly 
what I'm trying to avoid.

The problem is really what to do when there are a lot of independent 
engines on a video chip, with a common command stream, numerous IRQ 
sources and one global hardware lock. I assume this will be more of a 
problem in the future. The solution using the drawable lock is not very 
clean. On the other hand, not being able to use the engines in 
parallel is not very efficient and is bad for interactivity.

I'm not sure what's the best design to solve this, but one idea would be 
having a futex-like lock and a breadcrumb pool for each engine, 
optionally also with an IRQ. This would be sufficient to

   * be able to independently submit DMA commands.
   * wait for engine idle independently on each engine without ever
 needing to wait for DMA quiescent.
   * hold the global hardware lock only during operations that render
 directly to the front buffer, or to a common back-buffer. The
 global lock would then effectively be a drawable lock.
   * Keep backwards compatibility, as simple architectures may choose
 to retain only the global lock.
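The breadcrumb part of that list could be sketched like this (illustrative names; a real implementation would have the engine itself, or its IRQ handler, store the breadcrumb value as each batch completes):

```c
#include <stdatomic.h>

/* One breadcrumb counter per engine.  The client bumps `emitted` when it
 * queues a batch ending in a "write breadcrumb" command; the hardware
 * stores that value into `breadcrumb_hw` on completion.  Waiting for ONE
 * engine then never requires DMA quiescence on the others. */
struct engine {
    atomic_uint breadcrumb_hw;  /* last value completed by the hardware */
    unsigned int emitted;       /* last value this client queued */
};

/* Queue a batch; returns the breadcrumb that will mark its completion. */
static unsigned int engine_emit(struct engine *e)
{
    return ++e->emitted;
}

/* Has this engine retired everything we emitted? */
static int engine_idle(struct engine *e)
{
    return atomic_load(&e->breadcrumb_hw) >= e->emitted;
}

/* Per-engine wait, without touching the global heavyweight lock. */
static void engine_wait_idle(struct engine *e)
{
    while (!engine_idle(e))
        ;  /* a real driver would sched_yield() or sleep on an IRQ here */
}
```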
Hmm, maybe for now I'll stick to the simple solution. :)
But I think a design that works around the 
single-lock-and-command-stream-multiple-engines problem will be needed 
in the not too far future.

/Thomas
Keith



[Bug 2418] Lockup using linux-core on radeon

2005-04-22 Thread bugzilla-daemon
Please do not reply to this email: if you want to comment on the bug, go to
the URL shown below and enter your comments there.

https://bugs.freedesktop.org/show_bug.cgi?id=2418
--- Additional Comments From [EMAIL PROTECTED]  2005-04-22 13:35 ---
Oops, did I say radeon_exit? I meant radeon_init has the same address as
drm_init. Obviously it never makes it to radeon_exit...




[Bug 2418] Lockup using linux-core on radeon

2005-04-22 Thread bugzilla-daemon
Please do not reply to this email: if you want to comment on the bug, go to
the URL shown below and enter your comments there.

https://bugs.freedesktop.org/show_bug.cgi?id=2418
--- Additional Comments From [EMAIL PROTECTED]  2005-04-22 13:33 ---
Actually, this is not specific to radeon. I tried insmod'ing other card-specific
modules and it also lockups.

Also, when I print the address for the radeon_exit function, it's the same as
drm_init. So it looks like when radeon_init is called, it lockups by calling
itself again and again.

Still there with current drm cvs (22/04/2005), btw.
  
 
 

