glxgears locks up MDK 9.2 with Radeon 9200
Hi all, I just installed MDK 9.2, and as the agpgart module seems OK with my nForce2 chipset, I also loaded DRI. All logs are OK and the DRI modules load correctly, but if I launch glxgears it locks up solid (only a hard reset gets rid of it). I use the radeon driver shipped with MDK 9.2 for my Radeon 9200 (from Chaintech, IIRC). Have you heard of anything similar? I imagine this could be because agpgart is not really happy with the nForce2 AGP feature.

Bye
Manu

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel
Re: Adios amigos!
Raymond Jennings wrote: I fear that my membership has not been too helpful. I'm just a newbie, and XFree86 is _way_ out of my league. Therefore, I have unsubscribed. If anyone wants me to get back on, please reply privately at [EMAIL PROTECTED] (I am no longer listed at XFree86.org). If I get requests to do so, I'll join back on.

You should not think that being a newbie prevents you from doing anything useful for XFree86 (or any project thereof). You could, for example, run some tests with the latest snapshots, or whatever task fits your skills. Sure, you have to be prepared to be shaken up sometimes by more skilled people, but if you overcome that (and hopefully it won't happen too frequently ;-) you can get a lot done for Free Software, which will certainly make you happy (at least it makes me happy every time I have the impression of improving things ;-)

BTW: you (and the skilled people here too, who are welcome to post whatever they want done, like testing certain drivers or features) should have a look at the xwin.org wiki, especially XJANITOR, where there are docs on getting XFree86 from CVS and compiling it (not easy, a little risky, but hey, it's testing ;-)

Bye
Manu
Re: Kernel Module? On second thought...
Mike A. Harris wrote: On Fri, 17 Oct 2003, David Fox wrote: I think that the wisest approach is, instead of suggesting a kernel module to the XFree86 folks, you do two things. First, suggest a kernel module to the Linux folks that implements a protocol for accessing the resource you are trying to use. Then you go to the XFree86 folks and suggest a module to utilize that protocol in the X server. [snip] What _specific_ area of XFree86 performance are you (or anyone else) thinking needs improvement, and what solutions have you investigated or even thought about which could improve this performance by modifying XFree86 itself, a driver, Mesa, or other userland code? If you do think the kernel might help for this problem, what steps have you taken to determine if that is truly reasonable, and have you tested your theory? Have you discussed that one small idea with other developers to see what they think about the alleged problem, whether it even really is a problem at all, how important it is, what other solutions there might be, etc. etc. etc.? All of this "let's stuff things in the kernel, because kernel code is automatically 2 times faster, right?" stuff gets boring fast. Show me the code.

I have no code to show ;-) but I have a case where a kernel module is unavoidable to get out of the status quo. M. Vojkovich asked about the use of sched_yield() in order to wait for the FIFO to process a big enough part of the DMA buffer before we try to refill it; his idea was to smartly release the CPU, because we know that the driver will have to wait quite a while before the GPU has processed the data. But we also need to get the CPU back fast enough to keep an unbroken stream of data fed to the GPU. Right now the driver busy loops (correct me if I'm wrong), which is a waste of CPU, and using sched_yield() on 2.6.x Linux kernels will lead to bad latency (basically we would reacquire the CPU very late).

The remaining alternative (at least according to R. Russell and R. M. Love) is a kernel module which can poll the DMA to see how much has already been sent to the GPU; when enough has been processed, this kernel module wakes up the driver in userland (using futexes). This approach could work for all these kinds of rather long waits where busy looping is not really smart (the assumption being that the wait can be long enough that we would burn our whole timeslice busy looping, which also hurts our interactivity score on newer Linux kernels, on top of having to wait until we reacquire the CPU). R. Russell seems willing to help if the XFree86 guys wanted to go this way. I must add that M. Vojkovich did not like the idea of a kernel module.

My 2 cents...

Bye
Manu
Re: Kernel Module? On second thought...
Emmanuel ALLAUD wrote: [snip: full message quoted above]

Hmm, replying to myself: I read that the DRI kernel module addresses certain issues with DMA/IRQ and others. Is there any good reason that would prevent 2D drivers from using the API provided by this module?

Bye
Manu
Re: What about a kernel module?
Mark Vojkovich wrote: On Wed, 8 Oct 2003, Emmanuel ALLAUD wrote: Juliusz Chroboczek wrote: I'd like to suggest that you implement device-specific code as a kernel module. Well, that won't happen; we already have working portable driver code in userspace, and there's no chance we'll port that to the Linux kernel. On the other hand, I do think that we'll end up using more kernel-side functionality than we currently do; perhaps someday we'll have enough of that to be able to run non-root servers, at least on hardware that does memory-mapped I/O (the iopl system call is for root only).

TR The key problem with this is that kernel modules are Linux-specific, and
TR further often need to be kernel-version specific. XFree86 runs quite well
TR in many non-Linux environments today.

But that doesn't prevent it from using features specific to Linux when needed. Notice for example the use of the vm86old syscall in the Linux/i386 version of the int10 module, or the (optional) use of fbdev in quite a few drivers, or the future use of the /dev/input/event devices (hint, hint). Let alone the DRI.

Implementing a kernel module might give access to more resources, like tighter console control, asynchronous accelerations,

TR No, I don't think any of that is true.

DMA? Smarter polling of FIFO status? Retrace interrupt? For this specific problem: I talked with M. Vojkovich about the yielding problem in that case, then we brought it to the Linux kernel guys (R. Russell and R. Love); both agreed that using sched_yield() would be really incorrect in 2.6, certainly due to too much latency. They also both agreed that the solution is to use futexes to synchronize the userspace driver with the help of a kernel side which would poll the FIFO status (or whatever other conditions we want to wait for). They seemed interested in supporting that, so perhaps a joint effort could be successful on both sides, so the kernel can provide new services the X server could use.

Bye
Manu

This is more of a hack than a solution. I still see little utility in a kernel module.

I agree that we have to think about the usefulness of a kernel module. But I am pretty sure there are other problems which can be solved with it; moreover, if the implementation is good, it could give us a clean API for hardware things (like DMA or whatever); think of /dev/rtc, as pointed out earlier in this thread. Anyway, I think it is worth trying, to see what it can achieve (by the way, does anybody know if the DRI people have the same problem with FIFO/DMA?).

Bye
Manu
Re: What about a kernel module?
Juliusz Chroboczek wrote: I'd like to suggest that you implement device-specific code as a kernel module. Well, that won't happen; we already have working portable driver code in userspace, and there's no chance we'll port that to the Linux kernel. On the other hand, I do think that we'll end up using more kernel-side functionality than we currently do; perhaps someday we'll have enough of that to be able to run non-root servers, at least on hardware that does memory-mapped I/O (the iopl system call is for root only).

TR The key problem with this is that kernel modules are Linux-specific, and
TR further often need to be kernel-version specific. XFree86 runs quite well
TR in many non-Linux environments today.

But that doesn't prevent it from using features specific to Linux when needed. Notice for example the use of the vm86old syscall in the Linux/i386 version of the int10 module, or the (optional) use of fbdev in quite a few drivers, or the future use of the /dev/input/event devices (hint, hint). Let alone the DRI.

Implementing a kernel module might give access to more resources, like tighter console control, asynchronous accelerations,

TR No, I don't think any of that is true.

DMA? Smarter polling of FIFO status? Retrace interrupt? For this specific problem: I talked with M. Vojkovich about the yielding problem in that case, then we brought it to the Linux kernel guys (R. Russell and R. Love); both agreed that using sched_yield() would be really incorrect in 2.6, certainly due to too much latency. They also both agreed that the solution is to use futexes to synchronize the userspace driver with the help of a kernel side which would poll the FIFO status (or whatever other conditions we want to wait for). They seemed interested in supporting that, so perhaps a joint effort could be successful on both sides, so the kernel can provide new services the X server could use.

Bye
Manu

___
Devel mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/devel
sched_yield futexes
I did not have news from my request on linux-kernel, but I came across this interview of Rusty Russell where he talks about futexes (URL: http://kerneltrap.org/node/view/892):

Rusty Russell: OK, a futex (Fast Userspace Mutex) isn't a mutex at all. It was in a (much) earlier implementation, but now it's just a waitqueue primitive accessible to userspace. FUTEX_WAKE says "wake anyone waiting on this address", and FUTEX_WAIT says "when I read ADDR, it contained VALUE: if it still contains that value when you look at it, sleep until someone calls FUTEX_WAKE on this address".

It's used as follows. Two processes (or threads) share some memory, and agree to use a part of it as a lock. 1 means unlocked, 0 means locked, -1 means locked and someone is waiting. To grab the lock, you do an (architecture-specific) atomic decrement operation: if it hits 0, then it was 1, and you've got the lock. Otherwise you wait for a while. When you want to release the lock, you set it to 1 again. This is really fast if there's no contention on the lock: not a system call in sight.

Before futexes, the "wait a while" was usually implemented as "try three times and then call yield()". The problem with yield is that it has no clear semantics: the kernel has *no* idea what you are waiting for, so it has to guess when to wake you up. In a word, yield() is always a hack. With futexes, you say "this address was -1 when I finished with it, sleep on it": the kernel atomically checks the value and either puts you to sleep or returns EAGAIN, and you try again. When you release a lock, if it's 0, you can simply increment it back to 1; otherwise, you set it to 1 and call FUTEX_WAKE, because you know someone is waiting. You can read some example code at http://www.kernel.org/pub/linux/kernel/people/rusty/futex-2.2.tar.gz, and of course in the current glibc sources.

The thing here is that I still don't know how we would use futexes...

Bye
Manu
Re: Exporting sched_yield to the drivers
Mark Vojkovich wrote: Can we at least agree to export an xf86Yield() function? It will be sched_yield() on whatever platforms support it, a no-op on others. If somebody comes up with something better to implement it with, then great.

I have no voice in that, but still: I think this would abstract the way to yield, which will depend highly on the OS. Perhaps this will need to be extended (I mean this function could take a parameter, to be able to yield more smartly).

Bye
Manu
Re: Exporting sched_yield to the drivers
--- Egbert Eich [EMAIL PROTECTED] wrote: Mark Vojkovich writes: Can we export to the drivers some function that yields the CPU? Currently a lot of drivers burn the CPU waiting for FIFOs, etc... usleep(0) is not good for this because it's jiffy based and usually never returns in less than 10 msec, which has the effect of making interactivity worse instead of better. I'm not sure which platforms don't export sched_yield() and which will need alternative implementations.

I've experimented with sched_yield in drivers when waiting for retrace on video playback. It turned out that when the system was otherwise idle the video performance was about the same. However, when another application was using its full time slice the video frame rate would drop far below the screen refresh rate. I've measured the latencies, and it turned out that the time elapsed before the process was rescheduled was considerably higher than the average time the process spent in the wait loop when the system was otherwise busy. Shorter (or variable) time slices may help here, but a better solution would be event-driven (i.e. interrupt-driven) rescheduling. This would require a small kernel support module for each driver, though.

This is perhaps a dumb idea, but could futexes help here? I don't know if they have an equivalent on OSes other than Linux.

Bye
Manu
Re: Exporting sched_yield to the drivers
--- Mark Vojkovich [EMAIL PROTECTED] wrote: On Tue, 23 Sep 2003, emmanuel ALLAUD wrote: This is perhaps a dumb idea, but could futexes help here? I don't know if they have an equivalent on OSes other than Linux.

I don't see how they help. The problem is that the X server is in a situation where it has to wait for something to complete before it can continue, and it knows that it will take a long time before that something completes. All it can do now is burn the CPU during that time. I'm assuming futexes are some interprocess mutex mechanism? If so, I don't see how this helps.

Yes, they are. Actually I am not familiar with them either, but if you had a way to wake up your waiting process ASAP when e.g. DMA is finished, instead of waiting for the OS scheduler to give you the CPU back (which can take very long), that would be good. The problem is that you must first have a way (an interrupt, I guess) of being told that the work is finished, which means something in the kernel (in the case of DMA, do you have an interrupt-driven way to know it's completed?). I think I raise more questions than I answer, sorry.

Bye
Manu
Re: Exporting sched_yield to the drivers
--- Mark Vojkovich [EMAIL PROTECTED] wrote: On Tue, 23 Sep 2003, emmanuel ALLAUD wrote: I'm assuming futexes are some interprocess mutex mechanism? If so, I don't see how this helps. Yes, they are. Actually I am not familiar with them either, but if you had a way to wake up your waiting process ASAP when e.g. DMA is finished, instead of waiting for the OS scheduler to give you the CPU back (which can take very long), that would be good. The problem is that you must first have a way (an interrupt, I guess) of being told that the work is finished, which means something in the kernel (in the case of DMA, do you have an interrupt-driven way to know it's completed?).

The nv driver is completely in user space and requires no kernel support. It should remain so due to portability concerns. Besides, I don't really want to know when DMA is finished. I want to know when there's enough free room in the DMA buffer so the CPU can continue putting data into it rather than polling for it. Yielding in my poll loop has been a good solution in other cases.

Yes, you're right, and then I am not sure this can be done (I don't think it is really possible to have a report from the DMA engine saying that the source buffer has enough free space). The problem with yielding is that you can have interactivity problems if the machine is loaded enough; at least that's what I have understood will happen with 2.6 Linux kernels. The problem is being able to tell the kernel: "OK, I know I have to wait, so I give the CPU back, but I am an interactive process and I don't want my CPU back in an hour ;-)" Perhaps there should be a "give my remaining time slice to someone else and then give me back the CPU" scheduler hint?

Bye
Manu
Re: Exporting sched_yield to the drivers
--- Mark Vojkovich [EMAIL PROTECTED] wrote: On Tue, 23 Sep 2003, emmanuel ALLAUD wrote: The problem with yielding is that you can have interactivity problems if the machine is loaded enough.

If you don't yield you have an interactivity problem. What good is keeping your time slices if all you're doing during them is busy waiting?

Sure ;-)

At least that's what I have understood will happen with 2.6 Linux kernels. The problem is being able to tell the kernel: "OK, I know I have to wait, so I give the CPU back, but I am an interactive process and I don't want my CPU back in an hour ;-)" Perhaps there should be a "give my remaining time slice to someone else and then give me back the CPU" scheduler hint?

I'm not sure the 2.6 behavior sounds significantly worse than the 2.4 behavior. I really don't understand what the big deal is. I don't want the rest of my slice! I have nothing to do with it except busy wait. Sure, it would be nice if the Linux kernel didn't suck in this way, and I consider the sched_yield() change a regression and a bug, but we make do with what we've got, and it's better than busy waiting.

I totally agree. My point here is: how will yielding compare with some other technique (yet to be found) in terms of interactivity? You don't want to have to wait too long after yielding just because other processes are crunching numbers or whatever. Actually, I think the kernel guys should be able to tell us the right way to achieve that. I know the scheduler is now smarter about interactivity (it tries to prioritize processes that look interactive, so a good test would be to see whether yielding gives us enough interactivity credit that we won't wait too long before reacquiring the CPU). I am not saying that yielding is bad, just that now that the question is raised, let's see if there is a better way to do it. BTW, should I bring the question to the kernel ML?

Bye
Manu
Re: i855 and 1400x1050
Hi, I read your mails on xfree-devel about your wiki page. I had actually also set one up via the xwin site. The link is http://xwin.org:9673/xwin/XJANITOR. There is a lot of material about compiling XFree86 and also about debugging X (using xscope, valgrind for the memory leaks and other oddities...). Should I just add a link somewhere on your wiki page to this one?

Bye
Manu
Arch specific optimizations?
Hi all, in the thread about the RENDER extension, it was mentioned that XFree86 was performing much slower (i.e. 2 or 3 times slower) than imlib2 (sorry, I don't really remember in which tasks). The reason seemed to boil down to the fact that imlib2 has arch-specific asm instructions (I think mostly for i586 via MMX or SSE or whatever) for certain crucial functions. My question is: why not do that also in XFree86 (borrowing/adapting code from image manipulation libs)? By carefully choosing the functions to optimize (that would mean only a few small functions, so that the maintenance stays as easy as possible) we should avoid too much mess, and we keep portability by using the old functions on arches with no specific optimizations. Does that sound reasonable?

Bye
Manu
Re: RENDER question
--- Thomas Winischhofer [EMAIL PROTECTED] wrote: Carsten Haitzler (The Rasterman) wrote: That is strange. Without acceleration, I get 9.7 1.7 2.3. Seems imlib uses the video RAM. Definitely not. imlib2 only uses system RAM - all its buffers are a direct result of malloc() :) imlib2 has no clue that video hardware exists :) In this case I must assume that the Debian package of imlib is compiled for i386 without any MMX/SSE/etc stuff...

Yes, I can check the queue location anytime. But doing this before every accelerator command slows down the whole thing dramatically. Hmm - so there's no simple counter? Maybe every N commands do a sync and find out where you're up to (or every N seconds, whichever comes first)? That's about what I do at the moment. Reading the counter is slow, because it's (of course) MMIO. I check the counter at every quarter boundary of the queue length. And I don't sync, but wait until the counter is outside of the quarter of the queue I am about to write to. That's (by far) the fastest method I have found so far.

But that doesn't mean X can't have MMX/SSE/SSE2/AltiVec routines too - it really isn't that much code. You only replace the core loop of the most common operations. Compared to X's current size, this is NOTHING. :) Deciding this is up to the people maintaining the XAA/whatever code... :) - and they won't be especially keen to play with this, thinking of the numerous platforms X is supposed to run on...

Sorry to pop up in the middle of the thread ;-) But could this be done for a few key functions, putting them all in one source file with the awkward ifdefs per arch? This could be a real performance win at the cost of only a few well-determined small functions with (awkward) ifdefs. The trade-off seems worthwhile, especially because you can already borrow the code from existing libs like imlib2 or others. I'm saying that without knowledge of the code in question, though ;-)

Bye
Manu
Re: XInput: The device type in XListInputDevices comes up again...
--- Bryan W. Headley [EMAIL PROTECTED] wrote: Egbert Eich wrote: Bryan W. Headley writes: [snip] Sorry, just telling you how it is now. hotplug listens to a kernel message layer and invokes shell scripts in response to events. These scripts can load/unload kernel device drivers, mount discs, etc. All these things would do is write a message to some kind of listener in X. Said listener would be an application... Architecting it any other way would put us very close to the Linux kernel and make it hard to abstract to other operating system implementations. I come from a messaging middleware background, so I'd prefer some kind of messaging API that interested clients would listen to, receiving plug/unplug messages. But Linux hotplug can't do it that way, because the components it deals with only understand command-line invocation and reinvocation (e.g., insmod is a command-line utility, not a daemon).

What makes hotplug interesting (both in a good and a bad way) is that it has a flat-file database of USB productId/vendorIds and the driver(s) to load. The kernel also has a list of devices, more along the lines of "these devices have quirks, and the normal algorithm for determining which device driver to assign to the hardware won't work... let individual drivers grab these devices if they are found." Department of Redundancy Department... :-) Knowing that the device has a USB id and using that on our side to keep a database of XFree86 drivers is a little shortsighted; for example, I don't know what USB will be replaced with next year, or what it'll use for its identification. Also, with the Aiptek tablets, there happen to be 6 separate IDs that match the kernel driver, as well as several clones. So the id becomes something like "Aiptek-like" or "Wacom-like".

The identification could be done in several attempts:

- First try: via the X config file. This method would override any other one, because as a last resort the user knows what he bought. It could be done just by a match against whatever information identifying the device is returned by the kernel on a plug event (called the device ID from now on). (The user won't know about it the first time, but I'll explain how to fix that below.)
- Second try: no section matches the device ID. No problem; let's look it up in a table like the kernel's (ambiguities should be possible here). If we find it and there is no ambiguity, we're done; else...
- Third try: if we don't find it, or there were ambiguities, just ask the user, telling him: "Device ... has been plugged in, tell me what it is among this list." And this can be added to the config file so that next time the first try will work.

Perhaps one problem with that approach is: if the user booted up with everything already plugged in, how do we get all the device IDs we need? Moreover, the lookup table and the parser can get quite hairy, but in a first pass we can do something really simplistic and refine it afterwards (this would be more user friendly, as X would be smarter and would not bother the user when it did not recognize her device).

Bye
Manu
Re: Rant (was Re: ATI Drivers.)
--- Kendall Bennett [EMAIL PROTECTED] wrote: David Dawes [EMAIL PROTECTED] wrote: Frankly, your own rants against XFree86 and some of its volunteers recently are no different from this. It sure left a bad taste in our mouths. There is a sickening propensity towards hostile and intimidating behaviour from several quarters, and it deserves the negative results it will surely achieve.

I have yet to receive an email from you backing up your claims that I have been ranting against XFree86 and some of its volunteers recently. Either back it up or offer me an apology. [snip the rest of the personal rant]

Guys, could you please finish this in private e-mails?

Bye
Manu