Re: [Dri-devel] R200: new and exciting crash

2002-10-03 Thread Keith Whitwell

Andy Dustman wrote:
 I managed to get the r200 driver working again by doing a complete CVS
 install. Some notes:
 
 * The card does now seem to generate interrupts at about the same
 frequency as the current mode's vertical refresh.
 
 * Surprisingly (for me, at least), glxgears is now running at about
 2000+ fps and consuming about 70% CPU. I expected a lower frame rate
 (equal to vertical refresh) and minimal CPU usage, but my expectations
 may be unrealistic.
 
 * Quake3 still crashes after a few minutes, but with an error I haven't
 seen before:
 
 r200WaitForFrameCompletion: drmRadeonIrqWait: -16
 
 It did this on two separate occasions. The machine locks up, although I
 believe on one occasion SysRq still worked.
 
 * Setting R200_DEBUG=sanity and running glxgears *immediately* locks the
 machine hard with no output. 

Hmm.  What happens if you pipe the output to a file?

Keith
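For anyone trying this, capturing the sanity output could look like the following sketch (the redirection to a file is the point; the glxgears invocation is commented out here since it needs a running X display):

```shell
# R200_DEBUG is read by the r200 DRI driver; "sanity" enables the
# sanity-check dump. Capture it to a file so it survives a hard lock:
export R200_DEBUG=sanity
# glxgears > sanity.log 2>&1    # run the GL app like this
```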



---
This sf.net email is sponsored by:ThinkGeek
Welcome to geek heaven.
http://thinkgeek.com/sf
___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] R200: new and exciting crash

2002-10-03 Thread Michel Dänzer

On Thu, 2002-10-03 at 04:12, Andy Dustman wrote:
 I managed to get the r200 driver working again by doing a complete CVS
 install. Some notes:
 
 * The card does now seem to generate interrupts at about the same
 frequency as the current mode's vertical refresh.

It does generate one on each vertical blank.

 * Surprisingly (for me, at least), glxgears is now running at about
 2000+ fps and consuming about 70% CPU. I expected a lower frame rate
 (equal to vertical refresh) and minimal CPU usage, but my expectations
 may be unrealistic.

Set the LIBGL_THROTTLE_REFRESH environment variable to get that.
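A sketch of how that would be used, per the suggestion above (the variable is read by the DRI driver at startup; glxgears is commented out since it needs a display):

```shell
# Ask the DRI driver to synchronize buffer swaps with vertical refresh:
export LIBGL_THROTTLE_REFRESH=1
# glxgears    # frame rate should then be capped near the mode's refresh
```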

 * Quake3 still crashes after a few minutes, but with an error I haven't
 seen before:
 
 r200WaitForFrameCompletion: drmRadeonIrqWait: -16
 
 It did this on two separate occasions. The machine locks up, although I
 believe on one occasion SysRq still worked.

Try R200_NO_IRQS, although that error should normally just cause the app
to exit, not a lockup. The wait for the interrupt timing out may just be
a symptom of the lockup.


-- 
Earthling Michel Dänzer (MrCooper)/ Debian GNU/Linux (powerpc) developer
XFree86 and DRI project member   /  CS student, Free Software enthusiast






RE: [Dri-devel] ATI Radeon VE QY (AGP) new drivers (personal) problems

2002-10-03 Thread Michel Dänzer

On Don, 2002-10-03 at 01:52, thork wrote: 

 about the aperture thing, he told me those 8 MB were from system memory
 not from the video card memory. I found this in the log:
 (--) RADEON(0): VideoRAM: 65536 kByte (64-bit DDR SDRAM)
 and of course the other lines next:
 (II) RADEON(0): Using 8 MB AGP aperture
 but when I load the agpgart module it says:
 Sep 30 14:18:32 thork kernel: agpgart: AGP aperture is 64M @ 0xf800

This is the upper limit for the AGP aperture size the DRI can use.

 the 480 FPS in glxgears were using 24 bits of color; now the CVS build is
 giving me 500 FPS ... but come on! IT'S A RADEON 7000!

Yes, exactly. ;) No hardware TCL (well, actually some VEs do seem to
have that, you can try the RADEON_TCL_FORCE_ENABLE environment variable
if you're desperate for more fps, but be warned that it will lock up if
it doesn't work with your chip). Also, try to enable page flipping if
you haven't already.
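A sketch of trying the TCL override mentioned above (glxgears is commented out since it needs a display):

```shell
# WARNING (per the message above): this can hard-lock the machine if the
# chip doesn't actually support hardware TCL. Use at your own risk.
export RADEON_TCL_FORCE_ENABLE=1
# glxgears
```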


-- 
Earthling Michel Dänzer (MrCooper)/ Debian GNU/Linux (powerpc) developer
XFree86 and DRI project member   /  CS student, Free Software enthusiast






[Dri-devel] Re: [Dri-users] Mandrake 9 issues and solution (plus minor bug report).

2002-10-03 Thread Felix Kühling

Hi all,

D S and I discovered that Mandrake 9.0 includes Gatos code in their
Xserver. I think this is the reason for the problems Mandrake 9.0 users
are experiencing with binary snapshots. Up to now the only solution
seems to be to compile DRI yourself. A suitable custom binary snapshot
(if anyone wants to make one) would probably have to replace the 2D ati
drivers with clean (non-gatos) ones.

Best regards,
   Felix

On Thu, 03 Oct 2002 17:20:17 -0400
Robert Thomas [EMAIL PROTECTED] wrote:

 I've noticed some people saying that it's almost impossible to get DRI 
 running on Mandrake 9. Well, they're right. *Almost* impossible.
 
 The hardware:  Athlon 2000+ (1.6G). Radeon 8500 (Identified as 'ATI 
 Radeon 8500 QL'). Standard Mandrake .19-mdk-whatever kernel.
 
 DRI Does Not Work with Mandrake 9 out of the box. glxinfo will not say 
 Yes to direct rendering no matter what you do.
 
 The first issue was an inability to insert the kernel module, due to 
 unmet dependencies. I've had this happen before with mandrake Kernels, 
 so I installed a clean 2.4.19.  After recompiling, I re-inserted and 
 started on the long and strange saga of SegV's when starting X.
 
 After reading some hints on the users lists, it was mentioned that GCC 
 3.2 is shipped with mandrake, and for those that don't know, it's pretty 
 much binary-incompatible with things compiled with previous versions of GCC.
 
 Current State: Clean 2.4.19 kernel, gcc 3.2.
 
 I had to download the DRI X source, and compile and install the X 
 Server. The module inserted cleanly, and, X started up, to my surprise. 
 I was getting quite sick of having to reboot the machine 8-)
 
 I'm quite happy to package up a set of binaries if anyone wants them - 
 but, someone will need to tell me which, specific, binaries are going to 
 be needed 8-)
 
 The bug report I mentioned.  In the source package, 'xf86cfg' is trying 
 to link to libXpm.a -- this doesn't exist in the source, nor does it 
 seem to exist anywhere in the Mandrake distribution (much to my 
 surprise).  I ended up making an empty libXpm myself, and sticking it in 
 /usr/lib, and that seemed to work.
 
 (For those that want to do it themselves --
 root# touch foo.c
 root# gcc -c foo.c
 root# ar -r /usr/lib/libXpm.a foo.o
 )
 
 There are some C++ files that try to be compiled with 'c++' - there is 
 no such binary (this, I think, is a Mandrake-ism; there should be one) - 
 only g++.  Mandrake users:
 
 ln -s /etc/alternatives/g++ /etc/alternatives/c++
 
 fixes it.
 
 A 'make World' should then quite happily work, without any errors. A 
 'make install' installs everything in the right place, -except- for the 
 kernel module.
 
 ./build/xc/programs/Xserver/hw/xfree86/os-support/linux/drm/kernel/radeon.o
 (or whatever your kernel driver should be)
 
 should be copied to /lib/modules/2.4.19/kernel -- it doesn't matter 
 where you put it, depmod -a sorts it out.
 
 A 'startx' will then fire up X, and 'glxinfo' now reports:
 
 [root@linuxrob DRI]# glxinfo
 name of display: :0.0
 Loading required GL library /usr/X11R6/lib/libGL.so.1.2
 r200CreateScreen
 display: :0  screen: 0
 direct rendering: Yes
 server glx vendor string: SGI
 server glx version string: 1.2
 server glx extensions:
 [...etc etc etc...]
 
 [root@linuxrob DRI]# glxgears
 Loading required GL library /usr/X11R6/lib/libGL.so.1.2
 r200CreateScreen
 8001 frames in 5.0 seconds = 1600.200 FPS
 8123 frames in 5.0 seconds = 1624.600 FPS
 8146 frames in 5.0 seconds = 1629.200 FPS
 [root@linuxrob DRI]#
 
 Pre DRI, it was 500 FPS.
 
 I am -totally- unclued on Imakefiles, so I daren't even try to submit a 
 patch, but hopefully the information above will be enough to get people 
 up and working!
 
 --Rob
 
 
 
 
 


   __\|/_____ ___ ___
__Tschüß___\_6 6_/___/__ \___/__ \___/___\___You can do anything,___
_Felix___\Ä/\ \_\ \_\ \__U___just not everything
  [EMAIL PROTECTED]o__/   \___/   \___/at the same time!





Re: [Dri-devel] ATI Radeon VE QY (AGP) new drivers (personal) problems

2002-10-03 Thread Felix Kühling

On 03 Oct 2002 11:01:57 +0200
Michel Dänzer [EMAIL PROTECTED] wrote:

 On Thu, 2002-10-03 at 01:52, thork wrote: 
 
  about the aperture thing, he told me those 8 MB were from system memory
  not from the video card memory. I found this in the log:
  (--) RADEON(0): VideoRAM: 65536 kByte (64-bit DDR SDRAM)
  and of course the other lines next:
  (II) RADEON(0): Using 8 MB AGP aperture
  but when I load the agpgart module it says:
  Sep 30 14:18:32 thork kernel: agpgart: AGP aperture is 64M @ 0xf800
 
 This is the upper limit for the AGP aperture size the DRI can use.
 
  the 480 FPS in glxgears were using 24 bits of color; now the CVS build is
  giving me 500 FPS ... but come on! IT'S A RADEON 7000!
 
 Yes, exactly. ;) No hardware TCL (well, actually some VEs do seem to
 have that, you can try the RADEON_TCL_FORCE_ENABLE environment variable
 if you're desperate for more fps, but be warned that it will lock up if
 it doesn't work with your chip). Also, try to enable page flipping if
 you haven't already.

That's funny: without TCL glxgears is slightly faster on my Radeon 7500!
Just the CPU usage is higher. With TCL I get 864 FPS (about 14% CPU
usage), without TCL it's 872 FPS (about 22% CPU).

Felix

   __\|/_____ ___ ___
__Tschüß___\_6 6_/___/__ \___/__ \___/___\___You can do anything,___
_Felix___\Ä/\ \_\ \_\ \__U___just not everything
  [EMAIL PROTECTED]o__/   \___/   \___/at the same time!





Re: [Dri-devel] ATI Radeon VE QY (AGP) new drivers (personal) problems

2002-10-03 Thread Keith Whitwell

Felix Kühling wrote:
 On 03 Oct 2002 11:01:57 +0200
 Michel Dänzer [EMAIL PROTECTED] wrote:
 
 
On Thu, 2002-10-03 at 01:52, thork wrote: 


about the aperture thing, he told me those 8 MB were from system memory
not from the video card memory. I found this in the log:
(--) RADEON(0): VideoRAM: 65536 kByte (64-bit DDR SDRAM)
and of course the other lines next:
(II) RADEON(0): Using 8 MB AGP aperture
but when I load the agpgart module it says:
Sep 30 14:18:32 thork kernel: agpgart: AGP aperture is 64M @ 0xf800

This is the upper limit for the AGP aperture size the DRI can use.


the 480 FPS in glxgears were using 24 bits of color; now the CVS build is
giving me 500 FPS ... but come on! IT'S A RADEON 7000!

Yes, exactly. ;) No hardware TCL (well, actually some VEs do seem to
have that, you can try the RADEON_TCL_FORCE_ENABLE environment variable
if you're desperate for more fps, but be warned that it will lock up if
it doesn't work with your chip). Also, try to enable page flipping if
you haven't already.

 
 That's funny: without TCL glxgears is slightly faster on my Radeon 7500!
 Just the CPU usage is higher. With TCL I get 864 FPS (about 14% CPU
 usage), without TCL it's 872 FPS (about 22% CPU).

That should make some sense if you think about it.  Because you aren't using 
100% CPU, you know that in some way the card is the limiting factor.  By 
turning off TCL you are unloading work from the card, thus perhaps making its 
life easier and allowing it to go faster...

But really, the difference is so small that it probably doesn't mean anything 
at all.

Keith






[Dri-devel] Re: Re: Ann: gcc-2.96 compiled snapshots available (I'm going to smack redhat)

2002-10-03 Thread Mike A. Harris

On 2 Oct 2002, Russ Dill wrote:

 But I see rough times ahead for the binary snapshots. I surely can't make
 one for each system out there. And if the other distros don't also
 upgrade to glibc-2.3 then I think the best is to completely stop the
 snapshots builds and replace them with a nice set of scripts which
 people can use to make their own customized snapshot.

upgrade to glibc-2.3? technically, such a thing doesn't exist yet, so to

Actually, technically glibc 2.3 does exist.  You can download it 
while you read the rest of this email message if you like.

http://sources.redhat.com/ml/libc-alpha/2002-10/msg00048.html


ask every distro to upgrade to it...

Nobody is asking every distribution to upgrade to it, at least I 
don't see anyone doing so.  Each distribution will use glibc 2.3 
when they're ready to do so.  For most distributions I presume 
that means their next official release.


redhat is making cvs snapshots of glibc, and distributing those
instead of patching important bugs in the release version, and
using that. CVS versions of software often contain new bugs and
even security vulnerabilities, it is far more prudent to work
with a release version of such a major system component. Because
of this, most distros will probably wait until it becomes a
release until they include it.

This is 100% complete and total fabrication with not even a shred 
of truth to it.  Red Hat has poured significant resources into 
the development of glibc 2.3, and employs 3 people working full 
time on glibc, and various others contributing to it.

The glibc 2.3 work for x86 was completed a while ago, and 
only bugfixes and whatnot have occurred since then, along 
with fixes for other architectures, etc. glibc 2.2.93 is 
glibc 2.3 for all intents and purposes on x86, and is not 
considered a random CVS snapshot.  But don't just take my word for it 
when you can get it right from the horse's mouth 
below...

The official GNU glibc maintainer Ulrich Drepper fully supports,
and promotes the glibc version in Red Hat Linux 8.0.  It wouldn't
have been included in the distribution otherwise.  Here is his
official opinion (with his GNU glibc maintainer hat on) on this
matter:

http://sources.redhat.com/ml/libc-alpha/2002-10/msg00050.html

There's absolutely no valid technical reason that the glibc in Red
Hat Linux 8.0 should not have been included.  It is superior to 
glibc 2.2 in numerous ways, including standards compliance, 
performance, and also various new features.  Every mainstream 
distribution will be using glibc 2.3 likely within the next 6 
months, and there's no reason not to.  In addition, the other 
distributions will benefit greatly from all of the legwork that 
has been done by Red Hat, and that includes beta testing, bug 
fixing, stabilization, etc.

In all honesty, without someone stepping forward to include a new 
glibc test release in their beta releases, and then include the 
stabilized result in their final OS, glibc would never get well 
tested at all, because people simply are not willing to risk 
updating glibc on their systems to every development version that 
comes out.  Thanks to everyone working on glibc, and everyone who 
tested and followed the Red Hat beta release process during Red 
Hat development, glibc 2.3 has been beaten on, and a great many 
bugs were fixed in it which allowed it to stabilize to the state 
it is in right now, and be ready for mainstream usage.



I really do see your frustration, as now anyone who develops software on
redhat (at least those that keep up with redhat) cannot release binaries
and expect them to work on anyone else's system. You don't need to worry
about compiling for every system out there, just what is current
release.

Sure you can.  If you need to build binaries which are compatible
with older glibc, simply compile them using older compat glibc.  
It's quite simple actually.  Again, don't spread FUD.

Please read the above message from Uli, and think where glibc 2.3 
would be right now had Red Hat not poured all of the funding and 
resources into it's development.  It would be nowhere near where 
it is now.  It's also open source, and therefore useable by 
anyone, and any distribution.


As far as actually getting this done, redhat has provided cross compiler
rpms in the past, so you may be able to get these, and cross compile for
glibc2.2. I don't see a rough time for binary snapshots, just a rough
time for developers using cvs snapshots of glibc

A cross compiler is something used to produce binaries for an 
architecture other than the architecture the compiler is running 
on.  Not sure what that has to do with glibc.

I hope this clarifies any misunderstandings, and misconceptions 
that people have about glibc 2.2.9x and glibc 2.3 which is now 
officially released.  If not, please feel free to discuss the 
issue on the glibc mailing lists, where I'm sure all of the glibc 
developers would be glad to discuss any concerns people may have.


Re: [Dri-devel] issues/goals for improved texture memory management

2002-10-03 Thread Benjamin Herrenschmidt

On Mac OS X (from 10.1 on, I believe), the OS can map any memory page into
the AGP aperture at any time.  The idea behind A_cs is that people malloc
space for a texture, specify GL_UNPACK_CLIENT_STORAGE_APPLE to
glPixelStorei, then call glTexImage?D with their pointer.  The GL driver
then keeps the pointer (instead of copying the data) and, when the texture
is needed, maps the memory into AGP space.  This way you really do only have
one copy of the texture in memory.

I guess the short answer to your question is yes. :)
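The client-storage path described above could look roughly like this in application code. This is a sketch only: GL_UNPACK_CLIENT_STORAGE_APPLE is the real enum from Apple's GL_APPLE_client_storage extension, but the header path, function name, and RGBA format here are illustrative assumptions, and it is not runnable without an OS X GL context:

```c
/* Sketch: client-storage texturing as described above. Context
 * creation omitted; header path assumes Apple's GL framework. */
#include <OpenGL/gl.h>
#include <stdlib.h>

GLuint make_client_storage_texture(int w, int h)
{
    /* The app keeps ownership of this buffer; with client storage the
     * GL keeps the pointer instead of copying the data, and maps the
     * pages into the AGP aperture when the texture is needed. */
    void *pixels = malloc((size_t)w * h * 4);

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, GL_TRUE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    /* NOTE: 'pixels' must stay valid for the texture's lifetime --
     * do NOT free it here, since no copy was made. */
    return tex;
}
```

The key design point is the last comment: since only one copy of the texture exists, the application may not free or scribble on the buffer while the texture is live.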

This is interesting but nasty to implement in Linux... also one must
take care to properly cache-flush the memory before mapping it into
the aperture (and eventually unmap it from the client, or re-do the
client mapping uncached, while the page is mapped into the aperture).


Ben.







[Dri-devel] Strange messages

2002-10-03 Thread Konstantin Lepikhov

Hi!

After a successful build & install of DRI CVS (I didn't even expect it to be
so easy), the X server doesn't crash and runs smoothly (gears ~680 fps vs 560
with the old (20020916) drivers; GL apps from xscreensaver also run perfectly
and aren't CPU-hungry: ~0-20% with the new drivers vs ~60-100% with the old
ones - maybe something was wrong with my DRI installation?). But in
XFree86.log I see some strange messages:

(WW) RADEON(0): [dri] RADEONDRITransitionTo2d: kernel failed to unflip
buffers.

What is this? Is it good or bad? :)

-- 
WBR, Konstantin

ZAO ELKATEL Network/Security assistant

...The information is like the bank... (c) EC8OR






Re: [Dri-devel] Patch to enable 3rd TMU on R100

2002-10-03 Thread Ian Romanick

On Thu, Sep 26, 2002 at 07:20:58AM +0100, Keith Whitwell wrote:
 Ian Romanick wrote:
  - Do we really need the 3 in radeon_vtxfmt_c.c:
  
   static void radeon_MultiTexCoord1fARB( GLenum target, GLfloat s  )
   {
  -   GLfloat *dest = vb.texcoordptr[(target - GL_TEXTURE0_ARB) & 1];
  +   GLfloat *dest = vb.texcoordptr[(target - GL_TEXTURE0_ARB) & 3];
  dest[0] = s;
  dest[1] = 0;
   }
  
If we don't need the mask, then the AND instructions should be removed
from the assembly stubs in radeon_vtxtmp_x86.S as well.
 
 We definitely need something here.  This code must not crash for bogus values 
 of target, but the behaviour is otherwise undefined.  In the above code you'd 
 want to set up a bogus value for texcoordptr[3] to point to some temporary 
 storage, or anywhere at all, really.  An alternative is a guard like:
 
 if ((target - GL_TEXTURE0_ARB) < 3)
 
 which is slightly more work when looking at the codegen templates.

I've made some changes (and a discovery) here, and I should have a new
version of the patch out for people to review either later today or
tomorrow.  My discovery is that the '- GL_TEXTURE0' is useless.  The value
for GL_TEXTURE0 is 0x84C0.  The low order 5 bits are all 0.  For any of the
possible valid values for target, subtracting GL_TEXTURE0 is the same as
masking with 0x1F.  Masking with 0x1F followed by a mask with 0x03 (or
0x01) is redundant.

My vote is to change the 2-TMU version (in all 6 places) to:

static void radeon_MultiTexCoord1fARB( GLenum target, GLfloat s  )
{
   GLfloat *dest = vb.texcoordptr[target & 1];
   dest[0] = s;
   dest[1] = 0;
}

-- 
Smile!  http://antwrp.gsfc.nasa.gov/apod/ap990315.html





Re: [Dri-devel] Patch to enable 3rd TMU on R100

2002-10-03 Thread Keith Whitwell

Ian Romanick wrote:
 On Thu, Sep 26, 2002 at 07:20:58AM +0100, Keith Whitwell wrote:
 
Ian Romanick wrote:

- Do we really need the 3 in radeon_vtxfmt_c.c:

 static void radeon_MultiTexCoord1fARB( GLenum target, GLfloat s  )
 {
-   GLfloat *dest = vb.texcoordptr[(target - GL_TEXTURE0_ARB) & 1];
+   GLfloat *dest = vb.texcoordptr[(target - GL_TEXTURE0_ARB) & 3];
dest[0] = s;
dest[1] = 0;
 }

  If we don't need the mask, then the AND instructions should be removed
  from the assembly stubs in radeon_vtxtmp_x86.S as well.

We definitely need something here.  This code must not crash for bogus values 
of target, but the behaviour is otherwise undefined.  In the above code you'd 
want to set up a bogus value for texcoordptr[3] to point to some temporary 
storage, or anywhere at all, really.  An alternative is a guard like:

if ((target - GL_TEXTURE0_ARB) < 3)

which is slightly more work when looking at the codegen templates.

 
 I've made some changes (and a discovery) here, and I should have a new
 version of the patch out for people to review either later today or
 tomorrow.  My discovery is that the '- GL_TEXTURE0' is useless.  The value
 for GL_TEXTURE0 is 0x84C0.  The low order 5 bits are all 0.  For any of the
 possible valid values for target, subtracting GL_TEXTURE0 is the same as
 masking with 0x1F.  Masking with 0x1F followed by a mask with 0x03 (or
 0x01) is redundant.
 
 My vote is to change the 2-TMU version (in all 6 places) to:
 
 static void radeon_MultiTexCoord1fARB( GLenum target, GLfloat s  )
 {
GLfloat *dest = vb.texcoordptr[target & 1];
dest[0] = s;
dest[1] = 0;
 }

Neat!

Keith






Re: [Dri-devel] Snaps for gcc3, glibc-2.2

2002-10-03 Thread Dieter Nützel

On Wednesday, 2 October 2002 at 19:29, José Fonseca wrote:
 On Wed, Oct 02, 2002 at 06:50:50PM +0200, Dieter Nützel wrote:
 On Wednesday, 2 October 2002 at 18:35, Andy Dustman wrote:

 [...]

  Which is it, then: Snapshots or no snapshots? The current snapshots (for
  linux-i386) don't work unless you have Red Hat 8.0 and/or glibc-2.3; I'm
  not even sure that they work on that platform. Broken snapshots are
  worse than no snapshots at all (you can't download something that isn't
  going to work if it isn't there).
 
 Sorry, but read again.
 I didn't deny the snapshots per se.
 Only your call for something like dripkg.sh.

 I'm completely lost with the heading of this thread.

 If I understood correctly, what's on the table is the generation of some
 snapshots until things go on track again (i.e., I setup a chroot'd
 environment to build the snapshots).

Yes, that's my only point.

 If things don't workout this way then it's best to have no
 snapshots at all for a couple of days, than to have greater fuss than
 the one that already is.

 The RedHat 8.0/glibc-2.3 problem is simple. Stay away from it until
 glibc-2.3 is widespread. Installing a brand-new distro on a
 build machine isn't very useful anyway.

 Dieter, could you please explain what do you mean with this?

Same as above.
Use a chroot'd environment, or wait a little longer before putting RH 8.0 
(glibc-2.3) on your build machine.

 If you wish
 to take up the task of building the snapshots please be my guest,
 because I'm pretty tired of having to justify myself to others in
 respect of a service I offer freely, using resources which are destined for
 a specific unrelated end and that don't even belong to me.

No offence was meant towards you from me. OK?

Regards,
Dieter





[Dri-devel] What is the status of the Radeon driver on pci cards?

2002-10-03 Thread William P McCartney

Thanks in advance to anyone who responds. I am looking at a few
cards to buy, and I found a Radeon 7500 LE PCI (as well as a Radeon 7000
PCI). After searching the archives for information, it seemed that
these cards may not work on an x86 box...  Is this correct, or does the
driver now work? (I think I read about PCI GART...)

 - Bill McCartney





Re: [Dri-devel] Re: [Dri-patches] CVS Update: xc (branch: trunk)

2002-10-03 Thread Linus Torvalds


On Thu, 3 Oct 2002, Keith Whitwell wrote:
 
 Would the appropriate place to call 'pci_enable_device' be just after a 
 successful call to (deprecated) pci_find_slot() ?

That should work (but you should check for failure on the find, instead 
of potentially trying to pass in a NULL pointer to pci_enable_device()).

In the long run it would be even better to not try to find the device
by hand, but just tell the system what kind of device you want to drive,
and the system will enumerate each and every such device for you
regardless of where they are (and then you can obviously try to match it
against whatever info X gave you).

That way the code should actually work correctly even if the graphics card 
is somewhere unexpected.
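A minimal sketch of the registration-based approach Linus describes, using the 2.4-era PCI driver API. The driver name, function names, and ID-table entries are placeholders, not actual DRM code; the point is that the PCI core calls the probe function once per matching device, wherever it sits:

```c
/* Sketch: let the PCI core enumerate matching devices instead of
 * calling pci_find_slot() by hand. IDs and probe body are placeholders. */
#include <linux/pci.h>
#include <linux/init.h>

static struct pci_device_id mydrv_ids[] = {
    /* vendor, device, subvendor, subdevice (class fields default to 0) */
    { PCI_VENDOR_ID_ATI, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID },
    { 0, }
};

static int mydrv_probe(struct pci_dev *dev, const struct pci_device_id *id)
{
    /* Called once for each matching device on the bus. */
    if (pci_enable_device(dev))
        return -EIO;
    /* ... match 'dev' against the info X gave us, then set up DRM ... */
    return 0;
}

static struct pci_driver mydrv_driver = {
    .name     = "mydrv",
    .id_table = mydrv_ids,
    .probe    = mydrv_probe,
};

static int __init mydrv_init(void)
{
    return pci_register_driver(&mydrv_driver);
}
```

This way pci_enable_device() is naturally called with a valid, already-found device, and the NULL-pointer hazard Linus mentions never arises.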

Linus






Re: [Dri-devel] Snaps for gcc3, glibc-2.2

2002-10-03 Thread José Fonseca

On Thu, Oct 03, 2002 at 05:06:38PM +0200, Dieter Nützel wrote:
On Wednesday, 2 October 2002 at 19:29, José Fonseca wrote:
 On Wed, Oct 02, 2002 at 06:50:50PM +0200, Dieter Nützel wrote:
[...]
 The RedHat 8.0/glibc-2.3 problem is simple. Stay away from it until
 glibc-2.3 is widespread. Installing a brand-new distro on a
 build machine isn't very useful anyway.

 Dieter, could you please explain what do you mean with this?

Same as above.
Use a chroot'd environment, or wait a little longer before putting RH 8.0 
(glibc-2.3) on your build machine.

 If you wish
 to take up the task of building the snapshots please be my guest,
 because I'm pretty tired of having to justify myself to others in
 respect of a service I offer freely, using resources which are destined for
 a specific unrelated end and that don't even belong to me.

No offence was meant towards you from me. OK?

I apologize then. It seems I got too sensitive in the
you-shouldn't-have-installed-the-latest-redhat-beta atmosphere that arose.

Anyway, I already have the minimum chroot environment setup. I just need
to test it with a few snapshots, and arrange so that everything can be
automated from a cronjob again.

Jose Fonseca





[Dri-devel] Re: Re: Ann: gcc-2.96 compiled snapshots available (I'm going to smack redhat)

2002-10-03 Thread Russ Dill


 There's absolutely no valid technical reason that glibc in Red
 Hat Linux 8.0 should not have been included.  It is superior to 
 glibc 2.2 in numerous ways, including standards compliance, 
 performance, and also various new features.  Every mainstream 
 distribution will be using glibc 2.3 likely within the next 6 
 months, and there's no reason not to.  In addition, the other 
 distributions will benefit greatly from all of the legwork that 
 has been done by Red Hat, and that includes beta testing, bug 
 fixing, stabilization, etc.

I do appreciate the work that redhat does, and if their users are
willing to be their beta testers and stabilizers for glibc, then I do
suppose it's up to them.

 
 I really do see your frustration, as now anyone who develops software on
 redhat (at least those that keep up with redhat) cannot release binaries
 and expect them to work on anyone else's system. You don't need to worry
 about compiling for every system out there, just what is current
 release.
 
 Sure you can.  If you need to build binaries which are compatible
 with older glibc, simply compile them using older compat glibc.  
 It's quite simple actually.  Again, don't spread FUD.

Please explain, as this is the whole reason this has come up; tell
us how, and we'll all be much happier.

 As far as actually getting this done, redhat has provided cross compiler
 rpms in the past, so you may be able to get these, and cross compile for
 glibc2.2. I don't see a rough time for binary snapshots, just a rough
 time for developers using cvs snapshots of glibc
 
 A cross compiler is something used to produce binaries for an 
 architecture other than the architecture the compiler is running 
 on.  Not sure what that has to do with glibc.

You can compile gcc against any libc (different versions, different
libcs). So, if you compile a gcc against uClibc and install that,
it's a cross-compiler. Same deal with other versions of glibc.

 I hope this clarifies any misunderstandings, and misconceptions 
 that people have about glibc 2.2.9x and glibc 2.3 which is now 
 officially released.  If not, please feel free to discuss the 
 issue on the glibc mailing lists, where I'm sure all of the glibc 
 developers would be glad to discuss any concerns people may have.

As of 7 hours ago, it's time to upgrade.






[Dri-devel] Re: Strange messages

2002-10-03 Thread Konstantin Lepikhov

Hi Steven!

Thursday 03, at 11:41:04 AM you wrote:

 On Thu, Oct 03, 2002 at 05:43:17PM +0400, Konstantin Lepikhov wrote:
  (WW) RADEON(0): [dri] RADEONDRITransitionTo2d: kernel failed to unflip
  buffers.
  
  what is this? It's good or bad? :)
 
 Looks bad.  Did you upgrade your drm kernel module after upgrading DRI?
Yes, of course. While investigating this I found another problem: darkplaces,
after heavy play (the intro demo), locks up X - the program exits abnormally
with a drmRadeonIrqWait: -4 message.

XFree86 Version 4.2.0 (DRI trunk) / X Window System
(protocol Version 11, revision 0, vendor release 6600)
Release Date: 18 January 2002
If the server is older than 6-12 months, or if your card is
newer than the above date, look for a newer version before
reporting problems.  (See http://www.XFree86.Org/)
Build Operating System: Linux 2.4.18-alt6master-up i686 [ELF] 
Module Loader present
Markers: (--) probed, (**) from config file, (==) default setting,
 (++) from command line, (!!) notice, (II) informational,
 (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
(==) Log file: /var/log/XFree86.0.log, Time: Thu Oct  3 21:53:53 2002
(==) Using config file: /etc/X11/XF86Config-4
(==) ServerLayout layout1
(**) |--Screen screen1 (0)
(**) |   |--Monitor monitor1
(**) |   |--Device ATI Radeon
(**) |--Input Device Mouse1
(**) |--Input Device Keyboard1
(**) Option AutoRepeat 250 30
(**) Option XkbRules xfree86
(**) XKB: rules: xfree86
(**) Option XkbModel pc105
(**) XKB: model: pc105
(**) Option XkbLayout ru
(**) XKB: layout: ru
(**) Option XkbOptions grp:caps_toggle,grp_led:scroll
(**) XKB: options: grp:caps_toggle,grp_led:scroll
(==) Keyboard: CustomKeycode disabled
(**) FontPath set to unix/:-1
(**) RgbPath set to /usr/X11R6/lib/X11/rgb
(==) ModulePath set to /usr/X11R6-DRI/lib/modules
(**) Option AllowMouseOpenFail
(--) using VT number 7

(II) Open APM successful
(II) Module ABI versions:
XFree86 ANSI C Emulation: 0.1
XFree86 Video Driver: 0.5
XFree86 XInput driver : 0.3
XFree86 Server Extension : 0.1
XFree86 Font Renderer : 0.3
(II) Loader running on linux
(II) LoadModule: bitmap
(II) Loading /usr/X11R6-DRI/lib/modules/fonts/libbitmap.a
(II) Module bitmap: vendor=The XFree86 Project
compiled for 4.2.0, module version = 1.0.0
Module class: XFree86 Font Renderer
ABI class: XFree86 Font Renderer, version 0.3
(II) Loading font Bitmap
(II) LoadModule: pcidata
(II) Loading /usr/X11R6-DRI/lib/modules/libpcidata.a
(II) Module pcidata: vendor=The XFree86 Project
compiled for 4.2.0, module version = 0.1.0
ABI class: XFree86 Video Driver, version 0.5
(II) PCI: Probing config type using method 1
(II) PCI: Config type is 1
(II) PCI: stages = 0x03, oldVal1 = 0x8058, mode1Res1 = 0x8000
(II) PCI: PCI scan (all values are in hex)
(II) PCI: 00:00:0: chip 8086,2530 card 147b,0507 rev 02 class 06,00,00 hdr 00
(II) PCI: 00:01:0: chip 8086,2532 card , rev 02 class 06,04,00 hdr 01
(II) PCI: 00:1e:0: chip 8086,244e card , rev 04 class 06,04,00 hdr 01
(II) PCI: 00:1f:0: chip 8086,2440 card , rev 04 class 06,01,00 hdr 80
(II) PCI: 00:1f:1: chip 8086,244b card 147b,0507 rev 04 class 01,01,80 hdr 00
(II) PCI: 00:1f:3: chip 8086,2443 card 147b,0507 rev 04 class 0c,05,00 hdr 00
(II) PCI: 00:1f:5: chip 8086,2445 card 147b,0507 rev 04 class 04,01,00 hdr 00
(II) PCI: 01:00:0: chip 1002,5157 card 1002,013a rev 00 class 03,00,00 hdr 00
(II) PCI: 02:02:0: chip 1000,0001 card 1000,1000 rev 23 class 01,00,00 hdr 00
(II) PCI: 02:04:0: chip 10ec,8139 card 10ec,8139 rev 10 class 02,00,00 hdr 00
(II) PCI: End of PCI scan
(II) LoadModule: scanpci
(II) Loading /usr/X11R6-DRI/lib/modules/libscanpci.a
(II) Module scanpci: vendor=The XFree86 Project
compiled for 4.2.0, module version = 0.1.0
ABI class: XFree86 Video Driver, version 0.5
(II) UnloadModule: scanpci
(II) Unloading /usr/X11R6-DRI/lib/modules/libscanpci.a
(II) Host-to-PCI bridge:
(II) PCI-to-ISA bridge:
(II) PCI-to-PCI bridge:
(II) PCI-to-PCI bridge:
(II) Bus 0: bridge is at (0:0:0), (-1,0,0), BCTRL: 0x08 (VGA_EN is set)
(II) Bus 0 I/O range:
[0] -1  0x - 0x (0x1) IX[B]
(II) Bus 0 non-prefetchable memory range:
[0] -1  0x - 0x (0x0) MX[B]
(II) Bus 0 prefetchable memory range:
[0] -1  0x - 0x (0x0) MX[B]
(II) Bus 1: bridge is at (0:1:0), (0,1,1), BCTRL: 0x0a (VGA_EN is set)
(II) Bus 1 I/O range:
[0] -1  0x9000 - 0x9fff (0x1000) IX[B]
(II) Bus 1 non-prefetchable memory range:
[0] -1  0xdc00 - 0xddff (0x200) MX[B]
(II) Bus 1 prefetchable memory range:
[0] -1  0xd000 - 0xd7ff (0x800) MX[B]
(II) Bus 2: bridge is at (0:30:0), (0,2,2), BCTRL: 0x06 (VGA_EN is cleared)
(II) Bus 2 I/O range:
[0] -1  0xa000 - 0xa0ff (0x100) IX[B]
[1] -1  0xa400 - 0xa4ff (0x100) IX[B]
[2] -1  

Re: [Dri-devel] ATI Radeon VE QY (AGP) new drivers (personal) problems

2002-10-03 Thread Jens Owen

Keith Whitwell wrote:
 Felix Kühling wrote:
 
 On 03 Oct 2002 11:01:57 +0200
 Michel Dänzer [EMAIL PROTECTED] wrote:


 On Don, 2002-10-03 at 01:52, thork wrote:

 about the aperture thing, he told me those 8Mb where from system memory
 not from the video card memory, I found this thing in the log:
 (--) RADEON(0): VideoRAM: 65536 kByte (64-bit DDR SDRAM)
 and of course the other lines right after:
 (II) RADEON(0): Using 8 MB AGP aperture
 but when I load the agpgart modules it says:
 Sep 30 14:18:32 thork kernel: agpgart: AGP aperture is 64M @ 0xf800

 This is the upper limit for the AGP aperture size the DRI can use.


 the 480 FPS in glxgears were using 24 bits of color, now the CVS build is
 giving me 500 FPS ... but come on! IT'S A RADEON 7000!

 Yes, exactly. ;) No hardware TCL (well, actually some VEs do seem to
 have that, you can try the RADEON_TCL_FORCE_ENABLE environment variable
 if you're desperate for more fps, but be warned that it will lock up if
 it doesn't work with your chip). Also, try to enable page flipping if
 you haven't already.


 That's funny: without TCL glxgears is slightly faster on my Radeon 7500!
 Just the CPU usage is higher. With TCL I get 864 FPS (about 14% CPU
 usage), without TCL it's 872 FPS (about 22% CPU).
 
 
 That should make some sense if you think about it.  Because you aren't 
 using 100% cpu, you know that in some way the card is the limiting 
 factor.  By turning off tcl you are unloading work from the card, thus 
 perhaps making its life easier and allowing it to go faster...
 
 But really, the difference is so small that it probably doesn't mean 
 anything at all.

Could it be that the AGP bus is the limiting factor and pushing TCL 
vertices requires more bandwidth than just pushing rasterization info?

Sorry, I know we shouldn't even get into it regarding gears...it's not a 
benchmark.

Okay, everyone go back to their corners and repeat:

   Gears is not a benchmark
   Gears is not a benchmark
   Gears is not a benchmark

:-)

-- 
/\
  Jens Owen/  \/\ _
   [EMAIL PROTECTED]  /\ \ \   Steamboat Springs, Colorado






Re: [Dri-devel] ATI Radeon VE QY (AGP) new drivers (personal) problems

2002-10-03 Thread Keith Whitwell


 
 Could it be that the AGP bus is the limiting factor and pushing TCL 
 vertices requires more bandwidth than just pushing rasterization info?

Maybe, but the difference Felix reports (1%) might as well be noise.

 Sorry, I know we shouldn't even get into it regarding gears...it's not a 
 benchmark.
 
 Okay, everyone go back to their corners and repeat:
 
   Gears is not a benchmark
   Gears is not a benchmark
   Gears is not a benchmark

It is a benchmark - for glClear() and glXSwapBuffers() only.  That's why it 
benefits so much from page flipping.

Keith








[Dri-devel] Slow performance with Matrox G400

2002-10-03 Thread Jouni . Tulkki

This is not really directly related to the DRI, but more a general driver
problem. I decided to post it here as the XFree86 groups
are for members only.

I have been using XFree86 4.0.2 for some time and have encountered some speed
problems. These problems show when I use double buffering for normal windows. 
That is, I create a pixmap that is the same size as the window and draw stuff
on it. After all drawing is done the pixmap is copied into the window.
This works very fast when there are no other pixmaps, but when I have
even one other pretty large pixmap the performance drops dramatically.
Also if there is for example galeon or dillo running the same
thing happens.

Does anyone have any idea why this happens? A simple theory is
that the video memory is not sufficient for both the backbuffer
and the other pixmaps, which causes the backbuffer to be put in
system memory. However, my G400 has 16 MB of memory,
and that should be more than enough when running at 1024x768
resolution. For example, at 32bpp color depth the screen area
takes only 1024*768*4 = 3 MBytes. So I could in theory have
4 screen-area-sized pixmaps and still have 1 MB left.






Re: [Dri-devel] Re: Strange messages

2002-10-03 Thread Michel Dänzer

On Don, 2002-10-03 at 20:04, Konstantin Lepikhov wrote:
 
 Thursday 03, at 11:41:04 AM you wrote:
 
  On Thu, Oct 03, 2002 at 05:43:17PM +0400, Konstantin Lepikhov wrote:
   (WW) RADEON(0): [dri] RADEONDRITransitionTo2d: kernel failed to unflip
   buffers.
   
   what is this? It's good or bad? :)

Basically harmless. It means that page 1 is being displayed after page
flipping has ended, so any 2D rendering has to be copied from page 0.


 Yes, of course. While looking into this I found another problem - darkplaces,
 after heavy playing (the intro demo), locks up X - the program exited
 abnormally with a drmRadeonIrqWait: -4 message.

-4 is -EINTR, i.e. the system call was interrupted by a signal. It
shouldn't abort on that; please try the attached patch.


-- 
Earthling Michel Dänzer (MrCooper)/ Debian GNU/Linux (powerpc) developer
XFree86 and DRI project member   /  CS student, Free Software enthusiast


Index: lib/GL/mesa/src/drv/radeon/radeon_ioctl.c
===================================================================
RCS file: /cvsroot/dri/xc/xc/lib/GL/mesa/src/drv/radeon/radeon_ioctl.c,v
retrieving revision 1.29
diff -p -u -r1.29 radeon_ioctl.c
--- lib/GL/mesa/src/drv/radeon/radeon_ioctl.c	2 Oct 2002 12:32:45 -	1.29
+++ lib/GL/mesa/src/drv/radeon/radeon_ioctl.c	3 Oct 2002 22:27:32 -
@@ -635,7 +635,9 @@ static int radeonWaitForFrameCompletion(
   /* if there was a previous frame, wait for its IRQ */
   if (iw->irq_seq != -1) {
      UNLOCK_HARDWARE( rmesa );
-     ret = drmCommandWrite( fd, DRM_RADEON_IRQ_WAIT, iw, sizeof(*iw) );
+     do {
+	ret = drmCommandWrite( fd, DRM_RADEON_IRQ_WAIT, iw, sizeof(*iw) );
+     } while (ret && errno == EINTR);
      if ( ret ) {
	 fprintf( stderr, "%s: drmRadeonIrqWait: %d\n", __FUNCTION__, ret );
	 exit(1);
@@ -1148,7 +1150,9 @@ void radeonFinish( GLcontext *ctx )
   }
   UNLOCK_HARDWARE( rmesa );

-  ret = drmCommandWrite( fd, DRM_RADEON_IRQ_WAIT, iw, sizeof(iw) );
+  do {
+	 ret = drmCommandWrite( fd, DRM_RADEON_IRQ_WAIT, iw, sizeof(iw) );
+  } while (ret && errno == EINTR);
   if ( ret ) {
	 fprintf( stderr, "%s: drmRadeonIrqWait: %d\n", __FUNCTION__, ret );
	 exit(1);



Re: [Dri-devel] Snaps for gcc3, glibc-2.2

2002-10-03 Thread Jens Owen

José Fonseca wrote:

 Anyway, I already have the minimum chroot environment setup. I just need
 to test it with a few snapshots, and arrange so that everything can be
 automated from a cronjob again.

You're awesome!  Thanks for your effort on these snapshots.

-- 
/\
  Jens Owen/  \/\ _
   [EMAIL PROTECTED]  /\ \ \   Steamboat Springs, Colorado



---
This sf.net email is sponsored by:ThinkGeek
Welcome to geek heaven.
http://thinkgeek.com/sf
___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel



Re: [Dri-devel] Slow performance with Matrox G400

2002-10-03 Thread Jens Owen

[EMAIL PROTECTED] wrote:
 This is not really directly related to DRI, but more a general driver
 problem. I decided to post it here as the XFree86 - groups
 are for members only.

[EMAIL PROTECTED] is an open list...however, I think your issue is 
related to the DRI, so I'll respond here.

 I have been using XFree4.02 for some time and have encountered some speed
 problems. These problems show when I use double buffering for normal windows. 
 That is, I create a pixmap that is the same size as the window and draw stuff
 on it. After all drawing is done the pixmap is copied into the window.
 This works very fast when there are no other pixmaps, but when I have
 even one other pretty large pixmap the performance drops dramatically.
 Also if there is for example galeon or dillo running the same
 thing happens.
 
 Does anyone have any idea why this happens? A simple theory is
 that the video memory is not sufficient for both the backbuffer
 and the other pixmaps, which causes the backbuffer to be put in
 system memory. However, my G400 has 16 MB of memory,
 and that should be more than enough when running at 1024x768
 resolution. For example, at 32bpp color depth the screen area
 takes only 1024*768*4 = 3 MBytes. So I could in theory have
 4 screen-area-sized pixmaps and still have 1 MB left.

Jouni,

With the DRI enabled, you'll also need 3 MBytes for the back buffer, 
another 3 for the depth buffer.  Finally, the remainder needs to be 
divided up between texture cache and pixmap cache.  I believe the 
current allocation is to provide a relatively small amount for the 
pixmap cache and the rest for textures.

If you disable the DRI, almost the entire offscreen memory will be dedicated 
to the pixmap cache.  The other thing you could consider is leaving the DRI 
enabled and using OpenGL for graphics rendering, where you already have a 
dedicated back buffer allocated.

Two other options come to mind, both require some development:

1) Help with some of the more dynamic (and sharing-oriented) memory schemes 
that are targeted at the Radeon, then back-port them to the MGA driver when 
they're done.

2) Implement the X double buffer extension so that it uses the dedicated 
OpenGL back buffer in hardware, instead of allocating a pixmap (which 
has the same pixmap cache limitation).

Regards,
Jens

-- 
/\
  Jens Owen/  \/\ _
   [EMAIL PROTECTED]  /\ \ \   Steamboat Springs, Colorado






Re: [Dri-devel] What is the status of the Radeon driver on pcicards?

2002-10-03 Thread Michel Dänzer

On Don, 2002-10-03 at 17:22, William P McCartney wrote: 
 Thanks in advance to anyone who responds, but I am looking at a few
 cards to buy, and I found a Radeon 7500LE PCI (as well as a Radeon 7000
 PCI) and after searching the archives for information, it seemed that
 these cards may not work on an x86 box...  Is this correct, or does the
 driver now work? (I think I read about PCI GART?)

PCI GART is still disabled in the driver on x86, and when I tried
enabling it a while ago I encountered some bugs. I've fixed one of them
in the meantime, but there might still be others. Someone will have to
try...


-- 
Earthling Michel Dänzer (MrCooper)/ Debian GNU/Linux (powerpc) developer
XFree86 and DRI project member   /  CS student, Free Software enthusiast


