Perhaps:
switch (nr) {
case 0: return 0;
case 1: ovf = 1; break;
case 2: ovf = 2; break;
default: ovf = MIN2(nr-1, 2); break;
}
(or similar) would be better, if the code below does indeed fix the problem?
-- Gareth
Andreas Stenglein wrote:
Yes, at least the part with GL_TRIANGLE_STRIP.
In case of 0 you can just return 0, no copying is needed.
case 0: return 0;
You're going to do that, just in a slightly different manner:
switch (nr) {
case 0: ovf = 0; break;
case 1: ovf = 1; break;
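Pulling the fragments of this exchange together, the proposed clamp might look like the following C sketch. The function wrapper and test values are ours; `ovf` counts the trailing vertices that must be copied into the next DMA buffer so a primitive such as GL_TRIANGLE_STRIP can continue:

```c
#include <assert.h>

/* Sketch assembled from the snippets above: ovf is the number of
 * trailing vertices to copy into the next DMA buffer so the strip can
 * continue.  MIN2 and the case values follow the quoted code; the
 * function wrapper is our own. */
#define MIN2(a, b) ((a) < (b) ? (a) : (b))

int strip_overflow(int nr)
{
   int ovf;

   switch (nr) {
   case 0:  return 0;                      /* nothing emitted, nothing to copy */
   case 1:  ovf = 1; break;
   case 2:  ovf = 2; break;
   default: ovf = MIN2(nr - 1, 2); break;  /* at most the last two vertices */
   }
   return ovf;
}
```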
Keith Whitwell wrote:
Yes, very nice.
Utah did have some stuff going for it. It was designed as a
server-side-only accelerated indirect renderer. My innovation was to
figure out that the client could pretty easily play a few linker tricks
to load that server module with dlopen(), and then with
Keith Whitwell wrote:
libGL.so provides a dispatch table that can be efficiently switched. The
real 'gl' entrypoints basically just look up an offset in this table and
jump to it. No new arguments, no new stack frame, nada -- just an
extremely efficient jump. Note that this is the
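The mechanism described could be sketched like this in C. This is our own illustration of the pattern, not the real libGL source; the table layout, `record_vertex`, and the counter are invented for the demo:

```c
#include <assert.h>

/* Illustration of the dispatch pattern described above: each public
 * gl* entrypoint is just an indirect jump through a pointer table that
 * can be switched per context or per thread.  Names invented. */
typedef struct {
   void (*Vertex3f)(float, float, float);
   /* ...one slot per GL entrypoint... */
} gl_dispatch_table;

int vertex_calls;   /* counts calls landing in the demo backend */

void record_vertex(float x, float y, float z)
{
   (void)x; (void)y; (void)z;
   vertex_calls++;
}

gl_dispatch_table demo_backend = { record_vertex };
gl_dispatch_table *current_dispatch = &demo_backend;

/* The public entrypoint: no new arguments, no new stack frame, just
 * a lookup and an indirect call. */
void glVertex3f(float x, float y, float z)
{
   current_dispatch->Vertex3f(x, y, z);
}
```

Switching `current_dispatch` to another table redirects every subsequent entrypoint call, which is what makes the scheme cheap to retarget.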
Jon Smirl wrote:
I really don't understand ATI's position on Linux
drivers. They have better hardware but they are losing
because of their drivers. I can't think of a better
solution than having a couple hundred highly skilled,
performance-obsessed, unpaid hackers fixing their code for
, which emulated the minimum win32 kernel service APIs the rest of the
kernel module needed.
I'm always amused by the reasons people come up with for things like this...
Note: In no way am I speaking officially as an employee of NVIDIA
Corporation.
--
Gareth Hughes ([EMAIL PROTECTED])
OpenGL
Okay, here's an almost-functional implementation of an OpenGL dispatch
layer and driver backend. The dispatching into a dlopened driver
backend works, the backend just doesn't do anything terribly interesting
yet (been struggling with bad allergies all week, so I'm not thinking
very clearly
I'm putting the finishing touches on some example asm code that might be
generated at runtime by an OpenGL driver, to go with a sample dispatch
layer, that exercises some of the issues we've been discussing over the
past week. As it's 6:20am, I might go home and sleep first though ;-)
Thanks to
Keith Whitwell wrote:
__thread doesn't require -fpic. There are 4 different TLS models (on
IA-32):
-ftls-model=global-dynamic
-ftls-model=local-dynamic
-ftls-model=initial-exec
-ftls-model=local-exec
None of these requires -fpic, though the first 3 use the PIC
register (if not -fpic,
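For reference, minimal `__thread` usage might look like this (the variable name is ours). The model can be forced per translation unit, e.g. `gcc -c -ftls-model=initial-exec dispatch.c`, and none of it needs `-fPIC`:

```c
/* Minimal sketch of the __thread usage under discussion: each thread
 * gets its own copy of the variable, and the accesses below compile to
 * a handful of instructions whose exact shape depends on the selected
 * TLS model.  The slot name is hypothetical. */
__thread void *gl_tls_slot;   /* per-thread dispatch slot, invented name */

void *get_tls_slot(void)
{
   return gl_tls_slot;
}

void set_tls_slot(void *p)
{
   gl_tls_slot = p;
}
```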
Keith Whitwell wrote:
Gareth,
A simplified example of the dispatch codegen layers sounds like an
excellent way to get across the performance environment we're working
in. Let me know if I can help putting this together.
Agreed -- hence the effort to put this together ;-)
I'm
Sorry for the delay in getting back to you, I've been offline since late
last week moving into a new building at work.
I've been working on some sample code that clearly demonstrates the
issues we (as in vendors of OpenGL on Linux) face. I'm hoping to have
that wrapped up this afternoon and
David S. Miller wrote:
Why does it matter? Jakub has shown how to get the same kind of
non-PIC relocations you want in the GL libraries by using private
versions of symbols.
Using a feature that is a very new thing (to quote Jakub) -- only GCC
3.2 (mainline CVS), the Red Hat GCC 3.1
David S. Miller wrote:
Even if this were not the case, stupid compilation tools are not an
excuse to put changes into the C library. That is a fact.
We've been talking about two completely separate issues:
- Fast thread-local storage for libGL and GL drivers.
- PIC for libGL and GL
Jakub Jelinek wrote:
On Thu, May 16, 2002 at 08:08:02PM -0700, Gareth Hughes wrote:
Let's be clear about what I'm proposing: you agree to reserve an
8*sizeof(void *) block at a well-defined and well-known offset in the
TCB. OpenGL is free to access that block, but only that block
I would like to propose a small change to the pthread_descr structure in
the latest LinuxThreads code, to better support OpenGL on GNU/Linux
systems (particularly on x86, but not excluding other platforms). The
purpose of this patch is to provide efficient thread-local storage for
both libGL
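The shape of the proposal, as a purely illustrative C struct — the field names, ordering, and offset here are invented for the sketch; the real thread descriptor layout is internal to the thread library:

```c
#include <stddef.h>

/* Purely illustrative layout of the proposal above: an 8*sizeof(void *)
 * block reserved at a well-defined, well-known offset in the thread
 * control block (TCB), which OpenGL may access but nothing else. */
struct tcb_sketch {
   void *self;              /* conventional first TCB word */
   void *gl_reserved[8];    /* the proposed OpenGL-visible block */
   /* ...thread-library private fields follow... */
};
```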
Jakub Jelinek wrote:
Hi!
What percentage of applications use different dispatch
tables among its threads? How often do dispatch table changes
occur? If both of these are fairly low, computing a dispatch table
in an awx section at dispatch table switch time might be fastest
(ie. prepare
Keith Whitwell wrote:
2) last time I looked, libGL.so was linked unconditionally against
libpthread. This is punishing all non-threaded apps; weak undefined
symbols work very well
This is because we currently use the standard way of getting thread-local-data
and detecting
Jakub Jelinek wrote:
Hi!
What percentage of applications use different dispatch
tables among its threads? How often do dispatch table changes
occur? If both of these are fairly low, computing a dispatch table
in an awx section at dispatch table switch time might be fastest
I should also
Ulrich Drepper wrote:
This is the only way you'll get access to thread-local storage. It is
out of the question to allow third party programs to peek and poke into
the thread descriptor.
What do you mean, a third party program? We're talking about a system
library (libGL.so) here. There is a
Gareth Hughes wrote:
Let's be clear about what I'm proposing: you agree to reserve an
8*sizeof(void *) block at a well-defined and well-known offset in the
TCB.
Of course, I should add that space for such a block exists, and has
existed for some time. My proposal requires no real
A question about the __thread stuff: does it require -fPIC? What
happens if you don't compile a library with -fPIC, and have __thread
variables declared in that library?
-- Gareth
Smitty wrote:
Are there any plans to implement drivers for these cards?
Have any of the manufacturers made an approach or started asking
questions about DRI?
Or is this all completely off the radar screen at the moment?
Getting specs (full or otherwise) for DX8.1 and/or DX9.x
José Fonseca wrote:
So what do you think the future holds for the open-source OSs? Just
closed-source drivers, perhaps some Wine-alike binary emulated Windows
drivers and a bunch of opensource legacy cards drivers..?
I really hope not... and at least with the cards that I may
Leif Delgass wrote:
Do we know for sure that pci gart is supported on mach64? The rage 128
and radeon drivers both write to PCI GART registers, but I don't see
anything analogous in the Rage PRO docs. My understanding is that to use
the scatter/gather memory, the card has to implement
Allen Akin wrote:
If the expected value is 255 and the OpenGL implementation yields 254,
that's only one LSB of error, so glean probably won't complain about it.
We could make the test more stringent, but then some reasonable
implementations (especially some hardware implementations)
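The tolerance being described can be stated as a one-liner. This is our sketch of the criterion, not glean's actual comparison code:

```c
#include <stdlib.h>

/* Sketch of the tolerance described above: a rendered 8-bit channel
 * value within one least-significant bit of the expected value passes.
 * Our illustration, not glean's source. */
int within_one_lsb(int expected, int actual)
{
   return abs(expected - actual) <= 1;
}
```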
Felix Kühling wrote:
Hi,
I recently found out that the 3d performance of the mach64 branch (in
terms of glxgears frame rates) is related to the physical screen
resolution. I got the following glxgears frame rates with different
resolutions:
1152x864: 155.2 fps
1024x768: 165.6 fps
Stephen J Baker wrote:
Everything starts out in hardware and eventually moves to software.
That's odd - I see the reverse happening. First we had software
The move from hardware to software is an industry-wide pattern for all
technology. It saves money. 3D video cards have been
Sounds great! I'll be arrving home after several weeks of holidays
next week, but I'm interested to see what you've done and will take a
look soon.
-- Gareth
--- Jens Owen [EMAIL PROTECTED] wrote:
I've checked into the drmcommand-0-0-1-branch the complete conversion of
the Radeon driver
Forwarding to dri-devel.
Original Message
Subject: [Mesa3d-dev] viewperf
Date: Tue, 12 Mar 2002 13:51:52 +0100 (MET)
From: Klaus Niederkrueger [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Hi,
In the last week I have been playing with the spec-viewperf programs and
(at least on my
Brian Paul wrote:
One question to ask is: regardless of the vertex buffer size, typically
how many vertices are issued between glBegin/End or state changes? Does
Q3 (for example) render any objects/characters with 1000 vertices?
Never. The maximum size of any locked array from the Q3
Dieter Nützel wrote:
Yes, SPECviewperf worked out of the box, GLperf never tried.
Got SPECviewperf running several times for testing the tdfx driver since
'2000.
I only changed CDEBUGFLAGS in makefile.linux for better Athlon optimization.
CDEBUGFLAGS = -O -mcpu=k6 -pipe
Bill Currie wrote:
I can only speak about the quake 1/quakeworld source (I haven't studied the
quake2 code enough yet), but it's actually nothing that complex. In fact,
it's the opposite. Quake doesn't do the integration properly at all. It just
adds the gravity acceleration to the velocity
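What Bill describes is a plain Euler step; a sketch of the idea (function name, signature, and values are ours, not Quake's):

```c
/* Sketch of the simple integration described: each tick, gravity is
 * added straight into the velocity, and the velocity then moves the
 * position.  Names and units are ours, not Quake's. */
void euler_step(float *pos, float *vel, float gravity, float dt)
{
   *vel += gravity * dt;   /* acceleration folded into velocity */
   *pos += *vel * dt;      /* position advanced by the new velocity */
}
```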
Brian Paul wrote:
OK, it looks like the templatized code for texture image conversion is
the problem. It's using the CONVERT_TEXEL_DWORD macro even when the
texture width is one, causing an out-of-bounds write.
I'll fix this up, Gareth :)
Hmmm, looks like my assumption that allocations
Keith Whitwell wrote:
What is the point of sustaining such a frame rate that has no practical
advantage?
You do see the partial frames, it seems. The eye seems to do a reasonable
job of integrating it all, providing you with a low-latency view of the game
world.
Hardcore gamers want
Frank C. Earl wrote:
On Friday 08 February 2002 07:09 pm, José Fonseca wrote:
Does this mean that client code can lock the card but is not really
capable of putting the security of the system in danger?
Depends on what you define as in danger. It won't allow a user to
commit local
Frank C. Earl wrote:
The command pathway doesn't seem to allow for that. Only the blit
pathway.
I've coded only inbound to the aperture writes with that pathway, but
not outbound (there's very little that anything other than the X server
needs to do that sort of thing).
How do you
Shouldn't it work for the whole tree if we provided a wrapper for mcount
and whatever for the modules?
Possibly not, given the method we use to actually process the profiling
data. Besides, it's never been a priority -- the current method works
great for the 3D drivers.
-- Gareth
Just a reminder to all developers interested in contributing to
the DRI project that there is a developmental IRC meeting
scheduled for Monday February the 4th 2002 (today) at 2100h UTC
(4:00pm EST).
Does this mean the time has changed again? Is this going to be a
reasonably final time?
Smitty wrote:
Maybe a bit of a strange question, but I think it should be asked.
Does the DRI project have a contact person at each of the IHV's?
Specifically ATI, Nvidia seems to be more of a closed source
house, and 3dfx is now defunct.
Alexander Stohr [EMAIL PROTECTED] is
That's too bad because this will imply a _lot_ of hair in the drivers.
That's the way it has to be, for the DRM code to remain in the stock
kernel distro. Linus has made this crystal clear.
The fact is that we have a driver split several ways: 3 portions from
XFree tree (2d, 3d and drm),
The assumption was only made for experimental GATOS drivers. It is a
practical one. More people come and ask: I upgraded to the GATOS driver
and DRI won't work anymore! Answer: RTFM, upgrade the drm driver.
It's already been determined that:
I just upgraded my kernel, and DRI won't work anymore!
Gareth, the current driver is broken. If someone wants to use video
capture they _need_ both GATOS 2d driver and GATOS drm driver, period.
What's so wrong about upgrading ?
Guaranteed, someone will get a mismatch -- your changes may go back
into the stock kernel, breaking DRI CVS or
In case you missed it, or forgot, the IRC meeting is taking place
right now on #dri-devel.
-- Gareth
___
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel
I had a meeting last night over an early dinner, so was unable to
attend yesterday's IRC session. Does anyone have a log they could
send me, or post on the web somewhere? I do plan on attending
these sessions every week, for those that were interested...
-- Gareth
Thanks for this, I skimmed through it and will take some time later
today (yes, something other than 4:18am) to read it properly. I look
forward to participating in next week's discussion!
And now, back to hacking code...
-- Gareth
Frank C. Earl wrote:
On Monday 21 January 2002 09:21 am, Mike Westall wrote:
Conversely, if MS considers OpenGL to be dead and buried,
period, it seems that Bill would be a bit silly to want to
spend $62.5 to become the owner of said dead + buried
technology!!
OpenGL is not really
Mike A. Harris wrote:
The i830 DRM driver contains empty for loops used for short
delays. Modern gcc and other compilers, when used with
optimization switches will optimize these empty for loops out,
leaving no delay. In addition, CPUs such as the Pentium 4 will
needlessly overheat
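The failure mode being reported can be shown in a few lines of C. This is a userspace sketch, not the i830 DRM source; in kernel code the proper fix is a calibrated delay such as udelay()/mdelay(), not a counted loop:

```c
/* The first loop has no observable effect, so an optimizing compiler
 * may delete it entirely -- hence no delay at all.  Making the counter
 * volatile forces every iteration to happen; in the kernel the right
 * answer is udelay()/mdelay() instead. */
void bogus_delay(int n)
{
   for (int i = 0; i < n; i++)
      ;                          /* dead code under -O2 */
}

void forced_delay(int n)
{
   for (volatile int i = 0; i < n; i++)
      ;                          /* volatile accesses must be performed */
}
```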
I think microsoft is trying to kill DRI. It is a big threat
to all their products. If the open source community can offer
good 3d graphics at low cost then their system will suffer a
good loss in market share.
Ummm, somehow I don't think so...
The DRI is encompassed by OpenGL (as a
Brian Paul wrote:
Even before VA Linux laid-off everyone we were losing momentum on the
DRI project because the engineers had to work on other projects that
generated revenue. After everyone was laid-off we all went in different
directions. I think I'm one of the few who still reads this
I've found something strange in reporting the chipset:
I've got an ATI R128 on a VIA KT266 chipset, yet the driver writes:
[drm] AGP 0.99 on VIA Apollo KT133 @ 0xe000 64MB
[drm] Initialized r128 2.1.6 20010405 on minor 0
the chipset IS kt266
goran@glaugrung:~$ /sbin/lspci
00:00.0
I would be quite surprised if the two chipsets had the same PCI id (have
a look at the pci.ids in the linux kernel)... they should only share the
same vendor id, which makes the agpgart code work properly (I think Via
is less silly than Intel that has the nasty habit of changing the
Frank C. Earl wrote:
While we're discussing things here, can anyone tell me why
things like the emit state code is in the DRM instead of in
the Mesa drivers? It looks like it could just as easily be
in the Mesa driver at least in the case of the RagePRO code-
is there a good reason why
Sounds like you haven't set the permissions on /dev/dri to allow user
access.
From the DRI User's Guide:
If you want all of the users on your system to be able to use
direct-rendering, then use a simple DRI section like this:

    Section "DRI"
        Mode 0666
    EndSection
Forwarding this to a more appropriate discussion forum...
-- Gareth
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 08, 2002 5:06 AM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: bug in drivers/char/drm/drm_vm.h?
in 2.4.17
On Thu, Dec 27, 2001 at 03:27:07AM -0800, Philip Brown wrote:
I'm finally looking at xf86drm.c again. The first routine that looked
interesting was
drmGetEntry(int fd)
The second call in that function is drmHashCreate()
and it was not picked up with ctags, so I was wondering where
On Fri, Dec 21, 2001 at 05:34:43PM -0800, Philip Brown wrote:
On Sat, Dec 22, 2001 at 02:30:14AM +0100, Alexander Stohr wrote:
The GART is the paging unit of the AGP system.
It deals nicely with fragmented chunks of page sized
memory chunks. So you only need some sort of memory
On Fri, Dec 14, 2001 at 02:54:33PM -0500, [EMAIL PROTECTED] wrote:
I am seeing this too. I thought it was from me tweaking
stuff. Interestingly enough, quake works fine.
Is there any kind of DRM/DRI test app along the lines of x11perf ?
Quake3, viewperf... :-)
-- Gareth
On Fri, Dec 14, 2001 at 03:41:01PM -0500, Vladimir Dergachev wrote:
Yes, but I was looking for something that would allow me to exercise each
primitive separately - so as not to cause overflowing of dmesg buffer ;)
Try SPEC glperf then.
-- Gareth
On Wed, Dec 12, 2001 at 01:36:10PM +0100, Alexander Stohr wrote:
Suggestion:
typedef unsigned int elcount_t;
or
#define elcount_t unsigned int
Ack. Don't do that.
-- Gareth
On Wed, Dec 12, 2001 at 04:30:56PM +, Sergey V. Udaltsov wrote:
Why force any application to implement some more or less wide
set of external shell variables to query while the same is much
easier to maintain if it's part of a gatekeeper library?
Exactly! That's what I meant!
Quake3
On Tue, Dec 11, 2001 at 07:28:54PM -0500, Leif Delgass wrote:
I think the point is (but I could be wrong) whether this is
user-configurable without recoding/recompiling anything, and it seems the
answer is no. The driver can enable/disable extensions for all apps using
the driver, or an
On Mon, Dec 10, 2001 at 12:39:57AM -0800, Philip Brown wrote:
So I'm looking through the AGP stuff, still learning...
and it seems that there's a whole lot of redundancy in the current API.
If I'm understanding the sequence properly, generally programs do the
following:
1. open
On Mon, Dec 10, 2001 at 09:50:42AM -0800, Philip Brown wrote:
But I thought that GATT is simply a scatter/gather table, so
you only have to update the GATT when you allocate and bind pages.
Then, if you allocate and bind the whole range at once, you're done, and
you don't have to do any
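The point about the GATT can be sketched as a plain table of page entries. The types, entry format, valid bit, and sizes below are invented for illustration, not any real chipset's layout:

```c
#include <stdint.h>

/* Sketch of the GATT as described above: one entry per aperture page,
 * each holding the physical address of a bound system page.  Bind the
 * whole range once and no further GATT updates are needed. */
#define GATT_ENTRIES 1024
#define GATT_VALID   1u      /* illustrative valid bit */

uint32_t gatt[GATT_ENTRIES];

void gatt_bind(unsigned slot, uint32_t phys_page)
{
   gatt[slot] = phys_page | GATT_VALID;
}

void gatt_unbind(unsigned slot)
{
   gatt[slot] = 0;
}
```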
On Mon, Dec 10, 2001 at 05:33:09PM -0800, Philip Brown wrote:
On Mon, Dec 10, 2001 at 08:52:26PM +0100, Benjamin Herrenschmidt wrote:
...
Some chipsets (and the original agpgart supported those only) can
let the CPU access the AGP aperture directly. All mmap had to do
was then to map the
could get some accelerated 3D support because Gareth Hughes had coded in a
kludge in the Mesa driver to verify that the code there was working properly-
this is in the form of direct register writes for the 3D operations. It
won't take too much to migrate the placeholder code to the real thing
On Fri, Nov 30, 2001 at 01:48:16AM -0700, Derrik Pates wrote:
On 30 Nov 2001, Michel Dänzer wrote:
I'll see to it that it gets fixed, but I'd like to check the docs for
what the value should really be. I hope I'll get around to it this
weekend.
Well, the tdfx driver uses 16 *
On Fri, Nov 30, 2001 at 09:17:07AM -0700, Derrik Pates wrote:
On Fri, 30 Nov 2001, Gareth Hughes wrote:
If I remember correctly, the hardware requires pitches to be multiples
of 64 (that's pixels, not bytes). It's been a while, but we don't do
that sort of thing for nothing...
Well
On Tue, Nov 20, 2001 at 07:18:16PM -0500, Frank C. Earl wrote:
I don't think there's any more available from ATI than what we already have.
If memory serves, Gareth and John worked from the register docs and the 2D
coding info from the Programmer's guide.
Yep, that's correct. Had to
Peter Lemken wrote:
It is, actually. At least if you are stuck with a notebook computer. The
Rage LT Pro and Rage Mobility are among the most popular graphics
adapters around. I wish I could just put in a different card...
Yes, I understand that. That's not the point I was making, or what
Frank C. Earl wrote:
On Wednesday 24 October 2001 07:17 pm, Carl Busjahn wrote:
Your depth is 24. 3D depths are only 16-bit and 32-bit. The Mach64 is
really not powerful enough to handle 32-bit (which is what 24 yields in
XFree86 4.1). I'm not even sure if the driver supports 32-bit depth, but
Frank C. Earl wrote:
Now, now, not everybody can use your employer's gear, Gareth... :-
It's not hard to get something rather more powerful than a Rage Pro --
anandtech.com lists current-generation hardware for under $120. One
would guess going back a generation or two would bring the
Leif Delgass wrote:
Great work! I'll check this out soon.
Once we get DMA working for the 3D operations, I guess the next task is to
get the 2D acceleration routines synchronizing with the 3D ones so we can
reenable XAA, right? Also, it looks as if the AGP setup has not been
finished
Jeffrey W. Baker wrote:
On Wed, 3 Oct 2001, David Johnson wrote:
There is some seriously proprietary stuff with idct that for legal
reasons ATI wouldn't want to expose.
That is one of the most ridiculous statements I have heard. Substitute
some equivalent terms in there:
There is
Dacobi Coding wrote:
But are they planning to, or have they already released the specs
for the new Radeon chips? And I mean full specs, complete
with V/P shaders and T&L?
Did they ever release specs for the original Radeon? No. One would
guess the same policy will apply in this case as
Mike A. Harris wrote:
After reading some people's postings on donating X amount of
money for feature Y, and the like, I thought about it and came to
the conclusion that a donation-driven DRI project, even partially, is
quite unrealistic. I'd like to discuss why I think that is so.
Mike,
Michel Dänzer wrote:
I certainly don't question your past dedication. I appreciate it very much. I
was a bit disappointed by your abandoning it, though.
Mate, if you understood the situation, you wouldn't be saying this. I will
let this pass by as a result.
My point is that nobody is
Andrew James Richardson wrote:
I'm sure that everybody has their say on this, but would you think of a
company set up so that people donated money in exchange for binary drivers
(source would be free, of course) for DRI -- more like an ordered donation
than business, really. I for one would be
Mark Allan wrote:
So do we give up on open source drivers completely? I'm willing to bet
that there is some way to generate sufficient revenue to fund the DRI. I
don't know what it is, but it would be worth throwing some ideas around
rather than throwing our hands up and saying oh, well.
Benjamin Herrenschmidt wrote:
Another point is that binary drivers like NVIDIA are x86 only, which is
a problem for me (PPC) as Apple now bundles their cards with recent
Mac G4s.
Also, despite being pretty complete, the r128 driver is experiencing
all sorts of lockups (depending on the
Frank Earl wrote:
Some are saying that Linux on the desktop is already dead...
Really? Somehow, I find that hard to believe with places like Largo, FL
using it on the desktop- I'm of the belief that it's still in its infancy.
Again, I don't actually agree with the statement. However,
Frank Earl wrote:
How about all those people without the luxury of upgrading- say laptops
and things like iMacs? Go buy a whole new computer- not an option, when
you think about it. This is not to say you have to be doing it- but
someone ought to be doing something about it. I'm not
Zilvinas Valinskas wrote:
just compiled and installed XFree86 4.1 + Mesa 3.5 (mesa-3-5 trunk from CVS).
swoop@tweakster:~$ ./gears
5544 frames in 5 seconds = 1108.8 FPS
5600 frames in 5 seconds = 1120 FPS
5602 frames in 5 seconds = 1120.4 FPS
5606 frames in 5 seconds = 1121.2 FPS
5599 frames
Zilvinas Valinskas wrote:
swoop@tweakster:~$ ./isosurf
7179 vertices, 7177 triangles
:\ that's all I get :) Looks neat ...
Hit 'b' to run the benchmark. Use the source, Luke...
-- Gareth
Philip Willoughby wrote:
Just out of interest, does 2kx2k texture support mean 2048x2048 or 2050x2050
(for borders?).
I have never seen a graphics chip accelerate texture borders. All the
Mesa-based drivers fall back to software rendering if you use them.
-- Gareth
Brian Paul wrote:
That's what I was worried about. Changing the SAREA layout would
require bumping the version number. However, I've grepped all the
Radeon sources and it doesn't appear that RADEON_MAX_TEXTURE_LEVELS
is used at all in the SAREA or kernel code. I'll have to do some
I've been banging my head on this for too long without much luck. Karl,
can you send me (and me only) a snapshot of Mesa/demos/texenv running
with your patch applied? I'm getting incorrect rendering no matter what
I do, and Tribes2 is still broken (the landscape is clearly being
textured
Geoff Reedy wrote:
On Wed, Jun 20, 2001 at 01:50:03PM -0400, Karl Lessard [EMAIL PROTECTED] said
There is an AGP-to-PCI bridge on a G450 PCI. The thing is that you have
an AGP chip on your PCI card, and the bridge is used to connect the chip
to a PCI bus. But I don't know why DRI only
Brian Paul wrote:
I've looked into this a bit more. Looks like the token
RADEON_MAX_TEXTURE_LEVELS is incorrectly defined to be 11 in
the current driver. It should be 12 in order to support 2Kx2K
textures. It looks like this can safely be changed from 11 to 12
without a kernel module
Keith Whitwell wrote:
Catonga wrote:
The Voodoo 3 card is now nearly as fast as under Windows 2000
in Quake 3.
I never imagined that this could ever be possible.
Thanks a lot to all that made this possible.
Congratulations to Gareth...
Pity I was just getting started, eh?
Keith Whitwell wrote:
If it can't be done in the templates, the templates should be fixed...
However, in the 3.5 branch I've just removed the interrupt altogether.
Good to hear :-)
-- Gareth
Sottek, Matthew J wrote:
This patch adds in shared IRQ's. I added it in a manner that
goes along with the template code so it should only alter
the i810 driver.
Plus, the i810 drm isn't working in XFree 4.1.0. I get
regular oopsen when shutting down X leaving me with a blank
console
Ed Schernau wrote:
David Miller suggested I use the X-supplied driver. How is this change made?
I tried compiling X4.1.0 from source, which failed with syntax errors.
Any points appreciated - is this something obvious?
Can you give an example of the syntax errors? Something has gone
Karl Lessard wrote:
Hi everybody,
this patch fixes some problems in texturing in the mga driver.
1) It fixes a texture corruption problem, as the mga driver did not use
the right texture format in some cases
(I've seen that when an application wants to store a texture in
[EMAIL PROTECTED] wrote:
In the line of code below, which file contains the definition of DRM()?
Is it a macro, and if so, what exactly does it do to the function
alloc()?
radeon_cp.c:entry = DRM(alloc)( sizeof(drm_radeon_freelist_t),
The new templated architecture
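A DRM()-style macro works by token pasting; a reduced sketch follows. The prefix and the stand-in function are ours for illustration — in the real tree the definition lives in the driver's headers and pastes that driver's own prefix:

```c
#include <string.h>

/* Reduced sketch of the DRM() name-mangling idea: shared template code
 * writes DRM(alloc), and each driver defines DRM() to paste its own
 * prefix, producing driver-private symbols like radeon_alloc.  The
 * prefix and stand-in body below are ours, not the real tree's. */
#define DRM(x) radeon_##x

const char *radeon_alloc(void)
{
   return "radeon_alloc";
}

/* DRM(alloc)() expands, at preprocessing time, to radeon_alloc(). */
```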
Trond Eivind Glomsrød wrote:
Joseph Carter [EMAIL PROTECTED] writes:
They extend beyond that. VIA KT133A-based Abit KT7A (friend has KT7-RAID
which is KT133 based and has the same problem..) 30 seconds or so after
we start any serious 3D app, down goes the entire box.
FTR, I've
Digital Z-Man wrote:
There is a project on sourceforge to create a new X server from
scratch. linuxgfx
While it is a cool idea, it will take 10 years to complete.
XFree86 won't be sitting idle for that time. It is easy to say
scrap XFree86, and I agree that it is a huge amount of
My association with VA Linux Systems came to an end last week. Many of
you will not know that I was never actually an employee of Precision
Insight or VA. I started at PI as a contractor, with the expectation
that once US work visa issues were resolved I'd relocate from Australia
and come on
Brian Paul wrote:
At first I was going to suggest a memory management bug in the driver
but after a quick check I see that the maximum viewport size in
Mesa 3.4.1 is 2048 x 1200.
I didn't realize that people were running screens that tall.
I'll bump the vertical limit to 1400 or so for
Joseph Carter wrote:
Kernel claims to support it. Does it actually work? Well, that's a good
question. At any rate, broken USB would be a killer. I'll probably just
have to continue to be patient and hope Gareth runs into and squashes the
big and nasty bug that (many?) VIA users are
Andy Isaacson wrote:
On Sat, May 05, 2001 at 01:32:07AM +1000, Gareth Hughes wrote:
Adam K Kirchhoff wrote:
Speaking of which, I've been thinking about ordering Tribes2 and was
wondering if there are any issues that we should be aware of?
Go and buy this game. It rocks. Loki has