Re: MergedFB and resolution limits...

2005-02-07 Thread Jacek Rosik
On Sunday, 06-02-2005 at 13:02 -0500, Alex Deucher wrote:
 
 On Sun, 06 Feb 2005 11:52:54 -0500, Adam K Kirchhoff
 [EMAIL PROTECTED] wrote:
  
  At one point someone posted to the dri-devel list an idea on how to
  overcome the 2048x2048 limitation on 3D rendering (for r200
  hardware)...  I'm curious if it could be explained again and if anyone
  has already begun work on this?
  
  Adam
  
 
 Jacek posted a patch on DRI devel to do this a week or so ago under
 the subject R200 depth tiling questions.
 
 Alex

That patch was not complete (although applications which don't use the
depth buffer should work). I'm quite busy with other things right now. I
hope to get back to it soon, but I'm not very optimistic :(. The problem
is that the depth buffer is tiled, so it may not be possible to translate
the buffer offset the way it's done for the untiled color buffer. Anyway,
if I come up with a solution I'll let you know.

Best,
-- 
Jacek Rosik [EMAIL PROTECTED]



---
This SF.Net email is sponsored by: IntelliVIEW -- Interactive Reporting
Tool for open source databases. Create drag-&-drop reports. Save time
by over 75%! Publish reports on the web. Export to DOC, XLS, RTF, etc.
Download a FREE copy at http://www.intelliview.com/go/osdn_nl
--
___
Dri-devel mailing list
Dri-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/dri-devel


Re: R200 depth tiling questions.

2005-02-07 Thread Jacek Rosik
On Sunday, 06-02-2005 at 14:53 -0500, Adam K Kirchhoff wrote:
 Jacek Rosik wrote:
 
 BTW: I have a working solution for color, but I wonder if this will work
 with color tiling. Of course the offset would have to be aligned to the
 closest tile. Can you take a look at it? (It's missing some bits, but
 generally apps which don't use depth should work. Unfortunately I don't
 think there are many ;). Attached is a patch. Any comments are welcome.
   
 
 I somehow missed this discussion the first time, but thankfully Alex 
 pointed it out to me...
 
 Anyway...  I've applied your patch, Jacek.  It mostly works, but 
 definitely has some issues:

I thought it wouldn't work fully. :) It was only a partial fix, for the
color buffer only.

 
 http://68.44.156.246/glforestfire.png

It looks like those transparent triangles are rendered without the depth
buffer, while the rest is. Depth isn't working, which is why those
triangles appear.

 http://68.44.156.246/glplanet.png

 This is what happens when I move the window to the lower right hand 
 corner on a MergedFB setup running at 2560x1024.

Does it look different if you force a redisplay after the window move?
Just after a window move the offset is not updated correctly; it will be
corrected at the next frame. I haven't figured out that bit yet.

Best,
-- 
Jacek Rosik [EMAIL PROTECTED]





Re: texturing performance local/gart on r100

2005-02-07 Thread Keith Whitwell
Roland Scheidegger wrote:
Since Felix implemented a different heuristic for texture allocation, I 
decided to do some measurements on the r100 of how fast AGP texturing 
actually is.
Test conditions are as follows:
Celeron Tualatin [EMAIL PROTECTED] on BX-133 (note this has consequences for AGP 
speed: AGP 1x will actually have a transfer rate of AGP 1.33x, and AGP 2x is 
AGP 2.66x), 1.06GB/s main memory bandwidth. Graphics card is a Radeon 
7200 SDR 32MB (@160MHz, memory bandwidth is a paltry 2.05GB/s).
Desktop resolution is 1152x864, local memory available for textures is 
16896kB.
GART texture size was always 3MB less than the GART size (32MB GART size 
unless specifically mentioned). The BIOS AGP aperture size was 128MB, but I 
could not test with a GART size of 64MB in xorg.conf (hard lockup when 
starting X, without anything unusual in the logs as far as I could 
tell). I highly doubt it would have made any difference in performance 
though.
I tested with only the GART heap, with only local tex memory, and 
with both. Note that some quick hacks to disable the local tex memory 
were unsuccessful, with results ranging from chip lockups and hard lockups 
to segfaults (the latter when I used a size of 0 for the local tex 
size), so I just hacked the local tex size to be 65KB instead. The GART heap 
was disabled by using only 1 texture heap.
QuakeIII 1.32b, 800x600 windowed, timedemo demo four, with color tiling, 
with texture tiling (that's another story, btw...), with hyperz, without 
compressed textures, 32-bit textures, trilinear. best means highest 
texture quality; 2nd means I used the second-highest texture quality 
setting.

AGP 1x, GART only,  best: 38 fps
AGP 1x, GART only,  2nd:  50 fps
AGP 1x, local only, best: 33 fps
AGP 1x, local only, 2nd:  74 fps
AGP 1x, GART+local, best: 54 fps
AGP 1x, GART+local, 2nd:  75 fps
AGP 2x, GART only,  best: 57 fps
AGP 2x, GART only,  2nd:  70 fps
AGP 2x, local only, best: 34 fps
AGP 2x, local only, 2nd:  74 fps
AGP 2x, GART+local, best: 64 fps
AGP 2x, GART+local, 2nd:  74 fps
Some additional results to provide some information about the in-use 
texture sizes of these quake3 benchmark runs:
AGP 2x, 16MB GART, GART only,  best: 13 fps
AGP 2x, 16MB GART, GART only,  2nd:  67 fps
AGP 2x, 8MB GART,  GART+local, best: 60 fps
AGP 2x, 8MB GART,  GART+local, 2nd:  74 fps

And for reference:
AGP 2x, GART+local, 16bit textures (still with tex tiling), best: 77 fps
AGP 2x, GART+local, compressed textures, best: 85 fps
AGP 2x, GART+local, without texture tiling, best: 57 fps (just a teaser 
of what you can expect from that patch; the good news is that it now 
actually seems to be fully working...)

All results were only reasonably consistent (I got something like +- 2 
fps); IIRC I got rather more reliable results in the past.

So, now the interesting part: interpretation of the results...
When not using GART texturing, AGP 1x vs. AGP 2x makes no difference 
(that's not exactly news; nothing to see here, move along...).
BUT, GART texturing performance definitely takes a big hit with AGP 1x 
vs. 2x. With AGP 2x, however, overall performance is only around 15% slower 
than with local memory. Obviously, using the GART texture heap is 
MUCH preferable to texture thrashing, where performance really tanks 
completely (you can't see it from the numbers, but with those 
33 fps with only local memory, for instance, there are some sections of the 
benchmark run where the framerate stays below 10 fps for several seconds; 
OTOH some parts seem to run a bit faster than when you have both local 
and GART textures).
As a consequence, I think it would be a really good idea to enable 
faster AGP modes and larger GART sizes by default, especially on those 
ultra-low-mem (16MB or even 8MB, though the latter probably hardly ever 
get 3D acceleration at all) Radeon Mobilitys.
Also, the r200 driver really should get GART textures too (in fact, with my 
RV250, which only has 33MB or so available for textures, there are parts 
in some RTCW maps where performance drops to 5 fps or so, while the r100, 
thanks to its larger total RAM available for texture maps, still 
manages 20 fps or so...).
I fully support the idea of enabling gart texturing on the r200 driver. 
 If the old client texturing code can be kept around as an X config 
option, so much the better, but it shouldn't stand in the way of gart 
texturing given the data above.

Keith


Re: texturing performance local/gart on r100

2005-02-07 Thread Keith Whitwell

btw texdown showed that texture transfers to card memory are faster than 
to AGP memory, but not by very much (something like 100MB/s vs. 140MB/s 
in the best case, though the numbers I got fluctuated quite a bit).
How are AGP texture uploads being done?
The card memory uploads are actually done via agp buffers - ie the data 
is written by the driver to agp memory, the card then copies that to 
card memory.  If the AGP case is the same, the data probably travels up 
to the card and then back down again to AGP memory, accounting for the 
relative slowdown.

One benefit of using the card to do the up/downloads is synchronization 
with the graphics engine - if you were to write the texture data 
directly you'd have to have some extra mechanisms to ensure that the 
memory wasn't being used by commands still unprocessed by the GPU.  This 
actually wouldn't be that hard to organize.

Also, note that there is quite a bit of copying going on:
- Application calls glTexImage
- Mesa allocates system memory and copies image
- Driver allocates agp buffers and copies image into them
- Card receives blit command and copies image to final destination.
Currently Mesa needs to keep the system memory copy because texture 
images in card or agp memory can be clobbered by other apps at any time 
- Ian's texture manager will address this.

In the via and sis drivers, texture allocations are permanent, so I've 
been able to try a different strategy:

	- Application calls glTexImage
	- Mesa allocates AGP/card memory and copies texture directly to final 
destination (using memcpy().)

This resulted in an approximate 2x speedup in texture downloads against 
a strategy similar to the first one outlined (but implemented with cpu 
copies, not a hostdata blit).

Keith


Re: texturing performance local/gart on r100

2005-02-07 Thread Felix Kühling
On Monday, 07.02.2005 at 09:20, Keith Whitwell wrote:
  btw texdown showed that texture transfers to card memory are faster than 
  to AGP memory, but not by very much (something like 100MB/s vs. 140MB/s 
  in the best case, though the numbers I got fluctuated quite a bit).
 
 How are AGP texture uploads being done?
 
 The card memory uploads are actually done via agp buffers - ie the data 
 is written by the driver to agp memory, the card then copies that to 
 card memory.  If the AGP case is the same, the data probably travels up 
 to the card and then back down again to AGP memory, accounting for the 
 relative slowdown.
 
 One benefit of using the card to do the up/downloads is synchronization 
 with the graphics engine - if you were to write the texture data 
 directly you'd have to have some extra mechanisms to ensure that the 
 memory wasn't being used by commands still unprocessed by the GPU.  This 
 actually wouldn't be that hard to organize.

The Savage driver does this. Currently it waits for engine idle before
uploading a texture. I thought there must be some more efficient
(age-based) method. I haven't looked into the details yet. Do you have a
hint that would get me started in the right direction?

 
 Also, note that there is quite a bit of copying going on:
 
   - Application calls glTexImage
   - Mesa allocates system memory and copies image
   - Driver allocates agp buffers and copies image into them
   - Card receives blit command and copies image to final destination.
 
 
 Currently Mesa needs to keep the system memory copy because texture 
 images in card or agp memory can be clobbered by other apps at any time 
 - Ian's texture manager will address this.
 
 In the via and sis drivers, texture allocations are permanent, so I've 
 been able to try a different strategy:
 
   - Application calls glTexImage
   - Mesa allocates AGP/card memory and copies texture directly to final 
 destination (using memcpy().)
 
 This resulted in an approximate 2x speedup in texture downloads against 
 a strategy similar to the first one outlined (but implemented with cpu 
 copies, not a hostdata blit).

The Savage driver uploads textures to destination memory by memcpy
(actually a bit more complicated due to texture tiling). I did some
optimization of that tiled upload recently. Now oprofile shows that most
CPU usage in texdown is not in the tiled upload but in Mesa's
texstore functions. I suppose they could use some optimization too.


-- 
| Felix Kühling [EMAIL PROTECTED] http://fxk.de.vu |
| PGP Fingerprint: 6A3C 9566 5B30 DDED 73C3  B152 151C 5CC1 D888 E595 |





Re: texturing performance local/gart on r100

2005-02-07 Thread Keith Whitwell
Felix Kühling wrote:
On Monday, 07.02.2005 at 09:20, Keith Whitwell wrote:
btw texdown showed that texture transfers to card memory are faster than 
to AGP memory, but not by very much (something like 100MB/s vs. 140MB/s 
in the best case, though the numbers I got fluctuated quite a bit).
How are AGP texture uploads being done?
The card memory uploads are actually done via agp buffers - ie the data 
is written by the driver to agp memory, the card then copies that to 
card memory.  If the AGP case is the same, the data probably travels up 
to the card and then back down again to AGP memory, accounting for the 
relative slowdown.

One benefit of using the card to do the up/downloads is synchronization 
with the graphics engine - if you were to write the texture data 
directly you'd have to have some extra mechanisms to ensure that the 
memory wasn't being used by commands still unprocessed by the GPU.  This 
actually wouldn't be that hard to organize.

The Savage driver does this. Currently it waits for engine idle before
uploading a texture. I thought there must be some more efficient
(age-based) method. I havn't looked into the details yet. Do you have a
hint that would get me started in the right direction?

Also, note that there is quite a bit of copying going on:
- Application calls glTexImage
- Mesa allocates system memory and copies image
- Driver allocates agp buffers and copies image into them
- Card receives blit command and copies image to final destination.
Currently Mesa needs to keep the system memory copy because texture 
images in card or agp memory can be clobbered by other apps at any time 
- Ian's texture manager will address this.

In the via and sis drivers, texture allocations are permanent, so I've 
been able to try a different strategy:

	- Application calls glTexImage
	- Mesa allocates AGP/card memory and copies texture directly to final 
destination (using memcpy().)

This resulted in an approximate 2x speedup in texture downloads against 
a strategy similar to the first one outlined (but implemented with cpu 
copies, not a hostdata blit).

The Savage driver uploads textures by memcpy (actually a bit more
complicated due to texture tiling) to destination memory.
The Savage upload mechanism is effectively the same as the via's was 
before this change - Mesa is still creating a copy of the texture in 
local memory (and using its texstore functions to populate it).  Later 
on, the upload to AGP/FB memory is done by a 
SetTexImages/UploadTexImages step.

Because savage (like via) has a simple kernel texture memory manager, 
there's no actual reason the upload can't go straight to AGP or 
framebuffer memory, skipping the intermediate copy.

Have a look at the code on the mesa_20050114_branch - it's quite 
different from the standard DRI driver mechanisms.  It's really just 
waiting for Ian's memory manager to round the whole thing out into a 
grown-up texture memory system.

I did some
optimization of that tiled upload recently. Now oprofile shows that most
CPU usage in texdown is not in the tiled upload but in mesa's
texstore-functions. I suppose they could use some optimization too.
I've done a little bit of this on the (badly named) 
mesa_20050114_branch.  There are some pretty obvious things to do there 
which I'll pull onto the trunk today.

Keith



Re: texturing performance local/gart on r100

2005-02-07 Thread Keith Whitwell
Felix Kühling wrote:
On Monday, 07.02.2005 at 09:20, Keith Whitwell wrote:
btw texdown showed that texture transfers to card memory are faster than 
to AGP memory, but not by very much (something like 100MB/s vs. 140MB/s 
in the best case, though the numbers I got fluctuated quite a bit).
How are AGP texture uploads being done?
The card memory uploads are actually done via agp buffers - ie the data 
is written by the driver to agp memory, the card then copies that to 
card memory.  If the AGP case is the same, the data probably travels up 
to the card and then back down again to AGP memory, accounting for the 
relative slowdown.

One benefit of using the card to do the up/downloads is synchronization 
with the graphics engine - if you were to write the texture data 
directly you'd have to have some extra mechanisms to ensure that the 
memory wasn't being used by commands still unprocessed by the GPU.  This 
actually wouldn't be that hard to organize.

The Savage driver does this. Currently it waits for engine idle before
uploading a texture. I thought there must be some more efficient
(age-based) method. I havn't looked into the details yet. Do you have a
hint that would get me started in the right direction?
I'm still working on the age stuff, but the general strategy is to not 
release memory back into the pool until it is guaranteed to be no longer 
referenced.  This means hanging onto it for a little while, until perhaps 
the end of a frame or until the next time you notice the engine is idle.

Note that the via doesn't provide any nice IRQ notification for tracking 
engine progress - you could do a lot better with that sort of mechanism.

Keith



Re: texturing performance local/gart on r100

2005-02-07 Thread Felix Kühling
On Monday, 07.02.2005 at 12:14, Keith Whitwell wrote:
 Felix Kühling wrote:
  On Monday, 07.02.2005 at 09:20, Keith Whitwell wrote:
  
 btw texdown showed that texture transfers to card memory are faster than 
 to AGP memory, but not by very much (something like 100MB/s vs. 140MB/s 
 in the best case, though the numbers I got fluctuated quite a bit).
 
 How are AGP texture uploads being done?
 
 The card memory uploads are actually done via agp buffers - ie the data 
 is written by the driver to agp memory, the card then copies that to 
 card memory.  If the AGP case is the same, the data probably travels up 
 to the card and then back down again to AGP memory, accounting for the 
 relative slowdown.
 
 One benefit of using the card to do the up/downloads is synchronization 
 with the graphics engine - if you were to write the texture data 
 directly you'd have to have some extra mechanisms to ensure that the 
 memory wasn't being used by commands still unprocessed by the GPU.  This 
 actually wouldn't be that hard to organize.
  
  
  The Savage driver does this. Currently it waits for engine idle before
  uploading a texture. I thought there must be some more efficient
  (age-based) method. I havn't looked into the details yet. Do you have a
  hint that would get me started in the right direction?
 
 
 I'm still working on the age stuff, but the general strategy is to not 
 release memory back into the pool until it is guarenteed no longer 
 referenced.  This means hanging onto it for a little while until perhaps 
 the end of a frame or until the next time you notice the engine is idle.

The Savage driver doesn't have its own texture memory manager (you
claimed it did in your other reply). So there is no memory pool managed
by the kernel. I'm trying to do this with the current user-space shared
memory manager (texmem.[ch]). I think it'll be difficult to do what I
want without sacrificing driver independence or breaking binary
compatibility of the sarea structures. I'll have to take a closer look
at this.

 
 Note that the via doesn't provide any nice IRQ notification for tracking 
 engine progress - you could do a lot better with that sort of mechanism.

Yep, though it doesn't use IRQs (yet).

 
 Keith

-- 
| Felix Kühling [EMAIL PROTECTED] http://fxk.de.vu |
| PGP Fingerprint: 6A3C 9566 5B30 DDED 73C3  B152 151C 5CC1 D888 E595 |





Re: OpenGL apps causes frequent system locks

2005-02-07 Thread Geller Sandor
On Sun, 6 Feb 2005, Richard Stellingwerff wrote:

 On Sun, 06 Feb 2005 13:45:47 -0500, Michel Dänzer [EMAIL PROTECTED] wrote:
  Does it also happen without either or all of the options in the radeon
  device section?

 I just removed the AGPFastWrite and DynamicClocks options. The crashes
 still happen though.

Looks like I'm not the only one having problems with the radeon driver. I
update X.org, drm, and Mesa CVS once a week, but haven't found a working
combination in 4-5 months...

I don't intend to start a flame war, but is there anybody who can use the
r200 driver without X crashes? I'm testing X.org CVS regularly (almost
every weekend) with my RV280, with the latest Linux 2.6 kernel.

I checked out X.org last Saturday and played Descent3 for a few minutes;
it didn't crash. Good. Restarted X, started Descent3 again, and it
crashed almost immediately, as expected :(( That's why I have a 'shutdown
-r 5' running in the background when I test X.org CVS...

Compiled Mesa CVS, installed the libraries and the driver, started D3.
(Descent3 looks great, textures are visible, thanks to Eric Anholt's
projected texture patch which is in Mesa CVS.) The game crashed X within
a few seconds. This was expected too :((

I tried other OpenGL-based games; unfortunately, I can crash X with
almost every game I have - it is only a matter of time. I tried setting
the color depth to 16 bit and changing AGP to 1x in the system BIOS;
neither helped.

Last time I used the 2.6.11-rc3 Linux kernel, X.org CVS (updated
20050205), and Mesa CVS (20050205, linux-x86-dri target). I didn't build
the drm module; I used the kernel's radeon drm module. I used to test the
drm compiled from CVS, but I found that it was only a matter of time
before the X server crashed, so I skipped the drm CVS test. Of course
the real tests would be these:

1. Build and install everything from CVS. If the X server can be crashed,
 go to step 2; otherwise be happy :))
2. Use the X.org CVS version with the stock kernel's drm. If X still
 crashes, go to step 3. Otherwise use X.org CVS and live without
 projected textures...
3. Use the X.org and Mesa CVS versions. If X still crashes, then the bug
 can be in X.org, Mesa, or drm - I'm not able to trace down the
 problem.

Unfortunately all 3 scenarios give the same result: X crashes.

Is there any way I can help track down the problem(s)? My machine
doesn't have a network connection, so I can only use scripts which run in
the background. With expect and gdb it may be possible to get at least a
backtrace from my non-interactive machine.

Regards,

  Geller Sandor [EMAIL PROTECTED]




Re: texturing performance local/gart on r100

2005-02-07 Thread Keith Whitwell
Felix Kühling wrote:
On Monday, 07.02.2005 at 12:14, Keith Whitwell wrote:
Felix Kühling wrote:
On Monday, 07.02.2005 at 09:20, Keith Whitwell wrote:

btw texdown showed that texture transfers to card memory are faster than 
to AGP memory, but not by very much (something like 100MB/s vs. 140MB/s 
in the best case, though the numbers I got fluctuated quite a bit).
How are AGP texture uploads being done?
The card memory uploads are actually done via agp buffers - ie the data 
is written by the driver to agp memory, the card then copies that to 
card memory.  If the AGP case is the same, the data probably travels up 
to the card and then back down again to AGP memory, accounting for the 
relative slowdown.

One benefit of using the card to do the up/downloads is synchronization 
with the graphics engine - if you were to write the texture data 
directly you'd have to have some extra mechanisms to ensure that the 
memory wasn't being used by commands still unprocessed by the GPU.  This 
actually wouldn't be that hard to organize.

The Savage driver does this. Currently it waits for engine idle before
uploading a texture. I thought there must be some more efficient
(age-based) method. I havn't looked into the details yet. Do you have a
hint that would get me started in the right direction?
I'm still working on the age stuff, but the general strategy is to not 
release memory back into the pool until it is guarenteed no longer 
referenced.  This means hanging onto it for a little while until perhaps 
the end of a frame or until the next time you notice the engine is idle.

The Savage driver doesn't have its own texture memory manager (you
claimed it had in your other reply). So there is no memory pool managed
by the kernel. I'm trying to do this with the current user-space shared
memory manager (texmem.[ch]). I think it'll be difficult to do what I
want without sacrificing driver-independence or breaking binary
compatibility of the sarea structures. I'll have to take a closer look
at this.
Ah, sorry - I had a braino: SiS and Savage - different, different, 
different.  Oh well...

Yes, it doesn't make sense to try to incorporate this code at all.  The 
texstore.c fixes should help with download of argb textures; 
otherwise I don't have a lot new to offer...

Keith


Re: texturing performance local/gart on r100

2005-02-07 Thread Keith Whitwell
Keith Whitwell wrote:
Yes, it doesn't make sense to try and incorporate this code at all.  The 
texstore.c fixes should help with download of argb textures, 
otherwise I don't have a lot new to offer...

These are committed now - let me know if they make a difference.
Keith


sis-20050205-linux snapshot - problems

2005-02-07 Thread mhf
Hardware P4 with SIS chipset:

:00:01.0 PCI bridge: Silicon Integrated Systems [SiS] Virtual PCI-to-PCI bridge (AGP) (prog-if 00 [Normal decode])
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B-
Status: Cap- 66Mhz- UDF- FastB2B- ParErr- DEVSEL=fast TAbort- TAbort- MAbort- SERR- PERR-
Latency: 64
Bus: primary=00, secondary=01, subordinate=01, sec-latency=32
I/O behind bridge: d000-dfff
Memory behind bridge: ea00-ea0f
Prefetchable memory behind bridge: e000-e7ff
BridgeCtl: Parity- SERR+ NoISA+ VGA+ MAbort- Reset- FastB2B-

:01:00.0 VGA compatible controller: Silicon Integrated Systems [SiS] 65x/M650/740 PCI/AGP VGA Display Adapter (prog-if 00 [VGA])
Subsystem: Micro-Star International Co., Ltd.: Unknown device 5339
Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B-
Status: Cap+ 66Mhz+ UDF- FastB2B+ ParErr- DEVSEL=medium TAbort- TAbort- MAbort- SERR- PERR-
Interrupt: pin A routed to IRQ 10
BIST result: 00
Region 0: Memory at e000 (32-bit, prefetchable) [size=128M]
Region 1: Memory at ea00 (32-bit, non-prefetchable) [size=128K]
Region 2: I/O ports at d000 [size=128]
Capabilities: [40] Power Management version 2
Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
Status: D0 PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [50] AGP version 2.0
Status: RQ=16 Iso- ArqSz=0 Cal=0 SBA+ ITACoh- GART64- HTrans- 64bit- FW- AGP3- Rate=x1,x2,x4
Command: RQ=1 ArqSz=0 Cal=0 SBA- AGP- GART64- 64bit- FW- Rate=none


Software:

Gentoo current with Gentoo-supplied X Window System Version 6.8.1.903 (6.8.2 RC 3)
Release Date: 25 January 2005
X Protocol Version 11, Revision 0, Release 6.8.1.903
Build Operating System: Linux 2.4.29-rc3-mhf239 i686 [ELF]
Current Operating System: Linux mhfl2 2.4.29-rc3-mhf239 #2 Tue Jan 18 17:43:33 CET 2005 i686
Build Date: 05 February 2005

Installed snapshot from sis-20050205-linux.i386.tar.bz2. On starting X:

From dmesg:

No messages

From Xorg.0.log:

(II) SIS(0): Primary V_BIOS segment is: 0xc000
(II) SIS(0): VESA BIOS detected
(II) SIS(0): VESA VBE Version 3.0
(II) SIS(0): VESA VBE Total Mem: 16384 kB
(II) SIS(0): VESA VBE OEM: SiS
(II) SIS(0): VESA VBE OEM Software Rev: 1.0
(II) SIS(0): VESA VBE OEM Vendor: Silicon Integrated Systems Corp.
(II) SIS(0): VESA VBE OEM Product: 6325
(II) SIS(0): VESA VBE OEM Product Rev: 1.11.29
(==) SIS(0): Write-combining range (0xe000,0x100)
(II) SIS(0): Setting standard mode 0x18
(NI) SIS(0): DRI not supported on this chipset

DRI never worked on this hardware. What is the reason DRI is not supported?

If there's anything I can do to help fix or test, please let me know.

Michael




i810-20050205-linux - success report

2005-02-07 Thread mhf
Hardware Celeron 433 with i810 chipset:

:00:00.0 Host bridge: Intel Corp. 82810E DC-133 GMCH [Graphics Memory 
Controller Hub] (rev 03)
Subsystem: Intel Corp. 82810E DC-133 GMCH [Graphics Memory Controller 
Hub]
Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- 
Stepping- SERR+ FastB2B-
Status: Cap- 66Mhz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ >SERR- <PERR-
Latency: 0

:00:01.0 VGA compatible controller: Intel Corp. 82810E DC-133 CGC [Chipset 
Graphics Controller] (rev 03) (prog-if 00 [VGA])
Subsystem: FIRST INTERNATIONAL Computer Inc: Unknown device 9980
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- 
Stepping- SERR- FastB2B-
Status: Cap+ 66Mhz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR-
Latency: 0
Interrupt: pin A routed to IRQ 9
Region 0: Memory at e800 (32-bit, prefetchable) [size=64M]
Region 1: Memory at eff8 (32-bit, non-prefetchable) [size=512K]
Capabilities: [dc] Power Management version 1
Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA 
PME(D0-,D1-,D2-,D3hot-,D3cold-)
Status: D0 PME-Enable- DSel=0 DScale=0 PME-

Software:

Gentoo current with Gentoo supplied X Window System Version 6.8.1.903 (6.8.2 RC 
3)
Release Date: 25 January 2005
X Protocol Version 11, Revision 0, Release 6.8.1.903
Build Operating System: Linux 2.4.29-rc3-mhf239 i686 [ELF] 
Current Operating System: Linux mhfl2 2.4.29-rc3-mhf239 #2 Tue Jan 18 17:43:33 
CET 2005 i686
Build Date: 05 February 2005

Installed snapshot from i810-20050205-linux.i386.tar.bz2. 

DRI is functional: glxgears fullscreen at 1024x768 runs at 58 FPS, versus 18 FPS without DRI.

There was only one minor problem: when running X -configure, it also
loaded the dri-old libraries and crashed.

$ ll /usr/lib/modules/extensions
total 6184
-rw-r--r--  1 root 2203502 Feb  7 07:55 dri-old.libGLcore.a
-rw-r--r--  1 root   28570 Feb  7 07:55 dri-old.libdri.a
-rw-r--r--  1 root  462514 Feb  7 07:55 dri-old.libglx.a
[snip]
$ sudo mv /usr/lib/modules/extensions/dri-old.* /tmp

fixed the problem.

Thank you
Michael




savage-20050205-linux snapshot - problems

2005-02-07 Thread mhf
Hardware:

Toshiba Libretto L2 Tm5600 with:

:00:04.0 VGA compatible controller: S3 Inc. 86C270-294 Savage/IX-MV (rev 13) (prog-if 00 [VGA])
Subsystem: Toshiba America Info Systems: Unknown device 0001
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B-
Status: Cap+ 66Mhz- UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR-
Latency: 248 (1000ns min, 63750ns max), cache line size 08
Interrupt: pin A routed to IRQ 11
Region 0: Memory at e000 (32-bit, non-prefetchable) [size=128M]
Expansion ROM at 000c [disabled] [size=64K]
Capabilities: available only to root

Software:

Gentoo current with Gentoo supplied X Window System Version 6.8.1.903 (6.8.2 RC 3)
Release Date: 25 January 2005
X Protocol Version 11, Revision 0, Release 6.8.1.903
Build Operating System: Linux 2.4.29-rc3-mhf239 i686 [ELF]
Current Operating System: Linux mhfl4 2.4.29-rc3-mhf239 #2 Tue Jan 18 17:43:33 CET 2005 i686
Build Date: 05 February 2005

Installed snapshot from savage-20050205-linux.i386.tar.bz2. On starting X:

From dmesg:

[drm] Initialized savage 1.0.0 20011023 on minor 0: S3 Inc. 86C270-294 Savage/IX-MV
[drm:savage_unlock] *ERROR* Process 5736 using kernel context 0

From Xorg.0.log:

(II) SAVAGE(0): Primary V_BIOS segment is: 0xc000
(II) SAVAGE(0): VESA BIOS detected
(II) SAVAGE(0): VESA VBE Version 2.0
(II) SAVAGE(0): VESA VBE Total Mem: 8192 kB
(II) SAVAGE(0): VESA VBE OEM: S3 Incorporated. M7 BIOS
(II) SAVAGE(0): VESA VBE OEM Software Rev: 1.0
(II) SAVAGE(0): VESA VBE OEM Vendor: S3 Incorporated.
(II) SAVAGE(0): VESA VBE OEM Product: VBE 2.0
(II) SAVAGE(0): VESA VBE OEM Product Rev: Rev 1.1
(--) SAVAGE(0): mapping framebuffer @ 0xe000 with size 0x100
(II) SAVAGE(0): map aperture:0x413cc000
(II) SAVAGE(0): 4692 kB of Videoram needed for 3D; 16384 kB of Videoram available
(II) SAVAGE(0): Sufficient Videoram available for 3D
(II) SAVAGE(0): [drm] bpp: 16 depth: 16
(II) SAVAGE(0): [drm] Sarea 2200+284: 2484
drmOpenDevice: node name is /dev/dri/card0
drmOpenDevice: open result is 7, (OK)
drmOpenDevice: node name is /dev/dri/card0
drmOpenDevice: open result is 7, (OK)
drmOpenByBusid: Searching for BusID pci::00:04.0
drmOpenDevice: node name is /dev/dri/card0
drmOpenDevice: open result is 7, (OK)
drmOpenByBusid: drmOpenMinor returns 7
drmOpenByBusid: drmGetBusid reports pci::00:04.0
(II) SAVAGE(0): [drm] DRM interface version 1.2
(II) SAVAGE(0): [drm] created savage driver at busid pci::00:04.0
(II) SAVAGE(0): [drm] added 8192 byte SAREA at 0xcfa3a000
(II) SAVAGE(0): [drm] mapped SAREA 0xcfa3a000 to 0x40024000
(II) SAVAGE(0): [drm] framebuffer handle = 0xe000
(II) SAVAGE(0): [drm] added 1 reserved context for kernel
(EE) SAVAGE(0): [dri] SAVAGEDRIScreenInit failed because of a version mismatch.
[dri] savage.o kernel module version is 1.0.0 but version 2.0.x is needed.
[dri] Disabling DRI.
(II) SAVAGE(0): [drm] removed 1 reserved context for kernel
(II) SAVAGE(0): [drm] unmapping 8192 bytes of SAREA 0xcfa3a000 at 0x40024000
(EE) SAVAGE(0): DRI isn't enabled

So the driver in the snapshot still reports version 1.0 and seems to be quite old (dated 2001).

I changed that to 2.0, rebuilt, and on starting X:

From dmesg:

[drm] Initialized savage 2.0.0 20011023 on minor 0: S3 Inc. 86C270-294 Savage/IX-MV
[drm:savage_unlock] *ERROR* Process 9671 using kernel context 0
[drm:savage_unlock] *ERROR* Process 11025 using kernel context 0

From Xorg.0.log:

(II) SAVAGE(0): Primary V_BIOS segment is: 0xc000
(II) SAVAGE(0): VESA BIOS detected
(II) SAVAGE(0): VESA VBE Version 2.0
(II) SAVAGE(0): VESA VBE Total Mem: 8192 kB
(II) SAVAGE(0): VESA VBE OEM: S3 Incorporated. M7 BIOS
(II) SAVAGE(0): VESA VBE OEM Software Rev: 1.0
(II) SAVAGE(0): VESA VBE OEM Vendor: S3 Incorporated.
(II) SAVAGE(0): VESA VBE OEM Product: VBE 2.0
(II) SAVAGE(0): VESA VBE OEM Product Rev: Rev 1.1
(--) SAVAGE(0): mapping framebuffer @ 0xe000 with size 0x100
(II) SAVAGE(0): map aperture:0x413cc000
(II) SAVAGE(0): 4692 kB of Videoram needed for 3D; 16384 kB of Videoram available
(II) SAVAGE(0): Sufficient Videoram available for 3D
(II) SAVAGE(0): [drm] bpp: 16 depth: 16
(II) SAVAGE(0): [drm] Sarea 2200+284: 2484
drmOpenDevice: node name is /dev/dri/card0
drmOpenDevice: open result is 7, (OK)
drmOpenDevice: node name is /dev/dri/card0
drmOpenDevice: open result is 7, (OK)
drmOpenByBusid: Searching for BusID pci::00:04.0
drmOpenDevice: node name is /dev/dri/card0
drmOpenDevice: open result is 7, (OK)
drmOpenByBusid: drmOpenMinor returns 7
drmOpenByBusid: drmGetBusid reports pci::00:04.0
(II) SAVAGE(0): [drm] DRM interface version 1.2
(II) SAVAGE(0): [drm] created savage driver at busid pci::00:04.0
(II) SAVAGE(0): [drm] added 8192 byte SAREA at 0xcfa3a000
(II) SAVAGE(0): [drm] mapped SAREA 0xcfa3a000 to 0x40024000
(II) SAVAGE(0): [drm] framebuffer handle = 0xe000
(II) SAVAGE(0): [drm] 

Re: savage-20050205-linux snapshot - problems

2005-02-07 Thread Felix Kühling
On Monday, 07.02.2005 at 15:12 +0100, [EMAIL PROTECTED] wrote:
 (II) SAVAGE(0): [drm] added 1 reserved context for kernel
 (EE) SAVAGE(0): [dri] SAVAGEDRIScreenInit failed because of a version mismatch.
 [dri] savage.o kernel module version is 1.0.0 but version 2.0.x is needed.
 [dri] Disabling DRI.
 (II) SAVAGE(0): [drm] removed 1 reserved context for kernel
 (II) SAVAGE(0): [drm] unmapping 8192 bytes of SAREA 0xcfa3a000 at 0x40024000
 (EE) SAVAGE(0): DRI isn't enabled
 
 So, driver in snapshot still reports 1.0. Seems to be quite old (2001).

The new Savage DRM 2.0.0 (in fact 2.2.0 by now) is only available for
Linux 2.6. Since Linux 2.4 is no longer open for new features there is
not much point back-porting it to Linux 2.4. See
http://dri.freedesktop.org/wiki/S3Savage for more information about the
savage driver status. I just added a note about Linux 2.4 to that page.

 
 Changed that to 2.0, rebuild on starting X:

Don't do that. You're claiming an interface version that this DRM doesn't
actually provide. There are good reasons for checking interface versions,
as you found out below. ;-)

 
 From dmesg:
 
 [drm] Initialized savage 2.0.0 20011023 on minor 0: S3 Inc. 86C270-294 Savage/IX-MV
 [drm:savage_unlock] *ERROR* Process 9671 using kernel context 0
 [drm:savage_unlock] *ERROR* Process 11025 using kernel context 0
 

Re: sis-20050205-linux snapshot - problems

2005-02-07 Thread Adam Jackson
On Monday 07 February 2005 09:11, [EMAIL PROTECTED] wrote:
 (II) SIS(0): Primary V_BIOS segment is: 0xc000
 (II) SIS(0): VESA BIOS detected
 (II) SIS(0): VESA VBE Version 3.0
 (II) SIS(0): VESA VBE Total Mem: 16384 kB
 (II) SIS(0): VESA VBE OEM: SiS
 (II) SIS(0): VESA VBE OEM Software Rev: 1.0
 (II) SIS(0): VESA VBE OEM Vendor: Silicon Integrated Systems Corp.
 (II) SIS(0): VESA VBE OEM Product: 6325
 (II) SIS(0): VESA VBE OEM Product Rev: 1.11.29
 (==) SIS(0): Write-combining range (0xe000,0x100)
 (II) SIS(0): Setting standard mode 0x18
 (NI) SIS(0): DRI not supported on this chipset

 DRI never worked on this hardware. What is the reason for DRI not
 supported?

According to Thomas Winischhofer's page, DRI is only supported on the SiS 300 
series, which apparently means the 300, 540, 630, and 730, but not the 650/740 
you have.  SiS's numbering scheme makes even less sense than ATI's, it 
seems...

As to why, either a) it doesn't actually have a 3D engine, or b) we don't have 
any docs or sample code for it, or c) we do but no one's turned it into a 
working driver.  I suspect option b.

- ajax




Re: DRM change for R300 DMA

2005-02-07 Thread Jan Kreuzer
Hi Ben

your patch seems to solve some of the lockups I experienced (for example,
without your patch I got random lockups after trying some of the NeHe
lessons; now I can run most of them fine). However, I noticed that Xorg
CPU usage went up to 10% (from around 1%) and that screen rendering
(2D and 3D) stops for a short time every second. Also, NeHe lesson 16 still
produces a hard lock. I will test more with Neverball and Tux Racer (as I
am on x86_64 I could not test 32-bit legacy apps).

Greetings Jan





[Bug 1195] (EE) I810(0): [dri] DRIScreenInit failed. Disabling DRI.

2005-02-07 Thread bugzilla-daemon
Please do not reply to this email: if you want to comment on the bug, go to
the URL shown below and enter your comments there.

https://bugs.freedesktop.org/show_bug.cgi?id=1195

--- Additional Comments From [EMAIL PROTECTED]  2005-02-07 07:04 ---
Jim,

You should use 

VideoRAM 16384

in your config file, so that a texture pool can be created. That should
re-enable DRI.

As for the bad refresh - I presume you mean the monitor refresh rate. It
looks like you have 60Hz selected for all your modes. You should check your
Monitor section if 60Hz is too low for you.
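For reference, the suggested option belongs in the Device section of xorg.conf. A minimal sketch - the Identifier string and the comment are typical examples, not taken from the reporter's actual config:

```
Section "Device"
    Identifier "Intel i810"
    Driver     "i810"
    VideoRAM   16384    # kB of video memory; leaves room for the DRI texture pool
EndSection
```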
 
 
--
Configure bugmail: https://bugs.freedesktop.org/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are the assignee for the bug, or are watching the assignee.




Re: r300 texture format

2005-02-07 Thread Vladimir Dergachev

On Sun, 6 Feb 2005, Jerome Glisse wrote:
Hi,
I have a little problem on PPC: I get an unknown texture format for
NeHe lessons 6-7 (I haven't tested the others, but I guess they have
the same problem). I think there is a big-endian issue somewhere
(always this :)), because with the previous version the textures
worked fine.
I have looked a bit at the code but did not manage to find where the
texture format used in r300_setup_textures is set. I guess it is set
in r300SetTexImages using the tx_table value, which itself may be set
according to r300ChooseTextureFormat.
The value I get is 0xff01 (the second entry in tx_table).
So my question: is the RGB texture format supported (in which case I
should not see this), or is the texture format handling being reworked
(in which case this is expected)?
thx
Jerome Glisse

I changed the code a little - to get this working again, just create the
format entry (instead of 0xff01) like the first one.

The reason this is not set is that I could not find any programs to test
it with and did not want to guess wrong - it would be a pain to debug
later.

  best
Vladimir Dergachev
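The lookup being discussed can be pictured as a table indexed by the Mesa texture format, with a sentinel marking hardware entries nobody has verified yet. A minimal sketch - the enum names, table values, and helper function are illustrative, not the real r300 driver code; only the 0xff01 sentinel mirrors the value Jerome observed:

```c
#include <stdint.h>

/* Illustrative Mesa-side format ids (not the real MESA_FORMAT_* values). */
enum fmt { FMT_RGBA8888, FMT_RGB888, FMT_RGB565, FMT_COUNT };

#define TX_UNTESTED 0xff01u  /* sentinel: entry not filled in yet */

/* Hypothetical hardware format codes; the RGB888 slot is left at the
 * sentinel because no test program was available to verify it. */
static const uint32_t tx_table[FMT_COUNT] = {
    [FMT_RGBA8888] = 0x00000002u,  /* made-up hardware code */
    [FMT_RGB888]   = TX_UNTESTED,  /* the "second entry" from the report */
    [FMT_RGB565]   = 0x00000004u,  /* made-up hardware code */
};

/* Returns 1 and stores the hardware code if the format is usable;
 * returns 0 where the driver would report "unknown texture format". */
static int lookup_hw_format(enum fmt f, uint32_t *out)
{
    if (tx_table[f] == TX_UNTESTED)
        return 0;
    *out = tx_table[f];
    return 1;
}
```

On big-endian machines such as PPC, the byte order of each texel also has to match what the hardware expects, which is why an entry verified only on little-endian x86 is risky to guess at.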


Re: DRM change for R300 DMA

2005-02-07 Thread Jan Kreuzer
Hi again
when I try to load the DRM module with debug enabled I get the following
message in my syslog (and DRI does not work in Xorg):

Unable to handle kernel NULL pointer dereference at 0018
RIP:
a004afe4{:radeon:gpio_setsda+20}
PML4 efa6067 PGD e4ce067 PMD 0
Oops:  [1] PREEMPT
CPU 0
Modules linked in: md5 ipv6 ide_cd cdrom snd_ioctl32 snd_pcm_oss
snd_mixer_oss snd_seq_oss snd_seq_midi_event snd_seq usblp joydev
usbmouse usbhid snd_via82xx snd_ac97_codec snd_pcm snd_timer
snd_page_alloc gameport snd_mpu401_uart snd_rawmidi snd_seq_device snd
soundcore i2c_viapro ehci_hcd uhci_hcd sd_mod sata_promise libata
scsi_mod tuner bttv video_buf firmware_class v4l2_common btcx_risc
videodev evdev usbcore ext3 jbd mbcache eeprom i2c_sensor radeon
i2c_algo_bit i2c_core drm powernow_k8 freq_table processor r8169 crc32
unix
Pid: 13336, comm: sensors Not tainted 2.6.10-gentoo-r7
RIP: 0010:[a004afe4] a004afe4{:radeon:gpio_setsda
+20}
RSP: 0018:01000f303c50  EFLAGS: 00010246
RAX:  RBX: 01001e715a50 RCX: ffda
RDX: 0060 RSI:  RDI: 01001e715780
RBP: 01001e715790 R08: 01001e715000 R09: 0006
R10: 0051e010 R11: a003cd00 R12: 01001e715790
R13: 0006 R14: 0001 R15: 01001e715790
FS:  002a95b646e0() GS:803d0a40()
knlGS:
CS:  0010 DS:  ES:  CR0: 80050033
CR2: 0018 CR3: 00101000 CR4: 06e0
Process sensors (pid: 13336, threadinfo 01000f302000, task
01000ef9f270)Stack: a003c028 01000f303d58
a003c4b9 a00f45e8
   01000f303d78 000100100100 a00f4828
fffdfe96
   01000f303d98 00010020
Call Trace:a003c028{:i2c_algo_bit:i2c_start+40}
a003c4b9{:i2c_algo_bit:bit_xfer+41}
   a0034f6b{:i2c_core:i2c_transfer+59}
a0035a51{:i2c_core:i2c_smbus_xfer+1265}
   801ad65e{sysfs_lookup+382} 8017dce6{do_lookup
+214}
   a0035bd3{:i2c_core:i2c_smbus_read_i2c_block_data+51}
   8017d720{generic_permission+208}
8017d7f9{permission+41}
   a0035548{:i2c_core:i2c_check_functionality+8}
   a005b13d{:eeprom:eeprom_read+317}
801ae491{read+145}
   8016fd96{vfs_read+214} 80170073{sys_read+83}
   8010d376{system_call+126}

Code: 48 03 50 18 8b 02 25 ff ff fe ff 89 c1 81 c9 00 00 01 00 85
RIP a004afe4{:radeon:gpio_setsda+20} RSP 01000f303c50
CR2: 0018

Does anyone know what this means?
uname -a output:
Linux rockerduck 2.6.10-gentoo-r7 #1 Sun Feb 6 11:49:23 CET 2005 x86_64
AMD Athlon(tm) 64 Processor 3000+ AuthenticAMD GNU/Linux

Cheers Jan





Re: DRM change for R300 DMA

2005-02-07 Thread Vladimir Dergachev

Hi Ben,
Thank you for the patch :)
I have two concerns about it:
 1) It does not appear to be R300 specific - why doesn't the similar
Radeon ioctl work? Also, I would imagine that this would
require a change in the r300 driver to work, wouldn't it?
 2) I was able to play Quake for somewhat prolonged periods;
I don't think this would have really worked if aging was
truly broken, though maybe I am wrong on this one.
Would you have a test app that shows the brokenness? Perhaps
something that uses a lot of textures at once.
Also, if aging does not work on your setup, it might have to do with the
amount of system RAM and memory controller settings. I *was* wondering why
fbLocation is 0 for R300. If so, this needs to be fixed in the 2D driver.

best
  Vladimir Dergachev
On Sun, 6 Feb 2005, Ben Skeggs wrote:
Hello Vladimir,
I've attached a patch which implements the RADEON_CMD_DMA_DISCARD ioctl from
the radeon/r200 drm.  I thought I'd post here before committing to CVS in
case I've done something bad.

Without this, r300AllocDmaRegion will eventually get stuck in a loop,
continually calling drmDMA (r300_ioctl.c::r300RefillCurrentDmaRegion).

It seems that the drm buffer management code depends on having a scratch
register containing the age of a buffer.  I'm not sure of the details, I
just know that it stops the infinite drmDMA loop.

Is this the correct way of fixing this?  Or have I completely missed 
something?

Regards,
Ben Skeggs.
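The buffer-aging scheme described above can be sketched as follows. This is a simplified illustration, not the actual radeon DRM code; all names (dma_buf, emit_discard, try_reclaim, scratch_reg) are invented for the example:

```c
#include <stdint.h>

/* The GPU writes the "age" of the last completed buffer into a scratch
 * register as part of the command stream.  The CPU compares that value
 * against the age stamped on each buffer: a buffer is free for reuse
 * once the GPU has passed its age.  If nothing ever emits a discard
 * command, the scratch register never advances, no buffer is ever
 * reclaimed, and the allocation loop spins forever. */
struct dma_buf {
    uint32_t age;   /* value the scratch register must reach */
    int      busy;  /* still owned by the GPU? */
};

static uint32_t scratch_reg;    /* stands in for the GPU-written register */
static uint32_t next_age = 1;

/* Stamp the buffer and (conceptually) queue a command telling the GPU to
 * write this age to the scratch register when it is done with the buffer. */
static void emit_discard(struct dma_buf *buf)
{
    buf->age = next_age++;
    buf->busy = 1;
}

/* Called while scanning the freelist: reclaim buffers the GPU has passed.
 * Returns nonzero if the buffer is now available. */
static int try_reclaim(struct dma_buf *buf)
{
    if (buf->busy && scratch_reg >= buf->age)
        buf->busy = 0;
    return !buf->busy;
}
```

(A real implementation also has to handle wraparound of the 32-bit age counter; that is omitted here.)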




Re: DRM change for R300 DMA

2005-02-07 Thread Vladimir Dergachev

On Mon, 7 Feb 2005, Jan Kreuzer wrote:
Hi again
when i try to load the drm module with debug enabled i get the following
message in my syslog (and dri works not in xorg):
Unable to handle kernel NULL pointer dereference at 0018
RIP:
a004afe4{:radeon:gpio_setsda+20}
 ^^
This is I2C code, probably something to do with DDC. Nothing to do with 3D 
or R300. It could be that you need a different kernel version or something.

Alternatively, you might need to do a make clean - if some headers have 
changed, for example, and did not trigger a dependency rebuild for some reason.

 best
Vladimir Dergachev



Re: DRM change for R300 DMA

2005-02-07 Thread Jan Kreuzer
OK, the oops seems to be related to my sensor monitor (SuperKaramba); I
disabled it and no longer get the oops (although this should not happen).

However, I am still not able to get DRI working when debugging is enabled
in the DRM module (with and without Ben's patch).
Here is the output from dmesg:

[drm:drm_stub_open]
[drm:drm_open_helper] pid = 14658, minor = 0
[drm:drm_setup]
[drm:drm_ioctl] pid=14658, cmd=0xc0406400, nr=0x00, dev 0xe200, auth=1
[drm:drm_ioctl] pid=14658, cmd=0xc0406400, nr=0x00, dev 0xe200, auth=1
[drm:drm_release] open_count = 1
[drm:drm_release] pid = 14658, device = 0xe200, open_count = 1
[drm:drm_fasync] fd = -1, device = 0xe200
[drm:drm_takedown]
[drm:radeon_do_cleanup_cp]
[drm:drm_ati_pcigart_cleanup] *ERROR* no scatter/gather memory!
[drm:radeon_do_cleanup_cp] *ERROR* failed to cleanup PCI GART!
[drm:drm_stub_open]
[drm:drm_open_helper] pid = 14658, minor = 0
[drm:drm_setup]
[drm:drm_ioctl] pid=14658, cmd=0xc0406400, nr=0x00, dev 0xe200, auth=1
[drm:drm_ioctl] pid=14658, cmd=0xc0406400, nr=0x00, dev 0xe200, auth=1
[drm:drm_release] open_count = 1
[drm:drm_release] pid = 14658, device = 0xe200, open_count = 1
[drm:drm_fasync] fd = -1, device = 0xe200
[drm:drm_takedown]
[drm:radeon_do_cleanup_cp]
[drm:drm_ati_pcigart_cleanup] *ERROR* no scatter/gather memory!
[drm:radeon_do_cleanup_cp] *ERROR* failed to cleanup PCI GART!
[drm:drm_stub_open]
[drm:drm_open_helper] pid = 14658, minor = 0
[drm:drm_setup]
[drm:drm_ioctl] pid=14658, cmd=0xc0106407, nr=0x07, dev 0xe200, auth=1
[drm:drm_ioctl] pid=14658, cmd=0xc0106401, nr=0x01, dev 0xe200, auth=1
[drm:drm_ioctl] pid=14658, cmd=0xc0106401, nr=0x01, dev 0xe200, auth=1
[drm:drm_ioctl] pid=14658, cmd=0xc0106407, nr=0x07, dev 0xe200, auth=1
[drm:drm_ioctl] pid=14658, cmd=0xc0286415, nr=0x15, dev 0xe200, auth=1
[drm:drm_addmap] offset = 0x, size = 0x2000, type = 2
[drm:drm_addmap] 8192 13 ff1f6000
[drm:drm_mmap] start = 0x2a9e274000, end = 0x2a9e276000, offset =
0xff1f6000
[drm:drm_vm_open] 0x2a9e274000,0x2000
[drm:drm_do_vm_shm_nopage] shm_nopage 0x2a9e274000
[drm:drm_do_vm_shm_nopage] shm_nopage 0x2a9e275000
[drm:drm_ioctl] pid=14658, cmd=0xc0286415, nr=0x15, dev 0xe200, auth=1
[drm:drm_addmap] offset = 0xb000, size = 0x0800, type = 0
[drm:drm_addmap] Looking for: offset = 0xb000, size = 0x0800,
type = 0
[drm:drm_addmap] Checking: offset = 0xff1f6000, size =
0x2000, type = 2
[drm:drm_addmap] Checking: offset = 0xb000, size = 0x1000, type
= 0
[drm:drm_addmap] Found existing: offset = 0xb000, size = 0x0800,
type = 0
[drm:drm_ioctl] pid=14658, cmd=0xc0106426, nr=0x26, dev 0xe200, auth=1
[drm:drm_ioctl] pid=14658, cmd=0xc0106426, nr=0x26, dev 0xe200, auth=1
[drm:drm_ioctl] pid=14658, cmd=0xc0406400, nr=0x00, dev 0xe200, auth=1
[drm:drm_ioctl] pid=14658, cmd=0xc0406400, nr=0x00, dev 0xe200, auth=1
[drm:drm_ioctl] pid=14658, cmd=0x6430, nr=0x30, dev 0xe200, auth=1
[drm:drm_ioctl] pid=14658, cmd=0x80386433, nr=0x33, dev 0xe200, auth=1
[drm:drm_ioctl] pid=14658, cmd=0x80386433, nr=0x33, dev 0xe200, auth=1
[drm:drm_ioctl] pid=14658, cmd=0x80386433, nr=0x33, dev 0xe200, auth=1
[drm:drm_ioctl] pid=14658, cmd=0x40086432, nr=0x32, dev 0xe200, auth=1
agpgart: Found an AGP 3.0 compliant device at :00:00.0.
agpgart: X passes broken AGP3 flags (1f000a0f). Fixed.
agpgart: Putting AGP V3 device at :00:00.0 into 8x mode
agpgart: Putting AGP V3 device at :01:00.0 into 8x mode
[drm:drm_ioctl] pid=14658, cmd=0xc0206434, nr=0x34, dev 0xe200, auth=1
[drm:drm_ioctl] pid=14658, cmd=0x40106436, nr=0x36, dev 0xe200, auth=1
[drm:drm_agp_bind] base = 0xd000 entry->bound = 0xd000
[drm:drm_ioctl] pid=14658, cmd=0xc0286415, nr=0x15, dev 0xe200, auth=1
[drm:drm_addmap] offset = 0x, size = 0x00101000, type = 3
[drm:drm_mmap] start = 0x2a9e276000, end = 0x2a9e377000, offset =
0xd000
[drm:drm_mmap]Type = 3; start = 0x2a9e276000, end = 0x2a9e377000,
offset = 0xd000
[drm:drm_vm_open] 0x2a9e276000,0x00101000
[drm:drm_ioctl] pid=14658, cmd=0xc0286415, nr=0x15, dev 0xe200, auth=1
[drm:drm_addmap] offset = 0x00101000, size = 0x1000, type = 3
[drm:drm_mmap] start = 0x2a9e377000, end = 0x2a9e378000, offset =
0xd0101000
[drm:drm_mmap]Type = 3; start = 0x2a9e377000, end = 0x2a9e378000,
offset = 0xd0101000
[drm:drm_vm_open] 0x2a9e377000,0x1000
[drm:drm_ioctl] pid=14658, cmd=0xc0286415, nr=0x15, dev 0xe200, auth=1
[drm:drm_addmap] offset = 0x00102000, size = 0x0020, type = 3
[drm:drm_mmap] start = 0x2a9e378000, end = 0x2a9e578000, offset =
0xd0102000
[drm:drm_mmap]Type = 3; start = 0x2a9e378000, end = 0x2a9e578000,
offset = 0xd0102000
[drm:drm_vm_open] 0x2a9e378000,0x0020
[drm:drm_ioctl] pid=14658, cmd=0xc0286415, nr=0x15, dev 0xe200, auth=1
[drm:drm_addmap] offset = 0x00302000, size = 0x004e, type = 3
[drm:drm_mmap] start = 0x2a9e578000, end = 0x2a9ea58000, offset =
0xd0302000
[drm:drm_mmap]Type = 3; start = 0x2a9e578000, end = 0x2a9ea58000,
offset 

Re: DRM change for R300 DMA

2005-02-07 Thread Ben Skeggs
Hello Jan,
The patch to the drm shouldn't have actually done anything on its
own.  It requires that r300_ioctl be modified to be of any use at all.
I'll have a look into it some more in the morning.
Ben Skeggs.
Jan Kreuzer wrote:
Hi Ben
your patch seems to solve some of the lockups I experienced (for example
without your patch i got random lockups after trying some of the nehe
lessons, now i can run most of them fine). However i noticed that xorg
cpu-usage went up to 10% (from around 1%) and that the screen rendering
(2D and 3D) stops a short time every second. Also nehe-lesson-16 still
produces a hardlock. I will test more with neverball and tuxracer (as i
am in x86_64 i could not test 32-bit legacy apps).
Greetings Jan



Re: DRM change for R300 DMA

2005-02-07 Thread Ben Skeggs
Hello Vladimir,
 1) It does not appear to be R300 specific - why doesn't similar
Radeon ioctl work ? Also, I would imagine that this would
require a change in r300 driver to work, wouldn't it ?
No, I suspected that it wasn't r300 specific actually; all the code does is
write to a scratch register.  So perhaps I should've just hooked up
an R300_* ioctl number to the radeon code.
 2) I was able to play Quake for somewhat prolonged periods,
I don't think this would have really worked if aging was
truly broken, though, maybe, I am wrong on this one.
Would you have a test app that shows brokenness ? Perhaps
something that uses a lot of textures once.
It only seems to occur after the reference counts for all the dma buffers
hit zero.  After that, no more r300AllocDmaRegion calls are successful.

The code I was referring to was r300_dri hacked up to use r300AllocDmaRegion
to grab buffers for vertex data, rather than using rsp->gartTextures.map to
store the data.  It was just a little experiment I was trying; I've
attached a diff so you can see what happens for yourself. I may have missed
something important.

In r300ReleaseDmaRegion there is a #if/#endif to either use the ioctl or not.

One more thing: I'm not sure how to find the GART offset of the
vertex/indirect buffers, so I've been hardcoding the value for now
(r300_ioctl.h::GET_START).  You may need to look in your X log for the
handle, as it might differ from mine.

glxgears looks a little broken using this as well; I have some random
colours across the faces of the gears.
Anyhow, it was a little experiment I was doing to see how to implement
vertex buffers.
Ben Skeggs.
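For reference, the value being hardcoded in GET_START is normally derived from the screen's GART buffer offset plus the buffer's displacement from buffer 0, as in the commented-out branch of the macro in the attached diff. A hedged sketch of that arithmetic (the function name and all addresses are made up for illustration):

```c
#include <stdint.h>

/* Mirrors the original GET_START macro: the card-visible address of a
 * vertex buffer is the GART offset of buffer 0, plus this buffer's
 * CPU-side displacement from buffer 0, plus the start offset within
 * the buffer itself. */
static uint32_t gart_start(uint32_t gart_buffer_offset,
                           uintptr_t buf_address,
                           uintptr_t buf0_address,
                           uint32_t start)
{
    return gart_buffer_offset + (uint32_t)(buf_address - buf0_address) + start;
}
```

Hardcoding the base (0xe0102000 in the diff) only works when the GART mapping happens to land at the same handle on every machine, which is exactly why Ben warns that the value in the X log may differ.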
diff -Nur orig/r300_context.h mod/r300_context.h
--- orig/r300_context.h 2005-02-04 04:48:32.0 +1100
+++ mod/r300_context.h  2005-02-05 04:08:09.0 +1100
@@ -96,7 +96,11 @@
drmBufPtr buf;
 };
 
-#define GET_START(rvb) (rmesa->radeon.radeonScreen->gart_buffer_offset +	\
+//#define GET_START(rvb) (rmesa->radeon.radeonScreen->gart_buffer_offset +	\
+			(rvb)->address - rmesa->dma.buf0_address +	\
+			(rvb)->start)
+
+#define GET_START(rvb) (0xe0102000 +	\
 			(rvb)->address - rmesa->dma.buf0_address +	\
 			(rvb)->start)
 
diff -Nur orig/r300_ioctl.c mod/r300_ioctl.c
--- orig/r300_ioctl.c   2005-02-02 02:46:23.0 +1100
+++ mod/r300_ioctl.c2005-02-08 03:02:21.0 +1100
@@ -414,7 +414,7 @@
 
 	if (rmesa->dma.flush)
 		rmesa->dma.flush(rmesa);
-
+#if 1
 	if (--region->buf->refcount == 0) {
 		drm_radeon_cmd_header_t *cmd;
 
@@ -424,13 +424,15 @@
 
 		cmd =
 		    (drm_radeon_cmd_header_t *) r300AllocCmdBuf(rmesa,
-								sizeof(*cmd),
+								sizeof(*cmd) / 4,
 								__FUNCTION__);
-		cmd->dma.cmd_type = RADEON_CMD_DMA_DISCARD;
+		cmd->dma.cmd_type = R300_CMD_DMA_DISCARD;
 		cmd->dma.buf_idx = region->buf->buf->idx;
 		FREE(region->buf);
+
 		rmesa->dma.nr_released_bufs++;
 	}
+#endif
 
 	region->buf = 0;
 	region->start = 0;
diff -Nur orig/r300_render.c mod/r300_render.c
--- orig/r300_render.c  2005-02-04 06:51:57.0 +1100
+++ mod/r300_render.c   2005-02-08 03:10:41.149064384 +1100
@@ -381,6 +381,8 @@
 
 /* vertex buffer implementation */
 
 /* We use the start part of GART texture buffer for vertices */
+static struct r300_dma_region rvb[8];
+static int nr_rvb = 0;
 
 
 static void upload_vertex_buffer(r300ContextPtr rmesa, GLcontext *ctx)
@@ -394,33 +396,38 @@
 
 	/* A hack - we don't want to overwrite vertex buffers, so we
 	   just use AGP space for them.. Fix me ! */
+#if 0
 	static int offset=0;
 	if(offset>2*1024*1024){
 		//fprintf(stderr, "Wrapping agp vertex buffer offset\n");
 		offset=0;
 	}
+#endif
+
 	/* Not the most efficient implementation, but, for now, I just want
 	   something that works */
 	/* to do - make single memcpy per column (is it possible ?) */
 	/* to do - use dirty flags to avoid redundant copies */
 #define UPLOAD_VECTOR(v)\
 	{ \
+	r300AllocDmaRegion(rmesa, &rvb[nr_rvb], v->stride*VB->Count, 4); \
 	/* Is the data dirty ? */ \
 	if (v->flags & ((1<<v->size)-1)) { \
 		/* fprintf(stderr, "size=%d vs stride=%d\n", v->size, v->stride); */ \
 		if(v->size*4==v->stride){\
 			/* fast path */  \
-			memcpy(rsp->gartTextures.map+offset, v->data, v->stride*VB->Count); \
+

[Bug 1707] r200 Radeon driver and Wings 3D

2005-02-07 Thread bugzilla-daemon
Please do not reply to this email: if you want to comment on the bug, go to 
   
the URL shown below and enter your comments there.
   
https://bugs.freedesktop.org/show_bug.cgi?id=1707  
 




--- Additional Comments From [EMAIL PROTECTED]  2005-02-07 10:04 ---
The likely cause can be found in r200_state.c starting around line 1995.
I don't see why GL_POLYGON_OFFSET_POINT and GL_POLYGON_OFFSET_LINE wouldn't work
if the hardware renders lines and points, so you are better off just testing it.
 
 
 
--   
Configure bugmail: https://bugs.freedesktop.org/userprefs.cgi?tab=email 
 
--- You are receiving this mail because: ---
You are the assignee for the bug, or are watching the assignee.




Re: OpenGL apps causes frequent system locks

2005-02-07 Thread Roland Scheidegger
Geller Sandor wrote:
I just removed the AGPFastWrite and DynamicClocks options. The crashes
still happen though.

Looks like not only I have problems with the radeon driver. I update the
X.org, drm, Mesa CVS once a week, but haven't found a working combination
since 4-5 months...
I don't intend to start a flame-war, but is there anybody who can use the
r200 driver without X crashes? I'm testing X.org CVS regularly (almost on
every weekend) with my RV280, with the latest linux 2.6 kernel.
I suspect that, quite the contrary, almost no one has crashes. This is 
probably part of the problem: if they happen only for a few people with 
very specific configurations, none of the developers can reproduce them 
and they will just remain unfixed.
For reference, I never get crashes with the r200 driver (on an rv250), at 
least none which I can't attribute directly to my own faults when 
playing around with the driver... At least since the state submit fixes 
half a year ago the driver seems quite solid to me. Except the hard 
lockup I got for some very odd reason when I used a GART size of 64MB, 
though that was on an r100.

Roland



Re: OpenGL apps causes frequent system locks

2005-02-07 Thread Philipp Klaus Krause
Geller Sandor schrieb:
1. build and install everything from CVS, if the X server can be crashed,
 go to step 2, otherwise be happy :))
2. use the X.org CVS version with the stock kernel's drm, if X still
 crashes, go to step 3. Otherwise use the  X.org CVS, live without
 projected textures...
3. use the X.org and Mesa CVS versions. If X still crashes, then the bug
 can be in X.org or Mesa or in drm - I'm not able to trace down the
 problem.
Unfortunately all 3 scenarios give the same result: X crashes.
Is there any way I can help to track down the problem(s)? My machine
doesn't have network connection, so I can use only scripts which run in
the background. With expect and gdb maybe it is possible to get at least a
backtrace from my non-local-interactive machine.
I have the same problems with an rv250. It started about three or four
months ago. I always used the xlibmesa-gl1-dri-trunk Debian package and
the DRM that's included in the kernel. I'm now running 2.6.11-rc3
(which gave me a ~10% increase in glxgears fps, but didn't help with
stability). Before these problems appeared (I don't remember if it
started with a kernel update or a DRI update), GL applications wouldn't
crash, even when running for days. With fglrx, stability back then was
as bad as it is today with DRI. I haven't tried fglrx since.
Maybe we should try to track down which changes introduced the
stability problems.
Philipp
---
SF email is sponsored by - The IT Product Guide
Read honest  candid reviews on hundreds of IT Products from real users.
Discover which products truly live up to the hype. Start reading now.
http://ads.osdn.com/?ad_id=6595alloc_id=14396op=click
--
___
Dri-devel mailing list
Dri-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/dri-devel


Re: OpenGL apps causes frequent system locks

2005-02-07 Thread Alan Swanson
On Mon, 2005-02-07 at 19:24 +0100, Roland Scheidegger wrote:
 Geller Sandor wrote:
  I don't intend to start a flame-war, but is there anybody who can use the
  r200 driver without X crashes? I'm testing X.org CVS regularly (almost on
  every weekend) with my RV280, with the latest linux 2.6 kernel.
 I suspect that quite the contrary, almost noone has crashes. This is 
 probably part of the problem, if they happen only for few people with 
 very specific configurations, none of the developers can reproduce it 
 and it will just remain unfixed.
 For reference, I never get crashes with the r200 driver (on a rv250), at 
 least none which I can't directly reference to my own faults when 
 playing around with the driver... At least since the state submit fixes 
 half a year ago the driver seems quite solid for me. Except the hard 
 lockup I got for some very odd reason when I used a gart size of 64MB, 
 though that was on a r100.

I'd have to agree with Roland. I don't have any problems on either my
R200 or my rv250 using Mesa/DRM CVS (with X.org 6.8.1 approximately).

Hell, I'm finding the R200 driver is just getting better and faster.

(Both of which use a GART of 64Mb without problems, though that
 is slightly pointless currently after recent discussions. ;-)

-- 
Alan.

One must never be purposelessnessnesslessness.


signature.asc
Description: This is a digitally signed message part


Re: How to turn on direct rendering on Savage MX?

2005-02-07 Thread Dimitry Naldayev
Michel Dänzer [EMAIL PROTECTED] writes:

 On Sun, 2005-02-06 at 19:32 +0500, Dimitry Naldayev wrote:
 Michel Dänzer [EMAIL PROTECTED] writes:
 
  FWIW, the infamous radeon DRI reinit patch is at
  http://penguinppc.org/~daenzer/DRI/radeon-reinit.diff
 
Looks like it is really not the best way to do things right...
 
 so couple of questions:
 1) what happens when we do vt switch?

 You mean with this patch? If there are no clients using the DRM, the DRI
 is de-initialized and re-initialized again on return to the X server.

NO, I am not asking about the patch...
I am asking about the X server and drm module --- what happens when we do a vt
switch? How is this event dispatched? What parts of the code do I need to
look at to understand this?


 2) what differences between vt switch and context switch from hardware/drm
 point of view?

 None, really. Without this patch (and even with it if there are clients
 using the DRM), the X server simply holds the hardware lock while
 switched away to prevent clients from touching the hardware.

No, I am not asking about this. The X server holds the hardware lock because the
drm is not ready to share the hardware between different X sessions... but why?

See my logic: 

first case:
1) we have a window and render OpenGL inside it. The DRM moves the OpenGL data
to the hardware...
2) now we HIDE the window behind another window. Will the OpenGL data still go
to the hardware? No, because the OpenGL window is hidden...

Second case:
1) we have a window and render OpenGL inside it. The DRM moves the OpenGL data
to the hardware...
2) now we do a vt switch...

What is the difference between the first and second cases? Why does the X
server need to hold a lock on the hardware? Why can't the DRM manage the second
case as it manages the first one? What do we need to add to the DRM for this?

--
Dimitry





Re: OpenGL apps causes frequent system locks

2005-02-07 Thread Adam K Kirchhoff
Roland Scheidegger wrote:
Geller Sandor wrote:
I just removed the AGPFastWrite and DynamicClocks options. The crashes
still happen though.

Looks like not only I have problems with the radeon driver. I update the
X.org, drm, Mesa CVS once a week, but haven't found a working 
combination
since 4-5 months...

I don't intend to start a flame-war, but is there anybody who can use 
the
r200 driver without X crashes? I'm testing X.org CVS regularly 
(almost on
every weekend) with my RV280, with the latest linux 2.6 kernel.
I suspect that quite the contrary, almost noone has crashes. This is 
probably part of the problem, if they happen only for few people with 
very specific configurations, none of the developers can reproduce it 
and it will just remain unfixed.
For reference, I never get crashes with the r200 driver (on a rv250), 
at least none which I can't directly reference to my own faults when 
playing around with the driver... At least since the state submit 
fixes half a year ago the driver seems quite solid for me. Except the 
hard lockup I got for some very odd reason when I used a gart size of 
64MB, though that was on a r100.

Roland
Agreed, for the most part.  I use an 8500 and 9200 at work and at home.  
I regularly update my Mesa tree and build new versions of the r200 
driver.  The only problem I've experienced is that if I leave xscreensaver 
up and running all night, randomly choosing from the OpenGL 
screensavers, I'll sometimes (once a week, maybe) find X locked 
solid, and only a reboot will get it working again.  The XiG drivers do 
this as well, the only difference being that I am able to just kill the 
GL screensavers that are locked up and get my display back :-)

I think perhaps the biggest culprit for this problem with the Mesa 
drivers is that I run a MergedFB desktop at 2560x1024, and each screen 
is supposed to display its own screensaver, but that's just speculation.

Adam



[Bug 701] Clipping problems in RTCW

2005-02-07 Thread bugzilla-daemon
Please do not reply to this email: if you want to comment on the bug, go to 
   
the URL shown below and enter your comments there.
   
https://bugs.freedesktop.org/show_bug.cgi?id=701  
 




--- Additional Comments From [EMAIL PROTECTED]  2005-02-07 12:20 ---
Sounds like this could be related to a bad glPolygonOffset implementation.
I coded a little test program to figure out how the r300's z-bias works:
http://nu.rasterburn.org/~aet/offset.tar.bz2 (never mind the coding style!)
I'm suggesting something similar be included in Mesa's test programs, as I didn't
find any decent programs that would clearly show whether glPolygonOffset is
operating correctly. Unless anyone else is interested in creating a new test,
I'm willing to port this to glut and add tests for points and lines.
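For anyone writing such a test: the bias a correct implementation must add to each fragment's depth is fixed by the OpenGL specification as o = m * factor + r * units, where m is the polygon's maximum depth slope and r is the smallest value guaranteed to produce a resolvable depth difference. A minimal sketch of just the computation (no rendering):

```c
#include <assert.h>

/* o = m * factor + r * units, per the OpenGL spec's polygon offset
 * definition.  A driver applies this bias to each fragment's depth
 * when GL_POLYGON_OFFSET_FILL (or _POINT/_LINE) is enabled. */
static float polygon_offset(float m, float r, float factor, float units)
{
    return m * factor + r * units;
}
```

A test program can render a surface twice with known m, factor, and units, read back the depth buffer, and check the observed difference against this value.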
 
 




Re: sis-20050205-linux snapshot - problems

2005-02-07 Thread Eric Anholt
On Mon, 2005-02-07 at 15:11 +0100, [EMAIL PROTECTED] wrote:
 Hardware P4 with SIS chipset:
 
 :00:01.0 PCI bridge: Silicon Integrated Systems [SiS] Virtual PCI-to-PCI bridge (AGP) (prog-if 00 [Normal decode])
 	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B-
 	Status: Cap- 66Mhz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
 	Latency: 64
 	Bus: primary=00, secondary=01, subordinate=01, sec-latency=32
 	I/O behind bridge: d000-dfff
 	Memory behind bridge: ea00-ea0f
 	Prefetchable memory behind bridge: e000-e7ff
 	BridgeCtl: Parity- SERR+ NoISA+ VGA+ MAbort- Reset- FastB2B-
 
 
 :01:00.0 VGA compatible controller: Silicon Integrated Systems [SiS] 65x/M650/740 PCI/AGP VGA Display Adapter (prog-if 00 [VGA])

...

 (II) SIS(0): Primary V_BIOS segment is: 0xc000
 (II) SIS(0): VESA BIOS detected
 (II) SIS(0): VESA VBE Version 3.0
 (II) SIS(0): VESA VBE Total Mem: 16384 kB
 (II) SIS(0): VESA VBE OEM: SiS
 (II) SIS(0): VESA VBE OEM Software Rev: 1.0
 (II) SIS(0): VESA VBE OEM Vendor: Silicon Integrated Systems Corp.
 (II) SIS(0): VESA VBE OEM Product: 6325
 (II) SIS(0): VESA VBE OEM Product Rev: 1.11.29
 (==) SIS(0): Write-combining range (0xe000,0x100)
 (II) SIS(0): Setting standard mode 0x18
 (NI) SIS(0): DRI not supported on this chipset
 
 DRI never worked on this hardware. What is the reason for DRI not supported?

This is SiS 315-series hardware, which we have no 3d information on.
I've certainly looked and tried to get it.

-- 
Eric Anholt[EMAIL PROTECTED]  
http://people.freebsd.org/~anholt/ [EMAIL PROTECTED]




[Bug 2489] New: Invalid bound check of driver defined ioctls in drm_ioctl

2005-02-07 Thread bugzilla-daemon
Please do not reply to this email: if you want to comment on the bug, go to 
   
the URL shown below and enter your comments there.
   
https://bugs.freedesktop.org/show_bug.cgi?id=2489  
 
   Summary: Invalid bound check of driver defined ioctls in
drm_ioctl
   Product: DRI
   Version: unspecified
  Platform: PC
OS/Version: Linux
Status: NEW
  Severity: normal
  Priority: P2
 Component: DRM modules
AssignedTo: dri-devel@lists.sourceforge.net
ReportedBy: [EMAIL PROTECTED]


--- drm_drv.c~  Mon Dec 13 11:17:28 2004
+++ drm_drv.c   Mon Dec 13 11:15:41 2004
@@ -595,7 +595,7 @@
 	if (nr < DRIVER_IOCTL_COUNT)
 		ioctl = &drm_ioctls[nr];
 	else if ((nr >= DRM_COMMAND_BASE)
-		 || (nr < DRM_COMMAND_BASE + dev->driver->num_ioctls))
+		 && (nr < DRM_COMMAND_BASE + dev->driver->num_ioctls))
 		ioctl = &dev->driver->ioctls[nr - DRM_COMMAND_BASE];
 	else
 		goto err_i1;
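To see why the one-character change matters, here is a standalone sketch of the dispatch check (the constants and function name are illustrative, not the kernel's actual values): with `||`, any nr at or above DRM_COMMAND_BASE passed the test even when it lay beyond the end of the driver's ioctl table, indexing out of bounds; `&&` restores the upper bound.

```c
#include <assert.h>

#define DRIVER_IOCTL_COUNT 64   /* illustrative */
#define DRM_COMMAND_BASE   0x40 /* illustrative */

/* Returns the index into the relevant ioctl table, or -1 when nr is
 * out of range (the err_i1 path in the real code). */
static int lookup_ioctl(unsigned int nr, unsigned int num_driver_ioctls)
{
    if (nr < DRIVER_IOCTL_COUNT)
        return (int)nr;                       /* core ioctl table */
    else if ((nr >= DRM_COMMAND_BASE)
             && (nr < DRM_COMMAND_BASE + num_driver_ioctls))
        return (int)(nr - DRM_COMMAND_BASE);  /* driver ioctl table */
    return -1;
}
```

With the original `||`, the call `lookup_ioctl(DRM_COMMAND_BASE + 20, 10)` would have passed the check and read 10 entries past the end of the driver's table.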
 
 




Re: texturing performance local/gart on r100

2005-02-07 Thread Roland Scheidegger
Keith Whitwell wrote:

btw texdown showed that texture transfers to card memory are faster
 than to AGP memory, but not by very much (something like 100MB/s
vs. 140MB/s in the best case, though the numbers I got fluctuated
quite a bit).

How are AGP texture uploads being done?
The card memory uploads are actually done via agp buffers - ie the
data is written by the driver to agp memory, the card then copies
that to card memory.  If the AGP case is the same, the data probably
travels up to the card and then back down again to AGP memory,
accounting for the relative slowdown.
Yes, that's probably the reason. Actually, I was a bit surprised the
difference isn't that big; with AGP 2.66x and agp->vid mem transfers I
would have expected higher numbers, maybe something like half the
theoretical peak bandwidth, but I got only 1/4th. Though maybe the Mesa 
overhead was just too big (some formats dropped to 22MB/s or so).

One benefit of using the card to do the up/downloads is
synchronization with the graphics engine - if you were to write the
texture data directly you'd have to have some extra mechanisms to
ensure that the memory wasn't being used by commands still
unprocessed by the GPU.  This actually wouldn't be that hard to
organize.
There are other benefits too, for instance if you use the gpu blitter it
can tile the textures itself (not that it would be a big deal to do that 
with the cpu) (with limits - it can't tile very small textures/mipmaps 
correctly for microtiling, or if it can I couldn't figure out how at 
least...).

Also, note that there is quite a bit of copying going on:
- Application calls glTexImage -> Mesa allocates system memory and
copies image -> Driver allocates agp buffers and copies image into
them -> Card receives blit command and copies image to final
destination.
Currently Mesa needs to keep the system memory copy because texture 
images in card or agp memory can be clobbered by other apps at any
time - Ian's texture manager will address this.

In the via and sis drivers, texture allocations are permanent, so
I've been able to try a different strategy:
- Application calls glTexImage -> Mesa allocates AGP/card memory and
copies texture directly to final destination (using memcpy().)
This resulted in an approximate 2x speedup in texture downloads
against a strategy similar to the first one outlined (but implemented
with cpu copies, not a hostdata blit).
That would be a good strategy. I'm not sure though you really get much 
of a speed improvement, I believe hostdata blits should be more 
efficient than cpu copies, at least for local memory. And in practice, I 
don't think texture upload speed is really that critical usually. Though 
it would be nice just to save some memory. I'm not sure how these 
drivers handle it when Mesa needs to access the texture again, but I 
guess since that's a slow path anyway it doesn't really matter if it's 
going to be a lot slower...

Roland
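The two strategies discussed above differ mainly in how many times the texel data is copied on its way to the card. A toy model of the direct path — plain arrays stand in for card/AGP memory, and none of these names are the real Mesa code:

```c
#include <assert.h>
#include <string.h>

#define TEX_BYTES 64

static unsigned char vram[TEX_BYTES];  /* stand-in for card/AGP memory */
static int copies_made;

/* Direct strategy: one memcpy from the application's image straight
 * to its final destination, instead of the staged chain
 * app -> Mesa system copy -> AGP staging buffer -> blit. */
static void upload_direct(const unsigned char *app_data, size_t len)
{
    memcpy(vram, app_data, len);
    copies_made++;
}
```

The staged path makes three copies of the same bytes; the direct path makes one, which is where the reported ~2x download speedup plausibly comes from when the copies are CPU-bound.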


Re: OpenGL apps causes frequent system locks

2005-02-07 Thread Richard Stellingwerff
I might be wrong, but the fglrx driver reported my card as being an
R280. It even ran at 8x AGP (an R280 feature, I've read). However, the
DRI radeon driver reports the card as being an R250 running at 4x AGP.

I'm using an Acer laptop (Acer Ferrari 3000) with Ati Radeon Mobility
9200 M9+ 128MB.

I'd really like to have a stable system, but I'm not sure what to do
to figure out why it crashes. Are there some procedures I can follow
to determine the cause?




Re: texturing performance local/gart on r100

2005-02-07 Thread Ian Romanick
Roland Scheidegger wrote:
Keith Whitwell wrote:
- Application calls glTexImage -> Mesa allocates AGP/card memory and
copies texture directly to final destination (using memcpy().)
I have a couple questions about this.  How does this handle things like 
glGetTexImage?  What happens when there is memory pressure and a texture 
has to be kicked out?

This resulted in an approximate 2x speedup in texture downloads
against a strategy similar to the first one outlined (but implemented
with cpu copies, not a hostdata blit).
That would be a good strategy. I'm not sure though you really get much 
of a speed improvement, I believe hostdata blits should be more 
efficient than cpu copies, at least for local memory. And in practice, I 
don't think texture upload speed is really that critical usually. Though 
it would be nice just to save some memory. I'm not sure how these 
drivers handle it when Mesa needs to access the texture again, but I 
guess since that's a slow path anyway it doesn't really matter if it's 
going to be a lot slower...
You'd be surprised.  Another advantage of this strategy is that you can 
accelerate things like automatic mipmap generation and glCopyTexImage / 
glCopyTexSubImage.



Re: OpenGL apps causes frequent system locks

2005-02-07 Thread Philipp Klaus Krause
Adam K Kirchhoff schrieb:
Agreed, for the most part.  I use an 8500 and 9200 at work and at home.  
I regularly update my Mesa tree and build new version of the r200 
driver.  The only problems I've experienced is if I leave xscreensaver 
up and running all night, randomly choosing from the OpenGL 
screensavers...  I'll sometimes (once a week, maybe) find X locked 
solid, and only a reboot will get it working again.
I have the same problem with the GL screensavers. It wasn't there three
months ago. Many OpenGL applications work fine, but some just crash.
To reproduce the crashes, gl-117 (a free fighter plane simulator)
seems to be the fastest way. It usually crashes within a few seconds.
Three months ago I had never seen it crash with the DRI drivers, even
when it ran for an hour.
Philipp



Re: texturing performance local/gart on r100

2005-02-07 Thread Ian Romanick
Keith Whitwell wrote:
I'm still working on the age stuff, but the general strategy is to not 
release memory back into the pool until it is guaranteed no longer 
referenced.  This means hanging onto it for a little while, until perhaps 
the end of a frame or until the next time you notice the engine is idle.

Note that the via doesn't provide any nice IRQ notification for tracking 
engine progress - you could do a lot better with that sort of mechanism.
One of the key elements in all of my memory management ideas is being 
able to efficiently implement something like GL_NV_fence.  After each 
batch of rendering commands (to some granularity) you set a fence.  Each 
object in memory tracks the most recently set fence that bounds its 
usage.  When that fence has passed (i.e., TestFenceNV would return TRUE), 
that object can be booted.

Maybe implementing GL_NV_fence for a couple of interesting cards would be 
a good idea?  The trick is implementing it such that the driver and 
application can use it at the same time without causing fence ID 
conflicts.  The fence IDs are defined as GLuint, so maybe we could 
internally use uint64_t.  If the upper 32 bits are zero it's an 
application ID; otherwise it's a driver-private ID.  Dunno...
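A minimal sketch of the per-object fence tracking being described — a plain variable stands in for the hardware's completed-fence value (which a driver would read back from, say, a scratch register), and the wrap-safe comparison matters once 32-bit fence ids overflow:

```c
#include <assert.h>
#include <stdint.h>
#include <stdbool.h>

/* Last fence id the GPU has completed; in a driver this would be read
 * from hardware, here it is a variable so the logic runs anywhere. */
static uint32_t gpu_completed_fence;

struct mem_object {
    uint32_t bound_fence;  /* most recent fence bounding its last use */
};

/* True once the bounding fence has passed, i.e. the object's memory can
 * safely be booted and reused.  The signed-difference trick keeps the
 * comparison correct across 32-bit wraparound, assuming the two ids are
 * never more than 2^31 apart. */
static bool object_is_idle(const struct mem_object *obj)
{
    return (int32_t)(gpu_completed_fence - obj->bound_fence) >= 0;
}
```

A reclaim pass would walk the pool, evicting every object for which object_is_idle() returns true.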



[Bug 2490] New: Max anisotropy of less than 1.0 sets anisotropy to 16.0

2005-02-07 Thread bugzilla-daemon
Please do not reply to this email: if you want to comment on the bug, go to 
   
the URL shown below and enter your comments there.
   
https://bugs.freedesktop.org/show_bug.cgi?id=2490  
 
   Summary: Max anisotropy of less than 1.0 sets anisotropy to 16.0
   Product: Mesa
   Version: unspecified
  Platform: PC
OS/Version: All
Status: NEW
  Severity: trivial
  Priority: P2
 Component: Drivers/DRI/r200
AssignedTo: dri-devel@lists.sourceforge.net
ReportedBy: [EMAIL PROTECTED]


I have only tested this bug with r300 drivers.

diff -uNr dri.orig/r200/r200_tex.c dri/r200/r200_tex.c
--- dri.orig/r200/r200_tex.cSun Feb 20 01:48:09 2005
+++ dri/r200/r200_tex.c Sun Feb 20 01:49:05 2005
@@ -182,7 +182,7 @@
 {
    t->pp_txfilter &= ~R200_MAX_ANISO_MASK;
 
-   if ( max == 1.0 ) {
+   if ( max <= 1.0 ) {
       t->pp_txfilter |= R200_MAX_ANISO_1_TO_1;
    } else if ( max <= 2.0 ) {
       t->pp_txfilter |= R200_MAX_ANISO_2_TO_1;
diff -uNr dri.orig/radeon/radeon_tex.c dri/radeon/radeon_tex.c
--- dri.orig/radeon/radeon_tex.c        Sun Feb 20 01:48:10 2005
+++ dri/radeon/radeon_tex.c     Sun Feb 20 01:48:45 2005
@@ -147,7 +147,7 @@
 {
    t->pp_txfilter &= ~RADEON_MAX_ANISO_MASK;
 
-   if ( max == 1.0 ) {
+   if ( max <= 1.0 ) {
      t->pp_txfilter |= RADEON_MAX_ANISO_1_TO_1;
    } else if ( max <= 2.0 ) {
      t->pp_txfilter |= RADEON_MAX_ANISO_2_TO_1;
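The logic of the fix, pulled out into a standalone function. The enum values are illustrative, not the real R200 register bits, and the 4:1 and 8:1 branches are assumed to continue the pattern shown in the patch; the point is that with `== 1.0`, any max below 1.0 fell through every branch and ended up at the 16:1 setting.

```c
#include <assert.h>

enum aniso_level { ANISO_1, ANISO_2, ANISO_4, ANISO_8, ANISO_16 };

/* Maps a GL max-anisotropy value to a hardware level.  The `<= 1.0`
 * test (the fix) catches values below 1.0 that the old `== 1.0` let
 * fall all the way through to the 16:1 case. */
static enum aniso_level aniso_for_max(float max)
{
    if (max <= 1.0f)
        return ANISO_1;
    else if (max <= 2.0f)
        return ANISO_2;
    else if (max <= 4.0f)
        return ANISO_4;
    else if (max <= 8.0f)
        return ANISO_8;
    return ANISO_16;
}
```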
 
 




[Bug 2489] Invalid bound check of driver defined ioctls in drm_ioctl

2005-02-07 Thread bugzilla-daemon
Please do not reply to this email: if you want to comment on the bug, go to 
   
the URL shown below and enter your comments there.
   
https://bugs.freedesktop.org/show_bug.cgi?id=2489  
 

[EMAIL PROTECTED] changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution||FIXED




--- Additional Comments From [EMAIL PROTECTED]  2005-02-07 14:56 ---
committed...  
 
 




Re: texturing performance local/gart on r100

2005-02-07 Thread Keith Whitwell

Currently Mesa needs to keep the system memory copy because texture 
images in card or agp memory can be clobbered by other apps at any
time - Ian's texture manager will address this.

In the via and sis drivers, texture allocations are permanent, so
I've been able to try a different strategy:
- Application calls glTexImage -> Mesa allocates AGP/card memory and
copies texture directly to final destination (using memcpy().)
This resulted in an approximate 2x speedup in texture downloads
against a strategy similar to the first one outlined (but implemented
with cpu copies, not a hostdata blit).
That would be a good strategy. I'm not sure though you really get much 
of a speed improvement, I believe hostdata blits should be more 
efficient than cpu copies, at least for local memory. And in practice, I 
don't think texture upload speed is really that critical usually. Though 
it would be nice just to save some memory. I'm not sure how these 
drivers handle it when Mesa needs to access the texture again, but I 
guess since that's a slow path anyway it doesn't really matter if it's 
going to be a lot slower...
Ideally you'd have the driver mlock() the user data and have the GPU 
blit it right out of that space.

You need to be able to copy the data back in low-texture-memory 
situations, so you could use the same mechanism for fallbacks.  At the 
moment I just leave it where it is in the via driver.

Keith


Re: texturing performance local/gart on r100

2005-02-07 Thread Keith Whitwell
Ian Romanick wrote:
Roland Scheidegger wrote:
Keith Whitwell wrote:
- Application calls glTexImage -> Mesa allocates AGP/card memory and
copies texture directly to final destination (using memcpy().)

I have a couple questions about this.  How does this handle things like 
glGetTexImage?  What happens when there is memory pressure and a texture 
has to be kicked out?
Well I've got some code there for the memory pressure situation, but 
it's really a placeholder for what I hope your kernel memory manager 
might do (hint hint...)...

This resulted in an approximate 2x speedup in texture downloads
against a strategy similar to the first one outlined (but implemented
with cpu copies, not a hostdata blit).

That would be a good strategy. I'm not sure though you really get much 
of a speed improvement, I believe hostdata blits should be more 
efficient than cpu copies, at least for local memory. And in practice, 
I don't think texture upload speed is really that critical usually. 
Though it would be nice just to save some memory. I'm not sure how 
these drivers handle it when Mesa needs to access the texture again, 
but I guess since that's a slow path anyway it doesn't really matter 
if it's going to be a lot slower...

You'd be surprised.  Another advantage of this strategy is that you can 
accelerate things like automatic mipmap generation and glCopyTexImage / 
glCopyTexSubImage.
Indeed - though as yet I haven't taken advantage of that.
I'm trying to work this as a development of what the userspace should 
look like in the situation where there is a reliable kernel memory 
manager, so I should probably get onto this.

Keith



[R300] Trying to get r300 working in Xen

2005-02-07 Thread Jacob Gorm Hansen
hi,
after getting r300 working on my Radeon 9600SE under Linux, I am trying
to make the same thing happen under Xen (in domain0).
However, I get the following error when loading drm.ko:
Linux agpgart interface v0.100 (c) Dave Jones
agpgart: Detected an Intel i875 Chipset.
agpgart: Maximum main memory to use for agp memory: 198M
agpgart: AGP aperture is 32M @ 0xe000
[drm] Initialized drm 1.0.0 20040925
PCI: Obtained IRQ 16 for device :01:00.0
[drm] Initialized radeon 1.12.1 20041216 on minor 0:
agpgart: Found an AGP 3.0 compliant device at :00:00.0.
agpgart: Putting AGP V3 device at :00:00.0 into 4x mode
agpgart: Putting AGP V3 device at :01:00.0 into 4x mode
[drm:radeon_cp_init] *ERROR* radeon_cp_init called without lock held,
held  0 owner  c9958f6c
[drm:drm_unlock] *ERROR* Process 6699 using kernel context 0
mtrr: reg: 1 has count=0
mtrr: reg: 1 has count=0
Does this look familiar to anyone?
Jacob



Re: texturing performance local/gart on r100

2005-02-07 Thread Keith Whitwell
Ian Romanick wrote:
Keith Whitwell wrote:
I'm still working on the age stuff, but the general strategy is to not 
release memory back into the pool until it is guaranteed no longer 
referenced.  This means hanging onto it for a little while until 
perhaps the end of a frame or until the next time you notice the 
engine is idle.

Note that the via doesn't provide any nice IRQ notification for 
tracking engine progress - you could do a lot better with that sort of 
mechanism.

One of the key elements in all of my memory management ideas is being 
able to efficiently implement something like GL_NV_fence.  After each 
batch of rendering commands (to some granularity) you set a fence.  Each 
object in memory tracks the most recently set fence that bounds its 
usage.  When that fence has passed (i.e., TestFenceNV would return TRUE), 
that object can be booted.

Maybe implementing GL_NV_fence for a couple interesting cards would be 
a good idea?  The trick is implementing it such that the driver and 
application can use it at the same time without causing fence ID 
conflicts.  The fence IDs are defined as GLuint, so maybe we could 
internally use uint64_t.  If the upper 32-bits are zero it's an 
application ID, otherwise it's a driver-private ID.  Dunno...
I've come to the same conclusion myself...
The via driver is currently using a 2d blit to write back a 32bit 
counter value to a per-context private piece of memory from which the 
driver can figure out which parts of the command stream have been 
processed.  This is a development of the aging mechanisms in the other 
drivers, but applied a little more rigorously across the driver.
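The two mechanisms under discussion (a counter the hardware writes back for aging/fences, and Ian's uint64_t partitioning of fence IDs) can be sketched roughly like this; all names here are illustrative, not the actual via driver code:

```c
/* Toy model of counter-based fencing/aging plus uint64_t fence-ID
 * partitioning (zero upper 32 bits = application GL_NV_fence ID). */
#include <assert.h>
#include <stdint.h>

static uint32_t next_fence = 1;  /* next value the CPU will emit */
static uint32_t completed;       /* last value the GPU wrote back */

/* Emitted after a batch of commands; on via this would schedule a 2D
 * blit that writes the value to a per-context scratch location. */
static uint32_t emit_fence(void)
{
    return next_fence++;
}

/* Stand-in for the hardware writeback. */
static void gpu_retires_through(uint32_t value)
{
    completed = value;
}

/* ~TestFenceNV: serial-number compare so a wrapping counter still works. */
static int fence_passed(uint32_t fence)
{
    return (int32_t)(completed - fence) >= 0;
}

/* Ian's ID-partitioning idea: app IDs live in the low 32 bits only. */
static uint64_t app_fence(uint32_t id)    { return (uint64_t)id; }
static uint64_t driver_fence(uint32_t id) { return ((uint64_t)1 << 32) | id; }
static int is_app_fence(uint64_t f)       { return (f >> 32) == 0; }
```

Memory tied to a batch becomes reusable once `fence_passed()` is true for that batch's value, which is exactly the "don't release until guaranteed no longer referenced" rule above.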

Keith


Re: OpenGL apps causes frequent system locks

2005-02-07 Thread Dave Airlie

Okay can we open a bug on this, attaching xorg.conf and Xorg.0.log to
it...

I'm running r200, both 8500LE and 9200 cards, at home and work without
issue, except for a recent bug report where running
gears/euphoria/gtk-pixbufdemo locks up after a good while (hours...). As
tracking that sort of bug takes forever, I'm waiting until I have
forever-time available to me :-).. but as I'm already tracking a
week-long crash on my M7 system for a job, I can't commit my test
system...

Dave.


On Mon, 7 Feb 2005, Philipp Klaus Krause wrote:

 Adam K Kirchhoff schrieb:

 
  Agreed, for the most part.  I use an 8500 and 9200 at work and at home.  I
  regularly update my Mesa tree and build new version of the r200 driver.  The
  only problems I've experienced is if I leave xscreensaver up and running all
  night, randomly choosing from the OpenGL screensavers...  I'll sometimes
  (once a week, maybe) find X locked solid, and only a reboot will get it
  working again.

 I have the same problem with the GL screensavers. It wasn't there three
 months ago. Many OpenGL applications work fine, but some just crash.
 To reproduce the crashes, gl-117 (a free fighter plane simulator)
 seems to be the fastest way. It usually crashes within a few seconds.
 Three months ago I had never seen it crash with the DRI drivers, even
 when it ran for an hour.

 Philipp





-- 
David Airlie, Software Engineer
http://www.skynet.ie/~airlied / airlied at skynet.ie
pam_smb / Linux DECstation / Linux VAX / ILUG person





Re: [R300] Trying to get r300 working in Xen

2005-02-07 Thread Roland Scheidegger
Jacob Gorm Hansen wrote:
hi,
after getting r300 working on my Radeon 9600SE under Linux, I am trying
to make the same thing happen under Xen (in domain0).
However, I get the following error when loading drm.ko:
Linux agpgart interface v0.100 (c) Dave Jones
agpgart: Detected an Intel i875 Chipset.
agpgart: Maximum main memory to use for agp memory: 198M
agpgart: AGP aperture is 32M @ 0xe000
[drm] Initialized drm 1.0.0 20040925
PCI: Obtained IRQ 16 for device :01:00.0
[drm] Initialized radeon 1.12.1 20041216 on minor 0:
agpgart: Found an AGP 3.0 compliant device at :00:00.0.
agpgart: Putting AGP V3 device at :00:00.0 into 4x mode
agpgart: Putting AGP V3 device at :01:00.0 into 4x mode
[drm:radeon_cp_init] *ERROR* radeon_cp_init called without lock held,
held  0 owner  c9958f6c
[drm:drm_unlock] *ERROR* Process 6699 using kernel context 0
mtrr: reg: 1 has count=0
mtrr: reg: 1 has count=0
Does this look familiar to anyone?
I'm not familiar with Xen, but I heard one of the few drivers which are 
problematic with it are the agp drivers. This _could_ be such an issue. 
If so the Xen guys are likely to know more about it. That's really just 
a guess though.

Roland


Re: [R300] Trying to get r300 working in Xen

2005-02-07 Thread Jacob Gorm Hansen
Roland Scheidegger wrote:
I'm not familiar with Xen, but I heard one of the few drivers which are 
problematic with it are the agp drivers. This _could_ be such an issue. 
If so the Xen guys are likely to know more about it. That's really just 
a guess though.
hi,
yes this is likely to do with Xen, I have sorted a few bugs related to 
that already. I was just curious if DRI-people might be better at 
decoding what goes wrong in this specific case.

thanks,
Jacob


[Bug 2490] Max anisotropy of less than 1.0 sets anisotropy to 16.0

2005-02-07 Thread bugzilla-daemon
Please do not reply to this email: if you want to comment on the bug, go to
the URL shown below and enter your comments there.

https://bugs.freedesktop.org/show_bug.cgi?id=2490

--- Additional Comments From [EMAIL PROTECTED]  2005-02-07 15:57 ---
This is not a bug. Values below 1.0 are not valid according to the extension,
and in fact Mesa returns an error if you try setting it below 1.0 (at least the
code suggests that, I haven't actually tried...), so values below 1.0 will never
hit the driver code.  
 
 
--   
Configure bugmail: https://bugs.freedesktop.org/userprefs.cgi?tab=email 
 
--- You are receiving this mail because: ---
You are the assignee for the bug, or are watching the assignee.




Re: [R300] Trying to get r300 working in Xen

2005-02-07 Thread Jon Smirl
On Tue, 08 Feb 2005 00:42:29 +0100, Roland Scheidegger
[EMAIL PROTECTED] wrote:
  Linux agpgart interface v0.100 (c) Dave Jones
  agpgart: Detected an Intel i875 Chipset.
  agpgart: Maximum main memory to use for agp memory: 198M
  agpgart: AGP aperture is 32M @ 0xe000
  [drm] Initialized drm 1.0.0 20040925
  PCI: Obtained IRQ 16 for device :01:00.0
  [drm] Initialized radeon 1.12.1 20041216 on minor 0:
  agpgart: Found an AGP 3.0 compliant device at :00:00.0.
  agpgart: Putting AGP V3 device at :00:00.0 into 4x mode
  agpgart: Putting AGP V3 device at :01:00.0 into 4x mode
  [drm:radeon_cp_init] *ERROR* radeon_cp_init called without lock held,
  held  0 owner  c9958f6c
  [drm:drm_unlock] *ERROR* Process 6699 using kernel context 0
  mtrr: reg: 1 has count=0
  mtrr: reg: 1 has count=0

My guess is that xen is not setting up the shared memory between
agp/drm and mesa. This is probably because agp/drm are running in the
supervisor kernel and X is in the user one. Mesa sets the lock in the
user kernel's copy, but the supervisor has a different copy of the
shared memory, so the lock is not set there. Arbitrarily sharing memory
between the user and supervisor kernels the way DRM does is one of the
things Xen is designed to prevent.

I believe xen has a mode where you can assign hardware exclusively to
a user kernel. You'll need to do that and then run agp/drm/mesa all in
the same user kernel. Without that I think you need changes to xen or
drm. This is just a guess based on the log, I have not tried running
Xen and drm.

Running agp/drm in the supervisor kernel implies that you are trying
to share it between user kernels. Doing that would require the
implementation of virtual AGP/DRM devices.

-- 
Jon Smirl
[EMAIL PROTECTED]




Re: [R300] Trying to get r300 working in Xen

2005-02-07 Thread Jacob Gorm Hansen
Jon Smirl wrote:
I believe xen has a mode where you can assign hardware exclusively to
a user kernel. You'll need to do that and then run apg/drm/mesa all in
the same user kernel. Without that I think you need changes to xen or
drm. This is just a guess based on the log, I have not tried running
Xen and drm.
Running agp/drm in the supervisor kernel implies that you are trying
to share it between user kernels. Doing that would require the
implementation of virtual AGP/DRM devices.
All this is running in domain0, the privileged one, and I am not trying 
to share the devices, so that should not be the issue.

thanks,
Jacob


Re: DRM change for R300 DMA

2005-02-07 Thread Vladimir Dergachev

On Tue, 8 Feb 2005, Ben Skeggs wrote:
Hello Vladimir,
 1) It does not appear to be R300 specific - why doesn't similar
Radeon ioctl work ? Also, I would imagine that this would
require a change in r300 driver to work, wouldn't it ?
No, I suspected that it wasn't r300 specific actually, all the code does is
write to a scratch register.  So perhaps I should've just hooked up
a R300_* ioctl number to the radeon code.
The thing is, we can (and do) use radeon ioctls from within the driver. So we 
can just call the Radeon ioctls directly - no need for an R300 version.

This did bite us in the past, and probably still does, because of the 
need for a different engine idle sequence on R300.


 2) I was able to play Quake for somewhat prolonged periods,
I don't think this would have really worked if aging was
truly broken, though, maybe, I am wrong on this one.
Would you have a test app that shows brokenness ? Perhaps
something that uses a lot of textures once.
It only seems to occur after the reference counts for all the dma buffers hit
zero.  After that, no more r300AllocDmaRegion calls are successful.
AFAIK, r300AllocDmaRegion allocates one of several predefined buffers 
(you can see them printed by r300_demo), so if you do not free them there 
is nothing more to allocate.

I could be completely off on this one though - I can't look at the actual 
code at the moment and might have confused functions.

With regard to vertex buffers - these are in framebuffer memory, not in 
AGP memory, on R200 hardware (at least this was my understanding).
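The alloc/release pattern being debated can be modeled as a small fixed pool of predefined DMA buffers, each returned to the free pool only when its reference count drops back to zero (a hypothetical sketch, not the r300 code):

```c
/* Toy model of AllocDmaRegion/ReleaseDmaRegion refcounting over a
 * fixed pool of predefined buffers.  Names are illustrative. */
#include <assert.h>

#define NUM_DMA_BUFS 4

static int refcount[NUM_DMA_BUFS];

static int alloc_dma_region(void)
{
    for (int i = 0; i < NUM_DMA_BUFS; i++)
        if (refcount[i] == 0) {        /* free buffer found */
            refcount[i] = 1;
            return i;
        }
    return -1;   /* pool exhausted: every buffer still referenced */
}

static void release_dma_region(int i)
{
    assert(refcount[i] > 0);
    refcount[i]--;                     /* reusable again at zero */
}
```

This makes the failure mode under discussion concrete: if regions are never released, every later allocation fails, even though the pool itself is fine.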

  best
 Vladimir Dergachev


Re: [R300] Trying to get r300 working in Xen

2005-02-07 Thread Jon Smirl
On Mon, 07 Feb 2005 16:00:54 -0800, Jacob Gorm Hansen [EMAIL PROTECTED] wrote:
 All this is running in domain0, the privileged one, and I am not trying
 to share the devices, so that should not be the issue.

The error most likely has to do with mesa not finding DRM's shared
memory segment. There could be other reasons but that one is 90%
probable. In the DRM CVS tree there is a drmtest program that might
give you something smaller to test with.


-- 
Jon Smirl
[EMAIL PROTECTED]




Re: texturing performance local/gart on r100

2005-02-07 Thread Dave Airlie

 I fully support the idea of enabling gart texturing on the r200 driver.  If
 the old client texturing code can be kept around as an X config option, so
 much the better, but it shouldn't stand in the way of gart texturing given the
 data above.

I think this is all in the client side of the driver if I'm not mistaken,
the DDX and DRM look the same to me.. so we could probably do it using a
driconf option... or did I miss some secret incantation in the drm/ddx..

Dave.

-- 
David Airlie, Software Engineer
http://www.skynet.ie/~airlied / airlied at skynet.ie
pam_smb / Linux DECstation / Linux VAX / ILUG person





Re: DRM change for R300 DMA

2005-02-07 Thread Ben Skeggs

The thing is, we can (and do) use radeon ioctls from within the driver. So 
we can just call the Radeon ioctls directly - no need for an R300 version.

This did bite us in the past, and probably still does, because of the 
need for a different engine idle sequence on R300.
Ah, I did not realise that we could (and do) just call the radeon ioctls.
AFAIK, r300AllocDmaRegion allocates one of several predefined 
buffers (you can see them printed by r300_demo), so if you do not free 
them there is nothing more to allocate.

They do get freed at the end of r300_run_vb_render.  
r300ReleaseDmaRegion is the function which calls the ioctl.

I could be completely off on this one though - I can't look at the 
actual code at the moment and might have confused functions.

With regard to vertex buffers - these are in framebuffer memory, not 
in AGP memory, on R200 hardware (at least this was my understanding).
r200EmitArrays (which does much the same as upload_vertex_buffer in 
r300_dri) uses the alloc/release dma calls to grab the regions.

Could someone with knowledge of r200_dri explain how vertex buffer 
uploads are put into framebuffer memory on r200?  I had
just assumed that the driver told r200 of the address of the buffer 
acquired from AllocDma.  I've most likely got something
very confused here.

Thanks,
Ben Skeggs.


Re: [R300] Trying to get r300 working in Xen

2005-02-07 Thread Jacob Gorm Hansen
Jacob Gorm Hansen wrote:
Roland Scheidegger wrote:
I'm not familiar with Xen, but I heard one of the few drivers which 
are problematic with it are the agp drivers. This _could_ be such an 
issue. If so the Xen guys are likely to know more about it. That's 
really just a guess though.

hi,
yes this is likely to do with Xen, I have sorted a few bugs related to 
that already. I was just curious if DRI-people might be better at 
decoding what goes wrong in this specific case.
For now, I went back to the fglrx driver. I got that to work by applying 
the following diff (which is not a clean patch and not intended for 
inclusion anywhere before it is cleaned up), to the fglrx open source 
wrapper.

I now have accelerated OpenGL using the ATI libs in Xen's domain 0. I 
will probably revisit r300 in Xen at a later stage.

Jacob
-- agpgart_be.c --
140a141
> 
1053c1054
< #if defined(CONFIG_X86) && (PAGE_ATTR_FIX != 0)
---
> #if defined(CONFIG_XEN) && (PAGE_ATTR_FIX != 0)
1162c1163
< #if defined(CONFIG_X86) && (PAGE_ATTR_FIX != 0)
---
> #if defined(CONFIG_XEN) && (PAGE_ATTR_FIX != 0)
1167c1168
<   agp_bridge.gatt_table = ioremap_nocache(virt_to_phys(table),
---
>   agp_bridge.gatt_table = ioremap_nocache(virt_to_bus(table),
1173c1174
< #if defined(CONFIG_X86) && (PAGE_ATTR_FIX != 0)
---
> #if defined(CONFIG_XEN) && (PAGE_ATTR_FIX != 0)
1248c1249
< #if defined(CONFIG_X86) && (PAGE_ATTR_FIX != 0)
---
> #if defined(CONFIG_XEN) && (PAGE_ATTR_FIX != 0)
1405c1406
< #if defined(CONFIG_X86) && (PAGE_ATTR_FIX != 0)
---
> #if defined(CONFIG_XEN) && (PAGE_ATTR_FIX != 0)
1446c1447
< #if defined(CONFIG_X86) && (PAGE_ATTR_FIX != 0)
---
> #if defined(CONFIG_XEN) && (PAGE_ATTR_FIX != 0)
3179c3180
< #if defined(CONFIG_X86) && (PAGE_ATTR_FIX != 0)
---
> #if defined(CONFIG_XEN) && (PAGE_ATTR_FIX != 0)
3199c3200
< #if defined(CONFIG_X86) && (PAGE_ATTR_FIX != 0)
---
> #if defined(CONFIG_XEN) && (PAGE_ATTR_FIX != 0)
3204c3205
< page_map->remapped = ioremap_nocache(virt_to_phys(page_map->real), 
---
> page_map->remapped = ioremap_nocache(virt_to_bus(page_map->real), 
3209c3210
< #if defined(CONFIG_X86) && (PAGE_ATTR_FIX != 0)
---
> #if defined(CONFIG_XEN) && (PAGE_ATTR_FIX != 0)
3238c3239
< #if defined(CONFIG_X86) && (PAGE_ATTR_FIX != 0)
---
> #if defined(CONFIG_XEN) && (PAGE_ATTR_FIX != 0)
4407c4408
< #if defined(CONFIG_X86) && (PAGE_ATTR_FIX != 0)
---
> #if defined(CONFIG_XEN) && (PAGE_ATTR_FIX != 0)
4485c4486
< #if defined(CONFIG_X86) && (PAGE_ATTR_FIX != 0)
---
> #if defined(CONFIG_XEN) && (PAGE_ATTR_FIX != 0)
4506c4507
< #if defined(CONFIG_X86) && (PAGE_ATTR_FIX != 0)
---
> #if defined(CONFIG_XEN) && (PAGE_ATTR_FIX != 0)
4600c4601
< #if defined(CONFIG_X86) && (PAGE_ATTR_FIX != 0)
---
> #if defined(CONFIG_XEN) && (PAGE_ATTR_FIX != 0)
4621c4622
< #if defined(CONFIG_X86) && (PAGE_ATTR_FIX != 0)
---
> #if defined(CONFIG_XEN) && (PAGE_ATTR_FIX != 0)
4626c4627
<   page_map->remapped = ioremap_nocache(virt_to_phys(page_map->real), 
---
>   page_map->remapped = ioremap_nocache(virt_to_bus(page_map->real), 
4631c4632
< #if defined(CONFIG_X86) && (PAGE_ATTR_FIX != 0)
---
> #if defined(CONFIG_XEN) && (PAGE_ATTR_FIX != 0)
4658c4659
< #if defined(CONFIG_X86) && (PAGE_ATTR_FIX != 0)
---
> #if defined(CONFIG_XEN) && (PAGE_ATTR_FIX != 0)
-- firegl_public.c --
34a35
> #include <asm/pgtable.h>
41,48c42,49
< #if !defined(CONFIG_X86_PC)
< #if !defined(CONFIG_X86_64)
< #if !defined(CONFIG_X86_VOYAGER)
< #if !defined(CONFIG_X86_NUMAQ)
< #if !defined(CONFIG_X86_SUMMIT)
< #if !defined(CONFIG_X86_BIGSMP)
< #if !defined(CONFIG_X86_VISWS)
< #if !defined(CONFIG_X86_GENERICARCH)
---
> #if !defined(CONFIG_XEN)
> #if !defined(CONFIG_XEN_64)
> #if !defined(CONFIG_XEN_VOYAGER)
> #if !defined(CONFIG_XEN_NUMAQ)
> #if !defined(CONFIG_XEN_SUMMIT)
> #if !defined(CONFIG_XEN_BIGSMP)
> #if !defined(CONFIG_XEN_VISWS)
> #if !defined(CONFIG_XEN_GENERICARCH)
2556a2558
> 
2558c2560
< __KE_DEBUG3("start=0x%08lx, "
---
> printk("start=0x%08lx, "
2564a2567,2569
>   printk("__ke_vm_map\n");
>   //return -EPERM;
> 
2589c2594
< if (remap_page_range(FGL_VMA_API_PASS
---
> if (io_remap_page_range(vma,
2595c2600
< __KE_DEBUG("remap_page_range failed\n");
---
> __KE_DEBUG("io_remap_page_range failed\n");
2656c2661
<   if (remap_page_range(FGL_VMA_API_PASS
---
>   if (io_remap_page_range(vma,
2662c2667
<   __KE_DEBUG("remap_page_range failed\n");
---
>   __KE_DEBUG("io_remap_page_range failed\n");
2693c2698
<   if (remap_page_range(FGL_VMA_API_PASS
---
>   if (io_remap_page_range(vma,
2699c2704
<   __KE_DEBUG("remap_page_range failed\n");
---
>   __KE_DEBUG("io_remap_page_range failed\n");


Re: OpenGL apps causes frequent system locks

2005-02-07 Thread Stephane Marchesin
Roland Scheidegger wrote:
I suspect that quite the contrary, almost no one has crashes. This is 
probably part of the problem, if they happen only for few people with 
very specific configurations, none of the developers can reproduce it 
and it will just remain unfixed.
For reference, I never get crashes with the r200 driver (on a rv250), 
at least none which I can't directly attribute to my own faults when 
playing around with the driver... At least since the state submit 
fixes half a year ago the driver seems quite solid for me. Except the 
hard lockup I got for some very odd reason when I used a gart size of 
64MB, though that was on a r100.
Here (rv100 with 64MB gart, no fast writes) playing torcs for too long 
causes gpu lockups. I think everything else (ut2003, quake 3, ...) runs 
fine.

Stephane

