Re: Getting DRI working on PCI MGA cards

2005-05-12 Thread Dave Airlie

Just missed most of this, but we do have drm_device_is_agp(dev) in the
drm, only the CVS radeon uses this at present as the DDX can tell it
also...

But on Linux it just does..
pci_find_capability(dev->pdev, PCI_CAP_ID_AGP);

which sounds like your chip says it is AGP but is connected over a PCI
bus..

also you could use postinit to do stuff from the driver and fail there...
I don't mind adding a preinit if needed.. but I think postinit should be
fine for your purposes... also at the moment the kernel is different than
CVS, it doesn't take over the PCI device.. so drm_get_dev is called over
the pciids... this shouldn't affect anything but if you were to modify the
mga_drv.c:probe function it would cause issues...

Dave.

-- 
David Airlie, Software Engineer
http://www.skynet.ie/~airlied / airlied at skynet.ie
Linux kernel - DRI, VAX / pam_smb / ILUG



---
This SF.Net email is sponsored by Oracle Space Sweepstakes
Want to be the first software developer in space?
Enter now for the Oracle Space Sweepstakes!
http://ads.osdn.com/?ad_id=7393&alloc_id=16281&op=click
--
___
Dri-devel mailing list
Dri-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/dri-devel


Re: Getting DRI working on PCI MGA cards

2005-05-12 Thread Adam Jackson
On Thursday 12 May 2005 03:13, Dave Airlie wrote:
 Just missed most of this, but we do have drm_device_is_agp(dev) in the
 drm, only the CVS radeon uses this at present as the DDX can tell it
 also...

 But on Linux it just does..
 pci_find_capability(dev->pdev, PCI_CAP_ID_AGP);

 which sounds like your chip says it is AGP but is connected over a PCI
 bus..

The PCI G450s are funky.  The chip itself is AGP, but the AGP bus it's on only 
extends out to the PCI-AGP bridge chip on the card itself.  In other words:

Host Bridge --[PCI]-- G450 Bridge ==[AGP]== G450

So pci_find_capability isn't right, because it really is an AGP device, 
there's just no accessible GART.  Fortunately the bridge chip is known to be 
sane (ie, appears topologically between the GPU chip and the host bridge), so 
he should be able to walk the bus towards the root, find his bridge, and fall 
back to PCI operation based on that.

Or at least that's how I remember the discussion going, right Ian?

- ajax




Re: Getting DRI working on PCI MGA cards

2005-05-12 Thread Dave Jones
On Thu, May 12, 2005 at 10:59:50AM -0400, Adam Jackson wrote:
  On Thursday 12 May 2005 03:13, Dave Airlie wrote:
   Just missed most of this, but we do have drm_device_is_agp(dev) in the
   drm, only the CVS radeon uses this at present as the DDX can tell it
   also...
  
   But on Linux it just does..
   pci_find_capability(dev->pdev, PCI_CAP_ID_AGP);
  
   which sounds like your chip says it is AGP but is connected over a PCI
   bus..
  
  The PCI G450s are funky.  The chip itself is AGP, but the AGP bus it's on 
  only extends out to the PCI-AGP bridge chip on the card itself.  In other 
  words:
  
  Host Bridge --[PCI]-- G450 Bridge ==[AGP]== G450
  
  So pci_find_capability isn't right, because it really is an AGP device, 
  there's just no accessible GART.  Fortunately the bridge chip is known to 
  be sane (ie, appears topologically between the GPU chip and the host 
  bridge), so he should be able to walk the bus towards the root, find his 
  bridge, and fall back to PCI operation based on that.
  
  Or at least that's how I remember the discussion going, right Ian?

This rang a bell.. The ATI FireGL drivers have some funky
agpgart workaround, though it looks prone to false-positives..

agp_generic_agp_v2_enable() contains this addition..

#ifdef FGL_FIX
/* AGP 1x or 2x or 4x - at least one of this list */
/* mga g450 pci can be uncovered this way */
if (!(scratch & 7))
continue;
#endif /* FGL_FIX */

...

#ifdef FGL_FIX
/* set AGP enable bit - only if a valid mode was determined */
/* (a way to unhide mga g450 pci) */
if (command & 7)
#endif


Dave





Re: Getting DRI working on PCI MGA cards

2005-05-12 Thread Ian Romanick
Dave Airlie wrote:
Just missed most of this, but we do have drm_device_is_agp(dev) in the
drm, only the CVS radeon uses this at present as the DDX can tell it
also...
But on Linux it just does..
pci_find_capability(dev->pdev, PCI_CAP_ID_AGP);
which sounds like your chip says it is AGP but is connected over a PCI
bus..
I tried that at Eric Anholt's suggestion, but, as you've guessed, it 
didn't work.  What I ended up doing is looking at the device ID of the 
bus the card is connected to.  If it matches the chip known to be used 
as the AGP-to-PCI bridge, I assume it's the PCI G450.

I have a patch.  Could you review it?
https://bugs.freedesktop.org/show_bug.cgi?id=3248
also you could use postinit to do stuff from the driver and fail there...
I don't mind adding a preinit if needed.. but I think postinit should be
fine for your purposes... also at the moment the kernel is different than
CVS, it doesn't take over the PCI device.. so drm_get_dev is called over
the pciids... this shouldn't affect anything but if you were to modify the
mga_drv.c:probe function it would cause issues...




Re: Getting DRI working on PCI MGA cards

2005-05-11 Thread Keith Whitwell
Ian Romanick wrote:
I've started working to get PCI MGA cards, the PCI G450 specifically, 
working with DRI.  My initial goal is to just get it working with crummy 
performance, then improve it by adding support for IOMMUs (to simulate 
AGP texturing) on systems like pSeries and AMD64 that have them.

I've started by digging through the DRI init process in the X.org MGA 
DDX.  As near as I can tell, the driver uses AGP memory for four things. 
 To make the PCI cards work, I'll need to make it do without AGP for 
these things.

1. WARP microcode.  This seems *really* odd to me.  The DDX carves off a 
32KiB chunk of AGP space and gives it to the kernel to use to store the 
WARP microcode.  Why is the DDX involved in this *at all*?  The 
microcode exists only in the kernel module.  It seems that the DRM could 
just as easily drm_pci_alloc a chunk of memory large enough to hold the 
microcode for the card (which is different for G400-class cards and 
G200-class cards).

2. Primary DMA buffer.  The DDX carves off 1MB for the primary DMA 
buffer.  I don't think that's outside the reasonable realm for 
drm_pci_alloc.  If it is, can this work with a smaller buffer?

3. Secondary DMA buffers.  The DDX carves off room for 128 64KiB DMA 
buffers.  I haven't dug that deeply, but I seem to recall that the DRI 
driver uses these buffers as non-contiguous.  That is, it treats them as 
128 separate buffers and not a big 8MB buffer that it carves 64KiB chunks 
from.  If that's the case, then it should be easy enough to modify the 
driver to drm_pci_alloc (up to) 128 64KiB chunks for PCI cards.  Is 
there any actual performance benefit to having this be in AGP space at 
all or do they just have to be in the same address space as the 
primary DMA buffer?

4. AGP textures.  Without an IOMMU, we pretty much have to punt here. 
Performance will be bad, but I can live with that.
I think this is all pretty much correct.  I don't think the primary dma 
buffer needs to be anything like that large, secondary buffers can be 
non-contiguous from a hardware interaction point of view, but who knows 
what assumptions are coded into the drm dma code, etc.

Keith


Re: Getting DRI working on PCI MGA cards

2005-05-11 Thread Ville Syrjälä
On Tue, May 10, 2005 at 02:59:49PM -0700, Ian Romanick wrote:
 I've started working to get PCI MGA cards, the PCI G450 specifically, 
 working with DRI.  My initial goal is to just get it working with crummy 
 performance, then improve it by adding support for IOMMUs (to simulate 
 AGP texturing) on systems like pSeries and AMD64 that have them.
 
 I've started by digging through the DRI init process in the X.org MGA 
 DDX.  As near as I can tell, the driver uses AGP memory for four things. 
  To make the PCI cards work, I'll need to make it do without AGP for 
 these things.
 
 1. WARP microcode.  This seems *really* odd to me.  The DDX carves off a 
 32KiB chunk of AGP space and gives it to the kernel to use to store the 
 WARP microcode.  Why is the DDX involved in this *at all*?  The 
 microcode exists only in the kernel module.  It seems that the DRM could 
 just as easily drm_pci_alloc a chunk of memory large enough to hold the 
 microcode for the card (which is different for G400-class cards and 
 G200-class cards).

You don't even have to allocate a single 32KiB chunk. Instead you could 
allocate smaller chunks for each of the microcode images but 32KiB should 
be small enough for a single allocation, right?

 2. Primary DMA buffer.  The DDX carves off 1MB for the primary DMA 
 buffer.  I don't think that's outside the reasonable realm for 
 drm_pci_alloc.  If it is, can this work with a smaller buffer?

I haven't measured how much of the buffer is actually used under normal 
circumstances.

Currently the blit, swap, etc. ioctls directly write to the primary 
buffer. You could get some small gains from using secondary buffers for 
those as well but I'm not sure if it's really worth it.

 3. Secondary DMA buffers.  The DDX carves off room for 128 64KiB DMA 
 buffers.  I haven't dug that deeply, but I seem to recall that the DRI 
 driver uses these buffers as non-contiguous.  That is, it treats them as 
 128 separate buffers and not a big 8MB buffer that it carves 64KiB chunks 
 from.  If that's the case, then it should be easy enough to modify the 
 driver to drm_pci_alloc (up to) 128 64KiB chunks for PCI cards.  Is 
 there any actual performance benefit to having this be in AGP space at 

AGP reads are faster than PCI reads. I haven't actually measured if there 
is any real world difference.

 all or do they just have to be in the same address space as the 
 primary DMA buffer?

If by address space you mean AGP aperture vs. other memory then no they 
don't have to be in the same address space. You can choose to use PCI or AGP 
transfers every time you submit a new buffer to the hardware.

 4. AGP textures.  Without an IOMMU, we pretty much have to punt here. 
 Performance will be bad, but I can live with that.
 
 
 If these assumptions are at least /mostly/ correct, I think I have a 
 pretty good idea how I'll change the init process around.  I'd like to, 
 basically, pull most of MGADRIAgpInit into the kernel.  There will be a 
 single device-specific command called something like 
 DRM_MGA_DMA_BOOTSTRAP.  The DDX will pass in the desired AGP mode and 
 size.  The DRM will do some magic and fill in the rest of the structure. 
  The structure used will probably look something like below.  Notice 
 that the DDX *never* needs to know anything about the WARP microcode in 
 this arrangement.

Why would the DDX need to know anything about the DMA buffers or AGP mode?

 struct drm_mga_dma_bootstrap {
   /**
* 1MB region of primary DMA space.  This is AGP space if
* \c agp_mode is non-zero and PCI space otherwise.
*/
   drmRegion   primary_dma;
 
   /**
* Region for holding textures.  If \c agp_mode is zero and
* there is no IOMMU available, this will be zero size.
*/
   drmRegion   textures;
 
   /**
* Up to 128 secondary DMA buffers.  Each region will be a
* multiple of 64KiB.  If \c agp_mode is non-zero, typically
* only the first region will be configured.  Otherwise,
* each region will be used and allocated for 64KiB.
*/

Why make this behave differently for AGP and PCI?

   drmRegion   secondary_dma[128];
 
   u8  agp_size;   /** Size of AGP region in MB. */
   u8  agp_mode;   /** Set AGP mode.  0 for PCI. */
 };
 
 Does this look good, or should I try to get more sleep before designing 
 interfaces like this? ;)

-- 
Ville Syrjälä
[EMAIL PROTECTED]
http://www.sci.fi/~syrjala/




Re: Getting DRI working on PCI MGA cards

2005-05-11 Thread Alan Cox
On Mer, 2005-05-11 at 02:16, Ian Romanick wrote:
 I was afraid of that. :(  The problem is that the MGA can *only* DMA 
 commands & vertex data from PCI memory or AGP.  In the case of the 
 G200 (typically only 8MB), you don't want to use 1/8th of your on-card 
 memory for commands either.  I'll have to dig deeper and see if there's 
 another way around this.

Getting physically linear allocations out of the kernel after boot for
anything more than about 64K is very touch and go. If you can use the
IOMMU for it then you can get 1Mb of virtual space fairly sanely and map
it. If you actually need that much. Given your card will be chugging
along far more slowly and the transfer rate is far slower I doubt 1Mb
would be needed. You've got a good chance of grabbing 128K linear early
on at least for testing purposes.

Alan





Re: Getting DRI working on PCI MGA cards

2005-05-11 Thread Ian Romanick
Ville Syrjälä wrote:
On Tue, May 10, 2005 at 02:59:49PM -0700, Ian Romanick wrote:
I've started working to get PCI MGA cards, the PCI G450 specifically, 
working with DRI.  My initial goal is to just get it working with crummy 
performance, then improve it by adding support for IOMMUs (to simulate 
AGP texturing) on systems like pSeries and AMD64 that have them.

I've started by digging through the DRI init process in the X.org MGA 
DDX.  As near as I can tell, the driver uses AGP memory for four things. 
To make the PCI cards work, I'll need to make it do without AGP for 
these things.

1. WARP microcode.  This seems *really* odd to me.  The DDX carves off a 
32KiB chunk of AGP space and gives it to the kernel to use to store the 
WARP microcode.  Why is the DDX involved in this *at all*?  The 
microcode exists only in the kernel module.  It seems that the DRM could 
just as easily drm_pci_alloc a chunk of memory large enough to hold the 
microcode for the card (which is different for G400-class cards and 
G200-class cards).
You don't even have to allocate a single 32KiB chunk. Instead you could 
allocate smaller chunks for each of the microcode images but 32KiB should 
be small enough for a single allocation, right?
Right.  Since the allocation would be moved to the kernel, I would only 
allocate as much as is needed for the microcode actually used.  The G200 
microcode is quite a bit smaller than the G400 microcode, for example.

2. Primary DMA buffer.  The DDX carves off 1MB for the primary DMA 
buffer.  I don't think that's outside the reasonable realm for 
drm_pci_alloc.  If it is, can this work with a smaller buffer?
I haven't measured how much of the buffer is actually used under normal 
circumstances.

Currently the blit, swap, etc. ioctls directly write to the primary 
buffer. You could get some small gains from using secondary buffers for 
those as well but I'm not sure if it's really worth it.
I'll have to see if I can add some instrumentation to the driver to 
measure how full the primary buffer is.

3. Secondary DMA buffers.  The DDX carves off room for 128 64KiB DMA 
buffers.  I haven't dug that deeply, but I seem to recall that the DRI 
driver uses these buffers as non-contiguous.  That is, it treats them as 
128 separate buffers and not a big 8MB buffer that it carves 64KiB chunks 
from.  If that's the case, then it should be easy enough to modify the 
driver to drm_pci_alloc (up to) 128 64KiB chunks for PCI cards.  Is 
there any actual performance benefit to having this be in AGP space at 
AGP reads are faster than PCI reads. I haven't actually measured if there 
is any real world difference.
Okay.
all or do they just have to be in the same address space as the 
primary DMA buffer?
If by address space you mean AGP aperture vs. other memory then no they 
don't have to be in the same address space. You can choose to use PCI or AGP 
transfers every time you submit a new buffer to the hardware.
Yeah, that's what I meant.  The selection is made by setting bit one of 
the address to 0 for PCI or 1 for AGP, right?

4. AGP textures.  Without an IOMMU, we pretty much have to punt here. 
Performance will be bad, but I can live with that.

If these assumptions are at least /mostly/ correct, I think I have a 
pretty good idea how I'll change the init process around.  I'd like to, 
basically, pull most of MGADRIAgpInit into the kernel.  There will be a 
single device-specific command called something like 
DRM_MGA_DMA_BOOTSTRAP.  The DDX will pass in the desired AGP mode and 
size.  The DRM will do some magic and fill in the rest of the structure. 
The structure used will probably look something like below.  Notice 
that the DDX *never* needs to know anything about the WARP microcode in 
this arrangement.
Why would the DDX need to know anything about the DMA buffers or AGP mode?
Two reasons, I think.  The DDX tells the DRI driver where this stuff is. 
Doesn't the DDX also use the DMA buffers for 2D drawing commands?

struct drm_mga_dma_bootstrap {
/**
 * 1MB region of primary DMA space.  This is AGP space if
 * \c agp_mode is non-zero and PCI space otherwise.
 */
drmRegion   primary_dma;
/**
 * Region for holding textures.  If \c agp_mode is zero and
 * there is no IOMMU available, this will be zero size.
 */
drmRegion   textures;
/**
 * Up to 128 secondary DMA buffers.  Each region will be a
 * multiple of 64KiB.  If \c agp_mode is non-zero, typically
 * only the first region will be configured.  Otherwise,
 * each region will be used and allocated for 64KiB.
 */
Why make this behave differently for AGP and PCI?
My thinking was that it was better to use fewer drmRegions whenever 
possible.  This wasn't to treat AGP and PCI different, it was to treat 
the case where a single 128*64KiB mapping was available (e.g., AGP or 
PCI w/an IOMMU) differently from the case where a single large mapping 
wasn't available.

Re: Getting DRI working on PCI MGA cards

2005-05-11 Thread Ian Romanick
Benjamin Herrenschmidt wrote:
On Tue, 2005-05-10 at 14:59 -0700, Ian Romanick wrote:
I've started working to get PCI MGA cards, the PCI G450 specifically, 
working with DRI.  My initial goal is to just get it working with crummy 
performance, then improve it by adding support for IOMMUs (to simulate 
AGP texturing) on systems like pSeries and AMD64 that have them.

I've started by digging through the DRI init process in the X.org MGA 
DDX.  As near as I can tell, the driver uses AGP memory for four things. 
 To make the PCI cards work, I'll need to make it do without AGP for 
these things.
Note that most of these issues can be more easily dealt with if you
assume an iommu. What you basically want to do is create a virtual
mapping, build an sglist, and have the iommu coalesce that into a single
PCI DMA mapping.
Right.  I want to try to walk before I run, though.  Right now the card 
is in a PPC32 box anyway, so I don't have an IOMMU.  Physical access to 
the PPC32 box is much easier for me than to the pSeries, so I'm going to 
/try/ and keep it there as long as I can.  Besides, a Mac reboots a lot 
faster than a p275. ;)

Unfortunately, the current iommu API doesn't really have a way to
enforce the iommu driver to create a single mapping. It will happen most
of the time, but can't be enforced. We may have to add something to the
DMA APIs to add this ability, or do a quick hack in the meantime as a
proof of concept. I could do something for pSeries if you need that.
Okay.  I /will/ need that at some point, but that point is still a ways 
off.  I'll let you know when it gets closer.  Thanks. :)




Re: Getting DRI working on PCI MGA cards

2005-05-11 Thread Ville Syrjälä
On Wed, May 11, 2005 at 09:59:25AM -0700, Ian Romanick wrote:
 Ville Syrjälä wrote:
 On Tue, May 10, 2005 at 02:59:49PM -0700, Ian Romanick wrote:
 
 all or do they just have to be in the same address space as the 
 primary DMA buffer?
 
 If by address space you mean AGP aperture vs. other memory then no they 
 don't have to be in the same address space. You can choose to use PCI or AGP 
 transfers every time you submit a new buffer to the hardware.
 
 Yeah, that's what I meant.  The selection is made by setting bit one of 
 the address to 0 for PCI or 1 for AGP, right?

Yep.

 4. AGP textures.  Without an IOMMU, we pretty much have to punt here. 
 Performance will be bad, but I can live with that.
 
 If these assumptions are at least /mostly/ correct, I think I have a 
 pretty good idea how I'll change the init process around.  I'd like to, 
 basically, pull most of MGADRIAgpInit into the kernel.  There will be a 
 single device-specific command called something like 
 DRM_MGA_DMA_BOOTSTRAP.  The DDX will pass in the desired AGP mode and 
 size.  The DRM will do some magic and fill in the rest of the structure. 
 The structure used will probably look something like below.  Notice 
 that the DDX *never* needs to know anything about the WARP microcode in 
 this arrangement.
 
 Why would the DDX need to know anything about the DMA buffers or AGP mode?
 
 Two reasons, I think.  The DDX tells the DRI driver where this stuff is. 

Ok. I forgot how weird the current system is :(

  Doesn't the DDX also use the DMA buffers for 2D drawing commands?

Last time I looked the DDX only did MMIO. That was quite a long time ago 
though so maybe things have changed.

-- 
Ville Syrjälä
[EMAIL PROTECTED]
http://www.sci.fi/~syrjala/




Getting DRI working on PCI MGA cards

2005-05-10 Thread Ian Romanick
I've started working to get PCI MGA cards, the PCI G450 specifically, 
working with DRI.  My initial goal is to just get it working with crummy 
performance, then improve it by adding support for IOMMUs (to simulate 
AGP texturing) on systems like pSeries and AMD64 that have them.

I've started by digging through the DRI init process in the X.org MGA 
DDX.  As near as I can tell, the driver uses AGP memory for four things. 
 To make the PCI cards work, I'll need to make it do without AGP for 
these things.

1. WARP microcode.  This seems *really* odd to me.  The DDX carves off a 
32KiB chunk of AGP space and gives it to the kernel to use to store the 
WARP microcode.  Why is the DDX involved in this *at all*?  The 
microcode exists only in the kernel module.  It seems that the DRM could 
just as easily drm_pci_alloc a chunk of memory large enough to hold the 
microcode for the card (which is different for G400-class cards and 
G200-class cards).

2. Primary DMA buffer.  The DDX carves off 1MB for the primary DMA 
buffer.  I don't think that's outside the reasonable realm for 
drm_pci_alloc.  If it is, can this work with a smaller buffer?

3. Secondary DMA buffers.  The DDX carves off room for 128 64KiB DMA 
buffers.  I haven't dug that deeply, but I seem to recall that the DRI 
driver uses these buffers as non-contiguous.  That is, it treats them as 
128 separate buffers and not a big 8MB buffer that it carves 64KiB chunks 
from.  If that's the case, then it should be easy enough to modify the 
driver to drm_pci_alloc (up to) 128 64KiB chunks for PCI cards.  Is 
there any actual performance benefit to having this be in AGP space at 
all or do they just have to be in the same address space as the 
primary DMA buffer?

4. AGP textures.  Without an IOMMU, we pretty much have to punt here. 
Performance will be bad, but I can live with that.

If these assumptions are at least /mostly/ correct, I think I have a 
pretty good idea how I'll change the init process around.  I'd like to, 
basically, pull most of MGADRIAgpInit into the kernel.  There will be a 
single device-specific command called something like 
DRM_MGA_DMA_BOOTSTRAP.  The DDX will pass in the desired AGP mode and 
size.  The DRM will do some magic and fill in the rest of the structure. 
 The structure used will probably look something like below.  Notice 
that the DDX *never* needs to know anything about the WARP microcode in 
this arrangement.

struct drm_mga_dma_bootstrap {
/**
 * 1MB region of primary DMA space.  This is AGP space if
 * \c agp_mode is non-zero and PCI space otherwise.
 */
drmRegion   primary_dma;
/**
 * Region for holding textures.  If \c agp_mode is zero and
 * there is no IOMMU available, this will be zero size.
 */
drmRegion   textures;
/**
 * Up to 128 secondary DMA buffers.  Each region will be a
 * multiple of 64KiB.  If \c agp_mode is non-zero, typically
 * only the first region will be configured.  Otherwise,
 * each region will be used and allocated for 64KiB.
 */
drmRegion   secondary_dma[128];
u8  agp_size;   /** Size of AGP region in MB. */
u8  agp_mode;   /** Set AGP mode.  0 for PCI. */
};
Does this look good, or should I try to get more sleep before designing 
interfaces like this? ;)




Re: Getting DRI working on PCI MGA cards

2005-05-10 Thread Alan Cox
On Maw, 2005-05-10 at 22:59, Ian Romanick wrote:
 2. Primary DMA buffer.  The DDX carves off 1MB for the primary DMA 
 buffer.  I don't think that's outside the reasonable realm for 
 drm_pci_alloc.  If it is, can this work with a smaller buffer?

You'll have trouble grabbing that linearly from main memory, on the
other hand I'm assuming most traffic is outgoing so you want the buffer
in video card memory if possible and set write combining ?





Re: Getting DRI working on PCI MGA cards

2005-05-10 Thread Ian Romanick
Alan Cox wrote:
On Maw, 2005-05-10 at 22:59, Ian Romanick wrote:
2. Primary DMA buffer.  The DDX carves off 1MB for the primary DMA 
buffer.  I don't think that's outside the reasonable realm for 
drm_pci_alloc.  If it is, can this work with a smaller buffer?
You'll have trouble grabbing that linearly from main memory, on the
other hand I'm assuming most traffic is outgoing so you want the buffer
in video card memory if possible and set write combining ?
I was afraid of that. :(  The problem is that the MGA can *only* DMA 
commands & vertex data from PCI memory or AGP.  In the case of the 
G200 (typically only 8MB), you don't want to use 1/8th of your on-card 
memory for commands either.  I'll have to dig deeper and see if there's 
another way around this.




Re: Getting DRI working on PCI MGA cards

2005-05-10 Thread Benjamin Herrenschmidt
On Tue, 2005-05-10 at 14:59 -0700, Ian Romanick wrote:
 I've started working to get PCI MGA cards, the PCI G450 specifically, 
 working with DRI.  My initial goal is to just get it working with crummy 
 performance, then improve it by adding support for IOMMUs (to simulate 
 AGP texturing) on systems like pSeries and AMD64 that have them.
 
 I've started by digging through the DRI init process in the X.org MGA 
 DDX.  As near as I can tell, the driver uses AGP memory for four things. 
   To make the PCI cards work, I'll need to make it do without AGP for 
 these things.

  ../..

Note that most of these issues can be more easily dealt with if you
assume an iommu. What you basically want to do is create a virtual
mapping, build an sglist, and have the iommu coalesce that into a single
PCI DMA mapping.

Unfortunately, the current iommu API doesn't really have a way to
enforce the iommu driver to create a single mapping. It will happen most
of the time, but can't be enforced. We may have to add something to the
DMA APIs to add this ability, or do a quick hack in the meantime as a
proof of concept. I could do something for pSeries if you need that.

Ben.



