[Dri-devel] Re: Private DMA and ring buffers on DDX for mach64

2002-11-21 Thread José Fonseca
On Thu, Nov 21, 2002 at 05:41:55PM -0500, Leif Delgass wrote:
> On Thu, 21 Nov 2002, José Fonseca wrote:
> 
> > Leif,
> > 
> > Attached is the patch to allocate private DMA and ring buffers on the
> > DDX - the DDX part only. I'll now focus on the DRM part.
> 
> I didn't get the attachment, could you send it again?

Oops... I always forget to actually attach the attachments!

> > The code compiles but I'll only be able to test it when the changes on 
> > DRM are also done. I would appreciate it if you used those falcon eyes of
> > yours to see if you notice any mistake! ;-)
> > 
> > My time is still rather limited. If you want I can stop the snapshots
> > and commit this to CVS so that we both can hack at this.
> 
> Sure, that works for me.
> 

> BTW, I was looking at the code in glint_dri.c and it looks like it is
> incorrectly using drmAddBufs to allocate the page table.  The last

No, that's actually the way to do it, at least in the current driver
model: with AGP, the DMA buffers are taken directly from the memory
returned by drmAgpAlloc, while with PCI the allocation is done implicitly
by the call to drmAddBufs.
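
To make the distinction concrete, here is a minimal sketch of the two
paths as seen from the DDX, using drmAddBufs() as declared in xf86drm.h.
The buffer count/size and the agp_buf_offset parameter are placeholders
of mine, not the real mach64 numbers:

  #include <stdio.h>
  #include "xf86drm.h"

  #define BUF_COUNT 128          /* placeholder values */
  #define BUF_SIZE  4096         /* 4K DMA buffers     */

  static int setup_dma_buffers(int fd, int is_agp, int agp_buf_offset)
  {
      int count;

      if (is_agp) {
          /* AGP: the buffers are carved out of memory previously
           * obtained with drmAgpAlloc()/drmAgpBind(); the last argument
           * says where inside that AGP region the buffers start. */
          count = drmAddBufs(fd, BUF_COUNT, BUF_SIZE,
                             DRM_AGP_BUFFER, agp_buf_offset);
      } else {
          /* PCI: DRM(addbufs_pci) allocates the pages itself; the last
           * argument (agp_start) is ignored on this path. */
          count = drmAddBufs(fd, BUF_COUNT, BUF_SIZE, 0, 0);
      }

      if (count < 0) {
          fprintf(stderr, "drmAddBufs failed: %d\n", count);
          return -1;
      }
      return count;
  }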

> parameter is actually agp_start and is only used by DRM(addbufs_[agp,sg]). 
> Passing DRM_RESTRICTED has no effect since that parameter isn't used for
> DRM(addbufs_pci).  The flags parameter (the fourth parameter) is only to
> specify AGP, SG, or PCI.  So I think there's actually a potential security
> bug in the glint driver.  In fact, afaict, the only thing that prevents

I haven't looked too deeply into the DRM API, but it appears that you may
be right, assuming that exposing the page table really is a security risk
for glint.

> the page table from being handed out as a DMA buffer by drmDMA is the fact
> that it's a different order (8K vs. 4K for the DMA buffers), and the gamma
> Mesa client no longer passes the DRM_DMA_LARGER_OK flag (it's commented
> out).  From what I can tell, all buffers allocated with drmAddBufs go on
> the dev->dma->buflist, and all of these buffers are mapped by drmMapBufs,
> so they are all client accessible even if none of the DRM ioctls return the
> index of a "private" buffer to a client. I think what we need is a way to
> have multiple buffer lists and map (or not map) each list separately
> (maybe with a flag to map as shared for client DMA buffers and private for
> the X server indirect buffers?).
> 

I don't know. If we need to make changes to the DRM API, then I'd prefer
to go all the way and do something that cleanly separates mapping from
allocation via different API calls.
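
For illustration only, here is one hypothetical shape such a split could
take. None of these entry points exist in the DRM; the names and the
drmBufListFlags values are invented here purely to show the idea of
separate, individually mapped buffer lists:

  /* HYPOTHETICAL sketch: drmAddBufList()/drmMapBufList() do not exist;
   * they only illustrate separating allocation from mapping and keeping
   * several independent buffer lists. */

  typedef enum {
      DRM_BUFLIST_SHARED  = 0x01,  /* mappable by DRI clients (DMA bufs) */
      DRM_BUFLIST_PRIVATE = 0x02   /* X server only (indirect buffers)   */
  } drmBufListFlags;

  /* Allocate 'count' buffers of 'size' bytes into a new, unmapped list;
   * a handle identifying the list is returned in *list. */
  extern int drmAddBufList(int fd, int count, int size,
                           drmBufDescFlags flags, int *list);

  /* Map one specific list into the calling process, honouring the
   * shared/private policy it was created with. */
  extern int drmMapBufList(int fd, int list, drmBufListFlags how,
                           drmBufMapPtr *info);

The idea being that drmDMA would then only ever hand out buffers from
lists the client can actually see.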

> As far as the descriptor ring buffer goes, for the PCI case it might make
> sense to have a DRM wrapper around pci_alloc_consistent for
> general-purpose PCI DMA memory for drivers to use for things like page
> tables and ring buffers when agp/pcigart aren't used, since we don't need
> the extra bookkeeping fields in drm_buf_t and drmAddMap only works for AGP
> or scatter-gather PCI mem.  What do you think?

I certainly agree that we should use pci_alloc_* inside the DRM API,
either by replacing the allocation code inside DRM(addbufs_pci) or, much
cleaner, by doing it in a dedicated PCI API call (e.g., drmPciAlloc, the
PCI counterpart of drmAgpAlloc).
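
As a rough sketch of the kernel side of such a call: pci_alloc_consistent()
and pci_free_consistent() are the existing kernel interfaces being
wrapped, while the drm_pci_mem_t structure and the helper names below are
invented here for illustration:

  #include <linux/pci.h>
  #include <linux/errno.h>
  #include <linux/string.h>

  /* HYPOTHETICAL: these helpers don't exist in the DRM yet; they only
   * sketch a wrapper around consistent (coherent) PCI DMA memory, which
   * is what a descriptor ring or a page table needs. */
  typedef struct {
      void       *vaddr;    /* CPU virtual address of the block     */
      dma_addr_t  busaddr;  /* bus address to program into the chip */
      size_t      size;
  } drm_pci_mem_t;

  static int drm_pci_mem_alloc(struct pci_dev *pdev, size_t size,
                               drm_pci_mem_t *mem)
  {
      mem->vaddr = pci_alloc_consistent(pdev, size, &mem->busaddr);
      if (!mem->vaddr)
          return -ENOMEM;
      memset(mem->vaddr, 0, size);
      mem->size = size;
      return 0;
  }

  static void drm_pci_mem_free(struct pci_dev *pdev, drm_pci_mem_t *mem)
  {
      pci_free_consistent(pdev, mem->size, mem->vaddr, mem->busaddr);
      mem->vaddr = NULL;
  }

The user-side drmPciAlloc would then be little more than an ioctl wrapper
returning a handle to this memory, analogous to drmAgpAlloc.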

José Fonseca



mach64-private-ring.diff.bz2
Description: Binary data


[Dri-devel] Re: Private DMA and ring buffers on DDX for mach64

2002-11-21 Thread Leif Delgass
On Thu, 21 Nov 2002, José Fonseca wrote:

> Leif,
> 
> Attached is the patch to allocate private DMA and ring buffers on the
> DDX - the DDX part only. I'll now focus on the DRM part.

I didn't get the attachment, could you send it again?
 
> The code compiles but I'll only be able to test it when the changes on 
> DRM are also done. I would appreciate it if you used those falcon eyes of
> yours to see if you notice any mistake! ;-)
> 
> My time is still rather limited. If you want I can stop the snapshots
> and commit this to CVS so that we both can hack at this.

Sure, that works for me.

BTW, I was looking at the code in glint_dri.c and it looks like it is
incorrectly using drmAddBufs to allocate the page table.  The last
parameter is actually agp_start and is only used by DRM(addbufs_[agp,sg]).  
Passing DRM_RESTRICTED has no effect since that parameter isn't used for
DRM(addbufs_pci).  The flags parameter (the fourth parameter) is only to
specify AGP, SG, or PCI.  So I think there's actually a potential security
bug in the glint driver.  In fact, afaict, the only thing that prevents
the page table from being handed out as a DMA buffer by drmDMA is the fact
that it's a different order (8K vs. 4K for the DMA buffers), and the gamma
Mesa client no longer passes the DRM_DMA_LARGER_OK flag (it's commented
out).  From what I can tell, all buffers allocated with drmAddBufs go on
the dev->dma->buflist, and all of these buffers are mapped by drmMapBufs,
so they are all client accessible even if none of the DRM ioctls return the
index of a "private" buffer to a client. I think what we need is a way to
have multiple buffer lists and map (or not map) each list separately
(maybe with a flag to map as shared for client DMA buffers and private for
the X server indirect buffers?).

As far as the descriptor ring buffer goes, for the PCI case it might make
sense to have a DRM wrapper around pci_alloc_consistent for
general-purpose PCI DMA memory for drivers to use for things like page
tables and ring buffers when agp/pcigart aren't used, since we don't need
the extra bookkeeping fields in drm_buf_t and drmAddMap only works for AGP
or scatter-gather PCI mem.  What do you think?

-- 
Leif Delgass 
http://www.retinalburn.net