On Sun, 2005-03-13 at 23:21 +0100, Felix Kühling wrote:
> [slightly off topic]
>
> On Sunday, 13.03.2005 at 12:56 -0500, Jon Smirl wrote:
> > On Sun, 13 Mar 2005 19:47:14 +0200, Ville Syrjälä <[EMAIL PROTECTED]> wrote:
> > > I don't understand why we have "GART memory" anyway. It's just main
> > > memory and I don't see any point going through the GART to access it
> > > with the CPU. Only the graphics card needs to use the GART.
> >
> > I see no need for the CPU to go through the GART either. The main
> > CPU page tables can provide the same rearranging that the GART does.
> >
> > We do need specially marked GART memory because of caching issues. If
> > the CPU writes to GART RAM the write may still be on the CPU chip in a
> > cache. We have to make sure it gets pushed into physical memory so
> > that the GPU can see it.
>
> If this is true, then I'm surprised that PCI DMA with normal cacheable
> memory works. All practical experience with the Savage driver teaches me
> that a memory barrier is sufficient. Or does a memory barrier really
> flush all CPU caches?
Normal PCI DMA, on most architectures, is snooped by the bridge/CPU and is
thus fully cache coherent. On architectures where it is not, the kernel
pci_dma_* and pci_alloc_consistent functions take care of the cache issues
(allocating non-cacheable space for "consistent" memory and doing cache
flushes/invalidates for the rest). AGP GART accesses tend to be implemented
in a weird way in the host bridges that bypasses the cache coherency
protocol, I suppose for performance reasons.

Ben.
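For concreteness, here is a minimal sketch of the two interfaces mentioned
above, roughly as they looked in 2.6-era kernels. The device pointer, buffer
and size are hypothetical and error handling is abbreviated; this is an
illustration, not a tested driver fragment.

#include <linux/pci.h>

#define BUF_SIZE 4096			/* hypothetical buffer size */

/* Consistent (coherent) DMA memory: pci_alloc_consistent() returns
 * memory that the CPU and the device see coherently.  On architectures
 * without bus snooping the kernel maps it non-cacheable behind the
 * scenes, so no manual flushing is needed. */
static int setup_coherent(struct pci_dev *pdev, void **cpu_addr,
			  dma_addr_t *bus_addr)
{
	*cpu_addr = pci_alloc_consistent(pdev, BUF_SIZE, bus_addr);
	return *cpu_addr ? 0 : -ENOMEM;
}

/* Streaming DMA on ordinary cacheable memory: the map/unmap calls do
 * whatever cache flush/invalidate the architecture requires, which is
 * why plain PCI DMA on cacheable memory "just works". */
static void dma_to_device(struct pci_dev *pdev, void *buf)
{
	dma_addr_t handle;

	handle = pci_map_single(pdev, buf, BUF_SIZE, PCI_DMA_TODEVICE);
	/* ... point the device at "handle" and start the transfer ... */
	pci_unmap_single(pdev, handle, BUF_SIZE, PCI_DMA_TODEVICE);
}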
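On Felix's barrier question: on a snooping bus a memory barrier does not
flush any caches, and it does not have to; coherency makes the data visible,
and the barrier only orders the buffer writes ahead of the doorbell that
starts the transfer. A hedged sketch (the register offset, mmio base and
command value are all made up):

#include <linux/types.h>
#include <asm/io.h>
#include <asm/system.h>		/* wmb() lived here in 2.6-era kernels */

/* Hypothetical doorbell write: on snooped buses the buffer stores are
 * made visible by cache coherency; wmb() merely guarantees they are
 * ordered before the doorbell write that kicks off the DMA. */
static void kick_gpu(void __iomem *mmio, u32 *buf)
{
	buf[0] = 0xdeadbeef;		/* example command word */
	wmb();				/* order buffer writes before doorbell */
	writel(1, mmio + 0x40);		/* 0x40: hypothetical doorbell reg */
}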