Andrew Gallatin wrote:

> This means we invent a new mblk data allocator which allocates buffers
> which are "optimized" for network drivers.  On IOMMU-less systems,
> these buffers have their physical address associated with them
> (somehow).  On IOMMU-full systems, these buffers are a static pool
> which is pre-allocated at boot time, and pre-mapped in the IOMMU.
> Rather than their physical addresses, they have their IOMMU address
> associated with them.  Drivers continue to use the existing DDI DMA
> interface, but it just gets much, much faster because 90% of the code
> path goes away for data blocks allocated by this fastpath.  There are
> a few cases where this can break down (multiple IOMMUs per system,
> IOMMU exhaustion), and then things would only devolve to what we have
> now.
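The fastpath described above could be sketched roughly as below. This is only an illustration under assumed names: fastblk_t, fastpool_init(), fastpool_get() and fastpool_put() are invented for this sketch, the flat freelist stands in for whatever the real allocator would use, and malloc() stands in for proper DMA-able memory. The point is that once every buffer carries its device-visible address from boot, per-packet allocation is an O(1) freelist pop with no mapping work.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical fastpath block: the buffer plus its pre-computed
 * device-visible (IOMMU/DVMA or physical) address. */
typedef struct fastblk {
	void		*fb_kaddr;	/* kernel virtual address */
	uint64_t	fb_dvma;	/* address the device uses for DMA */
	struct fastblk	*fb_next;	/* freelist linkage */
} fastblk_t;

typedef struct {
	fastblk_t	*fp_free;	/* freelist of pre-mapped blocks */
} fastpool_t;

/* Boot-time setup: allocate and "map" every buffer once, so the
 * per-packet path never touches the IOMMU code again. */
static void
fastpool_init(fastpool_t *fp, fastblk_t *blks, size_t n,
    uint64_t dvma_base, size_t blksz)
{
	fp->fp_free = NULL;
	for (size_t i = 0; i < n; i++) {
		blks[i].fb_kaddr = malloc(blksz);	/* stand-in for DMA mem */
		blks[i].fb_dvma = dvma_base + i * blksz; /* pretend IOMMU slot */
		blks[i].fb_next = fp->fp_free;
		fp->fp_free = &blks[i];
	}
}

/* The fast path: constant-time pop, no bind/map call. A NULL return
 * models pool exhaustion, where the driver falls back to the slow path. */
static fastblk_t *
fastpool_get(fastpool_t *fp)
{
	fastblk_t *fb = fp->fp_free;

	if (fb != NULL)
		fp->fp_free = fb->fb_next;
	return (fb);
}

static void
fastpool_put(fastpool_t *fp, fastblk_t *fb)
{
	fb->fb_next = fp->fp_free;
	fp->fp_free = fb;
}
```

On a system with multiple IOMMUs or an exhausted pool, fastpool_get() returning NULL is exactly the "devolve to what we have now" case: the driver would then take the ordinary DDI DMA bind path.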


I suggested much the same scheme years ago, and even have a PSARC spec for it, although I think my plan was to defer mapping through the IOMMU to first use.

It was a scheme where each block of DMAable memory had an associated cookie which could be passed around with it. The cookie could then be used to cache mapping info, and cookies could also be 'remapped', which would optimize to essentially a dvma_kaddr_load() on systems with IOMMUs. The current DDI interface is essentially non-optimal on all platforms, as opposed to being optimal on some.
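The cookie idea above might look roughly like this sketch. The names dma_cookie_t, cookie_get_dvma() and iommu_map() are invented for illustration, and iommu_map() merely simulates an expensive load such as dvma_kaddr_load(); the point is that the cookie travels with the memory and caches the mapping, so only the first use pays the IOMMU cost.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical cookie carried around with a block of DMAable memory. */
typedef struct {
	void		*dc_kaddr;	/* kernel virtual address */
	size_t		dc_len;
	bool		dc_mapped;	/* mapping established yet? */
	uint64_t	dc_dvma;	/* cached device-visible address */
} dma_cookie_t;

static int iommu_loads;	/* counts the expensive mapping operations */

/* Stand-in for the real IOMMU load (e.g. dvma_kaddr_load()); here it
 * just tags the kernel address so we have something to cache. */
static uint64_t
iommu_map(void *kaddr, size_t len)
{
	(void)len;
	iommu_loads++;
	return ((uint64_t)(uintptr_t)kaddr | (1ULL << 40));
}

/* Deferred, first-use mapping: the first call pays the IOMMU cost and
 * caches the result in the cookie; later calls are a cheap lookup. */
static uint64_t
cookie_get_dvma(dma_cookie_t *dc)
{
	if (!dc->dc_mapped) {
		dc->dc_dvma = iommu_map(dc->dc_kaddr, dc->dc_len);
		dc->dc_mapped = true;
	}
	return (dc->dc_dvma);
}
```

'Remapping' a cookie in this model would just mean clearing dc_mapped (or re-running the load against a different IOMMU), which is why the scheme can be close to optimal on both IOMMU and non-IOMMU platforms.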

  Paul

--
===================================
Paul Durrant
http://www.linkedin.com/in/pdurrant
===================================
_______________________________________________
networking-discuss mailing list
[email protected]
