Andrew Gallatin wrote:
How about what I've suggested in the past? Let's copy the one networking thing that Mac OS X got right (at least before Adi started at Apple :) and keep network buffers pre-mapped in the IOMMU at a system level. This means we invent a new mblk data allocator which allocates buffers that are "optimized" for network drivers.

On IOMMU-less systems, these buffers have their physical address associated with them (somehow). On IOMMU-full systems, these buffers are a static pool which is pre-allocated at boot time and pre-mapped in the IOMMU. Rather than their physical addresses, they have their IOMMU address associated with them.

Drivers continue to use the existing DDI DMA interface, but it just gets much, much faster, because 90% of the code path goes away for data blocks allocated by this fastpath. There are a few cases where this can break down (multiple IOMMUs per system, IOMMU exhaustion), and then things would only devolve to what we have now.
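A rough sketch of the pool idea, in case it helps make the proposal concrete. Every name here is hypothetical (none of this is an existing DDI or allocator interface); the point is only that the buffers are mapped once at boot and carry their device-visible address, so the hot path never touches the IOMMU code:

	/*
	 * Hypothetical sketch only.  A pool of network buffers is allocated
	 * at boot and mapped into the IOMMU once.  Each buffer carries its
	 * device-visible (IOMMU or physical) address, so per-packet
	 * allocation is just a pop from a free list -- no map/unmap work
	 * in the hot path.
	 */
	#include <stdint.h>
	#include <stdlib.h>

	typedef struct net_dma_buf {
		struct net_dma_buf *ndb_next;	/* free-list linkage */
		void		*ndb_kaddr;	/* kernel virtual address */
		uint64_t	ndb_devaddr;	/* pre-mapped IOMMU (or phys) address */
		size_t		ndb_size;
	} net_dma_buf_t;

	static net_dma_buf_t *net_dma_freelist;

	/* Boot-time setup: carve out the pool and map every buffer once. */
	int
	net_dma_pool_init(size_t nbufs, size_t bufsize)
	{
		for (size_t i = 0; i < nbufs; i++) {
			net_dma_buf_t *b = calloc(1, sizeof (*b));
			if (b == NULL)
				return (-1);
			b->ndb_kaddr = malloc(bufsize);	/* stand-in for a real kernel alloc */
			b->ndb_size = bufsize;
			/*
			 * Stand-in for the one-time IOMMU mapping; a real
			 * version would program the IOMMU here and record
			 * the mapped address instead of the virtual one.
			 */
			b->ndb_devaddr = (uint64_t)(uintptr_t)b->ndb_kaddr;
			b->ndb_next = net_dma_freelist;
			net_dma_freelist = b;
		}
		return (0);
	}

	/* Hot path: no mapping work, just hand out a pre-mapped buffer. */
	net_dma_buf_t *
	net_dma_buf_alloc(void)
	{
		net_dma_buf_t *b = net_dma_freelist;
		if (b != NULL)
			net_dma_freelist = b->ndb_next;
		return (b);	/* NULL = pool exhausted -> caller takes slow path */
	}

	void
	net_dma_buf_free(net_dma_buf_t *b)
	{
		b->ndb_next = net_dma_freelist;
		net_dma_freelist = b;
	}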
So a global cache of pre-allocated and mapped mblks? If the NIC doesn't support a 64-bit DMA address, you make it go through the slow path. The tx path uses the cache at the top of the stack, and gld or a new ddi_dma_mblk_bind knows how to get to the cookies?

MRJ
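For illustration, one way the ddi_dma_mblk_bind idea floated above could look. This is purely a sketch under the assumptions in this thread, not an existing interface; the types and the slow_path_bind helper are hypothetical stand-ins for today's bind path:

	/*
	 * Hypothetical sketch of a ddi_dma_mblk_bind()-style fast path.
	 * If the data block came from the pre-mapped cache and the NIC can
	 * address all 64 bits, hand back the cached cookie directly;
	 * otherwise devolve to the existing bind path.
	 */
	#include <stdint.h>
	#include <stddef.h>

	typedef struct dma_cookie {
		uint64_t	dc_addr;
		size_t		dc_size;
	} dma_cookie_t;

	typedef struct mblk_fast {
		void		*mf_rptr;	/* start of packet data */
		size_t		mf_len;
		int		mf_premapped;	/* came from the pre-mapped pool? */
		uint64_t	mf_devaddr;	/* IOMMU address recorded at pool setup */
	} mblk_fast_t;

	/* Stand-in for the driver's DMA attributes (ddi_dma_attr_t in real life). */
	typedef struct dma_attr {
		uint64_t	da_addr_hi;	/* highest DMA address the device can reach */
	} dma_attr_t;

	extern int slow_path_bind(mblk_fast_t *, dma_cookie_t *);	/* today's code path */

	int
	dma_mblk_bind(dma_attr_t *attr, mblk_fast_t *mp, dma_cookie_t *cookie)
	{
		/*
		 * Fast path: buffer is already mapped in the IOMMU and the
		 * device can reach any 64-bit address, so the cookie is
		 * already known -- no per-packet binding work at all.
		 */
		if (mp->mf_premapped && attr->da_addr_hi == UINT64_MAX) {
			cookie->dc_addr = mp->mf_devaddr;
			cookie->dc_size = mp->mf_len;
			return (0);
		}

		/* Slow path: 32-bit-only NIC, foreign buffer, IOMMU exhaustion, ... */
		return (slow_path_bind(mp, cookie));
	}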