Hi there,

I have a question here:
Why do all of the NIC drivers have to bcopy the mblks for transmit? (Some of them always bcopy, and others bcopy only when the packet length is under a threshold.)

I think one of the reasons is that the overhead of setting up DMA on the fly is greater than the overhead of a bcopy for short packets. I want to know if this is the case, and whether there are any other reasons.
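Just so we are talking about the same thing, below is roughly the tx-side pattern I mean, as a minimal sketch (XX_TX_COPY_THRESHOLD, txb and xx_tx_fragment are made-up names, not from any particular driver):

/*
 * Sketch of the usual tx-side decision in a NIC driver: copy short
 * packets into a pre-bound bounce buffer, DMA-bind long ones.
 */
#include <sys/types.h>
#include <sys/stream.h>
#include <sys/strsun.h>
#include <sys/ddi.h>
#include <sys/sunddi.h>

#define	XX_TX_COPY_THRESHOLD	512	/* hypothetical cutoff */

static int
xx_tx_fragment(ddi_dma_handle_t hdl, caddr_t txb, mblk_t *mp,
    ddi_dma_cookie_t *cookiep, uint_t *ccountp)
{
	size_t len = MBLKL(mp);

	if (len < XX_TX_COPY_THRESHOLD) {
		/*
		 * Short packet: one bcopy into a buffer that was bound
		 * at setup time; its cookie is already known, so no call
		 * into the VM layer happens here.
		 */
		bcopy(mp->b_rptr, txb, len);
		return (DDI_DMA_MAPPED);
	}

	/*
	 * Long packet: bind the mblk's data on the fly.  This is the
	 * path that ends up in hat_getpfnum() for the vaddr-to-PFN
	 * translation, once per packet.
	 */
	return (ddi_dma_addr_bind_handle(hdl, NULL, (caddr_t)mp->b_rptr,
	    len, DDI_DMA_WRITE | DDI_DMA_STREAMING, DDI_DMA_DONTWAIT,
	    NULL, cookiep, ccountp));
}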

If what I guess is the major cause, I have a proposal, and I would like your advice on whether it makes sense.

The most time-consuming part of the DMA setup is the DMA bind, more specifically calling into the VM layer to get the PFN for the vaddr (hat_getpfnum()), since it has to search the huge page table. The mblks, however, are essentially slab objects: their PFNs have already been determined at the slab layer, and for most of their lifetime we only touch the magazine layer, where the PFN is a pre-determined one. That is, the PFN should be considered constructed state, but we don't leverage it for the DMA bind.

On the storage side, we have a field 'b_shadow' in buf(9S) that stores the recently used pages, from which the PFNs can be obtained cheaply; so in the cases where b_shadow works, ddi_dma_buf_bind_handle() is much faster than ddi_dma_mem_bind_handle(). As another example, by moving the DMA bind in the HBA driver (mpt) from the tx path to the kmem cache constructor, the mpt driver got a 26% throughput increase. See CR6707308.
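To make the "constructed state" idea concrete, here is a rough sketch of the mpt-style approach applied to a per-driver tx buffer cache: the expensive bind is paid once in the kmem cache constructor and reused for every packet. All of the xx_ names, xx_dma_attr and XX_BUF_SIZE are made up for illustration:

/*
 * Illustrative only: pay the vaddr->PFN lookup once per object in
 * the constructor, instead of once per packet on the tx path.
 */
#include <sys/types.h>
#include <sys/kmem.h>
#include <sys/ddi.h>
#include <sys/sunddi.h>

#define	XX_BUF_SIZE	2048		/* hypothetical per-buffer size */

static ddi_dma_attr_t xx_dma_attr;	/* filled in at attach time */

typedef struct xx_tx_buf {
	caddr_t			xb_kaddr;
	ddi_dma_handle_t	xb_dma_hdl;
	ddi_dma_cookie_t	xb_cookie;
	uint_t			xb_ccount;
} xx_tx_buf_t;

static int
xx_tx_buf_ctor(void *buf, void *arg, int kmflags)
{
	xx_tx_buf_t *xb = buf;
	dev_info_t *dip = arg;

	if (ddi_dma_alloc_handle(dip, &xx_dma_attr, DDI_DMA_DONTWAIT,
	    NULL, &xb->xb_dma_hdl) != DDI_SUCCESS)
		return (-1);

	xb->xb_kaddr = kmem_alloc(XX_BUF_SIZE, kmflags);
	if (xb->xb_kaddr == NULL) {
		ddi_dma_free_handle(&xb->xb_dma_hdl);
		return (-1);
	}

	/* The expensive part: done once per object, not once per packet. */
	if (ddi_dma_addr_bind_handle(xb->xb_dma_hdl, NULL, xb->xb_kaddr,
	    XX_BUF_SIZE, DDI_DMA_WRITE | DDI_DMA_STREAMING,
	    DDI_DMA_DONTWAIT, NULL, &xb->xb_cookie, &xb->xb_ccount)
	    != DDI_DMA_MAPPED) {
		kmem_free(xb->xb_kaddr, XX_BUF_SIZE);
		ddi_dma_free_handle(&xb->xb_dma_hdl);
		return (-1);
	}

	return (0);
}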

If the mblk could store the PFN info and we had a ddi_dma_mblk_bind_handle()-like interface, I think it would benefit the performance of the NIC drivers. I consulted PAE and was told that the bcopy is typically about 10-15% of a NIC tx workload.
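As a strawman, such an interface could mirror ddi_dma_buf_bind_handle(9F), just taking an mblk instead of a buf(9S). Something like the prototype below; this is purely one possible shape, nothing is designed yet:

/*
 * Strawman only.  If mp carried cached PFN info, the implementation
 * could skip hat_getpfnum(), the same way the b_shadow path does
 * for bufs.
 */
int
ddi_dma_mblk_bind_handle(ddi_dma_handle_t handle, mblk_t *mp,
    uint_t flags, int (*callback)(caddr_t), caddr_t arg,
    ddi_dma_cookie_t *cookiep, uint_t *ccountp);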

Thanks,
Brian
