Interesting ideas to explore here... I'd considered making some
changes along these lines in GLDv3 a while back - letting GLDv3
take over much of the buffer management could simplify device
drivers as well (always a good thing).
However, at the moment I don't have the spare cycles to
investigate this properly. Perhaps someone else here could pick
it up?
-- Garrett
Andrew Gallatin wrote:
> Mark Johnson wrote:
>
>> One of the big hits is that the DDI has to take the
>> virtual address passed in and look up the physical
>> address for each page. That is a pretty expensive
>> operation.
>>
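For reference, the per-packet path Mark is talking about looks
roughly like this today: the ddi_dma_addr_bind_handle() call is
where the DDI walks the VM structures to resolve a PA for every
page (and loads the IOMMU where there is one). This is just a
sketch; the xx_* names are placeholders, not a real driver.

    #include <sys/types.h>
    #include <sys/stream.h>
    #include <sys/ddi.h>
    #include <sys/sunddi.h>

    /* placeholder per-instance state; real drivers keep much more here */
    typedef struct xx_softc {
            ddi_dma_handle_t xx_tx_dmah;
    } xx_softc_t;

    static int
    xx_tx_bind(xx_softc_t *xxp, mblk_t *mp)
    {
            ddi_dma_cookie_t cookie;
            uint_t ccount, i;
            int err;

            /* the expensive part: VA -> PA for each page of the mblk */
            err = ddi_dma_addr_bind_handle(xxp->xx_tx_dmah, NULL,
                (caddr_t)mp->b_rptr, MBLKL(mp),
                DDI_DMA_WRITE | DDI_DMA_STREAMING, DDI_DMA_DONTWAIT,
                NULL, &cookie, &ccount);
            if (err != DDI_DMA_MAPPED)
                    return (err);

            for (i = 0; i < ccount; i++) {
                    /* hand cookie.dmac_laddress / dmac_size to the HW */
                    if (i + 1 < ccount)
                            ddi_dma_nextcookie(xxp->xx_tx_dmah, &cookie);
            }
            /* ddi_dma_unbind_handle() happens on tx completion */
            return (0);
    }
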
>> In the storage stack years ago, since they were already
>> looking at the PA earlier in the stack, they passed
>> down the PAs in a shadow list so the DDI could skip
>> calling into the VM.
>>
>> I'm not sure if there is an opportunity to do this in
>> the networking stack (sockfs, etc.). For example, if
>> sockfs does a copy for tx (I have no idea if it does,
>> it's just an example), why not copy into a buffer
>> whose PA you already know? If you could add a shadow
>> list to an mblk, and have GLD take care of building a
>> cookie list when the shadow list is present, I think
>> you could get a decent perf increase.
>
> Yes! Exactly!
>
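To make the shadow-list idea a bit more concrete, here is roughly
what I would imagine. None of this exists today - mblk_palist_t,
mblk_get_palist() and the pal_* fields are all made up for
illustration - and it glosses over the IOMMU question Drew raises
below.

    /* same DDI/STREAMS headers as the snippet above */

    /*
     * HYPOTHETICAL shadow list hung off an mblk.  Whoever does the
     * tx copy (sockfs, say) would fill this in, since it already
     * knows which pages it copied into.
     */
    typedef struct mblk_palist {
            uint_t  pal_cnt;                /* number of segments */
            struct {
                    uint64_t seg_pa;        /* physical address */
                    size_t   seg_len;       /* segment length */
            } pal_seg[8];                   /* fixed size just for the sketch */
    } mblk_palist_t;

    /* HYPOTHETICAL lookup; there is no such hook on mblk_t today */
    extern mblk_palist_t *mblk_get_palist(mblk_t *mp);

    /*
     * GLD-side helper: if the shadow list is present, build the
     * cookie list straight from it and skip the bind (and the trip
     * into the VM layer) entirely.
     */
    static int
    gld_tx_cookies(mblk_t *mp, ddi_dma_cookie_t *cookies, uint_t max,
        uint_t *ccountp)
    {
            mblk_palist_t *pal = mblk_get_palist(mp);
            uint_t i;

            if (pal == NULL || pal->pal_cnt > max)
                    return (DDI_FAILURE);   /* fall back to a real bind */

            for (i = 0; i < pal->pal_cnt; i++) {
                    cookies[i].dmac_laddress = pal->pal_seg[i].seg_pa;
                    cookies[i].dmac_size = pal->pal_seg[i].seg_len;
            }
            *ccountp = pal->pal_cnt;
            return (DDI_SUCCESS);
    }

The awkward part is probably who fills the list in and keeping it
coherent across dupb()/copyb() - that is where the investigation
would be.
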
> Earlier, in a different thread, I proposed having the
> transmit copy path (which goes through
> common/io/strsun.c:mcopyinuio()) reserve a pool of
> memory and pre-map it into the IOMMU(s). That way,
> getting the DMA address on SPARC would be simple.
> If the virtual-to-physical translation is really that
> expensive on amd64, maybe you could do the same
> thing, except just store the physical address...
>
> Drew
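
And a sketch of the pre-mapped copy pool Drew describes: each pool
buffer is allocated and bound once at setup, the cookie is cached,
and the per-packet path becomes a bcopy plus a lookup - no VA-to-PA
translation and no IOMMU work at tx time. This uses the standard
ddi_dma_alloc_handle()/ddi_dma_mem_alloc()/ddi_dma_addr_bind_handle()
sequence, but the xx_* names are invented for the example and the
error/teardown paths are omitted.

    /* same DDI headers as the first snippet */

    typedef struct xx_txbuf {
            ddi_dma_handle_t  tb_dmah;
            ddi_acc_handle_t  tb_acch;
            caddr_t           tb_kva;      /* where we bcopy into */
            uint64_t          tb_devaddr;  /* PA / IOMMU addr, cached */
            size_t            tb_len;
    } xx_txbuf_t;

    /* one-time setup for a single pool buffer */
    static int
    xx_txbuf_init(dev_info_t *dip, ddi_dma_attr_t *attrp,
        ddi_device_acc_attr_t *accp, size_t bufsz, xx_txbuf_t *tb)
    {
            ddi_dma_cookie_t cookie;
            uint_t ccount;
            size_t real_len;

            if (ddi_dma_alloc_handle(dip, attrp, DDI_DMA_SLEEP, NULL,
                &tb->tb_dmah) != DDI_SUCCESS)
                    return (DDI_FAILURE);

            if (ddi_dma_mem_alloc(tb->tb_dmah, bufsz, accp,
                DDI_DMA_STREAMING, DDI_DMA_SLEEP, NULL, &tb->tb_kva,
                &real_len, &tb->tb_acch) != DDI_SUCCESS)
                    return (DDI_FAILURE);

            /* bind once, up front; the dma attr should force 1 cookie */
            if (ddi_dma_addr_bind_handle(tb->tb_dmah, NULL, tb->tb_kva,
                real_len, DDI_DMA_WRITE | DDI_DMA_STREAMING,
                DDI_DMA_SLEEP, NULL, &cookie, &ccount) != DDI_DMA_MAPPED)
                    return (DDI_FAILURE);

            tb->tb_devaddr = cookie.dmac_laddress;
            tb->tb_len = real_len;
            return (DDI_SUCCESS);
    }

    /* hot path: no VA->PA lookup, just copy and reuse the cached addr */
    static uint64_t
    xx_txbuf_fill(xx_txbuf_t *tb, const mblk_t *mp, size_t len)
    {
            bcopy(mp->b_rptr, tb->tb_kva, len);
            return (tb->tb_devaddr);
    }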