Andrew Gallatin wrote:
Darren Reed wrote:
Min Miles Xu wrote:
...
Hi Jason,

I've been working on buffer management on the rx side, consolidating
the DMA rx buffer pools of all the GLD instances (port/ring). A driver
simply asks the framework for a number of buffers, uses them, and
passes them up to the stack; it is the framework's responsibility to
recycle the buffers when they are returned, so everything is
transparent to the drivers. Another prominent advantage is that
buffers can be shared among instances. New Intel 10G NICs have 128
rings, and the existing approach of allocating buffers per ring wastes
a lot of memory.

I already have a prototype for e1000g and ixgbe, but I need some more
time to run experiments and refine it. Then I will send it out for
review. The code may be integrated for ixgbe first, then applied to
other NIC drivers.
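As an illustration of the recycling idea (a sketch only, not the
actual prototype): rxb_t, rxb_pool_get() and rxb_pool_put() below are
invented names for the proposed framework interface, while
desballoc(9F) and its frtn_t free callback are the standard DDI
mechanism for handing a buffer back when the stack frees the mblk.

#include <sys/types.h>
#include <sys/stream.h>

typedef struct rxb {
	frtn_t	rxb_frtn;	/* free routine passed to desballoc() */
	caddr_t	rxb_kaddr;	/* kernel VA of the DMA-mapped buffer */
	size_t	rxb_size;
	void	*rxb_pool;	/* owning pool, shared by all rings */
} rxb_t;

/* Hypothetical pool interface supplied by the framework. */
extern rxb_t *rxb_pool_get(void *pool, size_t len);
extern void rxb_pool_put(void *pool, rxb_t *rxb);

/* Called by STREAMS when the upper layers free the mblk. */
static void
rxb_recycle(caddr_t arg)
{
	rxb_t *rxb = (rxb_t *)(void *)arg;

	/* The buffer returns to the shared pool, not to one ring. */
	rxb_pool_put(rxb->rxb_pool, rxb);
}

/* Driver rx path: borrow a buffer and wrap it in an mblk. */
static mblk_t *
ring_rx_one(void *pool, size_t len)
{
	rxb_t *rxb = rxb_pool_get(pool, len);
	mblk_t *mp;

	if (rxb == NULL)
		return (NULL);

	rxb->rxb_frtn.free_func = rxb_recycle;
	rxb->rxb_frtn.free_arg = (caddr_t)rxb;

	mp = desballoc((uchar_t *)rxb->rxb_kaddr, len, BPRI_MED,
	    &rxb->rxb_frtn);
	if (mp == NULL)
		rxb_pool_put(pool, rxb);	/* undo on failure */
	return (mp);
}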
Something we need to work on is zero-copy receive, and making it
available for all socket types. In that case the buffers are mapped
into kernel memory space from the application's memory space. At
present we only have a limited form of zero-copy transmit, for TCP on
some special network interfaces.
They're not special; they just set the DL_CAPAB_ZEROCOPY flag, which
is done in GLDv2 by setting GLD_CAP_ZEROCOPY, or in GLDv3 by *not*
setting MAC_CAPAB_NO_ZCOPY.
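To make that concrete, a minimal mc_getcapab(9E) entry point for a
hypothetical xx driver might look like the sketch below (the
capability names and the signature come from the GLDv3 mac framework;
the hcksum case is just there so the switch isn't empty).
MAC_CAPAB_NO_ZCOPY is a negative capability, so declining it is what
enables zero-copy:

#include <sys/mac.h>	/* mac_capab_t, MAC_CAPAB_* */
#include <sys/pattr.h>	/* HCKSUM_INET_PARTIAL */

static boolean_t
xx_m_getcapab(void *arg, mac_capab_t cap, void *cap_data)
{
	switch (cap) {
	case MAC_CAPAB_HCKSUM: {
		/* A capability this driver does claim. */
		uint32_t *flags = cap_data;
		*flags = HCKSUM_INET_PARTIAL;
		return (B_TRUE);
	}
	case MAC_CAPAB_NO_ZCOPY:
		/* Decline "no zero-copy": zcopy mblks are fine. */
		return (B_FALSE);
	default:
		return (B_FALSE);
	}
}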
However, one of the primary uses of zero-copy sends is sendfile(),
and thanks to 6459866, it is faster to copy. For context, see
http://markmail.org/message/oeowmlvfqi3kzttf
Thanks for pointing this out -- we will look into it. However, I
suggest you give sendfile() another try; it has been updated to use
the vpm memory interfaces, which allow larger mappings and are more
efficient.

Rao.
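For anyone rerunning the comparison, a minimal userland loop over
Solaris sendfile(3EXT) is sketched below (socket setup and the stat()
for the file length are left to the caller; link with -lsendfile).
Whether the kernel zero-copies or bcopies underneath is invisible at
this level:

#include <sys/sendfile.h>
#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int
send_whole_file(int sock, const char *path, size_t len)
{
	off_t off = 0;
	int fd = open(path, O_RDONLY);

	if (fd == -1)
		return (-1);
	while (off < (off_t)len) {
		/* sendfile() advances off by the bytes it sent. */
		if (sendfile(sock, fd, &off, len - off) == -1) {
			perror("sendfile");
			break;
		}
	}
	(void) close(fd);
	return (off == (off_t)len ? 0 : -1);
}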