Hello everyone,

my name is Alessandro Rosetti and I'd like some feedback on my work.
I wrote a patch that implements a new netdev that uses the Netmap framework
for fast packet I/O in OVS.
I'm working on this at the University of Pisa. My group develops Netmap and
intends to maintain the support in OVS. (Giuseppe Lettieri:
[email protected], Vincenzo Maffione: [email protected])

This is the diffstat:

 acinclude.m4         |   34 +-
 configure.ac         |    1 +
 lib/automake.mk      |   11 +
 lib/dp-packet.c      |   22 ++
 lib/dp-packet.h      |   13 +
 lib/dpif-netdev.c    |   13 +-
 lib/netdev-netmap.c  | 1014 ++++++++++++++++++++++++++++++++++++++++++++++
 lib/netdev-netmap.h  |   13 +
 lib/netmap-stub.c    |   21 ++
 lib/netmap.c         |   76 ++++
 lib/netmap.h         |   27 ++
 vswitchd/bridge.c    |    2 +
 vswitchd/vswitch.xml |   38 ++
 13 files changed, 1278 insertions(+), 7 deletions(-)

Let me briefly explain some key points of the implementation.

The prototype I've implemented uses PMD threads, like DPDK.
To achieve better performance, the netdev has a netmap-specific dp-packet
allocator that avoids mallocs.
Every netmap dp-packet has source type DPBUF_NETMAP and points to a
Netmap buffer that is not linked to any netmap TX/RX ring.
When receiving packets from netmap, a batch of dp-packets is allocated
from a thread-local netmap dp-packet allocator.
Consumed packets are returned to a thread-local allocator. Thread-local
allocators can exchange dp-packets (in batches) with each other by means of
a global allocator protected by a global lock. In many cases there is no
need to use the global allocator at all.
When we forward netmap packets to another netmap port we can do zero-copy
forwarding by swapping netmap buffer indexes; otherwise a data copy happens.

Initializing the first netmap port requires mmapping some memory, and this
is done only once.
For subsequent netmap ports we reuse this mapping to set up the netmap
descriptor.

Current performance of simple bridging between two physical interfaces,
measured with DPDK pktgen:
ovs-dpdk    [tx/rx] : 14 Mpps   / 12.8 Mpps
ovs-netmap  [tx/rx] : 14.8 Mpps / 9.7  Mpps
I've tested bridging between a pair of veth ports using Netmap's packet
generator, and the performance is still around 10 Mpps on my system.
I've also tested packet output to multiple ports, multiple senders to a
single port, and simple flow mods that output to another port on the basis
of the UDP port value.

Thanks in advance for your response,
Alessandro.
_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
