This RFC introduces an approach for implementing jumbo frame support for
OvS-DPDK with multi-segment mbufs.
== Overview ==
Currently, jumbo frame support for OvS-DPDK is implemented by increasing
the size of mbufs within a mempool, such that each mbuf within the pool is
large enough to contain an entire jumbo frame of a user-defined size.
Typically, for each user-defined MTU 'requested_mtu', a new mempool is created,
containing mbufs of size ~requested_mtu.
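By way of illustration, a per-MTU pool along these lines could be created with
DPDK's rte_pktmbuf_pool_create(); the sizing arithmetic, pool parameters and
function name below are illustrative rather than the exact OvS-DPDK code:

    #include <rte_ether.h>
    #include <rte_mbuf.h>

    /* Illustrative only: one mempool per requested MTU, with each mbuf
     * large enough to hold a full frame of that size in a single segment. */
    static struct rte_mempool *
    jumbo_pool_create(const char *name, uint16_t requested_mtu, int socket_id)
    {
        /* Room for the payload plus Ethernet overhead and mbuf headroom. */
        uint16_t data_room = requested_mtu + ETHER_HDR_LEN + ETHER_CRC_LEN
                             + RTE_PKTMBUF_HEADROOM;

        return rte_pktmbuf_pool_create(name, 4096 /* nb_mbufs */,
                                       256 /* per-core cache */, 0 /* priv */,
                                       data_room, socket_id);
    }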
With the multi-segment approach, all ports share the same mempool, in which
each mbuf is of standard/default size (~2 KB). To accommodate jumbo frames,
mbufs may be chained together, each mbuf storing a portion of the jumbo frame;
each mbuf in the chain is termed a segment, hence the name.
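As a rough sketch of the mechanics (not the patch itself), a jumbo frame could
be assembled from standard-sized mbufs using DPDK's rte_pktmbuf_chain();
build_jumbo() and its parameters are hypothetical names for illustration:

    #include <string.h>
    #include <rte_common.h>
    #include <rte_mbuf.h>

    /* Illustrative only: assemble a jumbo frame from standard-sized (~2 KB)
     * mbufs drawn from a shared pool, chaining them into segments. */
    static struct rte_mbuf *
    build_jumbo(struct rte_mempool *mp, const char *frame, uint32_t frame_len)
    {
        struct rte_mbuf *head = NULL;
        uint32_t off = 0;

        while (off < frame_len) {
            struct rte_mbuf *seg = rte_pktmbuf_alloc(mp);
            if (!seg) {
                rte_pktmbuf_free(head);   /* frees the whole chain */
                return NULL;
            }

            /* Copy as much of the frame as fits in this segment. */
            uint32_t chunk = RTE_MIN(frame_len - off,
                                     (uint32_t)rte_pktmbuf_tailroom(seg));
            memcpy(rte_pktmbuf_append(seg, chunk), frame + off, chunk);
            off += chunk;

            if (!head) {
                head = seg;
            } else if (rte_pktmbuf_chain(head, seg) < 0) {
                /* Too many segments for one chain. */
                rte_pktmbuf_free(seg);
                rte_pktmbuf_free(head);
                return NULL;
            }
        }
        return head;
    }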
== Enabling multi-segment mbufs ==
Multi-segment and single-segment mbufs are mutually exclusive, and the user
must decide which approach to adopt at init time. The introduction of a new
optional OVSDB field, 'dpdk-multi-seg-mbufs', facilitates this; this is a
boolean field, which defaults to false. Setting the field is identical to
setting existing DPDK-specific OVSDB fields:
    sudo $OVS_DIR/utilities/ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
    sudo $OVS_DIR/utilities/ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x10
    sudo $OVS_DIR/utilities/ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=4096,0
==> sudo $OVS_DIR/utilities/ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-multi-seg-mbufs=true
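Internally, the new field would presumably be read once during DPDK
initialisation; a minimal sketch, assuming OVS's existing smap_get_bool()
helper (the function and variable names below are illustrative):

    #include "smap.h"

    /* Illustrative only: read the new boolean from other_config at init
     * time, defaulting to the existing single-segment behaviour. */
    static bool dpdk_multi_seg_mbufs = false;

    static void
    read_multi_seg_config(const struct smap *ovs_other_config)
    {
        dpdk_multi_seg_mbufs = smap_get_bool(ovs_other_config,
                                             "dpdk-multi-seg-mbufs", false);
    }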
== Code base ==
This patch is dependent on the multi-segment mbuf patch submitted by Michael
Qiu (currently V2):
https://mail.openvswitch.org/pipermail/ovs-dev/2017-May/331792.html.
The upstream base commit against which this patch was generated is 1e96502;
to test this patch, check out that commit, apply Michael's patchset, and then
apply this patch:
3. netdev-dpdk: enable multi-segment jumbo frames
2. DPDK multi-segment mbuf support (Michael Qiu)
1. 1e96502 tests: Only run python SSL test if SSL support is configur...
(OvS upstream)
The DPDK version used during testing was v17.02, although v16.11 should work
equally well.
== Testing ==
As this is an RFC, only a subset of the total traffic paths/vSwitch
configurations/actions have been tested - a summary of traffic paths tested
thus far is included below. The action tested in all cases is OUTPUT. Tests
in which issues were observed are summarized beneath the table.
+---------------------------------------------------------------------------------+
| Traffic Path                                                                    |
+---------------------------------------------------------------------------------+
| DPDK Phy 0 -> OvS -> DPDK Phy 1                                                 |
| DPDK Phy 0 -> OvS -> Kernel Phy 0                                           [1] |
| Kernel Phy 0 -> OvS -> DPDK Phy 0                                               |
|                                                                                 |
| DPDK Phy 0 -> OvS -> vHost User 0 -> vHost User 1 -> OvS -> DPDK Phy 1 *       |
| DPDK Phy 0 -> OvS -> vHost User 0 -> vHost User 1 -> OvS -> Kernel Phy 0 *  [1] |
| Kernel Phy 0 -> OvS -> vHost User 1 -> vHost User 0 -> OvS -> DPDK Phy 0 *  [2] |
|                                                                                 |
| vHost0 -> OvS -> vHost1                                                         |
+---------------------------------------------------------------------------------+
* = guest kernel IP forwarding
[1] = incorrect L4 checksum
[2] = traffic not forwarded in guest kernel. This behaviour is also observed on
OvS master.