>From: Darrell Ball <[email protected]>
>Sent: Thursday, June 1, 2017 1:14 AM
>To: Kavanagh, Mark B <[email protected]>; Chandran, Sugesh <[email protected]>; [email protected]; [email protected]
>Subject: Re: [ovs-dev] [RFC PATCH v2 0/1] netdev-dpdk: multi-segment mbuf jumbo frame support
>
>
>
>On 5/31/17, 6:38 AM, "Kavanagh, Mark B" <[email protected]> wrote:
>
>    >From: Chandran, Sugesh
>    >Sent: Friday, May 26, 2017 8:06 PM
>    >To: Kavanagh, Mark B <[email protected]>; [email protected]; [email protected]
>    >Subject: RE: [ovs-dev] [RFC PATCH v2 0/1] netdev-dpdk: multi-segment mbuf jumbo frame support
>    >
>    >Hi Mark,
>    >
>    >
>    >Thank you for working on this!
>    >For some reason, it was failing for me while trying to apply the first set
>    >of patches from Michael.
>    >I tried the latest patches from patchwork.
>
>    Thanks Sugesh - I've relayed this back to Michael, and asked him to rebase his patchset.
>
>    Responses to your comments are inline - please let me know if you have any other questions.
>
>    Thanks,
>    Mark
>
>    >
>    >Here are a few high-level comments below.
>    >
>    >
>    >Regards
>    >_Sugesh
>    >
>    >
>    >> -----Original Message-----
>    >> From: [email protected] On Behalf Of Mark Kavanagh
>    >> Sent: Monday, May 15, 2017 11:17 AM
>    >> To: [email protected]; [email protected]
>    >> Subject: [ovs-dev] [RFC PATCH v2 0/1] netdev-dpdk: multi-segment mbuf
>    >> jumbo frame support
>    >>
>    >> This RFC introduces an approach for implementing jumbo frame support for
>    >> OvS-DPDK with multi-segment mbufs.
>    >>
>    >> == Overview ==
>    >> Currently, jumbo frame support for OvS-DPDK is implemented by increasing
>    >> the size of mbufs within a mempool, such that each mbuf within the pool
>    >> is large enough to contain an entire jumbo frame of a user-defined size.
>    >> Typically, for each user-defined MTU 'requested_mtu', a new mempool is
>    >> created, containing mbufs of size ~requested_mtu.
>    >>
>    >> With the multi-segment approach, all ports share the same mempool, in
>    >> which each mbuf is of standard/default size (~2KB). To accommodate jumbo
>    >> frames, mbufs may be chained together, with each mbuf storing a portion
>    >> of the jumbo frame; each mbuf in the chain is termed a segment, hence
>    >> the name.
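>    >>
>    >> As a rough illustration of chaining at the DPDK level (a sketch only,
>    >> not part of this patch; 'mp' stands in for the shared mempool described
>    >> above, and the helper name is made up):
>    >>
>    >>     #include <rte_mbuf.h>
>    >>
>    >>     /* Build a two-segment packet from default-sized (~2KB) mbufs. */
>    >>     static struct rte_mbuf *
>    >>     make_two_seg_pkt(struct rte_mempool *mp)
>    >>     {
>    >>         struct rte_mbuf *m1 = rte_pktmbuf_alloc(mp);
>    >>         struct rte_mbuf *m2 = rte_pktmbuf_alloc(mp);
>    >>
>    >>         if (!m1 || !m2) {
>    >>             rte_pktmbuf_free(m1);
>    >>             rte_pktmbuf_free(m2);
>    >>             return NULL;
>    >>         }
>    >>
>    >>         /* Claim payload space in each segment; both amounts fit within
>    >>          * the default ~2KB data room. */
>    >>         rte_pktmbuf_append(m1, 2000);
>    >>         rte_pktmbuf_append(m2, 1000);
>    >>
>    >>         /* Link the segments: m1 becomes the chain head, and its
>    >>          * pkt_len/nb_segs are updated to cover both segments. */
>    >>         if (rte_pktmbuf_chain(m1, m2) != 0) {
>    >>             rte_pktmbuf_free(m1);
>    >>             rte_pktmbuf_free(m2);
>    >>             return NULL;
>    >>         }
>    >>         return m1;   /* nb_segs == 2, pkt_len == 3000 */
>    >>     }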
>    >>
>    >>
>    >> == Enabling multi-segment mbufs ==
>    >> Multi-segment and single-segment mbufs are mutually exclusive, and the
>    >> user must decide which approach to adopt on init. The introduction of a
>    >> new optional OVSDB field, 'dpdk-multi-seg-mbufs', facilitates this; this
>    >> is a boolean field, which defaults to false. Setting the field is
>    >> identical to setting existing DPDK-specific OVSDB fields:
>    >>
>    >>     sudo $OVS_DIR/utilities/ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
>    >>     sudo $OVS_DIR/utilities/ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x10
>    >>     sudo $OVS_DIR/utilities/ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=4096,0
>    >> ==> sudo $OVS_DIR/utilities/ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-multi-seg-mbufs=true
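>    >>
>    >> As an illustrative aside (not part of the patch), the setting can be
>    >> read back with 'ovs-vsctl get':
>    >>
>    >>     sudo $OVS_DIR/utilities/ovs-vsctl get Open_vSwitch . other_config:dpdk-multi-seg-mbufs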
>    >>
>    >[Sugesh] Maybe I am missing something here. Why do we need a configuration
>    >option to enable multi-segment? If the MTU is larger than the mbuf size, it
>    >will automatically create chained mbufs.
>
>    True; however, in order to allow jumbo frames to traverse a given port,
>that port's MTU needs to be increased. As it currently stands, when the user
>specifies a larger-than-standard MTU for a DPDK port (say 9000B), the size of
>each mbuf in that port's mempool is increased to accommodate a packet of that
>size. In that case, since the 9000B packet can fit into a single mbuf, there
>is no need to use multi-segment mbufs. So, this implementation offers the
>user the flexibility to choose how they would like jumbo frames to be
>represented in OvS-DPDK.
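>
>To illustrate the contrast, here is a rough sketch of my own (not code from
>the patch; pool names, counts, and socket id are arbitrary):
>
>    #include <rte_mbuf.h>
>
>    /* Single-segment jumbo frames: the port's pool is sized from the
>     * requested MTU, so one mbuf holds an entire 9000B frame. */
>    struct rte_mempool *jumbo_mp =
>        rte_pktmbuf_pool_create("jumbo_mp", 8192, 256, 0,
>                                9000 + RTE_PKTMBUF_HEADROOM, 0);
>
>    /* Multi-segment: all ports share one pool of default ~2KB mbufs,
>     * and jumbo frames span a chain of them. */
>    struct rte_mempool *shared_mp =
>        rte_pktmbuf_pool_create("shared_mp", 8192, 256, 0,
>                                RTE_MBUF_DEFAULT_BUF_SIZE, 0);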
>    >Otherwise it uses the normal single mbufs. We will keep it enabled by
>    >default.
>
>Single buffer by default makes sense

Yes, that is the default mode, and is implicit in the absence of
'other_config:dpdk-multi-seg-mbufs=true'.

>
>
>    >Are you going to support jumbo frames with larger mbufs (when
>    >dpdk-multi-seg-mbufs=false) and also with chained mbufs (when
>    >dpdk-multi-seg-mbufs=true)?
>
>    Yes. Chained mbufs may be advantageous on low-memory systems, where the
>amount of contiguous memory required for a single-segment jumbo frame mempool
>is an issue. Furthermore, intuitively, single-segment jumbo frames may
>out-perform their multi-segment counterparts on account of the increased
>data-to-overhead ratio that they provide.
>
>
>Yes, though not just because of buffer overhead.

Yup - the reduced number of memcpy and rte_pktmbuf_alloc operations is also a
factor in the improved performance offered by single-segment jumbo frames.
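
For illustration, here is a rough sketch of my own (not code from the patch)
of the kind of per-segment hop that any byte-wise pass over a chained packet
incurs, e.g. when computing a checksum in software:

    #include <stdint.h>
    #include <rte_mbuf.h>

    /* Walk every byte of a (possibly multi-segment) packet. For a
     * single-segment jumbo frame, the outer loop runs exactly once. */
    static uint32_t
    sum_pkt_bytes(const struct rte_mbuf *m)
    {
        uint32_t sum = 0;
        const struct rte_mbuf *seg;

        for (seg = m; seg != NULL; seg = seg->next) {
            const uint8_t *p = rte_pktmbuf_mtod(seg, const uint8_t *);
            uint16_t i;

            for (i = 0; i < seg->data_len; i++) {
                sum += p[i];
            }
        }
        return sum;
    }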

>
>
>    >
>    >
>    >>
>    >> == Code base ==
>    >> This patch is dependent on the multi-segment mbuf patch submitted by
>    >> Michael Qiu (currently V2):
>    >> https://mail.openvswitch.org/pipermail/ovs-dev/2017-May/331792.html.
>    >> The upstream base commit against which this patch was generated is
>    >> 1e96502; to test this patch, check out that commit, apply Michael's
>    >> patchset, and then apply this patch:
>    >>
>    >>     3.  netdev-dpdk: enable multi-segment jumbo frames
>    >>     2.  DPDK multi-segment mbuf support (Michael Qiu)
>    >>     1.  1e96502 tests: Only run python SSL test if SSL support is configur... (OvS upstream)
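>    >>
>    >> Concretely, that sequence might look like the following (the branch
>    >> name and mbox paths are placeholders):
>    >>
>    >>     git checkout -b multi-seg-rfc 1e96502
>    >>     git am <michaels-multi-seg-patchset>.mbox
>    >>     git am <this-rfc-patch>.mbox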
>    >>
>    >> The DPDK version used during testing is v17.02, although v16.11 should
>    >> work equally well.
>    >>
>    >>
>    >> == Testing ==
>    >> As this is an RFC, only a subset of the total traffic paths/vSwitch
>    >> configurations/actions has been tested - a summary of the traffic paths
>    >> tested thus far is included below. The action tested in all cases is
>    >> OUTPUT. Tests in which issues were observed are summarized beneath the
>    >> table.
>    >>
>    >> +----------------------------------------------------------------------------------+
>    >> |  Traffic Path                                                                     |
>    >> +----------------------------------------------------------------------------------+
>    >> | DPDK Phy 0   -> OvS -> DPDK Phy 1                                                 |
>    >> | DPDK Phy 0   -> OvS -> Kernel Phy 0                                           [1] |
>    >> | Kernel Phy 0 -> OvS -> DPDK Phy 0                                                 |
>    >> |                                                                                   |
>    >> | DPDK Phy 0   -> OvS -> vHost User 0 -> vHost User 1 -> OvS -> DPDK Phy 1 *       |
>    >> | DPDK Phy 0   -> OvS -> vHost User 0 -> vHost User 1 -> OvS -> Kernel Phy 0 * [1] |
>    >> | Kernel Phy 0 -> OvS -> vHost User 1 -> vHost User 0 -> OvS -> DPDK Phy 0 *   [2] |
>    >> |                                                                                   |
>    >> | vHost0       -> OvS -> vHost1                                                     |
>    >> +----------------------------------------------------------------------------------+
>    >>
>    >>   * = guest kernel IP forwarding
>    >> [1] = incorrect L4 checksum
>    >> [2] = traffic not forwarded in guest kernel. This behaviour is also
>    >> observed on OvS master.
>    >>
>

_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
