On 06/07/2018 09:51, Eelco Chaudron wrote:
>
>
> On 5 Jul 2018, at 18:29, Tiago Lam wrote:
>
> [snip]
>
>> Performance notes (based on v8)
>> ===============================
>> In order to test for regressions in performance, tests were run on top
>> of master 88125d6 and v8 of this patchset, both with the multi-segment
>> mbufs option enabled and disabled.
>>
>> VSperf was used to run the phy2phy_cont and pvp_cont tests with varying
>> packet sizes of 64B, 1500B and 7000B, on a 10Gbps interface.
>>
>> Test | Size | Master | Multi-seg disabled | Multi-seg enabled
>> -------------------------------------------------------------
>> p2p  |   64 | ~22.7  | ~22.65             | ~18.3
>> p2p  | 1500 |  ~1.6  |  ~1.6              |  ~1.6
>> p2p  | 7000 | ~0.36  | ~0.36              | ~0.36
>> pvp  |   64 |  ~6.7  |  ~6.7              |  ~6.3
>> pvp  | 1500 |  ~1.6  |  ~1.6              |  ~1.6
>> pvp  | 7000 | ~0.36  | ~0.36              | ~0.36
>>
>> Packet sizes are in bytes, while all packet rates are reported in mpps
>> (aggregated).
>>
>> No noticeable regression has been observed (certainly everything is
>> within the ±5% margin of existing performance), aside from the 64B
>> packet size case when multi-segment mbufs are enabled. This is
>> expected, however, because vectorized Tx functions are incompatible
>> with multi-segment mbufs on some PMDs. The PMD in use during these
>> tests was the i40e (on an Intel X710 NIC), which indeed doesn't
>> support vectorized Tx functions with multi-segment mbufs.
>>
> Thanks for all the work, Tiago! It all looks good to me, so hereby I
> would like to ack the series.
>
> Cheers,
>
> Eelco
>
> Acked-by: Eelco Chaudron <[email protected]>
>
> <SNIP>
>

And thanks again for putting in the time to review and confirm the
results, Eelco.

A side note on the series: it will need rebasing once Ian's work on
"dpdk: Support both shared and per port mempools" goes in.

Tiago.

_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
