Hi Bhanu,
Went over the full patch set, and the changes look good to me.
All my previous concerns have been addressed, so I'm acking this series.
I do have one small remark regarding the dpdk_tx_queue struct; see the
individual patch email.
Here are some numbers with this patch set on a non-tuned system, single run.
This is just to make sure we still benefit with both patches applied.
Throughput for the PV scenario, with 64-byte packets:

Number
flows    MASTER      With PATCH
======   =========   ==========
10       4,531,424   7,884,607
32       3,137,300   6,367,643
50       2,552,725   6,649,985
100      2,473,835   5,876,677
500      2,308,840   5,265,986
1000     2,380,755   5,001,081
Throughput for the PVP scenario, with 64-byte packets:

Number
flows    MASTER      With PATCH
======   =========   ==========
10       2,309,254   3,800,747
32       1,626,380   3,324,561
50       1,538,879   3,092,792
100      1,429,028   2,887,488
500      1,271,773   2,537,624
1000     1,268,430   2,442,405
Latency test

MASTER
======
Pkt size   min(ns)   avg(ns)   max(ns)
512        9,947     12,381    264,131
1024       7,662     9,445     194,463
1280       7,790     9,115     196,059
1518       8,103     9,599     197,646

PATCH
=====
Pkt size   min(ns)   avg(ns)   max(ns)
512        10,195    12,551    199,699
1024       7,838     9,612     206,378
1280       8,151     9,575     187,848
1518       8,095     9,643     198,552
Throughput for the PP scenario, with 64-byte packets:

Number
flows    MASTER      With PATCH
======   =========   ==========
10       7,430,616   8,853,037
32       4,770,190   6,774,006
50       4,736,259   7,336,776
100      4,699,237   6,146,151
500      3,870,019   5,242,781
1000     3,853,883   5,121,911
Latency test

MASTER
======
Pkt size   min(ns)   avg(ns)   max(ns)
512        4,887     5,596     165,246
1024       5,801     6,447     170,842
1280       6,355     7,056     159,056
1518       6,860     7,634     160,860

PATCH
=====
Pkt size   min(ns)   avg(ns)   max(ns)
512        4,783     5,521     158,134
1024       5,801     6,359     170,859
1280       6,315     6,878     150,301
1518       6,579     7,398     143,068
Acked-by: Eelco Chaudron <[email protected]>
On 07/06/17 11:20, Bhanuprakash Bodireddy wrote:
After packet classification, packets are queued into batches depending
on the matching netdev flow. Each batch is then processed to execute the
related actions. This becomes particularly inefficient when there are
only a few packets in each batch, because every rte_eth_tx_burst() call
incurs an expensive MMIO doorbell write.
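For illustration, here is a minimal sketch of that baseline send path
(invented names, not the OVS source); the MMIO cost is paid per
rte_eth_tx_burst() call, regardless of how few packets the batch holds:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Illustrative only: each classified batch is sent immediately, so
     * a batch of one packet still pays for a full rte_eth_tx_burst()
     * call and its doorbell MMIO write. */
    static void
    send_batch_immediately(uint16_t port_id, uint16_t queue_id,
                           struct rte_mbuf **pkts, uint16_t cnt)
    {
        uint16_t sent = rte_eth_tx_burst(port_id, queue_id, pkts, cnt);

        while (sent < cnt) {       /* free what the NIC did not accept */
            rte_pktmbuf_free(pkts[sent++]);
        }
    }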
This patch series implements an intermediate queue for DPDK and vHost User
ports. Packets are queued and burst once the packet count exceeds a
threshold. Drain logic is also implemented to handle cases where packets
could otherwise get stuck in the tx queues under low-rate traffic
conditions. Care has been taken to keep latency well within acceptable
limits. Testing shows significant performance gains with this
implementation.
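Conceptually, the mechanism resembles the sketch below: a simplified
illustration with invented names (TXQ_BURST_THRESHOLD, struct txq_iqueue,
and so on), not code lifted from the series. Packets accumulate in a
per-txq buffer, the doorbell is rung once per full burst, and the drain
entry point flushes leftovers at low rates:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define TXQ_BURST_THRESHOLD 32        /* assumed flush threshold */

    /* Per-txq intermediate queue; all names here are invented for the
     * sketch and are not taken from the patch. */
    struct txq_iqueue {
        struct rte_mbuf *pkts[TXQ_BURST_THRESHOLD];
        uint16_t count;
    };

    static void
    txq_flush(uint16_t port_id, uint16_t queue_id, struct txq_iqueue *q)
    {
        uint16_t sent = rte_eth_tx_burst(port_id, queue_id, q->pkts,
                                         q->count);

        while (sent < q->count) {         /* free what the NIC refused */
            rte_pktmbuf_free(q->pkts[sent++]);
        }
        q->count = 0;
    }

    static void
    txq_enqueue(uint16_t port_id, uint16_t queue_id,
                struct txq_iqueue *q, struct rte_mbuf *pkt)
    {
        q->pkts[q->count++] = pkt;
        if (q->count >= TXQ_BURST_THRESHOLD) {
            txq_flush(port_id, queue_id, q);  /* one doorbell per burst */
        }
    }

    /* Called periodically from the PMD thread so packets never sit in
     * the queue indefinitely under low-rate traffic. */
    static void
    txq_drain(uint16_t port_id, uint16_t queue_id, struct txq_iqueue *q)
    {
        if (q->count) {
            txq_flush(port_id, queue_id, q);
        }
    }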
This patch series combines the two earlier patches posted below.
DPDK patch:
https://mail.openvswitch.org/pipermail/ovs-dev/2017-April/331039.html
vHost User patch:
https://mail.openvswitch.org/pipermail/ovs-dev/2017-May/332271.html
This series also proposes to disable the retries on vHost User ports and
to make them configurable via ovsdb (controversial?).
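Assuming this lands as proposed, disabling the retries could look
something like the command below. The key name is an assumption based on
the "vhost-enqueue-retry" option named in patch 8; the updated
Documentation/howto/dpdk.rst would be authoritative for the final syntax:

    $ ovs-vsctl set Open_vSwitch . other_config:vhost-enqueue-retry=0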
Performance numbers with the intermediate queue:
DPDK ports
==========

Throughput for the P2P scenario, with two 82599ES 10G ports and 64-byte
packets:

Number
flows    MASTER     With PATCH
======   ========   ==========
10       10727283   13393844
32       7042253    11228799
50       7515491    9607791
100      5838699    9430730
500      5285066    7845807
1000     5226477    7135601
Latency test

MASTER
======
Pkt size   min(ns)   avg(ns)   max(ns)
512        4,631     5,022     309,914
1024       5,545     5,749     104,294
1280       5,978     6,159     45,306
1518       6,419     6,774     946,850

PATCH
=====
Pkt size   min(ns)   avg(ns)   max(ns)
512        4,711     5,064     182,477
1024       5,601     5,888     701,654
1280       6,018     6,491     533,037
1518       6,467     6,734     312,471
vHost User ports
================

Throughput for the PV scenario, with 64-byte packets:

Number
flows    MASTER    With PATCH
======   =======   ==========
10       5945899   7833914
32       3872211   6530133
50       3283713   6618711
100      3132540   5857226
500      2964499   5273006
1000     2931952   5178038
Latency test

MASTER
======
Pkt size   min(ns)   avg(ns)   max(ns)
512        10,011    12,100    281,915
1024       7,870     9,313     193,116
1280       7,862     9,036     194,439
1518       8,215     9,417     204,782

PATCH
=====
Pkt size   min(ns)   avg(ns)   max(ns)
512        10,492    13,655    281,538
1024       8,407     9,784     205,095
1280       8,399     9,750     194,888
1518       8,367     9,722     196,973
Performance numbers reported by Eelco Chaudron <[email protected]> at:
https://mail.openvswitch.org/pipermail/ovs-dev/2017-April/331039.html
https://mail.openvswitch.org/pipermail/ovs-dev/2017-May/332271.html
Bhanuprakash Bodireddy (8):
netdev: Add netdev_txq_drain function.
netdev-dpdk: Add netdev_dpdk_txq_drain function.
netdev-dpdk: Add intermediate queue support.
dpif-netdev: Drain the packets in intermediate queue.
netdev-dpdk: Add netdev_dpdk_vhost_txq_drain function.
netdev-dpdk: Enable intermediate queue for vHost User port.
netdev-dpdk: Configurable retries while enqueuing to vHost User ports.
Documentation: Update DPDK doc with vhost-enqueue-retry option.
 Documentation/howto/dpdk.rst |  17 ++++
 lib/dpdk.c                   |  10 +++
 lib/dpdk.h                   |   1 +
 lib/dpif-netdev.c            |  49 ++++++++++-
 lib/netdev-bsd.c             |   1 +
 lib/netdev-dpdk.c            | 190 ++++++++++++++++++++++++++++++++++++-------
 lib/netdev-dummy.c           |   1 +
 lib/netdev-linux.c           |   1 +
 lib/netdev-provider.h        |   8 ++
 lib/netdev-vport.c           |   3 +-
 lib/netdev.c                 |   9 ++
 lib/netdev.h                 |   1 +
 vswitchd/vswitch.xml         |  12 +++
 13 files changed, 271 insertions(+), 32 deletions(-)
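As a reading aid: the drain entry point from patch 1 presumably threads
through the netdev provider layer roughly as in this sketch. The member
name, signature, and wrapper are assumptions inferred from the patch
titles and the diffstat above, not copied from the series:

    /* Hypothetical shape of the hook added to lib/netdev-provider.h;
     * the real member name and signature may differ. */
    struct netdev;

    struct netdev_class {
        /* ... existing provider callbacks ... */

        /* Flush any packets buffered in tx queue 'qid'.  Presumably a
         * NULL or stub hook for netdev types without an intermediate
         * queue (netdev-bsd, netdev-dummy, and netdev-linux each gain
         * one line in the diffstat). */
        int (*txq_drain)(struct netdev *netdev, int qid);
    };

    struct netdev {
        const struct netdev_class *netdev_class;
        /* ... */
    };

    /* Wrapper in lib/netdev.c, called from the dpif-netdev PMD loop. */
    int
    netdev_txq_drain(struct netdev *netdev, int qid)
    {
        const struct netdev_class *class = netdev->netdev_class;

        return class->txq_drain ? class->txq_drain(netdev, qid) : 0;
    }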