Re: [ovs-dev] [PATCH 0/8] netdev-dpdk: Use intermediate queue during packet transmission.

2017-06-13 Thread Bodireddy, Bhanuprakash
Hi Eelco,

>Hi Bhanu,
>
>Went over the full patch set, and the changes look good to me.
>All my previous concerns are addressed, and therefore I'm acking this series.

Thanks for reviewing the series and acking it.

>
>I do have one small remark regarding the dpdk_tx_queue struct, see
>individual patch email.

I agree with what you suggested.
I have to send out a v2 anyway, as Ben suggested renaming the API from
netdev_txq_drain() to netdev_txq_flush(). I will factor your suggestion
into v2.

>
>[Benchmark numbers and latency tables snipped; quoted in full in Eelco's
>original mail below.]
>
>Acked-by: Eelco Chaudron 

Thanks for your time in testing and sharing the numbers here.

Bhanuprakash.


Re: [ovs-dev] [PATCH 0/8] netdev-dpdk: Use intermediate queue during packet transmission.

2017-06-13 Thread Eelco Chaudron

Hi Bhanu,

Went over the full patch set, and the changes look good to me.
All my previous concerns are addressed, and therefore I'm acking this 
series.


I do have one small remark regarding the dpdk_tx_queue struct; see the
individual patch email.

Here are some numbers with this patch on a non-tuned system, single run.
This is just to make sure we still benefit with both patches applied.

Throughput for PV scenario, with 64 byte packets

Number
flows      MASTER       With PATCH
=====      =========    ==========
  10       4,531,424     7,884,607
  32       3,137,300     6,367,643
  50       2,552,725     6,649,985
 100       2,473,835     5,876,677
 500       2,308,840     5,265,986
1000       2,380,755     5,001,081


Throughput for PVP scenario, with 64 byte packets

Number
flows      MASTER       With PATCH
=====      =========    ==========
  10       2,309,254     3,800,747
  32       1,626,380     3,324,561
  50       1,538,879     3,092,792
 100       1,429,028     2,887,488
 500       1,271,773     2,537,624
1000       1,268,430     2,442,405

Latency test

 MASTER
 ======
 Pkt size  min(ns)  avg(ns)  max(ns)
  512       9,947   12,381   264,131
 1024       7,662    9,445   194,463
 1280       7,790    9,115   196,059
 1518       8,103    9,599   197,646

 PATCH
 =====
 Pkt size  min(ns)  avg(ns)  max(ns)
  512      10,195   12,551   199,699
 1024       7,838    9,612   206,378
 1280       8,151    9,575   187,848
 1518       8,095    9,643   198,552


Throughput for PP scenario, with 64 byte packets:

Number
flows      MASTER       With PATCH
=====      =========    ==========
  10       7,430,616     8,853,037
  32       4,770,190     6,774,006
  50       4,736,259     7,336,776
 100       4,699,237     6,146,151
 500       3,870,019     5,242,781
1000       3,853,883     5,121,911


Latency test

 MASTER
 ======
 Pkt size  min(ns)  avg(ns)  max(ns)
  512       4,887    5,596   165,246
 1024       5,801    6,447   170,842
 1280       6,355    7,056   159,056
 1518       6,860    7,634   160,860

 PATCH
 =====
 Pkt size  min(ns)  avg(ns)  max(ns)
  512       4,783    5,521   158,134
 1024       5,801    6,359   170,859
 1280       6,315    6,878   150,301
 1518       6,579    7,398   143,068


Acked-by: Eelco Chaudron 



On 07/06/17 11:20, Bhanuprakash Bodireddy wrote:

After packet classification, packets are queued into batches depending
on the matching netdev flow. Each batch is then processed to execute the
related actions. This becomes particularly inefficient if there are only a
few packets in each batch, as rte_eth_tx_burst() incurs expensive MMIO
writes.

This patch series implements an intermediate queue for DPDK and vHost User
ports. Packets are queued and burst out when the packet count exceeds a
threshold. Drain logic is also implemented to handle cases where packets can
get stuck in the tx queues under low-rate traffic conditions. Care has been
taken to keep latency well within acceptable limits. Testing shows
significant performance gains with this implementation; a minimal sketch of
the queueing idea is shown below.
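
To make the mechanism concrete, here is a minimal sketch of the idea in C,
assuming DPDK headers are available. The txq_cache struct, the function
names, and the threshold value are illustrative assumptions rather than the
actual code in the patches; rte_eth_tx_burst() and rte_pktmbuf_free() are
the real DPDK calls.

    /* Illustrative sketch only: an intermediate tx queue that amortizes
     * the expensive MMIO doorbell write in rte_eth_tx_burst() over a
     * whole batch of packets. */
    #include <stdint.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define TXQ_BURST_THRESHOLD 32      /* Assumed threshold. */

    struct txq_cache {
        struct rte_mbuf *pkts[TXQ_BURST_THRESHOLD];
        int count;                      /* Packets currently queued. */
    };

    /* Burst out everything queued so far.  Called when the threshold is
     * reached, and periodically (the drain/flush logic) so that packets
     * cannot get stuck in the queue under low-rate traffic. */
    static void
    txq_cache_flush(struct txq_cache *q, uint16_t port_id, uint16_t queue_id)
    {
        int total = 0;

        while (total < q->count) {
            uint16_t sent = rte_eth_tx_burst(port_id, queue_id,
                                             &q->pkts[total],
                                             q->count - total);
            if (!sent) {
                break;                  /* TX ring full: drop the rest. */
            }
            total += sent;
        }
        while (total < q->count) {
            rte_pktmbuf_free(q->pkts[total++]);
        }
        q->count = 0;
    }

    /* Queue a packet instead of bursting it immediately. */
    static void
    txq_cache_enqueue(struct txq_cache *q, uint16_t port_id,
                      uint16_t queue_id, struct rte_mbuf *pkt)
    {
        q->pkts[q->count++] = pkt;
        if (q->count >= TXQ_BURST_THRESHOLD) {
            txq_cache_flush(q, port_id, queue_id);
        }
    }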

This patch series combines the two earlier patches posted below:
   DPDK patch: 
https://mail.openvswitch.org/pipermail/ovs-dev/2017-April/331039.html
   vHost User patch: 
https://mail.openvswitch.org/pipermail/ovs-dev/2017-May/332271.html

This series also proposes to disable the retries on vHost User ports and
make the behavior configurable via ovsdb (controversial?); a rough
illustration follows below.
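
As a rough illustration of how such a knob could be wired up (the
"vhost-tx-retries" option key below is a hypothetical placeholder, not a
name from this series), per-interface options from the ovsdb Interface
record are typically read with OVS's smap_get_int():

    /* Hypothetical wiring: the option key name is illustrative only. */
    #include "smap.h"   /* OVS lib/smap.h */

    static int
    vhost_tx_retries_from_cfg(const struct smap *args)
    {
        /* Default to 0 retries, matching the proposal to disable them. */
        return smap_get_int(args, "vhost-tx-retries", 0);
    }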

Performance Numbers with intermediate queue:

   DPDK ports
   ==========

   Throughput for P2P scenario, for two 82599ES 10G ports with 64 byte packets

   Number
   flows      MASTER      With PATCH
   =====      ========    ==========
     10       10727283      13393844
     32        7042253      11228799
     50        7515491       9607791
    100        5838699       9430730
    500        5285066       7845807
   1000        5226477       7135601

Latency test

MASTER
======
Pkt size  min(ns)  avg(ns)  max(ns)
 512       4,631    5,022   309,914
1024       5,545    5,749   104,294
1280       5,978    6,159    45,306
1518       6,419    6,774   946,850

PATCH
=====
Pkt size  min(ns)  avg(ns)  max(ns)
 512       4,711    5,064   182,477
1024       5,601    5,888   701,654
1280       6,018    6,491   533,037
1518       6,467    6,734   312,471

   vHost User ports
   ================

   Throughput for PV scenario, with 64 byte packets

   Number
   flows      MASTER     With PATCH
   =====      =======    ==========
     10       5945899       7833914
     32       3872211       6530133
     50       3283713       6618711
    100       3132540       5857226
    500       2964499       5273006
   1000       2931952       5178038

   Latency test

   MASTER
   ======
   Pkt size  min(ns)  avg(ns)  max(ns)
    512      10,011   12,100   281,915
   1024       7,870    9,313   193,116
   1280       7,862    9,036   194,439
   1518       8,215    9,417