Hi,

I haven't tested this patch yet. Bhanu, could you please describe
your test scenario and performance results in more detail?
It would be nice if you could provide throughput and latency
measurements for different scenarios and packet sizes. Latency is
important here.

About the patch itself:

  1. 'drain' is called only for PMD threads, so queued packets on
     non-PMD ports are never flushed and connectivity through those
     ports will break.

  2. 'xps_get_tx_qid()' is called twice: once on send and again on
     drain. The two calls may return different 'tx_qid's, leaving
     packets stuck in the tx buffer forever (see the first sketch
     below).

  3. 'txq_drain()' must take the per-queue 'tx_lock' in the case of
     dynamic tx queues (see the second sketch below).

  4. Waiting for 1024 polling cycles of the PMD thread may cause
     huge latency if we have only a few packets per second on one
     port and intensive traffic on others.

  5. This patch breaks the counting of 'tx_dropped' packets
     in netdev-dpdk.

  6. Comments in netdev-provider.h should be fixed to reflect
     all the changes.

  7. Lastly, I agree with Aaron that explicitly allowing only
     'dpdk' ports is not good style. Mentioning the name of a
     specific netdev inside the common code is bad style too (see
     the third sketch below).
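
For point 2, a rough sketch of what I mean ('last_used_qid' is a
hypothetical new field, e.g. in 'struct tx_port'; it is not part of
this patch):

    /* Send path: remember which queue was actually used
     * ('last_used_qid' is a hypothetical new field). */
    tx_qid = dpif_netdev_xps_get_tx_qid(pmd, tx, now);
    tx->last_used_qid = tx_qid;

    /* Drain path: flush 'tx->last_used_qid' instead of calling
     * dpif_netdev_xps_get_tx_qid() a second time, which may
     * return a different queue id. */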
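
For point 3, the drain implementation in netdev-dpdk should lock the
queue the same way the send path does. A minimal sketch, assuming the
'txq_drain' op grows a 'concurrent_txq' argument like netdev_send()
already has ('netdev_dpdk_txq_flush__' is only a placeholder name for
whatever actually pushes the buffered packets to the NIC):

    static int
    netdev_dpdk_txq_drain(struct netdev *netdev, int qid,
                          bool concurrent_txq)
    {
        struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);

        if (concurrent_txq) {
            /* With dynamic tx queues several PMD threads may share
             * one queue, so take the same per-queue lock that the
             * send path takes. */
            qid = qid % dev->up.n_txq;
            rte_spinlock_lock(&dev->tx_q[qid].tx_lock);
        }

        netdev_dpdk_txq_flush__(dev, qid);  /* placeholder helper */

        if (concurrent_txq) {
            rte_spinlock_unlock(&dev->tx_q[qid].tx_lock);
        }
        return 0;
    }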
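
And for point 7 (and Aaron's comment below), the caller could drop the
'dpdk' check entirely and let the generic netdev layer treat the new
op as optional, the way most netdev ops are handled. A sketch, not the
patch's actual code (it also folds in Aaron's nit about 'tx_qid'
scope):

    static void
    dp_netdev_drain_txq_port(struct dp_netdev_pmd_thread *pmd,
                             struct dp_netdev_port *port,
                             uint64_t now)
    {
        struct tx_port *tx = pmd_send_port_cache_lookup(pmd,
                                                        port->port_no);
        if (OVS_LIKELY(tx)) {
            int tx_qid;

            if (tx->port->dynamic_txqs) {
                tx_qid = dpif_netdev_xps_get_tx_qid(pmd, tx, now);
            } else {
                tx_qid = pmd->static_tx_qid;
            }
            netdev_txq_drain(port->netdev, tx_qid,
                             tx->port->dynamic_txqs);
        }
    }

    /* lib/netdev.c: a no-op for netdev classes that do not
     * implement the (assumed optional) 'txq_drain' op. */
    int
    netdev_txq_drain(struct netdev *netdev, int qid, bool concurrent_txq)
    {
        const struct netdev_class *class = netdev->netdev_class;

        return class->txq_drain
               ? class->txq_drain(netdev, qid, concurrent_txq)
               : EOPNOTSUPP;
    }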

Best regards, Ilya Maximets.

On 16.12.2016 22:24, Aaron Conole wrote:
> Hi Bhanu,
> 
> Bhanuprakash Bodireddy <bhanuprakash.bodire...@intel.com> writes:
> 
>> In exact match cache processing, on an EMC hit, packets are queued into
>> batches matching the flow. Thereafter, packets are processed in batches
>> for faster packet processing. This is particularly inefficient if there
>> are fewer packets in a batch, as rte_eth_tx_burst() incurs an expensive
>> MMIO write.
>>
>> This commit adds back the intermediate queue implementation. Packets are
>> queued and burst when the packet count >= NETDEV_MAX_BURST. The drain
>> logic is also refactored to handle fewer packets in the tx queues.
>> Testing shows significant performance gains with queueing.
>>
>> Fixes: b59cc14e032d ("netdev-dpdk: Use instant sending instead of
>> queueing of packets.")
>> Signed-off-by: Bhanuprakash Bodireddy <bhanuprakash.bodire...@intel.com>
>> Signed-off-by: Antonio Fischetti <antonio.fische...@intel.com>
>> Co-authored-by: Antonio Fischetti <antonio.fische...@intel.com>
>> Signed-off-by: Markus Magnusson <markus.magnus...@ericsson.com>
>> Co-authored-by: Markus Magnusson <markus.magnus...@ericsson.com>
>> ---
> 
> I've Cc'd Ilya just in hopes that the patch gets a better review than I
> could give.  As a general comment, I like the direction; batched
> operations are usually a better way to go.
> 
> Just a minor comment below.
> 
> ... snip ...
>>
>> diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
>> index 3509493..65dff83 100644
>> --- a/lib/dpif-netdev.c
>> +++ b/lib/dpif-netdev.c
>> @@ -622,6 +622,9 @@ static int dpif_netdev_xps_get_tx_qid(const struct dp_netdev_pmd_thread *pmd,
>>  static inline bool emc_entry_alive(struct emc_entry *ce);
>>  static void emc_clear_entry(struct emc_entry *ce);
>>  
>> +static struct tx_port *pmd_send_port_cache_lookup
>> +    (const struct dp_netdev_pmd_thread *pmd, odp_port_t port_no);
>> +
>>  static void
>>  emc_cache_init(struct emc_cache *flow_cache)
>>  {
>> @@ -2877,6 +2880,31 @@ cycles_count_end(struct dp_netdev_pmd_thread *pmd,
>>  }
>>  
>>  static void
>> +dp_netdev_drain_txq_port(struct dp_netdev_pmd_thread *pmd,
>> +                         struct dp_netdev_port *port,
>> +                         uint64_t now)
>> +{
>> +    int tx_qid;
>> +
>> +    if (!strcmp(port->type, "dpdk")) {
> 
> Any reason to restrict this only to dpdk ports?  It looks like you've
> added a new netdev operation, so why not just call netdev_txq_drain()
> unconditionally?
> 
> Also, a bit of a nit, but 'tx_qid' can be reduced in scope down to the
> if block below.
> 
>> +        struct tx_port *tx = pmd_send_port_cache_lookup(pmd,
>> +                 u32_to_odp(port->port_no));
>> +
>> +        if (OVS_LIKELY(tx)) {
>> +            bool dynamic_txqs = tx->port->dynamic_txqs;
>> +
>> +            if (dynamic_txqs) {
>> +                tx_qid = dpif_netdev_xps_get_tx_qid(pmd, tx, now);
>> +            } else {
>> +                tx_qid = pmd->static_tx_qid;
>> +            }
>> +
>> +            netdev_txq_drain(port->netdev, tx_qid);
>> +        }
>> +    }
>> +}
>> +