Re: [ovs-dev] [PATCH v11 2/3] netdev-dpdk : Detailed packet drop statistics

2019-11-11 Thread Sriram Vatala via dev
Thanks Ilya and Kevin.

Regards,
Sriram.

-Original Message-
From: Ilya Maximets 
Sent: 12 November 2019 00:08
To: Ilya Maximets ; Kevin Traynor ; 
Sriram Vatala ; ovs-dev@openvswitch.org
Subject: Re: [PATCH v11 2/3] netdev-dpdk : Detailed packet drop statistics

On 11.11.2019 17:11, Ilya Maximets wrote:
>>> I'm not sure if clang annotations will work with rte_spinlock.
>>> DPDK doesn't have proper annotations for locking functions.
>>>
>>
>> Ah, good point, I didn't check the lock type. In that case nevermind,
>> patch+incremental LGTM as is.
>>
>> Acked-by: Kevin Traynor 
>
> Thanks. In this case, I think, there is no need to re-spin the series.
> I'll just squash the incremental with this patch and give it another try.
> If it'll work fine, I'll apply the series.

Thanks Sriram and Kevin! Series applied to master.

Best regards, Ilya Maximets.
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: [ovs-dev] [PATCH v11 2/3] netdev-dpdk : Detailed packet drop statistics

2019-11-11 Thread Ilya Maximets
On 11.11.2019 17:11, Ilya Maximets wrote:
>>> I'm not sure if clang annotations will work with rte_spinlock.
>>> DPDK doesn't have proper annotations for locking functions.
>>>
>>
>> Ah, good point, I didn't check the lock type. In that case nevermind,
>> patch+incremental LGTM as is.
>>
>> Acked-by: Kevin Traynor 
> 
> Thanks. In this case, I think, there is no need to re-spin the series.
> I'll just squash the incremental with this patch and give it another try.
> If it'll work fine, I'll apply the series.

Thanks Sriram and Kevin! Series applied to master.

Best regards, Ilya Maximets.


Re: [ovs-dev] [PATCH v11 2/3] netdev-dpdk : Detailed packet drop statistics

2019-11-11 Thread Ilya Maximets
On 11.11.2019 17:06, Kevin Traynor wrote:
> On 11/11/2019 15:59, Ilya Maximets wrote:
>> On 11.11.2019 16:55, Kevin Traynor wrote:
>>> On 10/11/2019 23:20, Ilya Maximets wrote:
 On 29.10.2019 15:50, Sriram Vatala wrote:> @@ -2388,12 +2412,16 @@ 
 __netdev_dpdk_vhost_send(struct netdev *netdev, int qid,
>   }
>   } while (cnt && (retries++ < max_retries));
>   
> +tx_failure = cnt;
>   rte_spinlock_unlock(&dev->tx_q[qid].tx_lock);
>   
>   rte_spinlock_lock(&dev->stats_lock);
>   netdev_dpdk_vhost_update_tx_counters(&dev->stats, pkts, total_pkts,
>cnt + dropped);
> -dev->tx_retries += MIN(retries, max_retries);
> +sw_stats->tx_retries += MIN(retries, max_retries);
> +sw_stats->tx_failure_drops += tx_failure;
> +sw_stats->tx_mtu_exceeded_drops += mtu_drops;
> +sw_stats->tx_qos_drops += qos_drops;

 Kevin pointed to this part of code in hope that we can move this to
 a separate function and reuse in his review for v9.  This code catches
 my eyes too.  I don't think that we can reuse this part, at least this
 will be not very efficient in current situation (clearing of the unused
 fields in a stats structure will be probably needed before calling such
 a common function, but ETH tx uses only half of the struct).

 But there is another thing here.  We already have special functions
 for vhost tx/rx counters.  And it looks strange when we're not using
 these functions to update tx/rx failure counters.

 Suggesting following incremental.
 Kevin, Sriram, please, share your thoughts.

>>>
>>> The incremental patch looks good, thanks. One additional thing is that
>>> OVS_REQUIRES(dev->stats_lock) annotations can be used for the vhost
>>> rx/tx update counter functions now (even if it seems unlikely someone
>>> would miss doing that).
>>>
>>> @Sriram, see below or you can check similar OVS_REQUIRES usage elsewhere
>>> in the file.
>>
>> I'm not sure if clang annotations will work with rte_spinlock.
>> DPDK doesn't have proper annotations for locking functions.
>>
> 
> Ah, good point, I didn't check the lock type. In that case nevermind,
> patch+incremental LGTM as is.
> 
> Acked-by: Kevin Traynor 

Thanks. In this case, I think, there is no need to re-spin the series.
I'll just squash the incremental with this patch and give it another try.
If it'll work fine, I'll apply the series.

Best regards, Ilya Maximets.


Re: [ovs-dev] [PATCH v11 2/3] netdev-dpdk : Detailed packet drop statistics

2019-11-11 Thread Kevin Traynor
On 11/11/2019 15:59, Ilya Maximets wrote:
> On 11.11.2019 16:55, Kevin Traynor wrote:
>> On 10/11/2019 23:20, Ilya Maximets wrote:
>>> On 29.10.2019 15:50, Sriram Vatala wrote:> @@ -2388,12 +2412,16 @@ 
>>> __netdev_dpdk_vhost_send(struct netdev *netdev, int qid,
   }
   } while (cnt && (retries++ < max_retries));
   
 +tx_failure = cnt;
  rte_spinlock_unlock(&dev->tx_q[qid].tx_lock);
  
  rte_spinlock_lock(&dev->stats_lock);
  netdev_dpdk_vhost_update_tx_counters(&dev->stats, pkts, total_pkts,
cnt + dropped);
 -dev->tx_retries += MIN(retries, max_retries);
 +sw_stats->tx_retries += MIN(retries, max_retries);
 +sw_stats->tx_failure_drops += tx_failure;
 +sw_stats->tx_mtu_exceeded_drops += mtu_drops;
 +sw_stats->tx_qos_drops += qos_drops;
>>>
>>> Kevin pointed to this part of code in hope that we can move this to
>>> a separate function and reuse in his review for v9.  This code catches
>>> my eyes too.  I don't think that we can reuse this part, at least this
>>> will be not very efficient in current situation (clearing of the unused
>>> fields in a stats structure will be probably needed before calling such
>>> a common function, but ETH tx uses only half of the struct).
>>>
>>> But there is another thing here.  We already have special functions
>>> for vhost tx/rx counters.  And it looks strange when we're not using
>>> these functions to update tx/rx failure counters.
>>>
>>> Suggesting following incremental.
>>> Kevin, Sriram, please, share your thoughts.
>>>
>>
>> The incremental patch looks good, thanks. One additional thing is that
>> OVS_REQUIRES(dev->stats_lock) annotations can be used for the vhost
>> rx/tx update counter functions now (even if it seems unlikely someone
>> would miss doing that).
>>
>> @Sriram, see below or you can check similar OVS_REQUIRES usage elsewhere
>> in the file.
> 
> I'm not sure if clang annotations will work with rte_spinlock.
> DPDK doesn't have proper annotations for locking functions.
> 

Ah, good point, I didn't check the lock type. In that case nevermind,
patch+incremental LGTM as is.

Acked-by: Kevin Traynor 



thanks,
Kevin.



Re: [ovs-dev] [PATCH v11 2/3] netdev-dpdk : Detailed packet drop statistics

2019-11-11 Thread Ilya Maximets
On 11.11.2019 16:55, Kevin Traynor wrote:
> On 10/11/2019 23:20, Ilya Maximets wrote:
>> On 29.10.2019 15:50, Sriram Vatala wrote:> @@ -2388,12 +2412,16 @@ 
>> __netdev_dpdk_vhost_send(struct netdev *netdev, int qid,
>>>   }
>>>   } while (cnt && (retries++ < max_retries));
>>>   
>>> +tx_failure = cnt;
>>>   rte_spinlock_unlock(&dev->tx_q[qid].tx_lock);
>>>   
>>>   rte_spinlock_lock(&dev->stats_lock);
>>>   netdev_dpdk_vhost_update_tx_counters(&dev->stats, pkts, total_pkts,
>>>cnt + dropped);
>>> -dev->tx_retries += MIN(retries, max_retries);
>>> +sw_stats->tx_retries += MIN(retries, max_retries);
>>> +sw_stats->tx_failure_drops += tx_failure;
>>> +sw_stats->tx_mtu_exceeded_drops += mtu_drops;
>>> +sw_stats->tx_qos_drops += qos_drops;
>>
>> Kevin pointed to this part of code in hope that we can move this to
>> a separate function and reuse in his review for v9.  This code catches
>> my eyes too.  I don't think that we can reuse this part, at least this
>> will be not very efficient in current situation (clearing of the unused
>> fields in a stats structure will be probably needed before calling such
>> a common function, but ETH tx uses only half of the struct).
>>
>> But there is another thing here.  We already have special functions
>> for vhost tx/rx counters.  And it looks strange when we're not using
>> these functions to update tx/rx failure counters.
>>
>> Suggesting following incremental.
>> Kevin, Sriram, please, share your thoughts.
>>
> 
> The incremental patch looks good, thanks. One additional thing is that
> OVS_REQUIRES(dev->stats_lock) annotations can be used for the vhost
> rx/tx update counter functions now (even if it seems unlikely someone
> would miss doing that).
> 
> @Sriram, see below or you can check similar OVS_REQUIRES usage elsewhere
> in the file.

I'm not sure if clang annotations will work with rte_spinlock.
DPDK doesn't have proper annotations for locking functions.


Re: [ovs-dev] [PATCH v11 2/3] netdev-dpdk : Detailed packet drop statistics

2019-11-11 Thread Kevin Traynor
On 10/11/2019 23:20, Ilya Maximets wrote:
> On 29.10.2019 15:50, Sriram Vatala wrote:> @@ -2388,12 +2412,16 @@ 
> __netdev_dpdk_vhost_send(struct netdev *netdev, int qid,
>>   }
>>   } while (cnt && (retries++ < max_retries));
>>   
>> +tx_failure = cnt;
>>   rte_spinlock_unlock(&dev->tx_q[qid].tx_lock);
>>   
>>   rte_spinlock_lock(&dev->stats_lock);
>>   netdev_dpdk_vhost_update_tx_counters(&dev->stats, pkts, total_pkts,
>>cnt + dropped);
>> -dev->tx_retries += MIN(retries, max_retries);
>> +sw_stats->tx_retries += MIN(retries, max_retries);
>> +sw_stats->tx_failure_drops += tx_failure;
>> +sw_stats->tx_mtu_exceeded_drops += mtu_drops;
>> +sw_stats->tx_qos_drops += qos_drops;
> 
> Kevin pointed to this part of code in hope that we can move this to
> a separate function and reuse in his review for v9.  This code catches
> my eyes too.  I don't think that we can reuse this part, at least this
> will be not very efficient in current situation (clearing of the unused
> fields in a stats structure will be probably needed before calling such
> a common function, but ETH tx uses only half of the struct).
> 
> But there is another thing here.  We already have special functions
> for vhost tx/rx counters.  And it looks strange when we're not using
> these functions to update tx/rx failure counters.
> 
> Suggesting following incremental.
> Kevin, Sriram, please, share your thoughts.
> 

The incremental patch looks good, thanks. One additional thing is that
OVS_REQUIRES(dev->stats_lock) annotations can be used for the vhost
rx/tx update counter functions now (even if it seems unlikely someone
would miss doing that).

@Sriram, see below or you can check similar OVS_REQUIRES usage elsewhere
in the file.

> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
> index 3cb7023a8..02120a379 100644
> --- a/lib/netdev-dpdk.c
> +++ b/lib/netdev-dpdk.c
> @@ -2169,16 +2169,18 @@ netdev_dpdk_vhost_update_rx_size_counters(struct 
> netdev_stats *stats,
>   }
>   
>   static inline void
> -netdev_dpdk_vhost_update_rx_counters(struct netdev_stats *stats,
> +netdev_dpdk_vhost_update_rx_counters(struct netdev_dpdk *dev,
>struct dp_packet **packets, int count,
> - int dropped)
> + int qos_drops)
OVS_REQUIRES(dev->stats_lock)
>   {
> -int i;
> -unsigned int packet_size;
> +struct netdev_dpdk_sw_stats *sw_stats = dev->sw_stats;
> +struct netdev_stats *stats = &dev->stats;
>   struct dp_packet *packet;
> +unsigned int packet_size;
> +int i;
>   
>   stats->rx_packets += count;
> -stats->rx_dropped += dropped;
> +stats->rx_dropped += qos_drops;
>   for (i = 0; i < count; i++) {
>   packet = packets[i];
>   packet_size = dp_packet_size(packet);
> @@ -2201,6 +2203,8 @@ netdev_dpdk_vhost_update_rx_counters(struct 
> netdev_stats *stats,
>   
>   stats->rx_bytes += packet_size;
>   }
> +
> +sw_stats->rx_qos_drops += qos_drops;
>   }
>   
>   /*
> @@ -2213,7 +2217,7 @@ netdev_dpdk_vhost_rxq_recv(struct netdev_rxq *rxq,
>   struct netdev_dpdk *dev = netdev_dpdk_cast(rxq->netdev);
>   struct ingress_policer *policer = netdev_dpdk_get_ingress_policer(dev);
>   uint16_t nb_rx = 0;
> -uint16_t dropped = 0;
> +uint16_t qos_drops = 0;
>   int qid = rxq->queue_id * VIRTIO_QNUM + VIRTIO_TXQ;
>   int vid = netdev_dpdk_get_vid(dev);
>   
> @@ -2240,17 +2244,16 @@ netdev_dpdk_vhost_rxq_recv(struct netdev_rxq *rxq,
>   }
>   
>   if (policer) {
> -dropped = nb_rx;
> +qos_drops = nb_rx;
>   nb_rx = ingress_policer_run(policer,
>   (struct rte_mbuf **) batch->packets,
>   nb_rx, true);
> -dropped -= nb_rx;
> +qos_drops -= nb_rx;
>   }
>   
>   rte_spinlock_lock(&dev->stats_lock);
> -netdev_dpdk_vhost_update_rx_counters(&dev->stats, batch->packets,
> - nb_rx, dropped);
> -dev->sw_stats->rx_qos_drops += dropped;
> +netdev_dpdk_vhost_update_rx_counters(dev, batch->packets,
> + nb_rx, qos_drops);
>   rte_spinlock_unlock(&dev->stats_lock);
>   
>   batch->count = nb_rx;
> @@ -2360,13 +2363,18 @@ netdev_dpdk_filter_packet_len(struct netdev_dpdk 
> *dev, struct rte_mbuf **pkts,
>   }
>   
>   static inline void
> -netdev_dpdk_vhost_update_tx_counters(struct netdev_stats *stats,
> +netdev_dpdk_vhost_update_tx_counters(struct netdev_dpdk *dev,
>struct dp_packet **packets,
>int attempted,
> - int dropped)
> + struct netdev_dpdk_sw_stats 
> *sw_stats_add)
OVS_REQUIRES(dev->stats_lock)
>   {
> -int i;
> +struct 

Re: [ovs-dev] [PATCH v11 2/3] netdev-dpdk : Detailed packet drop statistics

2019-11-11 Thread Sriram Vatala via dev
Hi Ilya,
Thanks for the review. I agree with your proposal to move the stats update 
code to existing special functions. Thanks for the incremental patch, it looks 
good to me. Will wait for Kevin's thoughts on this.

Thanks & Regards,
Sriram.

-Original Message-
From: Ilya Maximets 
Sent: 11 November 2019 04:50
To: Sriram Vatala ; ovs-dev@openvswitch.org; 
ktray...@redhat.com; i.maxim...@ovn.org
Subject: Re: [PATCH v11 2/3] netdev-dpdk : Detailed packet drop statistics

On 29.10.2019 15:50, Sriram Vatala wrote:> @@ -2388,12 +2412,16 @@ 
__netdev_dpdk_vhost_send(struct netdev *netdev, int qid,
>   }
>   } while (cnt && (retries++ < max_retries));
>
> +tx_failure = cnt;
>   rte_spinlock_unlock(&dev->tx_q[qid].tx_lock);
>
>   rte_spinlock_lock(&dev->stats_lock);
>   netdev_dpdk_vhost_update_tx_counters(&dev->stats, pkts, total_pkts,
>cnt + dropped);
> -dev->tx_retries += MIN(retries, max_retries);
> +sw_stats->tx_retries += MIN(retries, max_retries);
> +sw_stats->tx_failure_drops += tx_failure;
> +sw_stats->tx_mtu_exceeded_drops += mtu_drops;
> +sw_stats->tx_qos_drops += qos_drops;

Kevin pointed to this part of code in hope that we can move this to a separate 
function and reuse in his review for v9.  This code catches my eyes too.  I 
don't think that we can reuse this part, at least this will be not very 
efficient in current situation (clearing of the unused fields in a stats 
structure will be probably needed before calling such a common function, but 
ETH tx uses only half of the struct).

But there is another thing here.  We already have special functions for vhost 
tx/rx counters.  And it looks strange when we're not using these functions to 
update tx/rx failure counters.

Suggesting following incremental.
Kevin, Sriram, please, share your thoughts.

diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c index 3cb7023a8..02120a379 
100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -2169,16 +2169,18 @@ netdev_dpdk_vhost_update_rx_size_counters(struct 
netdev_stats *stats,
  }

  static inline void
-netdev_dpdk_vhost_update_rx_counters(struct netdev_stats *stats,
+netdev_dpdk_vhost_update_rx_counters(struct netdev_dpdk *dev,
   struct dp_packet **packets, int count,
- int dropped)
+ int qos_drops)
  {
-int i;
-unsigned int packet_size;
+struct netdev_dpdk_sw_stats *sw_stats = dev->sw_stats;
+struct netdev_stats *stats = &dev->stats;
  struct dp_packet *packet;
+unsigned int packet_size;
+int i;

  stats->rx_packets += count;
-stats->rx_dropped += dropped;
+stats->rx_dropped += qos_drops;
  for (i = 0; i < count; i++) {
  packet = packets[i];
  packet_size = dp_packet_size(packet); @@ -2201,6 +2203,8 @@ 
netdev_dpdk_vhost_update_rx_counters(struct netdev_stats *stats,

  stats->rx_bytes += packet_size;
  }
+
+sw_stats->rx_qos_drops += qos_drops;
  }

  /*
@@ -2213,7 +2217,7 @@ netdev_dpdk_vhost_rxq_recv(struct netdev_rxq *rxq,
  struct netdev_dpdk *dev = netdev_dpdk_cast(rxq->netdev);
  struct ingress_policer *policer = netdev_dpdk_get_ingress_policer(dev);
  uint16_t nb_rx = 0;
-uint16_t dropped = 0;
+uint16_t qos_drops = 0;
  int qid = rxq->queue_id * VIRTIO_QNUM + VIRTIO_TXQ;
  int vid = netdev_dpdk_get_vid(dev);

@@ -2240,17 +2244,16 @@ netdev_dpdk_vhost_rxq_recv(struct netdev_rxq *rxq,
  }

  if (policer) {
-dropped = nb_rx;
+qos_drops = nb_rx;
  nb_rx = ingress_policer_run(policer,
  (struct rte_mbuf **) batch->packets,
  nb_rx, true);
-dropped -= nb_rx;
+qos_drops -= nb_rx;
  }

  rte_spinlock_lock(&dev->stats_lock);
-netdev_dpdk_vhost_update_rx_counters(&dev->stats, batch->packets,
- nb_rx, dropped);
-dev->sw_stats->rx_qos_drops += dropped;
+netdev_dpdk_vhost_update_rx_counters(dev, batch->packets,
+ nb_rx, qos_drops);
  rte_spinlock_unlock(&dev->stats_lock);

  batch->count = nb_rx;
@@ -2360,13 +2363,18 @@ netdev_dpdk_filter_packet_len(struct netdev_dpdk *dev, 
struct rte_mbuf **pkts,
  }

  static inline void
-netdev_dpdk_vhost_update_tx_counters(struct netdev_stats *stats,
+netdev_dpdk_vhost_update_tx_counters(struct netdev_dpdk *dev,
   struct dp_packet **packets,
   int attempted,
- int dropped)
+ struct netdev_dpdk_sw_stats
+ *sw_stats_add)
  {
-int i;
+struct netdev_dpdk_sw_stats *sw_stats = dev->sw_stats;
+int dropped = sw_stats_add->tx_mtu_exceeded_drops +
+  sw_stats_add->tx_qos_drops +
+  

Re: [ovs-dev] [PATCH v11 2/3] netdev-dpdk : Detailed packet drop statistics

2019-11-10 Thread Ilya Maximets
On 11.11.2019 0:44, Ilya Maximets wrote:
> Sorry if you've received incremental patch with shifted lines.
> Thunderbird behaves weirdly.  For some reason it adds a single
> space for every line that starts with a space.  I'm not sure
> how to fix that.  I could re-send incremental patch with some
> different tool (git-send-email) if needed.

It looks like I found the reason.  If someone will ever catch
the same issue, disable 'mailnews.send_plaintext_flowed' in
thunderbird config editor.

Re-sending the incremental:

diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index 3cb7023a8..02120a379 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -2169,16 +2169,18 @@ netdev_dpdk_vhost_update_rx_size_counters(struct 
netdev_stats *stats,
 }
 
 static inline void
-netdev_dpdk_vhost_update_rx_counters(struct netdev_stats *stats,
+netdev_dpdk_vhost_update_rx_counters(struct netdev_dpdk *dev,
  struct dp_packet **packets, int count,
- int dropped)
+ int qos_drops)
 {
-int i;
-unsigned int packet_size;
+struct netdev_dpdk_sw_stats *sw_stats = dev->sw_stats;
+struct netdev_stats *stats = &dev->stats;
 struct dp_packet *packet;
+unsigned int packet_size;
+int i;
 
 stats->rx_packets += count;
-stats->rx_dropped += dropped;
+stats->rx_dropped += qos_drops;
 for (i = 0; i < count; i++) {
 packet = packets[i];
 packet_size = dp_packet_size(packet);
@@ -2201,6 +2203,8 @@ netdev_dpdk_vhost_update_rx_counters(struct netdev_stats 
*stats,
 
 stats->rx_bytes += packet_size;
 }
+
+sw_stats->rx_qos_drops += qos_drops;
 }
 
 /*
@@ -2213,7 +2217,7 @@ netdev_dpdk_vhost_rxq_recv(struct netdev_rxq *rxq,
 struct netdev_dpdk *dev = netdev_dpdk_cast(rxq->netdev);
 struct ingress_policer *policer = netdev_dpdk_get_ingress_policer(dev);
 uint16_t nb_rx = 0;
-uint16_t dropped = 0;
+uint16_t qos_drops = 0;
 int qid = rxq->queue_id * VIRTIO_QNUM + VIRTIO_TXQ;
 int vid = netdev_dpdk_get_vid(dev);
 
@@ -2240,17 +2244,16 @@ netdev_dpdk_vhost_rxq_recv(struct netdev_rxq *rxq,
 }
 
 if (policer) {
-dropped = nb_rx;
+qos_drops = nb_rx;
 nb_rx = ingress_policer_run(policer,
 (struct rte_mbuf **) batch->packets,
 nb_rx, true);
-dropped -= nb_rx;
+qos_drops -= nb_rx;
 }
 
 rte_spinlock_lock(&dev->stats_lock);
-netdev_dpdk_vhost_update_rx_counters(&dev->stats, batch->packets,
- nb_rx, dropped);
-dev->sw_stats->rx_qos_drops += dropped;
+netdev_dpdk_vhost_update_rx_counters(dev, batch->packets,
+ nb_rx, qos_drops);
 rte_spinlock_unlock(&dev->stats_lock);
 
 batch->count = nb_rx;
@@ -2360,13 +2363,18 @@ netdev_dpdk_filter_packet_len(struct netdev_dpdk *dev, 
struct rte_mbuf **pkts,
 }
 
 static inline void
-netdev_dpdk_vhost_update_tx_counters(struct netdev_stats *stats,
+netdev_dpdk_vhost_update_tx_counters(struct netdev_dpdk *dev,
  struct dp_packet **packets,
  int attempted,
- int dropped)
+ struct netdev_dpdk_sw_stats *sw_stats_add)
 {
-int i;
+struct netdev_dpdk_sw_stats *sw_stats = dev->sw_stats;
+int dropped = sw_stats_add->tx_mtu_exceeded_drops +
+  sw_stats_add->tx_qos_drops +
+  sw_stats_add->tx_failure_drops;
+struct netdev_stats *stats = &dev->stats;
 int sent = attempted - dropped;
+int i;
 
 stats->tx_packets += sent;
 stats->tx_dropped += dropped;
@@ -2374,6 +2382,11 @@ netdev_dpdk_vhost_update_tx_counters(struct netdev_stats 
*stats,
 for (i = 0; i < sent; i++) {
 stats->tx_bytes += dp_packet_size(packets[i]);
 }
+
+sw_stats->tx_retries+= sw_stats_add->tx_retries;
+sw_stats->tx_failure_drops  += sw_stats_add->tx_failure_drops;
+sw_stats->tx_mtu_exceeded_drops += sw_stats_add->tx_mtu_exceeded_drops;
+sw_stats->tx_qos_drops  += sw_stats_add->tx_qos_drops;
 }
 
 static void
@@ -2382,12 +2395,9 @@ __netdev_dpdk_vhost_send(struct netdev *netdev, int qid,
 {
 struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
 struct rte_mbuf **cur_pkts = (struct rte_mbuf **) pkts;
-struct netdev_dpdk_sw_stats *sw_stats = dev->sw_stats;
-unsigned int total_pkts = cnt;
-unsigned int dropped = 0;
-unsigned int tx_failure;
-unsigned int mtu_drops;
-unsigned int qos_drops;
+struct netdev_dpdk_sw_stats sw_stats_add;
+unsigned int n_packets_to_free = cnt;
+unsigned int total_packets = cnt;
 int i, retries = 0;
 int max_retries = VHOST_ENQ_RETRY_MIN;
 int vid = netdev_dpdk_get_vid(dev);
@@ -2408,12 +2418,14 @@ 

Re: [ovs-dev] [PATCH v11 2/3] netdev-dpdk : Detailed packet drop statistics

2019-11-10 Thread Ilya Maximets

Sorry if you've received incremental patch with shifted lines.
Thunderbird behaves weirdly.  For some reason it adds a single
space for every line that starts with a space.  I'm not sure
how to fix that.  I could re-send incremental patch with some
different tool (git-send-email) if needed.

  Test. (this line should be prefixed with exactly 2 spaces.)

Best regards, Ilya Maximets.


Re: [ovs-dev] [PATCH v11 2/3] netdev-dpdk : Detailed packet drop statistics

2019-11-10 Thread Ilya Maximets

On 29.10.2019 15:50, Sriram Vatala wrote:> @@ -2388,12 +2412,16 @@ 
__netdev_dpdk_vhost_send(struct netdev *netdev, int qid,

  }
  } while (cnt && (retries++ < max_retries));
  
+tx_failure = cnt;

  rte_spinlock_unlock(&dev->tx_q[qid].tx_lock);
  
  rte_spinlock_lock(&dev->stats_lock);

  netdev_dpdk_vhost_update_tx_counters(&dev->stats, pkts, total_pkts,
   cnt + dropped);
-dev->tx_retries += MIN(retries, max_retries);
+sw_stats->tx_retries += MIN(retries, max_retries);
+sw_stats->tx_failure_drops += tx_failure;
+sw_stats->tx_mtu_exceeded_drops += mtu_drops;
+sw_stats->tx_qos_drops += qos_drops;


Kevin pointed to this part of code in hope that we can move this to
a separate function and reuse in his review for v9.  This code catches
my eyes too.  I don't think that we can reuse this part, at least this
will be not very efficient in current situation (clearing of the unused
fields in a stats structure will be probably needed before calling such
a common function, but ETH tx uses only half of the struct).

But there is another thing here.  We already have special functions
for vhost tx/rx counters.  And it looks strange when we're not using
these functions to update tx/rx failure counters.

Suggesting following incremental.
Kevin, Sriram, please, share your thoughts.

diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index 3cb7023a8..02120a379 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -2169,16 +2169,18 @@ netdev_dpdk_vhost_update_rx_size_counters(struct 
netdev_stats *stats,
 }
 
 static inline void

-netdev_dpdk_vhost_update_rx_counters(struct netdev_stats *stats,
+netdev_dpdk_vhost_update_rx_counters(struct netdev_dpdk *dev,
  struct dp_packet **packets, int count,
- int dropped)
+ int qos_drops)
 {
-int i;
-unsigned int packet_size;
+struct netdev_dpdk_sw_stats *sw_stats = dev->sw_stats;
+struct netdev_stats *stats = &dev->stats;
 struct dp_packet *packet;
+unsigned int packet_size;
+int i;
 
 stats->rx_packets += count;

-stats->rx_dropped += dropped;
+stats->rx_dropped += qos_drops;
 for (i = 0; i < count; i++) {
 packet = packets[i];
 packet_size = dp_packet_size(packet);
@@ -2201,6 +2203,8 @@ netdev_dpdk_vhost_update_rx_counters(struct netdev_stats 
*stats,
 
 stats->rx_bytes += packet_size;

 }
+
+sw_stats->rx_qos_drops += qos_drops;
 }
 
 /*

@@ -2213,7 +2217,7 @@ netdev_dpdk_vhost_rxq_recv(struct netdev_rxq *rxq,
 struct netdev_dpdk *dev = netdev_dpdk_cast(rxq->netdev);
 struct ingress_policer *policer = netdev_dpdk_get_ingress_policer(dev);
 uint16_t nb_rx = 0;
-uint16_t dropped = 0;
+uint16_t qos_drops = 0;
 int qid = rxq->queue_id * VIRTIO_QNUM + VIRTIO_TXQ;
 int vid = netdev_dpdk_get_vid(dev);
 
@@ -2240,17 +2244,16 @@ netdev_dpdk_vhost_rxq_recv(struct netdev_rxq *rxq,

 }
 
 if (policer) {

-dropped = nb_rx;
+qos_drops = nb_rx;
 nb_rx = ingress_policer_run(policer,
 (struct rte_mbuf **) batch->packets,
 nb_rx, true);
-dropped -= nb_rx;
+qos_drops -= nb_rx;
 }
 
  rte_spinlock_lock(&dev->stats_lock);

-netdev_dpdk_vhost_update_rx_counters(&dev->stats, batch->packets,
- nb_rx, dropped);
-dev->sw_stats->rx_qos_drops += dropped;
+netdev_dpdk_vhost_update_rx_counters(dev, batch->packets,
+ nb_rx, qos_drops);
  rte_spinlock_unlock(&dev->stats_lock);
 
 batch->count = nb_rx;

@@ -2360,13 +2363,18 @@ netdev_dpdk_filter_packet_len(struct netdev_dpdk *dev, 
struct rte_mbuf **pkts,
 }
 
 static inline void

-netdev_dpdk_vhost_update_tx_counters(struct netdev_stats *stats,
+netdev_dpdk_vhost_update_tx_counters(struct netdev_dpdk *dev,
  struct dp_packet **packets,
  int attempted,
- int dropped)
+ struct netdev_dpdk_sw_stats *sw_stats_add)
 {
-int i;
+struct netdev_dpdk_sw_stats *sw_stats = dev->sw_stats;
+int dropped = sw_stats_add->tx_mtu_exceeded_drops +
+  sw_stats_add->tx_qos_drops +
+  sw_stats_add->tx_failure_drops;
+struct netdev_stats *stats = &dev->stats;
 int sent = attempted - dropped;
+int i;
 
 stats->tx_packets += sent;

 stats->tx_dropped += dropped;
@@ -2374,6 +2382,11 @@ netdev_dpdk_vhost_update_tx_counters(struct netdev_stats 
*stats,
 for (i = 0; i < sent; i++) {
 stats->tx_bytes += dp_packet_size(packets[i]);
 }
+
+sw_stats->tx_retries+= sw_stats_add->tx_retries;
+sw_stats->tx_failure_drops  += sw_stats_add->tx_failure_drops;
+

[ovs-dev] [PATCH v11 2/3] netdev-dpdk : Detailed packet drop statistics

2019-10-30 Thread Sriram Vatala via dev
OVS may be unable to transmit packets for multiple reasons on
the userspace datapath and today there is a single counter to
track packets dropped due to any of those reasons. This patch
adds custom software stats for the different reasons packets
may be dropped during tx/rx on the userspace datapath in OVS.

- MTU drops : drops that occur due to a too large packet size
- Qos drops : drops that occur due to egress/ingress QOS
- Tx failures: drops as returned by the DPDK PMD send function

Note that the reason for tx failures is not specified in OVS.
In practice for vhost ports it is most common that tx failures
are because there are not enough available descriptors,
which is usually caused by misconfiguration of the guest queues
and/or because the guest is not consuming packets fast enough
from the queues.

These counters are displayed along with other stats in
"ovs-vsctl get interface <iface> statistics" command and are
available for dpdk and vhostuser/vhostuserclient ports.

Also the existing "tx_retries" counter for vhost ports has been
renamed to "ovs_tx_retries", so that all the custom statistics
that OVS accumulates itself will have the prefix "ovs_". This
will prevent any custom stats names overlapping with
driver/HW stats.

Signed-off-by: Sriram Vatala 
---
 Documentation/topics/dpdk/bridge.rst |  6 ++
 Documentation/topics/dpdk/vhost-user.rst |  2 +-
 lib/netdev-dpdk.c| 82 +++-
 3 files changed, 72 insertions(+), 18 deletions(-)

diff --git a/Documentation/topics/dpdk/bridge.rst 
b/Documentation/topics/dpdk/bridge.rst
index d9bc7eba4..f0ef42ecc 100644
--- a/Documentation/topics/dpdk/bridge.rst
+++ b/Documentation/topics/dpdk/bridge.rst
@@ -75,6 +75,12 @@ OpenFlow14`` option::
 
 $ ovs-ofctl -O OpenFlow14 dump-ports br0
 
+There are custom statistics that OVS accumulates itself and these stats have
+``ovs_`` as a prefix. These custom stats are shown along with other stats
+using the following command::
+
+$ ovs-vsctl get Interface <iface> statistics
+
 EMC Insertion Probability
 -
 
diff --git a/Documentation/topics/dpdk/vhost-user.rst 
b/Documentation/topics/dpdk/vhost-user.rst
index cda5b122f..ec0caeb16 100644
--- a/Documentation/topics/dpdk/vhost-user.rst
+++ b/Documentation/topics/dpdk/vhost-user.rst
@@ -551,7 +551,7 @@ processing packets at the required rate.
 The amount of Tx retries on a vhost-user or vhost-user-client interface can be
 shown with::
 
-  $ ovs-vsctl get Interface dpdkvhostclient0 statistics:tx_retries
+  $ ovs-vsctl get Interface dpdkvhostclient0 statistics:ovs_tx_retries
 
 vhost-user Dequeue Zero Copy (experimental)
 ---
diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index 2cc2516a9..6922e61ca 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -174,6 +174,20 @@ static const struct vhost_device_ops virtio_net_device_ops 
=
 .destroy_connection = destroy_connection,
 };
 
+/* Custom software stats for dpdk ports */
+struct netdev_dpdk_sw_stats {
+/* No. of retries when unable to transmit. */
+uint64_t tx_retries;
+/* Packet drops when unable to transmit; Probably Tx queue is full. */
+uint64_t tx_failure_drops;
+/* Packet length greater than device MTU. */
+uint64_t tx_mtu_exceeded_drops;
+/* Packet drops in egress policer processing. */
+uint64_t tx_qos_drops;
+/* Packet drops in ingress policer processing. */
+uint64_t rx_qos_drops;
+};
+
 enum { DPDK_RING_SIZE = 256 };
 BUILD_ASSERT_DECL(IS_POW2(DPDK_RING_SIZE));
 enum { DRAIN_TSC = 20ULL };
@@ -416,11 +430,10 @@ struct netdev_dpdk {
 
 PADDED_MEMBERS(CACHE_LINE_SIZE,
 struct netdev_stats stats;
-/* Custom stat for retries when unable to transmit. */
-uint64_t tx_retries;
+struct netdev_dpdk_sw_stats *sw_stats;
 /* Protects stats */
 rte_spinlock_t stats_lock;
-/* 4 pad bytes here. */
+/* 36 pad bytes here. */
 );
 
 PADDED_MEMBERS(CACHE_LINE_SIZE,
@@ -1176,7 +1189,8 @@ common_construct(struct netdev *netdev, dpdk_port_t 
port_no,
 dev->rte_xstats_ids = NULL;
 dev->rte_xstats_ids_size = 0;
 
-dev->tx_retries = (dev->type == DPDK_DEV_VHOST) ? 0 : UINT64_MAX;
+dev->sw_stats = xzalloc(sizeof *dev->sw_stats);
+dev->sw_stats->tx_retries = (dev->type == DPDK_DEV_VHOST) ? 0 : UINT64_MAX;
 
 return 0;
 }
@@ -1362,6 +1376,7 @@ common_destruct(struct netdev_dpdk *dev)
 ovs_list_remove(&dev->list_node);
 free(ovsrcu_get_protected(struct ingress_policer *,
   &dev->ingress_policer));
+free(dev->sw_stats);
 ovs_mutex_destroy(>mutex);
 }
 
@@ -2212,6 +2227,7 @@ netdev_dpdk_vhost_rxq_recv(struct netdev_rxq *rxq,
 rte_spinlock_lock(&dev->stats_lock);
 netdev_dpdk_vhost_update_rx_counters(&dev->stats, batch->packets,
  nb_rx, dropped);
+dev->sw_stats->rx_qos_drops += dropped;