On Wed, Aug 16, 2017 at 10:56 PM, Ilya Maximets <[email protected]>
wrote:
> > The per-packet stats presently overlap in that the
> > miss stats include the lost stats; make these stats non-overlapping
> > for clarity and make this clear in the dp_stat_type enum. This
> > also eliminates the need to increment two 'miss' stats for a
> > single packet.
> >
> > The subtable lookup stat is renamed to make it
> > clear that it relates to masked lookups.
> > The stats that total to the number of packets seen are defined
> > in dp_stat_type and an API is created to total the stats in case
> > these stats are further divided in the future.
> >
> > The pmd stats test is enhanced to include megaflow stats
> > counting and checking.
> > Also, miss and lost stats are annotated to make it clear
> > what they mean.
> >
> > Signed-off-by: Darrell Ball <dlu998 at gmail.com>
> > ---
> > lib/dpif-netdev.c | 78 ++++++++++++++++++++++++++++++++++---------------------
> > tests/pmd.at | 31 +++++++++++++++++-----
> > 2 files changed, 74 insertions(+), 35 deletions(-)
> >
> > diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
> > index 17e1666..dfc6684 100644
> > --- a/lib/dpif-netdev.c
> > +++ b/lib/dpif-netdev.c
> > @@ -323,12 +323,19 @@ static struct dp_netdev_port *dp_netdev_lookup_port(const struct dp_netdev *dp,
> > OVS_REQUIRES(dp->port_mutex);
> >
> > enum dp_stat_type {
> > - DP_STAT_EXACT_HIT, /* Packets that had an exact match (emc). */
> > - DP_STAT_MASKED_HIT, /* Packets that matched in the flow table. */
> > - DP_STAT_MISS, /* Packets that did not match. */
> > - DP_STAT_LOST, /* Packets not passed up to the client. */
> > - DP_STAT_LOOKUP_HIT, /* Number of subtable lookups for flow table
> > - hits */
> > + DP_STAT_EXACT_HIT, /* Packets that had an exact match (emc). */
> > + DP_STAT_MASKED_HIT, /* Packets that matched in the flow table. */
> > + DP_STAT_MISS, /* Packets that did not match and upcall was
> > + done. */
> > + DP_STAT_LOST, /* Packets that did not match and upcall was
> > + not done. */
>
> Packets not passed up to the client and packets for which upcall wasn't
> executed
> are different sets. Overlapping but different.
> See below.
>
I am using 'miss' for an OVS miss upcall, where the packet is processed
at the ofproto layer.
I am using 'lost' for a failed OVS upcall, meaning the packet never
reaches the ofproto layer due to some error and hence has been 'lost'.
>
> > + DP_N_PER_PKT_CNT, /* The above statistics account for the total
> > + number of packets seen and should not be
> > + overlapping with each other. */
> > + DP_STAT_MASKED_LOOKUP = DP_N_PER_PKT_CNT, /* Number of subtable lookups
> > + for flow table hits. Each
> > + MASKED_HIT hit will have
> > + >= 1 MASKED_LOOKUP(s). */
> > DP_N_STATS
> > };
> >
> > @@ -749,13 +756,22 @@ enum pmd_info_type {
> > PMD_INFO_SHOW_RXQ /* Show poll-lists of pmd threads. */
> > };
> >
> > +static unsigned long long
> > +dp_netdev_calcul_total_packets(unsigned long long *stats)
> > +{
> > + unsigned long long total_packets = 0;
> > + for (uint8_t i = 0; i < DP_N_PER_PKT_CNT; i++) {
> > + total_packets += stats[i];
> > + }
> > + return total_packets;
> > +}
> > +
> > static void
> > pmd_info_show_stats(struct ds *reply,
> > struct dp_netdev_pmd_thread *pmd,
> > unsigned long long stats[DP_N_STATS],
> > uint64_t cycles[PMD_N_CYCLES])
> > {
> > - unsigned long long total_packets;
> > uint64_t total_cycles = 0;
> > int i;
> >
> > @@ -773,10 +789,6 @@ pmd_info_show_stats(struct ds *reply,
> > }
> > }
> >
> > - /* Sum of all the matched and not matched packets gives the total. */
> > - total_packets = stats[DP_STAT_EXACT_HIT] + stats[DP_STAT_MASKED_HIT]
> > - + stats[DP_STAT_MISS];
> > -
> > for (i = 0; i < PMD_N_CYCLES; i++) {
> > if (cycles[i] > pmd->cycles_zero[i]) {
> > cycles[i] -= pmd->cycles_zero[i];
> > @@ -800,11 +812,12 @@ pmd_info_show_stats(struct ds *reply,
> >
> > ds_put_format(reply,
> > "\temc hits:%llu\n\tmegaflow hits:%llu\n"
> > - "\tavg. subtable lookups per hit:%.2f\n"
> > - "\tmiss:%llu\n\tlost:%llu\n",
> > + "\tavg. subtable lookups per megaflow hit:%.2f\n"
> > + "\tmiss(upcall done):%llu\n\tlost(upcall not done):%llu\n",
> > stats[DP_STAT_EXACT_HIT], stats[DP_STAT_MASKED_HIT],
> > stats[DP_STAT_MASKED_HIT] > 0
> > - ? (1.0*stats[DP_STAT_LOOKUP_HIT])/stats[DP_STAT_MASKED_HIT]
> > + ? (1.0 * stats[DP_STAT_MASKED_LOOKUP])
> > + / stats[DP_STAT_MASKED_HIT]
> > : 0,
> > stats[DP_STAT_MISS], stats[DP_STAT_LOST]);
> >
> > @@ -820,6 +833,9 @@ pmd_info_show_stats(struct ds *reply,
> > cycles[PMD_CYCLES_PROCESSING],
> > cycles[PMD_CYCLES_PROCESSING] / (double)total_cycles * 100);
> >
> > + /* Sum of all the matched and not matched packets gives the total. */
> > + unsigned long long total_packets =
> > + dp_netdev_calcul_total_packets(stats);
> > if (total_packets == 0) {
> > return;
> > }
> > @@ -4724,18 +4740,18 @@ emc_processing(struct dp_netdev_pmd_thread *pmd,
> > return dp_packet_batch_size(packets_);
> > }
> >
> > -static inline void
> > +static inline int
> > handle_packet_upcall(struct dp_netdev_pmd_thread *pmd,
> > struct dp_packet *packet,
> > const struct netdev_flow_key *key,
> > struct ofpbuf *actions, struct ofpbuf *put_actions,
> > - int *lost_cnt, long long now)
> > + int *miss_no_upcall, long long now)
> > {
> > struct ofpbuf *add_actions;
> > struct dp_packet_batch b;
> > struct match match;
> > ovs_u128 ufid;
> > - int error;
> > + int error = 0;
> >
> > match.tun_md.valid = false;
> > miniflow_expand(&key->mf, &match.flow);
> > @@ -4749,8 +4765,8 @@ handle_packet_upcall(struct dp_netdev_pmd_thread *pmd,
> > put_actions);
> > if (OVS_UNLIKELY(error && error != ENOSPC)) {
> > dp_packet_delete(packet);
> > - (*lost_cnt)++;
> > - return;
> > + (*miss_no_upcall)++;
>
> This is not correct. The upcall was done but failed. Why are you
> reporting that there were no upcalls?
>
The idea is:
A 'miss' upcall means that the packet went through the ofproto layer; in
the case shown here an upcall was attempted but was 'not done', since
the packet never passed through the ofproto layer.
At any rate, I changed the wording to 'upcall failed' to make this
clearer.
>
> > + return error;
> > }
> >
> > /* The Netlink encoding of datapath flow keys cannot express
> > @@ -4790,6 +4806,7 @@ handle_packet_upcall(struct dp_netdev_pmd_thread *pmd,
> > ovs_mutex_unlock(&pmd->flow_mutex);
> > emc_probabilistic_insert(pmd, key, netdev_flow);
> > }
> > + return error;
> > }
> >
> > static inline void
> > @@ -4811,7 +4828,7 @@ fast_path_processing(struct dp_netdev_pmd_thread *pmd,
> > struct dpcls *cls;
> > struct dpcls_rule *rules[PKT_ARRAY_SIZE];
> > struct dp_netdev *dp = pmd->dp;
> > - int miss_cnt = 0, lost_cnt = 0;
> > + int miss_upcall_cnt = 0, miss_no_upcall_cnt = 0;
> > int lookup_cnt = 0, add_lookup_cnt;
> > bool any_miss;
> > size_t i;
> > @@ -4853,9 +4870,12 @@ fast_path_processing(struct dp_netdev_pmd_thread *pmd,
> > continue;
> > }
> >
> > - miss_cnt++;
> > - handle_packet_upcall(pmd, packets[i], &keys[i], &actions,
> > - &put_actions, &lost_cnt, now);
> > + int error = handle_packet_upcall(pmd, packets[i], &keys[i],
> > + &actions, &put_actions, &miss_no_upcall_cnt, now);
> > +
> > + if (OVS_LIKELY(!error)) {
> > + miss_upcall_cnt++;
> > + }
> > }
> >
> > ofpbuf_uninit(&actions);
> > @@ -4865,8 +4885,7 @@ fast_path_processing(struct dp_netdev_pmd_thread *pmd,
> > for (i = 0; i < cnt; i++) {
> > if (OVS_UNLIKELY(!rules[i])) {
> > dp_packet_delete(packets[i]);
> > - lost_cnt++;
> > - miss_cnt++;
> > + miss_no_upcall_cnt++;
> > }
> > }
> > }
> > @@ -4885,10 +4904,11 @@ fast_path_processing(struct dp_netdev_pmd_thread *pmd,
> > dp_netdev_queue_batches(packet, flow, &keys[i].mf, batches, n_batches);
> > }
> >
> > - dp_netdev_count_packet(pmd, DP_STAT_MASKED_HIT, cnt - miss_cnt);
> > - dp_netdev_count_packet(pmd, DP_STAT_LOOKUP_HIT, lookup_cnt);
> > - dp_netdev_count_packet(pmd, DP_STAT_MISS, miss_cnt);
> > - dp_netdev_count_packet(pmd, DP_STAT_LOST, lost_cnt);
> > + dp_netdev_count_packet(pmd, DP_STAT_MASKED_HIT, cnt - miss_upcall_cnt -
> > + miss_no_upcall_cnt);
> > + dp_netdev_count_packet(pmd, DP_STAT_MASKED_LOOKUP, lookup_cnt);
> > + dp_netdev_count_packet(pmd, DP_STAT_MISS, miss_upcall_cnt);
> > + dp_netdev_count_packet(pmd, DP_STAT_LOST, miss_no_upcall_cnt);
> > }
> >
> > /* Packets enter the datapath from a port (or from recirculation) here.
> > diff --git a/tests/pmd.at b/tests/pmd.at
> > index b6732ea..1faa977 100644
> > --- a/tests/pmd.at
> > +++ b/tests/pmd.at
> > @@ -174,9 +174,9 @@ AT_CHECK([ovs-appctl dpif-netdev/pmd-stats-show | sed SED_NUMA_CORE_PATTERN | se
> > pmd thread numa_id <cleared> core_id <cleared>:
> > emc hits:0
> > megaflow hits:0
> > - avg. subtable lookups per hit:0.00
> > - miss:0
> > - lost:0
> > + avg. subtable lookups per megaflow hit:0.00
> > + miss(upcall done):0
> > + lost(upcall not done):0
> > ])
> >
> > ovs-appctl time/stop
> > @@ -201,9 +201,28 @@ AT_CHECK([ovs-appctl dpif-netdev/pmd-stats-show | sed SED_NUMA_CORE_PATTERN | se
> > pmd thread numa_id <cleared> core_id <cleared>:
> > emc hits:19
> > megaflow hits:0
> > - avg. subtable lookups per hit:0.00
> > - miss:1
> > - lost:0
> > + avg. subtable lookups per megaflow hit:0.00
> > + miss(upcall done):1
> > + lost(upcall not done):0
> > +])
> > +
> > +AT_CHECK([ovs-vsctl set Open_vSwitch . other_config:emc-insert-inv-prob=0])
> > +(
> > +for i in `seq 0 5`;
> > + do
> > + pkt="in_port(7),eth(src=50:54:00:00:00:77,dst=50:54:00:00:01:78),eth_type(0x0800),ipv4(src=10.0.0.2,dst=10.0.1.1,proto=1,tos=0,ttl=64,frag=no),icmp(type=8,code=0)"
> > + AT_CHECK([ovs-appctl netdev-dummy/receive p0 $pkt])
> > + done
> > +)
> > +ovs-appctl time/warp 100
> > +
> > +AT_CHECK([ovs-appctl dpif-netdev/pmd-stats-show | sed SED_NUMA_CORE_PATTERN | sed '/cycles/d' | grep pmd -A 5], [0], [dnl
> > +pmd thread numa_id <cleared> core_id <cleared>:
> > + emc hits:19
> > + megaflow hits:6
> > + avg. subtable lookups per megaflow hit:1.00
> > + miss(upcall done):1
> > + lost(upcall not done):0
> > ])
> >
> > OVS_VSWITCHD_STOP
> > --
> > 1.9.1
> _______________________________________________
> dev mailing list
> [email protected]
> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
>