Hi All,

I know a v6 is in preparation, so I just wanted to give some initial thoughts 
now, before I review further.

/Billy.

> -----Original Message-----
> From: [email protected] [mailto:ovs-dev-
> [email protected]] On Behalf Of Ilya Maximets
> Sent: Tuesday, January 9, 2018 8:35 AM
> To: Jan Scheurich <[email protected]>; [email protected]
> Subject: Re: [ovs-dev] [ovs-dev, v5, 1/3] dpif-netdev: Refactor PMD
> performance into dpif-netdev-perf
> 
> I'm a bit lost with all the data structures in lib/dpif-netdev-perf.h.
> It's too complex, especially with other patches applied. And no comments
> there.
> Can we simplify? I didn't review this part.
> 
> Comments inline.
> 
> Best regards, Ilya Maximets.
> 
> On 04.01.2018 15:07, Jan Scheurich wrote:
> > Add module dpif-netdev-perf to host all PMD performance-related data
> > structures and functions in dpif-netdev. Refactor the PMD stats
> > handling in dpif-netdev and delegate whatever possible into the new
> > module, using clean interfaces to shield dpif-netdev from the
> > implementation details. Accordingly, the all PMD statistics members
[[BO'M]] Typo: "the all" should be "all the".
> > are moved from the main struct dp_netdev_pmd_thread into a dedicated
> > member of type struct pmd_perf_stats.
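[[BO'M]] For my own orientation (and perhaps worth capturing as comments in
dpif-netdev-perf.h, given Ilya's point above), this is roughly the shape I
picture. The names are guesses from the call sites in this patch, not
necessarily what v5 actually defines:

    /* lib/dpif-netdev-perf.h (sketch, assumed names). */
    enum pmd_stat_type {
        PMD_STAT_RECV,          /* Packets received (assumed). */
        PMD_STAT_RECIRC,        /* Recirculation passes (assumed). */
        PMD_STAT_EXACT_HIT,     /* EMC hits (assumed). */
        PMD_STAT_MASKED_HIT,    /* Megaflow (dpcls) hits. */
        PMD_STAT_MASKED_LOOKUP, /* Subtable lookups for megaflow hits. */
        PMD_STAT_MISS,          /* Misses leading to a successful upcall. */
        PMD_STAT_LOST,          /* Misses whose upcall failed. */
        PMD_N_STATS
    };

    struct pmd_perf_stats {
        uint64_t counters[PMD_N_STATS];
    };

    /* lib/dpif-netdev.c (sketch). */
    struct dp_netdev_pmd_thread {
        /* ... existing members ... */
        struct pmd_perf_stats perf_stats; /* Replaces the old stats fields. */
    };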
> >
> > Include Darrel's prior refactoring of PMD stats contained in [PATCH
> > v5,2/3] dpif-netdev: Refactor some pmd stats:
> >
> > 1. The cycles per packet counts are now based on packets received
> > rather than packet passes through the datapath.
[[BO'M]] I.e. a packet that is recirculated was previously counted as two 
packets but is now counted just as one? It would be worth confirming that in 
the commit message.
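If that is the right reading, the arithmetic I expect behind items 1 and 3 is
roughly the sketch below (counter and variable names are my assumptions, not
necessarily the v5 ones; 's' is the thread's struct pmd_perf_stats):

    uint64_t rx     = s->counters[PMD_STAT_RECV];   /* Packets received. */
    uint64_t recirc = s->counters[PMD_STAT_RECIRC]; /* Recirc. passes.   */

    /* Item 3: a packet recirculated once makes two datapath passes but
     * counts as one received packet. */
    double passes_per_pkt = rx ? (double) (rx + recirc) / rx : 0.0;

    /* Item 1: processing cycles are now divided by packets received
     * rather than by passes ('busy_cycles' stands for whatever cycle
     * counter the module keeps). */
    double cycles_per_pkt = rx ? (double) busy_cycles / rx : 0.0;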
> >
> > 2. Packet counters are now kept for packets received and packets
> > recirculated. These are kept as separate counters for maintainability
> > reasons. The cost of incrementing these counters is negligible.  These
> > new counters are also displayed to the user.
> >
> > 3. A display statistic is added for the average number of datapath
> > passes per packet. This should be useful for user debugging and
> > understanding of packet processing.
> >
> > 4. The user visible 'miss' counter is used for successful upcalls,
> > rather than the sum of sucessful and unsuccessful upcalls. Hence, this
[[BO'M]] Typo: "sucessful" -> "successful".
> > becomes what user historically understands by OVS 'miss upcall'.
> > The user display is annotated to make this clear as well.
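[[BO'M]] To confirm I have the new mapping right (from the 
fast_path_processing changes further down): miss = upcall_ok_cnt, 
lost = upcall_fail_cnt, and masked hits = cnt - upcall_ok_cnt - 
upcall_fail_cnt, so the displayed 'miss' no longer includes failed upcalls.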
> >
<snip>
> 
> > +            int error = handle_packet_upcall(pmd, packet, &keys[i],
> > +                           &actions, &put_actions);
> > +
> > +            if (OVS_UNLIKELY(error)) {
> > +                upcall_fail_cnt++;
> > +            } else {
> > +                upcall_ok_cnt++;
> > +            }
> 
> Also, 'error' is not used. How about just:
> 
>              if (!handle_packet_upcall(pmd, packet, &keys[i],
>                                        &actions, &put_actions)) {
>                  upcall_ok_cnt++;
>              } else {
>                  upcall_fail_cnt++;
>              }
> 
> 
> >          }
[[BO'M]] I find the original much more natural to read. Unless the alternative 
is shown to compile to something more efficient, I think most readers of the 
code would prefer the original long-hand version.
> >
> >          ofpbuf_uninit(&actions);
> > @@ -5212,8 +5148,7 @@ fast_path_processing(struct dp_netdev_pmd_thread *pmd,
> >          DP_PACKET_BATCH_FOR_EACH (packet, packets_) {
> >              if (OVS_UNLIKELY(!rules[i])) {
> >                  dp_packet_delete(packet);
> > -                lost_cnt++;
> > -                miss_cnt++;
> > +                upcall_fail_cnt++;
> >              }
> >          }
> >      }
> > @@ -5231,10 +5166,14 @@ fast_path_processing(struct dp_netdev_pmd_thread *pmd,
> >          dp_netdev_queue_batches(packet, flow, &keys[i].mf, batches, n_batches);
> >      }
> >
> > -    dp_netdev_count_packet(pmd, DP_STAT_MASKED_HIT, cnt - miss_cnt);
> > -    dp_netdev_count_packet(pmd, DP_STAT_LOOKUP_HIT, lookup_cnt);
> > -    dp_netdev_count_packet(pmd, DP_STAT_MISS, miss_cnt);
> > -    dp_netdev_count_packet(pmd, DP_STAT_LOST, lost_cnt);
> > +    pmd_perf_update_counter(&pmd->perf_stats, PMD_STAT_MASKED_HIT,
> > +                            cnt - upcall_ok_cnt - upcall_fail_cnt);
> > +    pmd_perf_update_counter(&pmd->perf_stats, PMD_STAT_MASKED_LOOKUP,
> > +                            lookup_cnt);
> > +    pmd_perf_update_counter(&pmd->perf_stats, PMD_STAT_MISS,
> > +                            upcall_ok_cnt);
> > +    pmd_perf_update_counter(&pmd->perf_stats, PMD_STAT_LOST,
> > +                            upcall_fail_cnt);
> >  }
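[[BO'M]] The call sites above read well. For reference, the accessor I would 
expect behind them is something like the sketch below (signature inferred 
from these calls only, so treat it as an assumption rather than the actual v5 
code):

    /* dpif-netdev-perf.h (sketch). */
    static inline void
    pmd_perf_update_counter(struct pmd_perf_stats *s,
                            enum pmd_stat_type counter, int delta)
    {
        s->counters[counter] += delta;
    }

A one-line comment per counter in dpif-netdev-perf.h along these lines would 
also go a long way towards Ilya's readability concern.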
> >
> >  /* Packets enter the datapath from a port (or from recirculation) here.
> > diff --git a/tests/pmd.at b/tests/pmd.at index e39a23a..0356f87 100644
> > --- a/tests/pmd.at
> > +++ b/tests/pmd.at
> > @@ -170,13 +170,16 @@ dummy@ovs-dummy: hit:0 missed:0
> >             p0 7/1: (dummy-pmd: configured_rx_queues=4,
> > configured_tx_queues=<cleared>, requested_rx_queues=4,
> > requested_tx_queues=<cleared>)
> >  ])
> >
> > -AT_CHECK([ovs-appctl dpif-netdev/pmd-stats-show | sed
> > SED_NUMA_CORE_PATTERN | sed '/cycles/d' | grep pmd -A 5], [0], [dnl
> > +AT_CHECK([ovs-appctl dpif-netdev/pmd-stats-show | sed
> > +SED_NUMA_CORE_PATTERN | sed '/cycles/d' | grep pmd -A 8], [0], [dnl
> >  pmd thread numa_id <cleared> core_id <cleared>:
> > +   packets received:0
> > +   packet recirculations:0
> > +   avg. datapath passes per packet:0.00
> >     emc hits:0
> >     megaflow hits:0
> > -   avg. subtable lookups per hit:0.00
> > -   miss:0
> > -   lost:0
> > +   avg. subtable lookups per megaflow hit:0.00
> > +   miss(i.e. lookup miss with success upcall):0
> > +   lost(i.e. lookup miss with failed upcall):0
> >  ])
> >
> >  ovs-appctl time/stop
> > @@ -197,13 +200,16 @@ AT_CHECK([cat ovs-vswitchd.log |
> > filter_flow_install | strip_xout], [0], [dnl
> > recirc_id(0),in_port(1),packet_type(ns=0,id=0),eth(src=50:54:00:00:00:
> > 77,dst=50:54:00:00:01:78),eth_type(0x0800),ipv4(frag=no), actions:
> > <del>
> >  ])
> >
> > -AT_CHECK([ovs-appctl dpif-netdev/pmd-stats-show | sed
> > SED_NUMA_CORE_PATTERN | sed '/cycles/d' | grep pmd -A 5], [0], [dnl
> > +AT_CHECK([ovs-appctl dpif-netdev/pmd-stats-show | sed
> > +SED_NUMA_CORE_PATTERN | sed '/cycles/d' | grep pmd -A 8], [0], [dnl
> >  pmd thread numa_id <cleared> core_id <cleared>:
> > +   packets received:20
> > +   packet recirculations:0
> > +   avg. datapath passes per packet:1.00
> >     emc hits:19
> >     megaflow hits:0
> > -   avg. subtable lookups per hit:0.00
> > -   miss:1
> > -   lost:0
> > +   avg. subtable lookups per megaflow hit:0.00
> > +   miss(i.e. lookup miss with success upcall):1
> > +   lost(i.e. lookup miss with failed upcall):0
> >  ])
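[[BO'M]] If I am reading the new columns right, the numbers here are 
self-consistent: 19 emc hits + 0 megaflow hits + 1 miss + 0 lost = 20 packets 
received, and with 0 recirculations the avg. datapath passes per packet is 
20/20 = 1.00. Good that the test covers the new derived value.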
> >
> >  OVS_VSWITCHD_STOP
> >