Hi Kevin,

Thanks for the feedback. See below.

> > @@ -3429,16 +3381,17 @@ dp_netdev_process_rxq_port(struct dp_netdev_pmd_thread *pmd,
> >      struct pmd_perf_stats *s = &pmd->perf_stats;
> >      struct dp_packet_batch batch;
> >      int error;
> > -    int batch_cnt = 0, output_cnt = 0;
> > +    int batch_cnt = 0;
> >
> >      dp_packet_batch_init(&batch);
> > -
> > -    cycles_count_intermediate(pmd, NULL, 0);
> >      pmd->ctx.last_rxq = rxq;
> > -    pmd->ctx.current_pmd_cycles_type = PMD_CYCLES_POLL_BUSY;
> > +
> > +    /* Measure duration for polling and processing rx burst. */
> > +    uint64_t cycles = cycles_counter(pmd);
> 
> hi Jan - not a full review. Consider that this function is called from
> dp_netdev_input()...dp_execute_cb(); if the code below is hit, the above
> measurement will lead to double counting.
> 
> <snip>
> #ifdef DPDK_NETDEV
>             if (OVS_UNLIKELY(!dp_packet_batch_is_empty(&p->output_pkts)
>                              && packets_->packets[0]->source
>                                 != p->output_pkts.packets[0]->source)) {
>                 /* XXX: netdev-dpdk assumes that all packets in a single
>                  *      output batch has the same source. Flush here to
>                  *      avoid memory access issues. */
>                 dp_netdev_pmd_flush_output_on_port(pmd, p);
>             }
> #endif
>             if (dp_packet_batch_size(&p->output_pkts)
>                 + dp_packet_batch_size(packets_) > NETDEV_MAX_BURST) {
>                 /* Flush here to avoid overflow. */
>                 dp_netdev_pmd_flush_output_on_port(pmd, p);
>             }
> </snip>
> 
> I wouldn't be too concerned about the first one because it's unlikely. I
> would be more concerned about the second one, as it is quite likely that
> multiple rxqs are sending to a single port and the cycles for tx could
> be significant. Take 2 rxqs sending packets to a vhost port: unless we
> get batches of 32 on each rxq, there will be double counting for one of
> the queues every time. There could be more cases like this too, as
> flushing is done from a lot of places. I didn't compare, but from memory
> I don't think this would be an issue in Ilya's patches.
> 
> start/intermediate/stop type functions might be better than storing
> cycles locally in functions, because you could stop and start the count
> from any function you need. In this case, you could avoid the double
> counting by something like stop/call
> dp_netdev_pmd_flush_output_on_port()/start. That might start to make the
> code more like Ilya's, so it's probably good to get his review.

You are right, I overlooked the possibility of a tx burst being triggered
immediately by executing an output action. This needs fixing, and I agree
that a scheme which can "suspend" an ongoing measurement at any time will
be needed for that. Let me have another look at the previous scheme to see
whether I can simplify it. If not, we can stick to the original solution.
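
Just to sketch what I have in mind (the names below are made up for
illustration and not taken from either patch set): a small helper that
keeps an accumulated total, so the rx measurement can be suspended around
a nested flush and resumed afterwards:

#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc(), x86-only stand-in for the PMD cycle counter. */

/* Hypothetical suspendable cycle timer, for illustration only. */
struct cycle_timer {
    uint64_t start;        /* TSC value at the last (re)start. */
    uint64_t accumulated;  /* Cycles counted while not suspended. */
};

static inline void
cycle_timer_start(struct cycle_timer *t)
{
    t->accumulated = 0;
    t->start = __rdtsc();
}

static inline void
cycle_timer_suspend(struct cycle_timer *t)
{
    /* Bank the cycles spent so far; nested work is not counted. */
    t->accumulated += __rdtsc() - t->start;
}

static inline void
cycle_timer_resume(struct cycle_timer *t)
{
    t->start = __rdtsc();
}

static inline uint64_t
cycle_timer_stop(struct cycle_timer *t)
{
    /* Must only be called while the timer is running (after start/resume). */
    return t->accumulated + (__rdtsc() - t->start);
}

With something like that, the flush path could do suspend / call
dp_netdev_pmd_flush_output_on_port() / resume, so the tx cycles end up
attributed to the output port rather than the polled rxq, which is
essentially the stop/call/start sequence you describe.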

> 
> >      error = netdev_rxq_recv(rxq->rx, &batch);
> >
> >      if (!error) {
> > +        /* At least one packet received. */
> >          *recirc_depth_get() = 0;
> >          pmd_thread_ctx_time_update(pmd);
> >          batch_cnt = batch.count;
> > @@ -3448,7 +3401,7 @@ dp_netdev_process_rxq_port(struct dp_netdev_pmd_thread *pmd,
> >              histogram_add_sample(&s->pkts_per_batch, batch_cnt);
> >              /* Update the maximum Rx queue fill level. */
> >              uint32_t qfill = batch.qfill;
> > -            switch (netdev_dpdk_get_type(netdev_rxq_get_netdev(rx))) {
> > +            switch (netdev_dpdk_get_type(netdev_rxq_get_netdev(rxq->rx))) {
> 
> It looks like this is fixing an error from a previous patch

Good spot. 