On 2017-01-18 17:32, Kevin Traynor wrote:
On 01/18/2017 01:34 AM, Daniele Di Proietto wrote:
2017-01-17 11:43 GMT-08:00 Kevin Traynor <[email protected]>:
On 01/17/2017 05:43 PM, Ciara Loftus wrote:
Instead of counting all polling cycles as processing cycles, only count
the cycles where packets were received from the polling.
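In other words, roughly (an illustrative sketch with hypothetical names, not
the actual dpif-netdev code):

    #include <stdint.h>
    #include <x86intrin.h>              /* __rdtsc() */

    enum pmd_cycles { PMD_CYCLES_IDLE, PMD_CYCLES_PROCESSING, PMD_N_CYCLES };

    static uint64_t cycle_counters[PMD_N_CYCLES];

    /* Stand-in for the rx poll + packet processing path; returns the
     * number of packets the poll produced. */
    static int poll_rx(void) { return 0; }

    static void
    poll_and_account(void)
    {
        uint64_t start = __rdtsc();
        int n_rx = poll_rx();
        uint64_t delta = __rdtsc() - start;

        /* The change under review: empty polls are charged to an idle
         * counter instead of being counted as processing. */
        cycle_counters[n_rx ? PMD_CYCLES_PROCESSING : PMD_CYCLES_IDLE] += delta;
    }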
This makes these stats much clearer. One minor comment below; other than that,
Acked-by: Kevin Traynor <[email protected]>
Signed-off-by: Georg Schmuecking <[email protected]>
Signed-off-by: Ciara Loftus <[email protected]>
Co-authored-by: Ciara Loftus <[email protected]>
Minor: the Co-authored-by tag should name someone other than the main author.
This makes it easier to understand how busy a pmd thread is, which is a valid
question that a sysadmin might have.
The counters were originally introduced to help developers understand how cycles
are spent between drivers (netdev rx) and datapath processing (dpif).
Do you think it's OK to lose this type of information? Perhaps it is, since a
developer can also use a profiler; I'm not sure.
Maybe we could keep 'last_cycles' as it is and introduce a separate counter to
get the idle/busy ratio. I'm not 100% sure this is the best way.
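Something like this, perhaps (a hypothetical sketch, not a concrete proposal):

    #include <stdint.h>

    struct pmd_cycles {
        uint64_t total;   /* everything, as 'last_cycles' accumulates today */
        uint64_t busy;    /* new: successful polls plus processing */
    };

    static double
    busy_ratio(const struct pmd_cycles *c)
    {
        /* Idle falls out as total - busy, so one extra counter suffices. */
        return c->total ? (double) c->busy / c->total : 0.0;
    }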
What do you guys think?
I've only ever used the current stats for trying to estimate if polling
was getting packets or not, so the addition of an idle stat helps that.
I like your suggestion of having all three stats, so then it would be
something like:
polling unsuccessful (idle)
polling successful (got pkts)
processing pkts
That would keep the info for a developer and it could help with initial
debugging if pkt rates drop on a pmd.
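As a sketch (hypothetical names, assuming the poll cost can be measured
separately from the processing cost):

    #include <stdint.h>
    #include <x86intrin.h>              /* __rdtsc() */

    enum pmd_cycles {
        PMD_CYCLES_POLL_IDLE,   /* polling unsuccessful (idle) */
        PMD_CYCLES_POLL_BUSY,   /* polling successful (got pkts) */
        PMD_CYCLES_PROCESSING,  /* processing pkts */
        PMD_N_CYCLES
    };

    static uint64_t counters[PMD_N_CYCLES];

    /* Stand-ins for the rx poll and the datapath. */
    static int poll_port(void) { return 0; }
    static void process_packets(int n_rx) { (void) n_rx; }

    static void
    pmd_iteration(void)
    {
        uint64_t t0 = __rdtsc();
        int n_rx = poll_port();
        uint64_t t1 = __rdtsc();

        /* Poll cost goes to idle or busy depending on the outcome... */
        counters[n_rx ? PMD_CYCLES_POLL_BUSY
                      : PMD_CYCLES_POLL_IDLE] += t1 - t0;
        if (n_rx) {
            /* ...and the datapath cost is accounted for separately. */
            process_packets(n_rx);
            counters[PMD_CYCLES_PROCESSING] += __rdtsc() - t1;
        }
    }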
Kevin.
From an operational perspective, the most important data is clearly the
fraction of busy cycles. Any additional breakdown of busy cycles is
debatable. We have always wondered why Rx cost was accounted for
separately in the current code, while Tx cost was included in the
processing. That didn't make much sense to us.
A developer should be able to split the busy cycles between Rx polling,
processing (parsing, EMC lookup, dpcls lookup, upcall(!), actions) and
Tx to port by analysing "perf top" output, as we have done in the
analysis for our performance patches, or using a fancier profiler.
One additional metric that would be interesting to see in
pmd_stats_show, however, is the average number of packets per batch
polled from a port (or recirculated).
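That could be as cheap as two extra counters, e.g. (a hypothetical sketch,
names illustrative):

    #include <stdint.h>

    static uint64_t pkts_total;     /* sum of batch sizes */
    static uint64_t batches_total;  /* number of non-empty polls */

    static void
    account_batch(int n_rx)
    {
        if (n_rx > 0) {
            pkts_total += n_rx;
            batches_total++;
        }
    }

    /* The stats output would then print
     *     avg pkts/batch = pkts_total / (double) batches_total
     * guarding against batches_total == 0. */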
Regards, Jan