Thanks for this, Jakub. I've added some comments in-line

On 02/28/2018 08:17 AM, Jakub Sitnicki wrote:
> On Tue, 20 Feb 2018 16:13:30 -0600
> Mark Michelson <mmich...@redhat.com> wrote:

>> This set of commits adds a new library for OVS that allows for measuring
>> the performance of operations in OVS and compiling statistics from these
>> measurements.
>>
>> For developers, this can provide a measurement of something that is
>> either finer or coarser-grained than what is easily measured with a
>> profiler.

> At the risk of sounding like a broken record - I think we do need tests for this
> new module. Otherwise verifying the calculations is difficult.
>
> So here are patches that add tests. Feel free to incorporate them into the
> series, reuse the code, or use it just for development.

> Some problems that I've noticed while reviewing/testing it:
>
> 1. Return status from start_sample/end_sample is always true, so it's not very
>    useful. It also led me to think that the sample had been processed
>    correctly, so you might even say that it is confusing.

This is a holdover from the first version of the patch. I will change the return type to void.


> 2. Minimum value is not getting initialized, so it is always 0. See FIXMEs in
>    the tests.

Ouch. In my manual tests, there were always some quick iterations at the beginning that registered as 0, so the 0 minimum always seemed accurate. I'll get this corrected.


> 3. The calculation of the 95th percentile seems to be quite off for a simple
>    population of 10 samples ({ 1, 2, ..., 10 }). I'm not familiar with the P^2
>    method that was used, so maybe it's expected. Worth checking. There is a
>    FIXME in the tests.

I've double-checked the P-square paper, and the authors discuss their findings on the algorithm's accuracy. Based on the data in that paper, you'll likely get better accuracy once you have more samples (>50 is a good estimate). When I was performing manual tests, I was usually collecting around 200 samples, and the P-square value compared favorably to the actual 95th percentile.

For the next version of the patch, I'll set up a minimum sample threshold that must be reached before the P-square estimate is reported. Tentatively I'll say 50 samples, but testing may result in a different value.


> 4. It is not possible to use 0 as an initial timestamp. Seems like a weird
>    limitation.

There's a safety check when calculating an ending timestamp that ensures a measurement has actually been started. Currently that works by comparing the starting timestamp to 0. A better approach would be a separate boolean that is set to true while a measurement is in progress.


> 5. The module calculates statistics over durations of time intervals. Is the
>    name 'performance' adequate for something as specific?

That's a good point. I don't have any good alternative names in mind though.


> Thanks,
> Jakub
>
> ---
>
> Jakub Sitnicki (3):
>    performance: Add API for retrieving calculated statistics
>    performance: Add API for waiting until samples have been processed
>    tests: Add tests for performance library module
>
>  lib/performance.c        |  52 ++++++++++++++++
>  lib/performance.h        |  16 +++++
>  tests/automake.mk        |   3 +-
>  tests/library.at         |   5 ++
>  tests/test-performance.c | 150 +++++++++++++++++++++++++++++++++++++++++++++++
>  5 files changed, 225 insertions(+), 1 deletion(-)
>  create mode 100644 tests/test-performance.c
>
> --
> 2.14.3


_______________________________________________
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev