1) regular-latency.png

I'm wondering whether it would be clearer if the percentiles were
relative to the largest sample, not to each sample's own size, so that
the figures from the largest one would still be between 0 and 1, but
the other (unpatched) one would go between 0 and 0.85, that is, it
would be cut short proportionally to the actual performance.

I'm not sure what you mean by 'relative to largest sample'?

You took 5% of the tx on two 12-hour runs, totaling say 85M tx on one and 100M tx on the other, so you get 4.25M tx from the first and 5M from the second.

I'm saying that the percentile should be computed against the largest one (5M), so that you get a curve like the following. Both curves then have the same transaction density on the y axis, and the second one does not go up to the top, reflecting that in this case fewer transactions were processed.

  +    ____----- # up to 100%
  |   /  ___---- # cut short
  |   | /
  |   | |
  | _/ /
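A minimal sketch of the normalization being proposed (my own illustration, not code from the thread; the function name and the synthetic latency data are assumptions): each run's cumulative fraction is divided by the sample count of the *largest* run, so the smaller run's curve tops out at 0.85 rather than 1.0.

```python
import numpy as np

def scaled_percentile_curve(latencies, reference_count):
    """Sorted latencies vs. cumulative fraction of the reference count.

    With reference_count = size of the largest sample, the largest run's
    curve reaches 1.0 while smaller runs are cut short proportionally.
    """
    xs = np.sort(np.asarray(latencies, dtype=float))
    ys = np.arange(1, len(xs) + 1) / reference_count
    return xs, ys

# Stand-in data mirroring the thread's ratio: 85 vs 100 (here 8500 vs
# 10000 points, for speed).  These are synthetic latencies, not real runs.
rng = np.random.default_rng(0)
run_a = rng.exponential(1.2, 8_500)    # the shorter (unpatched) run
run_b = rng.exponential(1.0, 10_000)   # the larger run
ref = max(len(run_a), len(run_b))

xa, ya = scaled_percentile_curve(run_a, ref)
xb, yb = scaled_percentile_curve(run_b, ref)
# yb ends at 1.0; ya ends at 8500/10000 = 0.85, as in the ASCII sketch.
```

Plotting (xa, ya) and (xb, yb) on the same axes then gives both curves the same transaction density on the y axis, matching the diagram above.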


Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)