On 22/10/13 13:26, Ants Aasma wrote:
> On Tue, Oct 22, 2013 at 1:09 AM, Alvaro Herrera
> <alvhe...@2ndquadrant.com> wrote:
>> Gavin Flower wrote:
>>>
>>> One way it could be done, but even this would consume far too much
>>> storage and processing power (hence totally impractical), would be
>>> to 'simply' store a counter for each value found and increment it
>>> for each occurrence...
>>
>> A histogram?  Sounds like a huge lot of code complexity to me.  Not
>> sure the gain is enough.
>
> I have a proof of concept patch somewhere that does exactly this. I
> used logarithmic bin widths. With 8 log10 bins you can tell the
> fraction of queries running at each order of magnitude, from less than
> 1ms to more than 1000s. Or with 31 bins you can cover factor-of-2
> increments from 100us to over 27h. The code is almost trivial: just
> take the log of the duration, calculate the bin number from it, and
> increment the value in the corresponding bin.
>
> Ants Aasma
That might be useful in determining whether things are sufficiently bad to be worth investigating in more detail. No point in tuning stuff that is behaving acceptably.

It would also be good enough to say that 95% of queries execute within 5 seconds (or whatever).


Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)