On Tue, Oct 22, 2013 at 4:00 AM, Gavin Flower wrote:
>> I have a proof of concept patch somewhere that does exactly this. I
>> used logarithmic bin widths. With 8 log10 bins you can tell the
>> fraction of queries running at each order of magnitude from less than
>> 1ms to more than 1000s. Or with 31 bins you can cover factor of 2
>> increments from 100us to over 27h. And the code is almost trivial,
>> just take a log of the duration and calculate the bin number from that
>> and increment the value in the corresponding bin.
> I suppose this has to be decided at compile time to keep the code both
> simple and efficient - if so, I like the binary approach.
For efficiency's sake it can easily be done at run time; one extra
logarithm calculation per query will not be noticeable. Having a
proper user interface to make it configurable and changeable is where
the complexity is. We might just decide to go with something good
enough, as even the 31-bin solution would bloat the pg_stat_statements
data structure by only about 10%.
> Curious, why start at 100us? I suppose this might be of interest if
> everything of note is in RAM and/or stuff is on SSD's.
Selecting a single row takes about 20us on my computer, so I picked
100us as a reasonable limit below which the exact speed doesn't matter.
Cybertec Schönig & Schönig GmbH
A-2700 Wiener Neustadt
Sent via pgsql-hackers mailing list (email@example.com)