(2014/01/22 9:34), Simon Riggs wrote:
I think this was already discussed, and the conclusion was that it would be too heavy. If we implement a histogram in pg_stat_statements, we have to add double-precision arrays to store the histogram data, and on every statement we would have to update those arrays, searching for where the response time falls relative to the smallest and biggest buckets. That is a very big cost: it assumes a lot of memory, and the update is too heavy compared with the benefit we get from it. So I added only stddev, to keep pg_stat_statements as fast as the current version. As you say, I got agreement from several people on this.
AFAICS, all that has happened is that people have given their opinions
and we've got almost the same patch back, with a rush-rush
comment to commit even though we've waited months. If you submit a
patch, then you need to listen to feedback and be clear about what you
will do next; if you don't, people will learn to ignore you and nobody
OK, testing DBT-2 now. However, the benchmark's margin of error may exceed 1%, so I
will share the detailed HTML results.
On 21 January 2014 21:19, Peter Geoghegan <p...@heroku.com> wrote:
On Tue, Jan 21, 2014 at 11:48 AM, Simon Riggs <si...@2ndquadrant.com> wrote:
I agree with people saying that stddev is better than nothing at all,
so I am inclined to commit this, in spite of the above.
I could live with stddev. But we really ought to be investing in
making pg_stat_statements work well with third-party tools. I am very
wary of enlarging the counters structure, because it is protected by a
spinlock. There has been no attempt to quantify that cost, nor has
anyone even theorized that it is not likely to be appreciable.
OK, Kondo, please demonstrate benchmarks that show we have <1% impact
from this change. Otherwise we may need a config parameter to allow
NTT Open Source Software Center