On 2015-02-21 16:09:02 -0500, Andrew Dunstan wrote:
I think all the outstanding issues are fixed in this patch.
Do you plan to push this? I don't see a benefit in delaying things
any further...
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
On 21/02/15 22:09, Andrew Dunstan wrote:
On 02/16/2015 09:05 PM, Petr Jelinek wrote:
I found one more issue with the 1.2--1.3 upgrade script, the DROP
FUNCTION pg_stat_statements(); should be DROP FUNCTION
pg_stat_statements(bool); since in 1.2 the function identity has changed.
I think
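The shape of the fix being described, as a hypothetical fragment of the 1.2--1.3 upgrade script (illustrative, not the committed file): since 1.2 changed the function's signature, the DROP must name the bool argument to match the existing object.

```sql
-- hypothetical sketch of the corrected upgrade step
DROP FUNCTION pg_stat_statements(bool);  -- matches the 1.2 signature
-- DROP FUNCTION pg_stat_statements();   -- would match no object from 1.2
```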
On 1/20/15 6:32 PM, David G Johnston wrote:
In fact, as far as
the database knows, the values provided to this function do represent an
entire population and such a correction would be unnecessary. I guess it
boils down to whether future queries are considered part of the population
or
On 2015-02-17 15:50:39 +0100, Petr Jelinek wrote:
On 17/02/15 03:07, Petr Jelinek wrote:
On 17/02/15 03:03, Andrew Dunstan wrote:
On 02/16/2015 08:57 PM, Andrew Dunstan wrote:
Average of 3 runs of read-only pgbench on my system all with
pg_stat_statements activated:
HEAD: 20631
SQRT: 20533
On 21/01/15 17:32, Andrew Dunstan wrote:
On 01/21/2015 11:21 AM, Arne Scheffer wrote:
Why is it a bad thing to name the column stddev_samp, analogously to the
aggregate function, or to note in the documentation that the
sample stddev is used to compute the result?
...
But I will add a
On Mon, Feb 16, 2015 at 4:44 PM, Petr Jelinek p...@2ndquadrant.com wrote:
We definitely want this feature; I have wished for this info many times.
I would still like to see a benchmark.
--
Peter Geoghegan
On 01/21/2015 11:21 AM, Arne Scheffer wrote:
Why is it a bad thing to name the column stddev_samp, analogously to the
aggregate function, or to note in the documentation that the
sample stddev is used to compute the result?
I think you are making a mountain out of a molehill, frankly.
David G Johnston schrieb am 2015-01-21:
Andrew Dunstan wrote
On 01/20/2015 01:26 PM, Arne Scheffer wrote:
And a very minor aspect:
The term standard deviation in your code stands for
(corrected) sample standard deviation, I think,
because you divide by n-1 instead of n to keep the
I don't understand. I'm following pretty exactly the calculations
stated
at <http://www.johndcook.com/blog/standard_deviation/>
I'm not a statistician. Perhaps others who are more literate in
Maybe I'm mistaken here,
but I think the algorithm is not that complicated.
I try to explain it
Sorry, corrected second try because of copy-paste mistakes:
VlG-Arne
Comments appreciated.
Definition: var_samp = (sum of squared differences) / (n-1)
Definition: stddev_samp = sqrt(var_samp)
Example N=4
1.) Sum of squared differences
Sum_{i=1..4} (Xi - XM4)²
=
2.) adding nothing
Sum_{i=1..4} (Xi - XM4)²
On 01/20/2015 01:26 PM, Arne Scheffer wrote:
Interesting patch.
I did a quick review looking only into the patch file.
The sum of variances variable contains
the sum of squared differences instead, I think.
Umm, no. It's not.
e->counters.sum_var_time +=
(total_time - old_mean) *
Andrew Dunstan wrote
On 01/20/2015 01:26 PM, Arne Scheffer wrote:
And a very minor aspect:
The term standard deviation in your code stands for
(corrected) sample standard deviation, I think,
because you divide by n-1 instead of n to keep the
estimator unbiased.
How about mentioning the
Andrew Dunstan schrieb am 2015-01-20:
On 01/20/2015 01:26 PM, Arne Scheffer wrote:
Interesting patch.
I did a quick review looking only into the patch file.
The sum of variances variable contains
the sum of squared differences instead, I think.
Umm, no. It's not.
Umm, yes, I think, it
Interesting patch.
I did a quick review looking only into the patch file.
The sum of variances variable contains
the sum of squared differences instead, I think.
And a very minor aspect:
The term standard deviation in your code stands for
(corrected) sample standard deviation, I think,
because
On 04/07/2014 04:19 PM, Tom Lane wrote:
Alvaro Herrera alvhe...@2ndquadrant.com writes:
I just noticed that this patch not only adds min,max,stddev, but it also
adds the ability to reset an entry's counters. This hasn't been
mentioned in this thread at all; there has been no discussion on
Andrew Dunstan wrote:
On my blog Peter Geoghegan mentioned something about atomic fetch-and-add
being useful here, but I'm not quite sure what that's referring to. Perhaps
someone can give me a pointer.
The point, I think, is that without atomic instructions you have to hold
a lock while
Andrew Dunstan and...@dunslane.net writes:
On 12/21/2014 01:23 PM, Alvaro Herrera wrote:
The point, I think, is that without atomic instructions you have to hold
a lock while incrementing the counters.
Hmm, do we do that now?
We already have a spinlock mutex around the counter adjustment
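What atomic fetch-and-add would buy can be sketched with C11 atomics (an illustration of the alternative under discussion, not the pg_stat_statements code): the increment itself becomes lock-free, with no spinlock held.

```c
#include <stdatomic.h>

/* Illustrative counter: bumped with an atomic fetch-and-add, which on
 * most platforms compiles to a single locked instruction (e.g. x86
 * LOCK XADD), so no separate spinlock is needed for the increment. */
static atomic_long calls;

void
count_call(void)
{
    atomic_fetch_add_explicit(&calls, 1, memory_order_relaxed);
}

long
read_calls(void)
{
    return atomic_load_explicit(&calls, memory_order_relaxed);
}
```

The catch for this patch is that fetch-and-add applies to integer counters; the timing fields are doubles, so some locking may be needed for them regardless.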
KONDO Mitsumasa wrote:
Hi all,
I think this patch has been completely forgotten, which feels very unfortunate :(
Min, max, and stdev are basic statistics in general monitoring tools,
so I'd like to push it.
I just noticed that this patch not only adds min,max,stddev, but it also
adds the ability to
Alvaro Herrera alvhe...@2ndquadrant.com writes:
I just noticed that this patch not only adds min,max,stddev, but it also
adds the ability to reset an entry's counters. This hasn't been
mentioned in this thread at all; there has been no discussion on whether
this is something we want to have,
Robert Haas robertmh...@gmail.com writes:
On Mon, Apr 7, 2014 at 4:19 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Alvaro Herrera alvhe...@2ndquadrant.com writes:
What it does is add a new function pg_stat_statements_reset_time() which
resets the min and max values from all function's entries,
Hi all,
I think this patch has been completely forgotten, which feels very unfortunate :(
Min, max, and stdev are basic statistics in general monitoring tools,
so I'd like to push it.
(2014/02/12 15:45), KONDO Mitsumasa wrote:
(2014/01/29 17:31), Rajeev rastogi wrote:
No Issue, you can share me the
(2014/02/17 21:44), Rajeev rastogi wrote:
It got compiled successfully on Windows.
Thank you for checking on Windows! It is very helpful for me.
Can we add all three statistics? I think stdev alone is hard for users to
interpret. But if there are min and max, we can picture each
Hi Rajeev,
(2014/01/29 17:31), Rajeev rastogi wrote:
No Issue, you can share me the test cases, I will take the performance report.
Attached patch supports the latest pg_stat_statements. It includes min, max,
and stdev statistics. Could you run a compile test on your Windows environment?
I
On Fri, Jan 31, 2014 at 5:07 AM, Mitsumasa KONDO
kondo.mitsum...@gmail.com wrote:
And past results show that your patch's weakest point is that the cost of
deleting the oldest statement and inserting a new one is very high, as you know.
No, there is no reason to imagine that entry_dealloc()
Peter Geoghegan p...@heroku.com writes:
On Fri, Jan 31, 2014 at 5:07 AM, Mitsumasa KONDO
kondo.mitsum...@gmail.com wrote:
It increasingly affects the update (delete and insert) cost in the
pg_stat_statements table. So you proposed a new default max value of 10k.
But it is not essential
On Wed, Jan 29, 2014 at 8:48 PM, KONDO Mitsumasa
kondo.mitsum...@lab.ntt.co.jp wrote:
I'd like to know the truth and the fact in your patch, rather than your
argument:-)
I see.
[part of sample sqls file, they are executed in pgbench]
SELECT
KONDO Mitsumasa kondo.mitsum...@lab.ntt.co.jp writes:
(2014/01/30 8:29), Tom Lane wrote:
If we felt that min/max were of similar value to stddev then this
would be mere nitpicking. But since people seem to agree they're
worth less, I'm thinking the cost/benefit ratio isn't there.
Why do you
Peter Geoghegan p...@heroku.com writes:
On Wed, Jan 29, 2014 at 8:48 PM, KONDO Mitsumasa
kondo.mitsum...@lab.ntt.co.jp wrote:
method   | try1  | try2  | try3  | degrade performance ratio
---------+-------+-------+-------+--------------------------
method 1 | 6.546 | 6.558 | 6.638 |
I wrote:
If I understand this test scenario properly, there are no duplicate
queries, so that each iteration creates a new hashtable entry (possibly
evicting an old one). And it's a single-threaded test, so that there
can be no benefit from reduced locking.
After looking more closely, it's
Robert Haas robertmh...@gmail.com writes:
One could test it with each pgbench thread starting at a random point
in the same sequence and wrapping at the end.
Well, the real point is that 1 distinct statements all occurring with
exactly the same frequency isn't a realistic scenario: any
BTW ... it occurs to me to wonder if it'd be feasible to keep the
query-texts file mmap'd in each backend, thereby reducing the overhead
to write a new text to about the cost of a memcpy, and eliminating the
read cost in pg_stat_statements() altogether. It's most likely not worth
the trouble; but
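In miniature, the idea looks something like this (illustrative POSIX C with most error handling omitted; not PostgreSQL code): once the texts file is mapped, appending a text is a memcpy into the mapping and reading one back is a pointer dereference.

```c
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Illustrative helper: store a query text into an mmap'd file and read
 * it back through the mapping; returns 0 when the readback matches. */
int
mmap_roundtrip(const char *path, const char *text)
{
    size_t len = strlen(text) + 1;
    int    fd  = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);

    if (fd < 0)
        return -1;
    ftruncate(fd, (off_t) len);          /* reserve space in the file */

    char *map = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    if (map == MAP_FAILED)
    {
        close(fd);
        return -1;
    }
    memcpy(map, text, len);              /* "write" is just a memcpy */
    int ok = strcmp(map, text);          /* "read" is a pointer dereference */

    munmap(map, len);
    close(fd);
    unlink(path);
    return ok;
}
```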
On Thu, Jan 30, 2014 at 12:32 PM, Tom Lane t...@sss.pgh.pa.us wrote:
In reality, actual applications
could hardly be further from the perfectly uniform distribution of
distinct queries presented here.
Yeah, I made the same point in different words. I think any realistic
comparison of this
On 01/29/2014 02:58 AM, Peter Geoghegan wrote:
I am not opposed in principle to adding new things to the counters
struct in pg_stat_statements. I just think that the fact that the
overhead of installing the module on a busy production system is
currently so low is of *major* value, and
On 01/29/2014 04:55 AM, Simon Riggs wrote:
It is possible that adding this extra straw breaks the camel's back,
but it seems unlikely to be that simple. A new feature to help
performance problems will be of a great use to many people; complaints
about the performance of pg_stat_statements are
On 01/29/2014 11:54 AM, Robert Haas wrote:
I agree. I find it somewhat unlikely that pg_stat_statements is
fragile enough that these few extra counters are going to make much of
a difference. At the same time, I find min and max a dubious value
proposition. It seems highly likely to me that
On Wed, Jan 29, 2014 at 6:06 AM, Andrew Dunstan and...@dunslane.net wrote:
Importance is in the eye of the beholder. As far as I'm concerned, min and
max are of FAR less value than stddev. If stddev gets left out I'm going to
be pretty darned annoyed, especially since the benchmarks seem to
Andrew Dunstan and...@dunslane.net writes:
I could live with just stddev. Not sure others would be so happy.
FWIW, I'd vote for just stddev, on the basis that min/max appear to add
more to the counter update time than stddev does; you've got
this:
+ e->counters.total_sqtime +=
(2014/01/29 17:31), Rajeev rastogi wrote:
On 28th January, Mitsumasa KONDO wrote:
By the way, the latest pg_stat_statements might affect performance on
Windows systems, because it uses an fflush() system call every time a new
entry is created in pg_stat_statements, and it calls fread() many times to
warm the file cache.
KONDO Mitsumasa kondo.mitsum...@lab.ntt.co.jp writes:
By the way, the latest pg_stat_statements might affect performance on Windows
systems, because it uses an fflush() system call every time a new entry is
created in pg_stat_statements, and it calls fread() many times to warm the
file cache.
This statement doesn't
(2014/01/23 23:18), Andrew Dunstan wrote:
What is more, if the square root calculation is affecting your benchmarks, I
suspect you are benchmarking the wrong thing.
I ran another test with two pgbench clients at the same time: one is a
select-only query and another is executing 'SELECT *
On 01/27/2014 08:48 AM, Mitsumasa KONDO wrote:
The issue of concern is not the performance of pg_stat_statements,
AIUI. The issue is whether this patch affects performance
generally, i.e. is there a significant cost in collecting these
extra stats. To test this you would
On Mon, Jan 27, 2014 at 4:45 AM, Andrew Dunstan and...@dunslane.net wrote:
I personally don't give a tinker's cuss about whether the patch slows down
pg_stat_statements a bit.
Why not? The assurance that the overhead is generally very low is what
makes it possible to install it widely usually
On Mon, Jan 27, 2014 at 2:01 PM, Andrew Dunstan and...@dunslane.net wrote:
I care very much what the module does to the performance of all statements.
But I don't care much if selecting from pg_stat_statements itself is a bit
slowed. Perhaps I didn't express myself as clearly as I could have.
On 21 January 2014 19:48, Simon Riggs si...@2ndquadrant.com wrote:
On 21 January 2014 12:54, KONDO Mitsumasa kondo.mitsum...@lab.ntt.co.jp
wrote:
Rebased patch is attached.
Does this fix the Windows bug reported by Kumar on 20/11/2013 ?
Please respond.
--
Simon Riggs
On 23 January 2014 12:43, KONDO Mitsumasa kondo.mitsum...@lab.ntt.co.jp wrote:
I tested my patch in pgbench, but I cannot find a bottleneck in my latest
patch.
...
Attached is the latest development patch. It hasn't been tested much yet, but
the sqrt calculation may be faster.
Thank you for reworking
(2014/01/22 9:34), Simon Riggs wrote:
AFAICS, all that has happened is that people have given their opinions
and we've got almost the same identical patch, with a rush-rush
comment to commit even though we've waited months. If you submit a
patch, then you need to listen to feedback and be clear
On Wed, Jan 22, 2014 at 3:32 AM, KONDO Mitsumasa
kondo.mitsum...@lab.ntt.co.jp wrote:
OK, Kondo, please demonstrate benchmarks that show we have <1% impact
from this change. Otherwise we may need a config parameter to allow
the calculation.
OK, testing DBT-2 now. However, error range of
On Wed, Jan 22, 2014 at 5:28 PM, KONDO Mitsumasa
kondo.mitsum...@lab.ntt.co.jp wrote:
Oh, thanks for informing me. I think the essential problem of my patch is a
bottleneck in the sqrt() function and other division calculations.
Well, that's a pretty easy theory to test. Just stop calling them (and
do
Rebased patch is attached.
pg_stat_statements in PG9.4dev has already changed its table columns, so I
hope this patch will be committed in PG9.4dev.
Regards,
--
Mitsumasa KONDO
NTT Open Source Software Center
*** a/contrib/pg_stat_statements/pg_stat_statements--1.1--1.2.sql
---