Re: [HACKERS] explain analyze rows=%.0f
On Mon, 2009-06-01 at 20:30 -0700, Ron Mayer wrote:
> What I'd find strange about 6.67 rows in your example is more that on
> the estimated rows side, it seems to imply an unrealistically precise
> estimate, in the same way that 667 rows would seem unrealistically
> precise to me. Maybe rounding to 2 significant digits would reduce
> confusion?

You're right that the number of significant digits already exceeds the
true accuracy of the computation. I think what Robert wants to see is
the exact value used in the calc, so the estimates can be checked more
thoroughly than is currently possible.

--
Simon Riggs
www.2ndQuadrant.com
PostgreSQL Training, Services and Support

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] explain analyze rows=%.0f
On Jun 2, 2009, at 9:41 AM, Simon Riggs si...@2ndquadrant.com wrote:
> On Mon, 2009-06-01 at 20:30 -0700, Ron Mayer wrote:
>> What I'd find strange about 6.67 rows in your example is more that on
>> the estimated rows side, it seems to imply an unrealistically precise
>> estimate, in the same way that 667 rows would seem unrealistically
>> precise to me. Maybe rounding to 2 significant digits would reduce
>> confusion?
>
> You're right that the number of significant digits already exceeds the
> true accuracy of the computation. I think what Robert wants to see is
> the exact value used in the calc, so the estimates can be checked more
> thoroughly than is currently possible.

Bingo.

...Robert
Re: [HACKERS] explain analyze rows=%.0f
Robert Haas robertmh...@gmail.com writes:
> On Jun 2, 2009, at 9:41 AM, Simon Riggs si...@2ndquadrant.com wrote:
>> You're right that the number of significant digits already exceeds the
>> true accuracy of the computation. I think what Robert wants to see is
>> the exact value used in the calc, so the estimates can be checked more
>> thoroughly than is currently possible.
>
> Bingo.

Uh, the planner's estimate *is* an integer. What was under discussion
(I thought) was showing some fractional digits in the case where EXPLAIN
ANALYZE is outputting a measured row count that is an average over
multiple loops, and therefore isn't necessarily an integer. In that
case the measured value can be considered arbitrarily precise --- though
I think in practice one or two fractional digits would be plenty.

regards, tom lane
Re: [HACKERS] explain analyze rows=%.0f
On Jun 2, 2009, at 10:38 AM, Tom Lane t...@sss.pgh.pa.us wrote:
> Robert Haas robertmh...@gmail.com writes:
>> On Jun 2, 2009, at 9:41 AM, Simon Riggs si...@2ndquadrant.com wrote:
>>> You're right that the number of significant digits already exceeds
>>> the true accuracy of the computation. I think what Robert wants to
>>> see is the exact value used in the calc, so the estimates can be
>>> checked more thoroughly than is currently possible.
>>
>> Bingo.
>
> Uh, the planner's estimate *is* an integer. What was under discussion
> (I thought) was showing some fractional digits in the case where
> EXPLAIN ANALYZE is outputting a measured row count that is an average
> over multiple loops, and therefore isn't necessarily an integer. In
> that case the measured value can be considered arbitrarily precise ---
> though I think in practice one or two fractional digits would be
> plenty.

We're in violent agreement here.

...Robert
Re: [HACKERS] explain analyze rows=%.0f
Euler Taveira de Oliveira wrote:
> Robert Haas wrote:
>> ...EXPLAIN ANALYZE reports the number of rows as an integer... Any
>> chance we could reconsider this decision? I often find myself wanting
>> to know the value that is here called ntuples, but rounding
>> ntuples/nloops off to the nearest integer loses too much precision.
>
> Don't you think it is too strange having, for example, 6.67 rows? It
> would confuse users and programs that parse the EXPLAIN output.
> However, I wouldn't object...

I don't think it's that confusing. If it says 0.1 rows, I imagine most
people would infer that this means typically 0, but sometimes 1 or a
few rows.

What I'd find strange about 6.67 rows in your example is more that on
the estimated rows side, it seems to imply an unrealistically precise
estimate, in the same way that 667 rows would seem unrealistically
precise to me. Maybe rounding to 2 significant digits would reduce
confusion?
Re: [HACKERS] explain analyze rows=%.0f
Joshua Tolley eggyk...@gmail.com writes:
> On Thu, May 28, 2009 at 11:12:42PM -0400, Robert Haas wrote:
>> On Thu, May 28, 2009 at 11:00 PM, Euler Taveira de Oliveira wrote:
>>> Don't you think it is too strange having, for example, 6.67 rows?
>>
>> No stranger than having it say 7 when it's really not. Actually mine
>> mostly come out 1 when the real value is somewhere between 0.5 and
>> 1.49. :-(
>
> +1. It would help users realize more quickly that some of the values
> in the EXPLAIN output are, for instance, the *average* number of rows
> *per iteration* of a nested loop, say, rather than the total rows
> found in all loops.

I think it would only be sensible to show fractional digits if nloops
is greater than 1. Otherwise the value must in fact be an integer, and
you're just going to confuse people more by suggesting that it might
not be.

regards, tom lane
Re: [HACKERS] explain analyze rows=%.0f
On Fri, May 29, 2009 at 1:30 PM, Tom Lane t...@sss.pgh.pa.us wrote:
> Joshua Tolley eggyk...@gmail.com writes:
>> +1. It would help users realize more quickly that some of the values
>> in the EXPLAIN output are, for instance, the *average* number of rows
>> *per iteration* of a nested loop, say, rather than the total rows
>> found in all loops.
>
> I think it would only be sensible to show fractional digits if nloops
> is greater than 1. Otherwise the value must in fact be an integer, and
> you're just going to confuse people more by suggesting that it might
> not be.

That might be over-engineering, but I'll take it.

...Robert
[HACKERS] explain analyze rows=%.0f
I have always assumed that there is some very good reason why EXPLAIN
ANALYZE reports the number of rows as an integer rather than a floating
point value, but in reading explain.c it seems that the reason is just
that we decided to round to zero decimal places. Any chance we could
reconsider this decision? I often find myself wanting to know the value
that is here called ntuples, but rounding ntuples/nloops off to the
nearest integer loses too much precision.

(Before someone mentions it, yes, that would be a good thing to include
in XML-formatted explain output. But I don't see that including a
couple of decimal places would hurt the text output format either.)

...Robert
Re: [HACKERS] explain analyze rows=%.0f
Robert Haas wrote:
> I have always assumed that there is some very good reason why EXPLAIN
> ANALYZE reports the number of rows as an integer rather than a
> floating point value, but in reading explain.c it seems that the
> reason is just that we decided to round to zero decimal places. Any
> chance we could reconsider this decision? I often find myself wanting
> to know the value that is here called ntuples, but rounding
> ntuples/nloops off to the nearest integer loses too much precision.

Don't you think it is too strange having, for example, 6.67 rows? It
would confuse users and programs that parse the EXPLAIN output. However,
I wouldn't object to adding ntuples to an extended explain output (as
discussed in the other thread).

--
Euler Taveira de Oliveira
http://www.timbira.com/
Re: [HACKERS] explain analyze rows=%.0f
On Thu, May 28, 2009 at 11:00 PM, Euler Taveira de Oliveira
eu...@timbira.com wrote:
> Robert Haas wrote:
>> I have always assumed that there is some very good reason why EXPLAIN
>> ANALYZE reports the number of rows as an integer rather than a
>> floating point value, but in reading explain.c it seems that the
>> reason is just that we decided to round to zero decimal places. Any
>> chance we could reconsider this decision? I often find myself wanting
>> to know the value that is here called ntuples, but rounding
>> ntuples/nloops off to the nearest integer loses too much precision.
>
> Don't you think it is too strange having, for example, 6.67 rows?

No stranger than having it say 7 when it's really not. Actually mine
mostly come out 1 when the real value is somewhere between 0.5 and
1.49. :-(

...Robert
Re: [HACKERS] explain analyze rows=%.0f
On Thu, May 28, 2009 at 11:12:42PM -0400, Robert Haas wrote:
> On Thu, May 28, 2009 at 11:00 PM, Euler Taveira de Oliveira wrote:
>> Don't you think it is too strange having, for example, 6.67 rows?
>
> No stranger than having it say 7 when it's really not. Actually mine
> mostly come out 1 when the real value is somewhere between 0.5 and
> 1.49. :-(

+1. It would help users realize more quickly that some of the values in
the EXPLAIN output are, for instance, the *average* number of rows *per
iteration* of a nested loop, say, rather than the total rows found in
all loops. That's an important distinction that isn't immediately clear
to the novice EXPLAIN reader, but would become so very quickly as users
tried to figure out how a scan could come up with a fractional row.

- Josh / eggyknap