On 3/30/15 10:52 AM, Tom Lane wrote:
> Kevin Grittner <kgri...@ymail.com> writes:
>> Tom Lane <t...@sss.pgh.pa.us> wrote:
>>> But the other problem is that the planner considers less-than-1%
>>> differences in cost estimates to be "in the noise", which means
>>> that it's not going to consider cost differences of less than
>>> 1480 units in the remaining join steps to be significant.  This
>>> is how come we end up with the apparently brain-dead decisions to
>>> use seqscans on some of the other tables such as "pi" and "ac":
>>> comparing the seqscan to a potential inner indexscan, the total
>>> cost of the join is "the same" according to the 1% rule,
>> The 1% rule itself might be something to add to the R&D list.

> Perhaps.  But it does make for a significant difference in planner speed,
> and I would argue that any case where it really hurts is by definition
> a cost estimation failure somewhere else.
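To make the rule concrete for anyone following along, here's a minimal
sketch in the spirit of the planner's fuzzy path cost comparison.  It's
not the actual PostgreSQL source; the identifiers are illustrative:

/*
 * Sketch of a 1% fuzzy cost comparison: two costs are treated as
 * "the same" unless one exceeds the other by more than FUZZ_FACTOR.
 */
#include <stdio.h>

typedef double Cost;

#define FUZZ_FACTOR 1.01    /* treat <1% differences as noise */

/* Returns -1 if a is meaningfully cheaper, +1 if b is, 0 if "the same". */
static int
compare_costs_fuzzily(Cost a, Cost b)
{
    if (a > b * FUZZ_FACTOR)
        return +1;          /* b wins by more than the fuzz */
    if (b > a * FUZZ_FACTOR)
        return -1;          /* a wins by more than the fuzz */
    return 0;               /* within the noise: considered equal */
}

int
main(void)
{
    /* At total costs near 148000, anything within ~1480 units "ties". */
    printf("%d\n", compare_costs_fuzzily(148000.0, 149000.0)); /* 0 */
    printf("%d\n", compare_costs_fuzzily(148000.0, 151000.0)); /* -1 */
    return 0;
}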

What I wish we had was some way to represent "confidence" in the
accuracy of a specific plan node's estimate, with the goal of avoiding
plans that cost out slightly cheaper but will blow up spectacularly if
we guessed wrong somewhere.  Nested loops are the classic example: if
you miscalculate the rowcount on either side by very much, you can end
up with a real mess, unless the rowcounts were pretty trivial to begin
with.
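To sketch what I mean (nothing like this exists in the planner today;
every name and number below is made up): attach a confidence to each
estimate and blend the expected cost with a worst-case cost, so that
fragile plans get penalized:

/*
 * Hypothetical risk-adjusted costing.  A plan node carries an expected
 * cost, a "blowup" cost if the rowcount guess is badly wrong, and a
 * confidence in the estimate.  All illustrative, not real planner code.
 */
#include <stdio.h>

typedef struct
{
    double expected_cost;   /* the usual cost estimate */
    double blowup_cost;     /* cost if the rowcount guess is badly wrong */
    double confidence;      /* 0..1: how much we trust the estimate */
} RiskyCost;

/* Blend expected and worst-case cost by how little we trust the estimate. */
static double
risk_adjusted_cost(RiskyCost c)
{
    return c.confidence * c.expected_cost
         + (1.0 - c.confidence) * c.blowup_cost;
}

int
main(void)
{
    /* Nested loop: slightly cheaper as estimated, catastrophic if wrong. */
    RiskyCost nestloop = { 1000.0, 500000.0, 0.90 };
    /* Hash join: a bit more expensive, but degrades gracefully. */
    RiskyCost hashjoin = { 1200.0,   5000.0, 0.90 };

    printf("nestloop: %.0f\n", risk_adjusted_cost(nestloop));  /* 50900 */
    printf("hashjoin: %.0f\n", risk_adjusted_cost(hashjoin));  /* 1580 */
    return 0;
}

With numbers like those, the nested loop's small expected-cost edge
disappears as soon as you account for how badly it degrades.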
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com

