On 23/08/2013 23:55, Tom Lane wrote:
> Pavel Stehule <pavel.steh...@gmail.com> writes:
>> please, can you send a self-explanatory test?
>> this issue should be fixed, and we need examples.
> We already had a perfectly good example at the beginning of this thread.
> What's missing is a decision on how we ought to approximate the cost of
> planning (relative to execution costs).
>
> As I mentioned upthread, it doesn't seem unreasonable to me to do
> something quick-and-dirty based on the length of the plan's rangetable.
> Pretty nearly anything would fix these specific situations where the
> estimated execution cost is negligible. It's possible that there are
> more complicated cases where we'll need a more accurate estimate, but
> we've not seen an example of that yet.
>
> My previous suggestion was to estimate planning cost as
>     10 * (length(plan->rangetable) + 1)
> but on reflection it ought to be scaled by one of the cpu cost constants,
> so perhaps
>     1000 * cpu_operator_cost * (length(plan->rangetable) + 1)
> which'd mean a custom plan has to be estimated to save a minimum of
> about 5 cost units (more if more than 1 table is used) before it'll
> be chosen. I'm tempted to make the multiplier be 10000 not 1000,
> but it seems better to be conservative about changing the behavior
> until we see how well this works in practice.
>
> Objections, better ideas?
>
> 			regards, tom lane
No better idea as far as I'm concerned, of course :)
But it is a bit tricky to understand what is going on when you get
hit by this, and using even a rough approximation of the planning
cost seems the most logical fix to me. So I'm all for this solution.
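
For concreteness, here is a minimal standalone sketch of the arithmetic
behind the proposed estimate (not actual PostgreSQL source); 0.0025 is
PostgreSQL's default cpu_operator_cost, and the function name and the
rangetable lengths below are made up purely for illustration:

    /*
     * Sketch of the proposed planning-cost estimate:
     *     1000 * cpu_operator_cost * (length(rangetable) + 1)
     */
    #include <stdio.h>

    static double
    planning_cost_estimate(int rangetable_len, double cpu_operator_cost)
    {
        return 1000.0 * cpu_operator_cost * (rangetable_len + 1);
    }

    int
    main(void)
    {
        const double cpu_operator_cost = 0.0025;    /* PostgreSQL default */

        /* single-table query: custom plan must win by ~5 cost units */
        printf("1 relation:  %.2f\n",
               planning_cost_estimate(1, cpu_operator_cost));

        /* a five-way join is charged proportionally more */
        printf("5 relations: %.2f\n",
               planning_cost_estimate(5, cpu_operator_cost));

        return 0;
    }

This prints 5.00 and 15.00 with the defaults; with the 10000 multiplier
you mention those figures become 50 and 150, so starting at 1000 does
seem like the conservative choice.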