On 30.08.2018 17:58, Tom Lane wrote:
> Alexander Korotkov <a.korot...@postgrespro.ru> writes:
>> On Thu, Aug 30, 2018 at 5:05 PM Tom Lane <t...@sss.pgh.pa.us> wrote:
>> Because it's what the mental model of startup cost says it should be.
>> From this model we conclude that we start getting rows from a
>> sequential scan sooner than from an index scan.  And this conclusion
>> doesn't reflect reality.
> No, startup cost is not the "time to find the first row".  It's overhead
> paid before you even get to start examining rows.
But it seems to me that the LIMIT node's cost calculation (in create_limit_path()) contradicts this statement:

            pathnode->path.startup_cost +=
                (subpath->total_cost - subpath->startup_cost)
                * offset_rows / subpath->rows;
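
To make the point concrete, here is a toy calculation using the same
formula (FakePath and all of the numbers below are made up purely for
illustration; they are not PostgreSQL's actual structures or estimates):

    #include <stdio.h>

    /* stripped-down stand-in for Path, keeping only the cost fields */
    typedef struct
    {
        double startup_cost;  /* cost before the first row can be returned */
        double total_cost;    /* cost to return all rows */
        double rows;          /* estimated number of rows */
    } FakePath;

    int
    main(void)
    {
        /* hypothetical subpath under LIMIT ... OFFSET 1000 */
        FakePath subpath = {0.0, 10000.0, 100000.0};
        double offset_rows = 1000.0;
        FakePath limitpath = subpath;

        /*
         * Same formula as in the snippet above: charge a pro-rata share of
         * the subpath's run cost (total_cost - startup_cost) for the rows
         * skipped by OFFSET, i.e. price the time needed to reach the first
         * returned row.
         */
        limitpath.startup_cost +=
            (subpath.total_cost - subpath.startup_cost)
            * offset_rows / subpath.rows;

        printf("startup cost after OFFSET: %.2f\n", limitpath.startup_cost);
        return 0;
    }

With these numbers the LIMIT path gets startup cost 100.00: 1% of the
scan's run cost for skipping 1% of its rows, which is exactly the
"time to find the first row" interpretation.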

> I'm disinclined to consider fundamental changes to our costing model
> on the basis of this example.  The fact that the rowcount estimates are
> so far off reality means that you're basically looking at "garbage in,
> garbage out" for the cost calculations --- and applying a small LIMIT
> just magnifies that.
>
> It'd be more useful to think first about how to make the selectivity
> estimates better; after that, we might or might not still think there's
> a costing issue.
>
>                         regards, tom lane


--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

