There have been a number of planner improvement ideas that have been thrown out because of the overhead they would add to the planning process, specifically for queries that would otherwise be quite fast. Other databases seem to have dealt with this by creating plan caches (which might be worth doing for Postgres), but what if we could determine when we need a fast planning time versus when it won't matter?

What I'm thinking is that on the first pass through the planner, we only estimate things that we can do quickly. If the plan that falls out of that is below a certain cost/row threshold, we just run with that plan. If not, we go back and do a more detailed estimate.
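To make the idea concrete, here's a minimal sketch of that two-pass flow. All names here (plan_query, quick_plan, detailed_plan, FAST_PLAN_COST_THRESHOLD) are hypothetical and invented for illustration; this is not PostgreSQL code, just the shape of the heuristic.

```python
# Hypothetical sketch of two-pass planning: cheap estimates first,
# full estimation only when the quick plan looks expensive.

FAST_PLAN_COST_THRESHOLD = 1000.0  # assumed tunable knob, e.g. a GUC


def plan_query(query, quick_plan, detailed_plan):
    """Pass 1 uses only estimates we can compute quickly. If the plan
    that falls out is below the cost threshold, run with it; otherwise
    go back and plan again with detailed estimation."""
    plan = quick_plan(query)  # pass 1: cheap, approximate costing
    if plan["total_cost"] <= FAST_PLAN_COST_THRESHOLD:
        # Cheap query: extra planning overhead would dominate runtime.
        return plan
    # Expensive query: planning time is noise compared to execution,
    # so spending more effort on estimation should pay off.
    return detailed_plan(query)
```

The interesting part is picking the threshold: it effectively trades planning overhead on cheap queries against plan quality on expensive ones.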
--
Decibel!, aka Jim C. Nasby, Database Architect  deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828


