On Wed, Aug 3, 2011 at 12:19 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Dimitri Fontaine <dimi...@2ndquadrant.fr> writes:
>> Tom Lane <t...@sss.pgh.pa.us> writes:
>>> Anyone have an opinion about that?
>
>> I still have this application where PREPARE takes between 50ms and 300ms
>> and EXECUTE 5ms to 10ms, and I can handle 1 PREPARE for 10000 EXECUTE
>> quite easily.  (Yes, the database fits in RAM, and yes, when that's
>> no longer the case we just upgrade the hardware.)
>
>> What does your proposal mean for such a use case?
>
> Well, the policy for when to replan or not remains to be worked out in
> detail, but what is likely to happen for such cases is that we'll waste
> a few planning cycles before determining that there's no benefit in a
> custom plan.  So, using the worst-case ends of your ranges above and
> assuming that "a few" means "10", we'd go from 300 + 5 * 10000 = 50300
> ms to execute the query 10000 times, to 10 * 300 + 5 * 10000 = 53000 ms.
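
To make that arithmetic concrete, here is a minimal sketch of the
pattern in question (the table and query are hypothetical; the timings
are the worst-case figures quoted above):

    -- Planned once: ~300 ms in the worst case.
    PREPARE fetch_orders (int) AS
      SELECT * FROM orders WHERE customer_id = $1;

    -- ~5 ms per call, repeated 10000 times.
    EXECUTE fetch_orders(42);

    -- Plan once:      300 + 5 * 10000      = 50300 ms
    -- 10 trial plans: 10 * 300 + 5 * 10000 = 53000 ms, i.e. ~5% slower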

A little OT here, but (as I think Simon said elsewhere) I think we
really ought to be considering the table statistics when deciding
whether or not to replan.  It seems to me that the overwhelmingly
common case is going to be the one where (some subset of) the MCVs
require a different plan than run-of-the-mill values.  It would be
nice to work that out somehow.
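
As a rough illustration (the table and column names are made up), the
statistics in question are already visible in pg_stats, and the plans
really can diverge between an MCV and a run-of-the-mill value:

    -- Which values of orders.customer_id are MCVs, and how frequent
    -- are they?
    SELECT most_common_vals, most_common_freqs
    FROM pg_stats
    WHERE schemaname = 'public'
      AND tablename = 'orders'
      AND attname = 'customer_id';

    -- A rare value typically favors an index scan ...
    EXPLAIN SELECT * FROM orders WHERE customer_id = 12345;

    -- ... while an MCV matching a large fraction of the table may get
    -- a seqscan instead, which is exactly the case where a custom
    -- plan pays off.
    EXPLAIN SELECT * FROM orders WHERE customer_id = 1;

A replan heuristic could then compare the parameter value against
most_common_vals before deciding whether the generic plan is still
representative.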

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
