Tom Lane wrote:

Relcache inval casts a fairly wide net; for example, adding or dropping an
index will invalidate all plans using the index's table whether or not
they used that particular index, and I believe that VACUUM will also
result in a relcache inval due to updating the table's pg_class row.
I think this is a good thing though --- for instance, after adding an
index it seems a good idea to replan to see if the new index is useful,
and replanning after a VACUUM is useful if the table has changed size
enough to warrant a different plan.  OTOH this might mean that plans on a
high-update-traffic table never survive very long because of autovacuum's
efforts.  If that proves to be a problem in practice we can look at ways
to dial down the number of replans, but for the moment I think it's more
important to be sure we *can* replan at need than to find ways to avoid replanning.

I remember there was discussion about invalidating plans whose estimated cost turns out to be severely off when executed. That is probably a more reliable metric than invalidating on every VACUUM (unless, of course, the amount of changed rows is considered), though it would probably put a fixed overhead on all relevant queries, so it might not be feasible. And since the check only happens after a query has already run longer than expected, at least one execution will in fact run slow rather than this being prevented from happening at all.
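As a rough illustration of the idea (this is just a hypothetical sketch in Python, not PostgreSQL code; the names and the cutoff factor are invented), the check after execution could look like:

```python
# Hypothetical sketch: invalidate a cached plan when its actual execution
# cost turns out to be severely off from the planner's estimate, forcing
# a replan on the next use of the prepared statement.

class CachedPlan:
    def __init__(self, estimated_cost):
        self.estimated_cost = estimated_cost
        self.valid = True

def check_after_execution(plan, actual_cost, factor=10.0):
    """After running the query, compare actual cost to the estimate.
    If it is off by more than `factor`, mark the plan invalid."""
    if actual_cost > plan.estimated_cost * factor:
        plan.valid = False
    return plan.valid

plan = CachedPlan(estimated_cost=100.0)
check_after_execution(plan, actual_cost=150.0)   # within bounds, stays valid
check_after_execution(plan, actual_cost=5000.0)  # severely off, invalidated
```

Note the drawback mentioned above is visible here: the slow execution has to happen once before the mismatch can be detected.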

Also, while not directly related, it might be something to keep in mind: it would be cool to support multiple plans for different sets of parameters, since obviously the data distribution, and therefore the optimal plan, will potentially vary greatly with different parameter values.
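One way this could work (again a hypothetical sketch, not anything PostgreSQL does; the bucketing by estimated selectivity and all names are invented for illustration) is to cache one plan per parameter class rather than a single generic plan:

```python
# Hypothetical sketch: keep one cached plan per "parameter class", so that
# skewed parameter values (e.g. a very common vs. a very rare key) can each
# get the plan that suits their estimated selectivity.

def classify(selectivity):
    # Bucket the estimated selectivity of the parameter value.
    return "narrow" if selectivity < 0.01 else "broad"

class PlanCache:
    def __init__(self, planner):
        self.planner = planner  # function: parameter class -> plan
        self.plans = {}

    def get_plan(self, selectivity):
        key = classify(selectivity)
        if key not in self.plans:
            self.plans[key] = self.planner(key)  # plan once per class
        return self.plans[key]

cache = PlanCache(lambda cls: f"{cls}-plan")
cache.get_plan(0.001)  # -> "narrow-plan" (think: index scan)
cache.get_plan(0.5)    # -> "broad-plan"  (think: seq scan)
```

The interesting design question is how to choose the classes, since bucketing by selectivity estimate already requires a partial planning step per execution.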


PS: I moved "Plan invalidation" to confirmed on the wishlist ..
