> Generally using a very small sample

That is another issue. Inventing some other algorithm to replace the current
"cache after 5 executions" is a separate effort.

However, I suggest that we "learn" from what the client is sending. You
suggest that we completely ignore that and just prepare for the case where
the client will send "a random value". Why expect the client to stop sending
MCVs if we have already seen them during the previous 5 executions?

> That'd not change with the change you propose.

It will. In my suggestion, the first "explain analyze execute" will match the
finally cached plan, provided the plan is not treated in a special way
(e.g. replanned every time, etc).

> That a prepared statement suddenly performs way differently
> depending on which the first bind values are is not, in any way, easier
> to debug.

It is way easier to debug, since *the first* execution plan you get out of
"explain" *matches* the one that will finally be used.

Lots of developers are simply not aware of the "5 replans by the backend".
Many of the rest confuse it with the "5 non-server-prepared executions by the
pgjdbc driver".

Put another way: to identify the root cause of a slow query, you find the
bind values. Then you run explain analyze and see a shiny fast plan. Does
that immediately ring bells that you need to execute it 6 times to ensure
the plan would still be good? Well, try being someone not as smart as you are
when answering this question.

2) In my suggestion, "the first execution would be likely to match the plan".

VS>> 3) What about "client sends the top most common value 5 times in a row"?
VS>> Why assume "it will stop doing that"?

AF> If 20% of your values are nonunique and the rest is unique you'll get
AF> *drastically* different plans, each performing badly for the other case;
AF> with the unique cardinality plan being extremly bad.

Can you elaborate a bit? I can hardly follow that.

Vladimir

-- 
Sent via pgsql-hackers mailing list (email@example.com)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
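[Editor's illustration] The "5 replans by the backend" behavior debated above can be sketched as follows. This is a minimal Python model of the heuristic, not the actual plancache.c code: the server plans the first five executions with bind-specific ("custom") plans, then switches to a cached generic plan if its estimated cost is no worse than the average custom cost. All class/function names and the cost numbers are illustrative assumptions.

```python
CUSTOM_PLAN_TRIES = 5  # the "5 replans" the thread refers to

class PreparedStatement:
    """Toy model of the backend's custom-vs-generic plan choice."""

    def __init__(self, generic_cost, custom_cost_fn):
        self.generic_cost = generic_cost      # cost of the parameter-agnostic plan
        self.custom_cost_fn = custom_cost_fn  # cost of a plan built for given binds
        self.num_custom = 0
        self.total_custom_cost = 0.0

    def choose_plan(self, binds):
        # First five executions: always replan with the actual bind values.
        if self.num_custom < CUSTOM_PLAN_TRIES:
            cost = self.custom_cost_fn(binds)
            self.num_custom += 1
            self.total_custom_cost += cost
            return "custom"
        # Afterwards: prefer the cached generic plan unless custom planning
        # has been cheaper on average so far.
        avg_custom = self.total_custom_cost / self.num_custom
        return "generic" if self.generic_cost <= avg_custom else "custom"

# A client that sends the top MCV every time: custom plans cost 100 each,
# while the generic plan is estimated slightly cheaper at 90.
stmt = PreparedStatement(generic_cost=90.0, custom_cost_fn=lambda binds: 100.0)
plans = [stmt.choose_plan({"v": "most_common"}) for _ in range(6)]
# plans == ["custom"] * 5 + ["generic"]: the plan flips on the 6th execution
```

This is the scenario Vladimir describes: explain analyze on the first execution shows the custom plan, yet a sixth execution silently switches to the generic one, even though the client never stopped sending the MCV.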
However, I suggest to "learn" from what client is sending. You suggest to completely ignore that and just prepare for the case he/she will send "a random value". Why expect client would stop sending MCVs if we have already seen them during previous 5 executions? > That'd not change with the change you propose. It will. In my suggestion, the first "explain analyze execute" will match the "finally cached plan" provided the plan is not treated in a special way (e.g. replan every time, etc). > That a prepared statement suddenly performs way differently >depending on which the first bind values are is not, in any way, easier >to debug. It is way easier to debug since *the first* execution plan you get out of "explain" *matches* the one that will finally be used. Lots of developers are just not aware of "5 replans by backend". Lots of the remaining confuse it with "5 non-server-prepared executions by pgjdbc driver". In other way: in order to identify a root cause of a slow query you find bind values. Then you perform explain analyze and you find shiny fast plan. Does that immediately ring bells that you need to execute it 6 times to ensure the plan would still be good? Well, try being someone not that smart as you are when you answering this question. 2) In my suggestion, "the first execution would likely to match the plan". VS>> 3) What about "client sends top most common value 5 times in a row"? VS>> Why assume "it will stop doing that"? AF>If 20% of your values are nonunique and the rest is unique you'll get AF>*drastically* different plans, each performing badly for the other case; AF>with the unique cardinality plan being extremly bad. Can you elaborate a bit? I can hardly follow that. Vladimir -- Sent via pgsql-hackers mailing list (email@example.com) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers