Greg Smith wrote:
There are also some severe query plan stability issues with this idea, beyond that. The notion that your plan might vary based on execution latency, that query plans could shift as system load rises, is terrifying for a production server.

I thought I was clear that it should present some stats to the DBA, not that it would try to auto-tune. This thread started, I believe, with a discussion of appropriate settings for random page cost vs. sequential page cost, derived from a finger-in-the-air estimate of total database size vs. available disk cache. And it was observed that systems with very large databases but a modest hot data set can perform like fully cached systems much of the time.
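(To make that finger-in-the-air heuristic concrete: when the hot data fits comfortably in RAM, a random page fetch costs about the same as a sequential one, so random_page_cost can be lowered toward seq_page_cost. A minimal sketch, where the 1.1 is purely illustrative and not a recommendation:

    -- Settings for a system behaving as mostly cached; values are
    -- illustrative only.  SET shows the per-session form; the same
    -- GUCs can also go in postgresql.conf.
    SET seq_page_cost = 1.0;     -- baseline: one sequential page fetch
    SET random_page_cost = 1.1;  -- near seq_page_cost when reads rarely
                                 -- touch the platters; the default of
                                 -- 4.0 assumes a largely uncached database

)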

I'm just suggesting providing statistical information to the DBA: something that indicates whether the system has 'recently' been behaving like one that runs from the buffer cache and/or subsystem caches, or like one that runs from the disk platters, and what the actual observed latency difference is. It may well be that this varies with the time of day or the day of the week. Whether the observed latencies translate directly into the relative costs is another matter.
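Something along those lines can already be approximated from the existing statistics views, just by way of illustration. A sketch, assuming a version where track_io_timing exists and is enabled (blk_read_time reads as zero otherwise):

    -- Rough version of the suggested statistics, from pg_stat_database.
    SELECT datname,
           blks_hit,
           blks_read,
           round(blks_hit * 100.0 / nullif(blks_hit + blks_read, 0), 1)
               AS buffer_hit_pct,   -- high => running from buffer cache
           round((blk_read_time / nullif(blks_read, 0))::numeric, 3)
               AS avg_read_ms       -- low => reads served by OS/subsystem
                                    -- caches rather than platters
      FROM pg_stat_database
     WHERE datname = current_database();

Since these counters accumulate from the last stats reset, the DBA would have to sample and diff them periodically to see the time-of-day or day-of-week variation mentioned above.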



