On 2/23/11 7:10 AM, Robert Haas wrote:
> IME, most bad query plans are caused by either incorrect
> estimates of selectivity, or wrongheaded notions about what's likely
> to be cached.  If we could find a way, automated or manual, of
> providing the planner some better information about the facts of life
> in those areas, I think we'd be way better off.  I'm open to ideas
> about what the best way to do that is.
As previously discussed, I'm fine with approaches which involve modifying
database objects. These are auditable and centrally manageable, and aren't
devastating to upgrades. So things like the proposed "CREATE SELECTIVITY"
would be OK in a way that decorating queries would not.

Similarly, I would love to be able to set a "cache %" on a per-relation
basis, and override the whole dubious calculation involving
random_page_cost for scans of that table.

The great thing about object decorations is that we could then collect
data through the performance list on which ones worked and which didn't,
and then use that to improve the query planner. I doubt that would work
with query decorations.

--
Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
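For context, the closest thing PostgreSQL already offers to the per-relation cost override described above is the per-tablespace cost setting; the sketch below shows that real facility next to a purely hypothetical per-relation decoration in the spirit of the proposal. The tablespace name `fast_ssd`, the table name `orders`, and the `cache_fraction` option are all illustrative assumptions, not existing objects or syntax:

```sql
-- Real, existing facility (PostgreSQL 9.0+): override planner page
-- costs for everything stored in a given tablespace. "fast_ssd" is
-- an assumed tablespace name.
ALTER TABLESPACE fast_ssd SET (random_page_cost = 1.1);
ALTER TABLESPACE fast_ssd SET (seq_page_cost = 1.0);

-- Hypothetical per-relation "cache %" decoration in the spirit of
-- the proposal; this syntax does NOT exist in PostgreSQL. It would
-- tell the planner to assume ~90% of this table is cached, bypassing
-- the random_page_cost-based guess for scans of it.
-- ALTER TABLE orders SET (cache_fraction = 0.9);
```

Because such a decoration would live on the table rather than in individual queries, it would show up in dumps and catalog queries, which is what makes it auditable and centrally manageable in the way the post argues for.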