Simon Riggs <[EMAIL PROTECTED]> writes:
> Csaba raises a good point here. Many people say they want "hints" when
> what they actually require is for the plan to be both stable and predictable.
Plan stability is also an important feature, especially for OLTP systems with hard real-time requirements. OLTP systems typically don't care about getting the "best" plan for a query, only a plan that is "good enough". "Good enough" means the system can keep up with the rate of incoming requests; it doesn't matter whether it keeps up with 10% headroom or 20% headroom. But if even one query in a thousand takes 1000% of the time available, the entire system risks falling over.

Plan stability, though, is something that should be integrated directly into the server, not something achieved by having the user hint every query to defeat the optimizer.

What I'm working on for my own purposes is a perl script that takes all the queries in the application (either gathered from the log or stored statically) and runs EXPLAIN on each of them to record the current plan. Then, within a transaction, it runs ANALYZE on the database and EXPLAINs every query again. If any plans change, it mails them to the DBA and rolls back the transaction, discarding the new statistics. The DBA gets a chance to approve the new plans before they go into effect. (A rough sketch of such a script follows below the signature.)

That's more or less what I expect an integrated plan stability feature to do. It would be like a shared query plan cache, except that instead of being a cache it would be a database of plans specifically approved by the DBA. Queries that don't have an approved plan could be configured to produce either a warning or an error until the plan is approved.

--
greg
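For concreteness, here is a minimal sketch of the kind of script described above, written in Python with psycopg2 rather than perl. The connection string, the queries.sql file, and printing the plan diff instead of mailing it to the DBA are assumptions for illustration only, not part of the original proposal.

    # Minimal sketch: detect plan changes caused by fresh statistics, and let
    # the DBA decide whether to keep them.  Assumes trusted queries, one per
    # line in queries.sql (an assumption, not part of the original post).
    import psycopg2

    conn = psycopg2.connect("dbname=mydb")
    conn.autocommit = True          # each EXPLAIN below sees the current statistics
    cur = conn.cursor()

    # Queries gathered from the log or stored statically, one per line here.
    with open("queries.sql") as f:
        queries = [line.strip() for line in f if line.strip()]

    def plan_for(sql):
        """Return the plan text the optimizer currently chooses for a query."""
        cur.execute("EXPLAIN " + sql)
        return "\n".join(row[0] for row in cur.fetchall())

    # 1. Record the plan currently in effect for every query.
    old_plans = {q: plan_for(q) for q in queries}

    # 2. Inside a transaction, refresh the statistics and re-plan everything.
    conn.autocommit = False
    cur.execute("ANALYZE")

    changed = []
    for q in queries:
        new = plan_for(q)
        if new != old_plans[q]:
            changed.append((q, old_plans[q], new))

    if changed:
        # Some plan changed: report it (the real script would mail the DBA)
        # and throw the freshly gathered statistics away.
        conn.rollback()
        for q, old, new in changed:
            print(f"PLAN CHANGED for: {q}\n--- old ---\n{old}\n--- new ---\n{new}\n")
    else:
        # No plan changed: keep the new statistics.
        conn.commit()

    conn.close()

The one Postgres-specific point this relies on is that plain ANALYZE (unlike VACUUM) is allowed inside a transaction block, which is what makes the "roll back the new statistics" step possible.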