Peter Eisentraut <[EMAIL PROTECTED]> writes:
> I don't recall, has it ever been considered to compare the number of
> actual result rows against the estimate computed by the optimizer and
> then draw some conclusions from it?  Both numbers should be easily
> available.

It's been suggested, but doing anything with the knowledge that you
guessed wrong seems to be an AI project, the more so as the query gets
more complex.  I haven't been able to think of anything very productive
to do with such a comparison (no, I don't like any of your suggestions
;-)).  Which parameter should be tweaked on the basis of a bad result?
If the real problem is not a bad parameter but a bad model, will the
tweaker remain sane, or will it drive the parameters to completely
ridiculous values?

The one thing that we *can* recommend unreservedly is running ANALYZE
more often, but that's just a DB administration issue, not something
you need deep study of the planner results to discover.  In 7.2, both
VACUUM and ANALYZE should be sufficiently cheap/noninvasive that people
can just run them in background every hour-or-so...

			regards, tom lane
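[Editor's note: the comparison Peter proposes, and the one use Tom endorses (a cue to run ANALYZE more often), could be sketched roughly as below. This is a hypothetical illustration, not anything in PostgreSQL; the function names, the symmetric-ratio heuristic, and the threshold of 10x are all assumptions.]

```python
# Hypothetical sketch: compare the planner's estimated row counts against
# the actual row counts, and flag tables whose estimates are badly off as
# candidates for a fresh ANALYZE.  All names and thresholds are illustrative
# assumptions -- this is not a PostgreSQL API.

def misestimate_ratio(estimated_rows: float, actual_rows: float) -> float:
    """Symmetric ratio >= 1; large values mean the planner guessed badly."""
    est = max(estimated_rows, 1.0)   # clamp to avoid division by zero
    act = max(actual_rows, 1.0)
    return max(est / act, act / est)

def tables_needing_analyze(node_stats, threshold=10.0):
    """node_stats: iterable of (table_name, estimated_rows, actual_rows).

    Returns the tables whose estimate was off by more than `threshold`x,
    which may indicate stale statistics.
    """
    return sorted({t for t, est, act in node_stats
                   if misestimate_ratio(est, act) >= threshold})

# Example: made-up (estimate, actual) pairs for three tables.
stats = [("orders", 100, 95),        # good estimate
         ("customers", 10, 5000),    # off by 500x -> stale statistics?
         ("items", 2000, 1800)]
print(tables_needing_analyze(stats))  # -> ['customers']
```

Note that this only recovers the "run ANALYZE more often" advice; it deliberately does not try to tweak planner parameters from the error, for exactly the reasons given above.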