Tom Lane wrote:
> Stefan Kaltenbrunner <[EMAIL PROTECTED]> writes:
>> Tom Lane wrote:
>>> Apparently we've made the planner a bit too optimistic about the savings
>>> that can be expected from repeated indexscans occurring on the inside of
>>> a join.
>>
>> effective_cache_size was set to 10GB (my fault for copying over the conf
>> from a 16GB box) during the run - lowering it just a few megabytes(!) or
>> to a more realistic 6GB results in the following MUCH better plan:
>> http://www.kaltenbrunner.cc/files/dbt3_explain_analyze2.txt
>
> Interesting.  It used to be that effective_cache_size wasn't all that
> critical... what I think this report is showing is that with the 8.2
> changes to try to account for caching effects in repeated indexscans,
> we've turned that into a pretty significant parameter.

Yes, I'm a bit worried about that too - it has been a bit of "conventional
wisdom" that setting effective_cache_size optimistically will never hurt,
and that it encourages PostgreSQL to sometimes arrive at a better plan by
favouring index scans.
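For anyone wanting to reproduce this kind of comparison: effective_cache_size
is a per-session GUC, so one can test different assumptions without touching
postgresql.conf. A sketch against DBT-3-style (TPC-H-like) tables - the query
itself is just illustrative, not the one from the report:

```sql
-- Compare the plans the optimizer picks under two cache-size assumptions.
SET effective_cache_size = '10GB';
EXPLAIN ANALYZE
SELECT count(*)
FROM orders o
JOIN lineitem l ON l.l_orderkey = o.o_orderkey;

SET effective_cache_size = '6GB';
EXPLAIN ANALYZE
SELECT count(*)
FROM orders o
JOIN lineitem l ON l.l_orderkey = o.o_orderkey;

RESET effective_cache_size;  -- back to the server default
```

The SET only affects the current session, so it's a cheap way to see whether
the parameter flips the plan before changing the server-wide setting.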


> It'd be nice not to have to depend on the DBA to give us a good number
> for this setting.  But I don't know of any portable ways to find out
> how much RAM is in the box, let alone what fraction of it we should
> assume is available per-query.

Well, there are really a number of settings for which the DBA would do
better to give the database accurate information - though in that case we
might go from "too much won't hurt" to "too much will hurt" ...


Stefan

