> You might want to reduce random_page_cost a little.
> Keep in mind that your test case is small enough to fit in RAM and is
> probably not reflective of what will happen with larger tables.
I am also running 8.0 rc1 for Windows. Despite many hours spent tweaking
various planner cost constants, I found little effect on cost estimates. Even
reducing random_page_cost from 4.0 to 0.1 had negligible impact and failed to
significantly influence the planner.
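For anyone who wants to repeat this experiment: the cost constants can be
changed per session, so you can compare the planner's estimates side by side
without editing postgresql.conf. A sketch (table and column names are made up
for illustration):

```sql
-- Compare estimated plans under two settings of random_page_cost.
-- 'people' and 'last_name' are hypothetical names.
SHOW random_page_cost;                  -- default is 4.0

EXPLAIN SELECT * FROM people WHERE last_name = 'Smith';

SET random_page_cost = 0.1;             -- session-local, nothing persisted
EXPLAIN SELECT * FROM people WHERE last_name = 'Smith';

RESET random_page_cost;
```

In my case the estimated costs barely moved between the two EXPLAIN outputs,
which is what I mean by "negligible impact" above.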
Increasing the statistics target for the last_name column to 250 or so *may*
help, at least if you're only selecting one name at a time. That's the standard
advice around here and the only thing I've found useful. Half the threads on
this list are about under-used indexes. It would be great if someone would
admit the planner is broken and talk about actually fixing it!
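Concretely, the statistics-target advice amounts to this (again with the
hypothetical 'people' table from my test case):

```sql
-- Raise the per-column statistics target, then re-ANALYZE so the
-- planner picks up the larger histogram / MCV list.
ALTER TABLE people ALTER COLUMN last_name SET STATISTICS 250;
ANALYZE people;

-- The row-count estimate in EXPLAIN should now track reality better:
EXPLAIN SELECT * FROM people WHERE last_name = 'Smith';
```

It only helps when the bad plan comes from a bad row estimate, though, not
when the cost model itself is off.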
I'm unconvinced that the planner favours sequential scans only when the table
is small enough to fit in RAM. In my experience so far, larger tables have the
same problem; it's just more noticeable.
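One way to tell whether the planner is actually wrong, rather than preferring
the sequential scan for a good reason, is to force the index scan and time
both plans. A sketch, again with hypothetical names:

```sql
-- Time the plan the planner chose...
EXPLAIN ANALYZE SELECT * FROM people WHERE last_name = 'Smith';

-- ...then disable seq scans for this session and time the index plan.
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT * FROM people WHERE last_name = 'Smith';
RESET enable_seqscan;

-- If the forced index scan is genuinely faster, the cost model is
-- mis-estimating; if not, the seq scan was the right call after all.
```

On my larger tables the forced index scan wins, which is why I say the
problem doesn't go away as tables grow.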
The issue hits PostgreSQL harder than others because of its poor sequential
scan speed, which is two to five times slower than that of other DBMSs. The
archives show this has been discussed for years, apparently without a
solution. The obvious thing to consider is the block size, but people who have
tried increasing it in the past report only marginal gains.
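For the record, the block size isn't a runtime setting in 8.0; it's compiled
in. Roughly what changing it involves, assuming a source build (path as I
recall it from the 8.0 tree):

```
# BLCKSZ is defined in src/include/pg_config_manual.h, e.g.:
#   #define BLCKSZ 8192
# Change it to another power of two (say 32768), then rebuild:
./configure && make && make install
# A new BLCKSZ means a new on-disk format: you must initdb and
# reload your data before the server will start.
```

Which is presumably part of why so few people have experimented with it.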
---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?