Harry Broomhall <[EMAIL PROTECTED]> writes:

> -> Index Scan using import_cdrs_cdr_id_key on import_cdrs (cost=0.00..52.00
>    rows=1000 width=164) (actual time=0.42..11479.51 rows=335671 loops=1)
> -> Seq Scan on import_cdrs (cost=0.00..8496.71 rows=335671 width=126)
>    (actual time=0.15..9504.24 rows=335671 loops=1)

Hm.  The planner's default cost parameters assume that a full-table index
scan will be much slower than a full-table seq scan.  That's evidently not
the case in your test situation.  You could probably bring the estimates
more in line with reality (and thereby improve the choice of plan) by
reducing random_page_cost towards 1 and increasing effective_cache_size to
represent some realistic fraction of your available RAM (though I concur
with your observation that the latter doesn't change the estimates all
that much).

Beware however that test-case reality and production reality are not the
same thing.  You are evidently testing with tables that fit in RAM.  If
your production tables will not, you'd better be wary of being overly
aggressive about reducing random_page_cost.  I believe the default value
(4.0) is fairly representative for situations where many actual disk
fetches are needed, i.e., the tables are much larger than RAM.  1.0 would
be appropriate if all your tables are always fully cached in RAM (since
RAM has by definition no random-access penalty).  In intermediate cases
you need to select intermediate values.

			regards, tom lane

---------------------------(end of broadcast)---------------------------
TIP 6: Have you searched our list archives?

http://archives.postgresql.org
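
[Editor's note: a minimal sketch of trying the settings discussed above.
random_page_cost and effective_cache_size are real PostgreSQL parameters;
the specific values and the 1GB figure are illustrative assumptions, not
from the original post, and the unit syntax for effective_cache_size
varies by PostgreSQL version.]

```sql
-- Try a lower random_page_cost for this session only, before touching
-- postgresql.conf; 4.0 is the default, 1.0 models a fully cached database.
SET random_page_cost = 2.0;

-- effective_cache_size should reflect the RAM realistically available for
-- caching; 1GB here is a placeholder, not a recommendation.
SET effective_cache_size = '1GB';

-- Re-run the query to see which plan the planner now chooses and how the
-- cost estimates compare to actual times.
EXPLAIN ANALYZE SELECT * FROM import_cdrs;
```

Session-level SET makes it cheap to experiment: the changes vanish when
the connection closes, so nothing needs to be reverted if the new plan is
worse.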