On Mon, 12 Apr 2004, Jeremy Dunn wrote:

>    explain analyze select count(*) from xxx where cid=6223341;
>    Aggregate  (cost=74384.19..74384.19 rows=1 width=0) (actual
> time=11614.89..11614.89 rows=1 loops=1)
>      ->  Index Scan using xxx_cid on emailrcpts  (cost=0.00..74329.26
> rows=21974 width=0) (actual time=35.75..11582.10 rows=20114 loops=1)
>    Total runtime: 11615.05 msec
> However for the values that have > 20,000 rows, the plan changes to a
> sequential scan, which is proportionately much slower.
>    explain analyze select count(*) from xxx where cid=7191032;
>    Aggregate  (cost=97357.61..97357.61 rows=1 width=0) (actual
> time=46427.81..46427.82 rows=1 loops=1)
>     ->   Seq Scan on xxx (cost=0.00..97230.62 rows=50792 width=0)
> (actual time=9104.45..46370.27 rows=37765 loops=1)
>     Total runtime: 46428.00 msec
> The question: why does the planner consider a sequential scan to be
> better for these top 10 values?  In terms of elapsed time it is more
> than twice as slow as an index scan would be, proportionally, for the
> same number of rows.

One thing to try is to set enable_seqscan=off, re-run the query, and compare
the estimated and actual costs of the forced index scan against the seqscan
plan above.  If the index scan's real cost is lower, it may be possible to
reduce random_page_cost to a still-reasonable value in order to move the
point at which the planner switches over to a seqscan.
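A session along these lines shows the idea; the cid value is taken from the
example above, and the random_page_cost setting of 2 is purely illustrative,
not a recommendation:

```sql
-- Disable seqscans for this session only, so the planner is forced
-- to use the index; compare its estimated vs. actual cost to the
-- seqscan plan shown earlier.
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT count(*) FROM xxx WHERE cid = 7191032;

-- Re-enable seqscans, then lower random_page_cost (default 4) to
-- make random index I/O look cheaper and shift the switchover point.
SET enable_seqscan = on;
SET random_page_cost = 2;
EXPLAIN ANALYZE SELECT count(*) FROM xxx WHERE cid = 7191032;
```

If the index-scan plan wins consistently at the lower setting, the change
can be made permanent in postgresql.conf.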

