> OK, that makes sense; however, this doesn't:
> elon2=# explain analyse select * from call where aspid='123C' and
> OpenedDateTime between '2000-01-01 00:00:00.0' and '2004-06-24
> 23:59:59.999' order by aspid asc, openeddatetime asc, callstatus asc,
> calltype asc, callkey asc;

> I've modified the "order by" to reflect the call_idx13 index, yet the
> query still causes a sequence scan of the table.

The query shown above does not have a LIMIT, where the original one had
LIMIT 26. PostgreSQL has determined that pulling out all the table rows
and sorting them in memory is cheaper than walking the index and then
randomly fetching every matching table row in index order.

Normally, that estimate would be well on the mark: you can sort a large
number of tuples in the time it takes to do a single random disk seek.
But that does not appear to be true in your case.

Considering you're pulling out 450k rows in 8 seconds, I'd also guess
the data is mostly in memory. Is that normal? Or is this a result of
having run several test queries against the same data multiple times?

If it's normal, bump your effective_cache_size parameter higher to move
the sort vs. scan threshold.
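For example (the exact value is an assumption, not a recommendation; in
PostgreSQL of this vintage the setting is expressed in 8 kB disk pages,
so 131072 pages is roughly 1 GB):

```sql
-- Sketch only: tell the planner roughly how much of the database the
-- OS cache can hold.  131072 eight-kB pages ~ 1 GB of cached data.
SET effective_cache_size = 131072;

-- Or set it permanently in postgresql.conf:
--   effective_cache_size = 131072
```

A larger value makes index scans look cheaper relative to sequential
scans, since the planner assumes fewer index probes will hit the disk.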

> Again, that makes sense to me, but if I remove aspid from the query it
> still ignores the index....

You've changed two variables: you removed the aspid condition AND
removed the LIMIT. Add back the LIMIT 26 like you originally showed, and
it'll do what I described.
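That is, something along these lines (reconstructing your original
query; the LIMIT value is the one from your earlier post):

```sql
-- With a small LIMIT, the planner should prefer reading the first 26
-- rows straight off the matching index instead of sorting the table.
EXPLAIN ANALYZE
SELECT *
  FROM call
 WHERE aspid = '123C'
   AND OpenedDateTime BETWEEN '2000-01-01 00:00:00.0'
                          AND '2004-06-24 23:59:59.999'
 ORDER BY aspid, openeddatetime, callstatus, calltype, callkey
 LIMIT 26;
```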

> Setting enable_seqscan=off still doesn't cause the desired index to be
> selected:
> elon2=# explain analyse select * from call where aspid='123C' and
> OpenedDateTime between '2000-01-01 00:00:00.0' and '2004-06-24
> 23:59:59.999' order by aspid desc, openeddatetime desc, callstatus desc,
> calltype desc, callkey desc;
> -----------------------------------------------------------------------------------------------------------------
>  Sort  (cost=355314.41..356482.87 rows=467384 width=295) (actual
> time=33382.92..34088.10 rows=461973 loops=1)
>    Sort Key: aspid, openeddatetime, callstatus, calltype, callkey
>    ->  Index Scan using call_aspid on call  (cost=0.00..43430.25
> rows=467384 width=295) (actual time=0.24..7915.21 rows=461973 loops=1)
>          Index Cond: (aspid = '123C'::bpchar)
>          Filter: ((openeddatetime >= '2000-01-01 00:00:00-07'::timestamp
> with time zone) AND (openeddatetime <= '2004-06-24
> 23:59:59.999-07'::timestamp with time zone))
>  Total runtime: 39196.39 msec

I'm a little surprised at this. It should have done a backward index
scan and skipped the sort step. In fact, with a very simple SELECT, I get

rbt=# \d t
                  Table "public.t"
 Column |              Type              | Modifiers
--------+--------------------------------+-----------
 col1   | bpchar                         |
 col2   | timestamp(0) without time zone |
 col3   | integer                        |
 col4   | integer                        |
 col5   | integer                        |
Indexes:
    "t_idx" btree (col1, col2, col3, col4, col5)

rbt=# set enable_seqscan = false;
rbt=# explain analyze select * from t order by col1 desc, col2 desc,
col3 desc, col4 desc, col5 desc;
                                                     QUERY PLAN
---------------------------------------------------------------------------------------------------------------
 Index Scan Backward using t_idx on t  (cost=0.00..6.20 rows=18
width=52) (actual time=0.046..0.219 rows=18 loops=1)
 Total runtime: 1.813 ms
(2 rows)

Any chance you could put together a test case demonstrating the above
behaviour? Everything from CREATE TABLE, through the data load, to the
EXPLAIN ANALYZE output.
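A self-contained sketch of the shape such a test case might take (the
table and column definitions are assumptions reconstructed from your
EXPLAIN output, and the data-load step is left as a placeholder):

```sql
-- Hypothetical reproduction script; schema modeled on the call_idx13
-- columns shown in the plan above.
CREATE TABLE call (
    aspid          char(4),
    openeddatetime timestamptz,
    callstatus     integer,
    calltype       integer,
    callkey        integer
);

-- Load several hundred thousand synthetic rows here
-- (COPY from a file, or INSERT ... SELECT), matching your row counts.

CREATE INDEX call_idx13
    ON call (aspid, openeddatetime, callstatus, calltype, callkey);
ANALYZE call;

SET enable_seqscan = off;
EXPLAIN ANALYZE
SELECT *
  FROM call
 WHERE aspid = '123C'
 ORDER BY aspid DESC, openeddatetime DESC,
          callstatus DESC, calltype DESC, callkey DESC;
```

If that reproduces the sort-instead-of-backward-scan plan, it gives the
list something concrete to dig into.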
