The following is from a database of several hundred million rows of real data that has been VACUUM ANALYZEd.


Why isn't the index being used for a query that seems tailor-made for it? The results (6,300 rows) take about ten minutes to retrieve with a sequential scan.


A copy of this database with "integer" in place of "smallint", a primary key in column order (date, time, type, subtype), and a secondary index in the required order (type, subtype, date, time) correctly uses the secondary index and returns results in under a second.


For what it's worth, the integer version is the one I made first and the smallint version is the copy, but that shouldn't matter.


Postgres is version "postgresql-server-7.3.4-3.rhl9" from Red Hat Linux 9.




testdb2=# \d db
              Table "public.db"
 Column  |          Type          | Modifiers
---------+------------------------+-----------
 date    | date                   | not null
 time    | time without time zone | not null
 type    | smallint               | not null
 subtype | smallint               | not null
 value   | integer                |
Indexes: db_pkey primary key btree ("type", subtype, date, "time")


testdb2=# set enable_seqscan to off;

testdb2=# explain select * from db where type=90 and subtype=70 and date='7/1/2004';
                                  QUERY PLAN
------------------------------------------------------------------------------
 Seq Scan on db  (cost=100000000.00..107455603.76 rows=178 width=20)
   Filter: (("type" = 90) AND (subtype = 70) AND (date = '2004-07-01'::date))
(2 rows)
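
One possible explanation worth testing (an assumption on my part, not something the plan above states): in PostgreSQL releases before 8.0, a bare numeric literal such as 90 is typed as integer, and the 7.x planner will not use a btree index on a smallint column for that cross-type comparison. That would also explain why the integer copy of the table works. Casting or quoting the constants should let the index match:

```sql
-- Sketch of the same query with the constants forced to smallint,
-- so the 7.3 planner can match them against the smallint index columns:
explain select * from db
 where type = 90::smallint and subtype = 70::smallint and date = '7/1/2004';

-- Equivalent workaround: quoted literals start out as type "unknown" and
-- resolve to the column's type, which also permits an index scan on 7.x:
explain select * from db
 where type = '90' and subtype = '70' and date = '7/1/2004';
```

If either form produces an Index Scan on db_pkey, the cross-type literal is the culprit; on 8.0 and later the planner handles this case without the cast.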
