Hi guys 

I'm running a data warehouse benchmark (APB-1) on PostgreSQL. The objective is
to determine which of the two main databases (PostgreSQL, MySQL) is faster.
I've run into a small problem which I hope can be resolved here.

I'm trying to speed up this query:

select count(*) from actvars, prodlevel
where actvars.product_level = prodlevel.code_level
and prodlevel.division_level = 'OY3S5LAPALL6';
ACTVARS is a fact table of approximately 16 million rows; PRODLEVEL has 20700
rows. Both have btree indices.

I executed the query and it took almost half an hour! Running the same query
on MySQL, the result came back 6 seconds later. As you can see, there is a
large difference between the execution times.

After running an explain:

Aggregate  (cost=3123459.62..3123459.62 rows=1 width=32)
   ->  Merge Join  (cost=3021564.79..3119827.17 rows=1452981 width=32)
         Merge Cond: ("outer".product_level = "inner".code_level)
         ->  Sort  (cost=3020875.00..3060938.81 rows=16025523 width=16)
               Sort Key: actvars.product_level
               ->  Seq Scan on actvars  (cost=0.00..365711.23 rows=16025523
         ->  Sort  (cost=689.79..694.48 rows=1877 width=16)
               Sort Key: prodlevel.code_level
               ->  Seq Scan on prodlevel  (cost=0.00..587.75 rows=1877 width=16)
                     Filter: (division_level = 'OY3S5LAPALL6'::bpchar)

I found that the indices weren't being used.

The database has been vacuumed and analyze has been executed.
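(Concretely, the maintenance was done with the standard commands, something
along these lines:)

```sql
-- Refresh the planner's statistics for both tables involved in the join
VACUUM ANALYZE actvars;
VACUUM ANALYZE prodlevel;
```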

I tried disabling sequential scans, so as to force index usage. The planner
then uses index scans, but the query still takes a very long time to execute.
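(For clarity, sequential scans were disabled with the usual planner setting
for the session, i.e. something like:)

```sql
-- Discourage the planner from choosing sequential scans in this session,
-- then re-run the query to see whether the index plan is any faster
SET enable_seqscan = off;
```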

Any suggestions on resolving this would be appreciated.

P.S.: I'm running PostgreSQL.
