[EMAIL PROTECTED] wrote:
Hi guys

I'm running a data warehouse benchmark (APB-1) on PostgreSQL. The objective is to
choose which of the two main databases (PostgreSQL, MySQL) is faster. I've run into a
small problem which I hope can be resolved here.

I'm trying to speed up this query:

select count(*) from actvars, prodlevel where
actvars.product_level=prodlevel.code_level and
prodlevel.division_level='OY3S5LAPALL6';

ACTVARS is a fact table of approximately 16 million rows; PRODLEVEL has 20,700
rows. Both have btree indexes.


I executed the query and it took almost half an hour to finish! Running the
same query on MySQL, the result came back 6 seconds later. As you can see, there is a
large difference between the execution times.

After running an explain:

Aggregate  (cost=3123459.62..3123459.62 rows=1 width=32)
   ->  Merge Join  (cost=3021564.79..3119827.17 rows=1452981 width=32)
         Merge Cond: ("outer".product_level = "inner".code_level)
         ->  Sort  (cost=3020875.00..3060938.81 rows=16025523 width=16)
               Sort Key: actvars.product_level
                ->  Seq Scan on actvars  (cost=0.00..365711.23 rows=16025523 width=16)

Damn... a seq scan on actvars? With a sort over 16 million rows feeding the merge join, I would say half an hour is actually decent throughput for that plan.
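
If you can afford one more half-hour run, EXPLAIN ANALYZE (rather than plain
EXPLAIN) would show where the time actually goes, i.e. how much of it is the
sort versus the seq scan:

explain analyze
select count(*) from actvars, prodlevel
where actvars.product_level = prodlevel.code_level
and prodlevel.division_level = 'OY3S5LAPALL6';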


Are there indexes on both actvars.product_level and prodlevel.code_level? Are they of exactly compatible types? int2 and int4, for example, are not compatible in PostgreSQL lingo.
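
You can check the column types from psql with \d actvars and \d prodlevel. If
they do differ, one workaround (purely as an illustration -- assuming
product_level is int2 and code_level is int4, which I obviously don't know) is
to cast one side so both ends of the join condition have the same type and the
index can be considered:

select count(*) from actvars, prodlevel
where actvars.product_level = prodlevel.code_level::int2
and prodlevel.division_level = 'OY3S5LAPALL6';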

That plan should be using an index scan. Can you show us the table definitions?
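
As a quick diagnostic, you could also disable seq scans for one session and see
whether the planner is able to pick the index at all (this is only for testing,
not something to leave on):

set enable_seqscan = off;
explain select count(*) from actvars, prodlevel
where actvars.product_level = prodlevel.code_level
and prodlevel.division_level = 'OY3S5LAPALL6';
set enable_seqscan = on;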

And yes, what tuning have you done to PostgreSQL?
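
If you are still on the default settings, they are far too small for a 16
million row fact table. Just as a sketch (the numbers are illustrative, not a
recommendation -- they depend on how much RAM the box has), you can bump the
sort and planner settings for a session like this; shared_buffers has to be
raised in postgresql.conf and needs a restart:

set sort_mem = 32768;               -- 32 MB for the big sort feeding the merge join
set effective_cache_size = 65536;   -- ~512 MB, expressed in 8 kB pages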

Shridhar

