Try 'set enable_mergejoin=false' and see if you get a hash join.
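
For instance, a quick session-level sketch (the join/key column names
below are placeholders, since I don't have your exact query in front of me):

    -- merge joins off for this session only, then re-check the plan
    SET enable_mergejoin = false;

    EXPLAIN
    SELECT f.*, d.segment_key
    FROM   bigtab_stats_fact_tmp14 f
    JOIN   xsegment_dim d ON d.segment_key = f.segment_key;

    -- restore the default afterwards
    RESET enable_mergejoin;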

- Luke

----- Original Message -----
From: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
To: Richard Huxton <[EMAIL PROTECTED]>
Cc: pgsql-performance@postgresql.org <pgsql-performance@postgresql.org>
Sent: Fri May 16 04:00:41 2008
Subject: Re: [PERFORM] Join runs for > 10 hours and then fills up >1.3TB of disk space

I'm expecting 9,961,914 rows returned. Each row in the big table
should have a corresponding key in the smaller table; I want to
basically "expand" the big table's column list by one, by adding the
appropriate key from the smaller table for each row in the big table.
It's not a Cartesian product join.
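
One way to double-check that assumption (the key column name below is a
placeholder for the real join key) is to look for duplicate keys in the
small table, since any duplicates there would multiply the result rather
than just adding one column:

    -- keys that appear more than once in the dimension table
    SELECT segment_key, count(*)
    FROM   xsegment_dim
    GROUP  BY segment_key
    HAVING count(*) > 1;

If that returns no rows, the join should come back with one row per
big-table row, i.e. the 9,961,914 I'm expecting.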



On May 16, 2008, at 1:40 AM, Richard Huxton wrote:

> kevin kempter wrote:
>> Hi List;
>> I have a table with 9,961,914 rows in it (see the describe of  
>> bigtab_stats_fact_tmp14 below)
>> I also have a table with 7,785 rows in it (see the describe of  
>> xsegment_dim below)
>> I'm running the join shown below and it takes > 10 hours and  
>> eventually runs out of disk space on a 1.4TB file system
>
>> QUERY PLAN
>> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>>  Merge Join  (cost=1757001.74..73569676.49 rows=3191677219 width=118)
>
> Dumb question Kevin, but are you really expecting 3.2 billion rows  
> in the result-set? Because that's approaching 400GB of result-set  
> without any overheads.
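> (3,191,677,219 estimated rows x 118 bytes/row works out to roughly 377GB.)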
>
> -- 
>  Richard Huxton
>  Archonet Ltd

