Ran VACUUM ANALYSE on both the program and logfile tables. The estimates are more in line with reality now, but the query still takes 10 seconds on v7.4 and 10 ms on v7.3. The function is marked IMMUTABLE and returns varchar(5). I am wondering why the planner would choose a merge join (v7.4) as opposed to a nested loop (v7.3), given the small number of rows selected from the top-level table (logfile) by the WHERE clause:

    L.air_date = '04/12/2002'::TIMESTAMP

There are typically only 30 rows per station/air_date. What am I missing here?
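One way to narrow this down is to forbid merge joins for the session and see what plan (and runtime) v7.4 falls back to. A diagnostic sketch only, reusing the query from below (enable_mergejoin is a standard run-time setting in both 7.3 and 7.4):

```sql
-- Disable merge joins for this session only, then compare plans.
SET enable_mergejoin = off;

EXPLAIN ANALYZE
SELECT L.row_id
FROM music.logfile L
LEFT JOIN music.program P
  ON music.fn_mri_id_no_program(P.mri_id_no) = L.program_id
WHERE L.station = UPPER('kabc')::VARCHAR
  AND L.air_date = '04/12/2002'::TIMESTAMP
  AND P.cutoff_date IS NULL
ORDER BY L.chron_start, L.chron_end;

-- Restore the default so other queries are unaffected.
RESET enable_mergejoin;
```

If the nested-loop plan it falls back to is fast, the problem is the cost estimate feeding the merge-join decision rather than the join itself.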

Richard Huxton wrote:
On Wednesday 26 November 2003 18:39, Roger Ging wrote:
version 7.4 results:

explain analyse SELECT L.row_id FROM music.logfile L LEFT JOIN
music.program P ON
music.fn_mri_id_no_program(P.mri_id_no) = L.program_id
WHERE  L.station = UPPER('kabc')::VARCHAR
AND L.air_date = '04/12/2002'::TIMESTAMP
AND P.cutoff_date IS NULL
ORDER BY L.chron_start,L.chron_end;

                ->  Seq Scan on program p  (cost=0.00..15192.35
rows=4335 width=20) (actual time=109.045..1955.882 rows=173998 loops=1)

The estimated number of rows here (4335) is *way* off (173998 actually). If 
you only had 4335 rows, then this might be a more sensible plan.

First step is to run ANALYZE (or VACUUM ANALYZE) on both tables, so the planner's row estimates are up to date.
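For example, something along these lines (the SET STATISTICS step is optional and only worth trying if the estimates stay badly off after a plain re-analyze; the target column here is just the one from the query's filter):

```sql
-- Refresh planner statistics on both tables involved in the query.
VACUUM ANALYZE music.program;
VACUUM ANALYZE music.logfile;

-- If estimates remain far off, collect finer-grained statistics on the
-- filter column and re-analyze (default target is fairly coarse).
ALTER TABLE music.program ALTER COLUMN cutoff_date SET STATISTICS 100;
ANALYZE music.program;
```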
Then, check the definition of your function fn_mri_id_no_program() and make 
sure it is marked immutable/stable (depending on what it does) and that it's 
returning a varchar.
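A quick way to check both at once is to look the function up in the system catalogs (in pg_proc, provolatile is 'i' for immutable, 's' for stable, 'v' for volatile; regtype works on 7.3 and later):

```sql
-- Show the volatility flag and declared return type of the function.
SELECT proname,
       provolatile,          -- 'i' = immutable, 's' = stable, 'v' = volatile
       prorettype::regtype   -- declared return type
FROM pg_proc
WHERE proname = 'fn_mri_id_no_program';
```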

