On Sun, Oct 30, 2005 at 06:16:04PM +0100, Svenne Krap wrote:
>  Nested Loop  (cost=223.09..338.61 rows=1 width=174) (actual 
> time=20.213..721.361 rows=2250 loops=1)
>    Join Filter: (("outer".dataset_id = "inner".dataset_id) AND 
> ("outer".nb_property_type_id = "inner".nb_property_type_id))
>    ->  Hash Join  (cost=58.04..164.26 rows=1 width=150) (actual 
> time=5.510..22.088 rows=2250 loops=1)

There's a horrible misestimation here. The planner expects one row and thus
chooses a nested loop, but gets 2250. No wonder it's slow :-)

The misestimation can be traced all the way down to here:

>          Hash Cond: ("outer".institut = "inner".id)
>          ->  Hash Join  (cost=56.88..163.00 rows=16 width=137) (actual 
> time=5.473..19.165 rows=2250 loops=1)
>                Hash Cond: ("outer".dataset_id = "inner".id)
>                ->  Hash Join  (cost=55.48..160.95 rows=99 width=101) (actual 
> time=5.412..16.264 rows=2250 loops=1)

where the planner misestimates the selectivity of your join (it estimates 99
rows, and there are 2250).
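If the bad join estimate comes from coarse or stale column statistics, one
thing worth trying is raising the statistics target on the join columns and
re-analyzing. A rough sketch (the table name "results" is a guess on my part;
dataset_id is the column from your plan -- substitute your real names):

```sql
-- Assumption: the join column dataset_id lives in a table here called
-- "results"; use your actual table name.
ALTER TABLE results ALTER COLUMN dataset_id SET STATISTICS 200;
ANALYZE results;
-- Then re-run EXPLAIN ANALYZE and check whether the row estimates improve.
```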

I've had problems joining against Append nodes in the past, and solved them
by pulling the UNION ALL further out in the query, but I'm not sure whether
that's a good general solution, or whether it applies to your problem here.
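To illustrate what I mean by pulling the UNION ALL out (the table and column
names below are made up for the example, not taken from your schema): instead
of joining against the union as a whole, join each branch separately and
union the joined results:

```sql
-- Before: joining against an Append node, which is often estimated poorly
SELECT d.name, u.value
FROM (SELECT * FROM t_a UNION ALL SELECT * FROM t_b) u
JOIN d ON d.id = u.dataset_id;

-- After: join inside each branch, then UNION ALL the results
SELECT d.name, t_a.value FROM t_a JOIN d ON d.id = t_a.dataset_id
UNION ALL
SELECT d.name, t_b.value FROM t_b JOIN d ON d.id = t_b.dataset_id;
```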

If all else fails, you could "set enable_nestloop=false", but that is
probably not a good idea in the long run -- it's much better to make sure the
planner has good estimates and let it make the right decisions from there.
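If you do experiment with it, it's safer to scope the setting to a single
transaction with SET LOCAL rather than changing it session- or server-wide,
for example:

```sql
BEGIN;
SET LOCAL enable_nestloop = off;  -- only affects this transaction
EXPLAIN ANALYZE SELECT ...;       -- your query here
COMMIT;                           -- setting reverts after commit/rollback
```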

/* Steinar */
Homepage: http://www.sesse.net/

