So I implemented two changes.

- Raised random_page_cost from 1.1 to 2.0
- Manually ran ANALYZE on all the tables
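For reference, the two changes boil down to something like the following (a sketch, not my exact session -- ALTER SYSTEM requires 9.4+; on older versions you'd edit random_page_cost in postgresql.conf and reload instead):

```sql
-- Raise the planner's cost estimate for non-sequential page reads
ALTER SYSTEM SET random_page_cost = 2.0;
SELECT pg_reload_conf();

-- Refresh planner statistics for every table in the current database
ANALYZE;
```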

*Here is the new EXPLAIN ANALYZE output:*
QUERY PLAN
HashAggregate  (cost=74122.97..74125.53 rows=256 width=24) (actual time=45.205..45.211 rows=24 loops=1)
  InitPlan 1 (returns $0)
    ->  Limit  (cost=8.30..8.30 rows=1 width=9) (actual time=0.152..0.152 rows=1 loops=1)
          ->  Sort  (cost=8.30..8.78 rows=193 width=9) (actual time=0.150..0.150 rows=1 loops=1)
                Sort Key: c.cim
                Sort Method: top-N heapsort  Memory: 25kB
                ->  Seq Scan on cim_calendar c  (cost=0.00..7.33 rows=193 width=9) (actual time=0.008..0.085 rows=192 loops=1)
                      Filter: (nodal_load <= '2015-01-01'::date)
                      Rows Removed by Filter: 36
  ->  Nested Loop  (cost=22623.47..74111.47 rows=256 width=24) (actual time=43.798..45.181 rows=24 loops=1)
        ->  Bitmap Heap Scan on api_settlement_points sp  (cost=22622.91..67425.92 rows=12 width=9) (actual time=43.756..43.823 rows=1 loops=1)
              Recheck Cond: ((rt_model = $0) AND (start_date <= '2015-01-01'::date) AND (end_date > '2015-01-01'::date))
              Filter: ((settlement_point_rdfid)::text = '#_{09F3A628-3B9D-481A-AC90-72AF8EAB64CA}'::text)
              Rows Removed by Filter: 5298
              ->  Bitmap Index Scan on api_settlement_points_idx  (cost=0.00..22622.90 rows=72134 width=0) (actual time=42.998..42.998 rows=5299 loops=1)
                    Index Cond: ((rt_model = $0) AND (start_date <= '2015-01-01'::date) AND (end_date > '2015-01-01'::date))
        ->  Index Scan using dp_hist_gen_actual_idx2 on dp_hist_gen_actual dp  (cost=0.56..556.88 rows=25 width=24) (actual time=0.033..1.333 rows=24 loops=1)
              Index Cond: ((market_day >= '2015-01-01'::date) AND (market_day <= '2015-01-01'::date) AND (expiry_date IS NULL) AND ((settlement_point)::text = (sp.settlement_point)::text))
Total runtime: 45.278 ms

I'm a little perplexed as to why autovacuum wasn't keeping up. Any
recommendations for the autovacuum settings to push it to analyze the
tables more often?
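In case it helps anyone following along: autovacuum triggers an analyze on a
table once the number of changed tuples exceeds
autovacuum_analyze_threshold + autovacuum_analyze_scale_factor * reltuples
(defaults 50 and 0.1, i.e. roughly 10% of the table). One way to make it
analyze more eagerly is to lower those per table -- a sketch, using a table
name from the plan above and illustrative values:

```sql
-- Analyze after roughly 1% of rows change instead of the default 10%
ALTER TABLE api_settlement_points
  SET (autovacuum_analyze_scale_factor = 0.01,
       autovacuum_analyze_threshold = 100);
```

The same knobs can be lowered globally in postgresql.conf, but per-table
settings keep the extra analyze work confined to the hot tables.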



--
View this message in context: 
http://postgresql.nabble.com/How-to-get-explain-plan-to-prefer-Hash-Join-tp5841450p5841520.html
Sent from the PostgreSQL - performance mailing list archive at Nabble.com.
