Re: [PERFORM] mis-estimate in nested query causes slow runtimes
On 2/11/08, Tom Lane [EMAIL PROTECTED] wrote:
> Chris Kratz [EMAIL PROTECTED] writes:
>> The first frustration is that I can't get the transaction details scan
>> to get any more accurate. It thinks it will find 1407 records; instead
>> it finds 20,153. Then for whatever reason it thinks that a join between
>> 1 record and 1407 records will return 1 record. This is mainly what I
>> can't understand. Why does it think it will only get one record in
>> response when it's a left join?
>
> I don't see any left join there ...
>
>> PG 8.2.4 on Linux kernel 2.6.9 x64
>
> The first thing you should do is update to 8.2.6; we've fixed a fair
> number of problems since then that were fallout from the outer-join
> planning rewrite in 8.2. If it still doesn't work very well, please
> post the pg_stats rows for the join columns involved
> (idatrndtl.ida_trans_match_source_id and mtchsrcprj3.id). (You do have
> up-to-date ANALYZE stats for both of those tables, right?)
>
> regards, tom lane

Hello Tom,

We've updated to Postgres 8.2.6 on our production database over the weekend. Unfortunately, the estimates on this query are no better after the upgrade. Here is just the part of the estimate that is incorrect.
(2 vs 20153):

Nested Loop  (cost=12.68..165.69 rows=2 width=38) (actual time=0.089..29.792 rows=20153 loops=1)
  ->  Hash Join  (cost=12.68..24.37 rows=1 width=24) (actual time=0.064..0.135 rows=1 loops=1)
        Hash Cond: (mtchsrcprj3.funding_source_id = mtchsrcprjfs3.nameid)
        ->  Seq Scan on project mtchsrcprj3  (cost=0.00..11.22 rows=122 width=8) (actual time=0.002..0.053 rows=122 loops=1)
        ->  Hash  (cost=12.61..12.61 rows=6 width=24) (actual time=0.017..0.017 rows=1 loops=1)
              ->  Index Scan using name_float_lfm_idx on namemaster mtchsrcprjfs3  (cost=0.00..12.61 rows=6 width=24) (actual time=0.012..0.013 rows=1 loops=1)
                    Index Cond: (name_float_lfm = 'DWS'::text)
  ->  Index Scan using transaction_details_ida_trans_match_source_id on transaction_details idatrndtl  (cost=0.00..123.72 rows=1408 width=22) (actual time=0.023..17.128 rows=20153 loops=1)

(Entire explain analyze posted earlier in the thread.)

Total query runtime: 35309.298 ms
Same with enable_nestloop off: 761.715 ms

I've tried statistics targets up to 1000 on both columns, which makes no difference in the estimates. Currently the targets are at 100.
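The size of the slowdown tracks the estimation error directly: the join node was costed as if it would emit 2 rows but actually emitted 20,153, and every join stacked above it re-runs its inner side once per row it receives, so the work done upstream scales with the actual row count, not the estimate. A trivial sketch of that ratio (plan numbers copied from above; this is an illustration, not PostgreSQL's actual cost model):

```python
estimated_rows = 2      # the nested loop's estimated output, from the plan
actual_rows = 20_153    # rows it actually produced

# Every nested loop above this node repeats its inner scan per input row,
# so the upstream work is under-budgeted by roughly this factor.
error_factor = actual_rows / estimated_rows
print(f"under-estimated by a factor of ~{error_factor:,.0f}")
```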
test=# alter table transaction_details alter column ida_trans_match_source_id set statistics 100;
ALTER TABLE
test=# analyze transaction_details;
ANALYZE
test=# alter table project alter column id set statistics 100;
ALTER TABLE
test=# analyze project;
ANALYZE

Stats rows in pg_stats for these two columns:

test=# select * from pg_stats where tablename = 'transaction_details' and attname = 'ida_trans_match_source_id';

schemaname        | public
tablename         | transaction_details
attname           | ida_trans_match_source_id
null_frac         | 0.479533
avg_width         | 4
n_distinct        | 69
most_common_vals  | {818,832,930,937,923,812,931,836,837,829,830,14,809}
most_common_freqs | {0.1024,0.0991333,0.0408,0.0232,0.0221,0.0219,0.0207,0.0188667,0.0186667,0.0177667,0.0176667,0.0130333,0.0118667}
histogram_bounds  | {6,802,813,813,814,814,815,815,816,816,817,817,827,827,833,835,835,838,838,838,838,838,843,920,921,921,921,921,922,922,924,924,924,924,925,926,926,928,928,934,936,936,936,936,936,938,939,941,941,955,965,967,968,968,974,980}
correlation       | 0.178655
(1 row)

test=# select * from pg_stats where tablename = 'project' and attname = 'id';

schemaname        | public
tablename         | project
attname           | id
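The skew visible in those stats accounts for the gap between estimate and reality. When the comparison value isn't known at plan time, the planner's generic estimate for an equality condition spreads the non-null rows evenly over the distinct values; here the hottest value alone holds ~10% of the table. A back-of-the-envelope sketch in Python (the total row count is an assumption, back-solved from the plan's 1407-row estimate; the formula only approximates the planner's logic):

```python
# Figures from the pg_stats row above.
null_frac = 0.479533
n_distinct = 69
mcv_freq_hottest = 0.1024   # frequency of value 818, the most common

# Assumed table size, back-solved so the flat estimate matches the plan.
total_rows = 186_500

# Generic per-value estimate: non-null rows spread evenly over distinct values.
avg_rows = total_rows * (1 - null_frac) / n_distinct
print(round(avg_rows))      # ~1407, matching the plan's flat estimate

# But for a hot value the real count is driven by its MCV frequency.
hot_rows = total_rows * mcv_freq_hottest
print(round(hot_rows))      # ~19,100 -- near the 20,153 rows actually returned
```

Raising the statistics target can't fix this case: the skew is already captured in the MCV list, but the planner can't use it when the join value is only known at run time.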
Re: [PERFORM] mis-estimate in nested query causes slow runtimes
On 2/11/08, Tom Lane [EMAIL PROTECTED] wrote:
> Chris Kratz [EMAIL PROTECTED] writes:
>> Nested Loop  (cost=42.74..161.76 rows=1 width=38) (actual time=2.932..27.772 rows=20153 loops=1)
>>   ->  Hash Join  (cost=10.89..22.58 rows=1 width=24) (actual time=0.065..0.134 rows=1 loops=1)
>>         Hash Cond: (mtchsrcprj3.funding_source_id = mtchsrcprjfs3.nameid)
>>         ->  Seq Scan on project mtchsrcprj3  (cost=0.00..11.22 rows=122 width=8) (actual time=0.002..0.054 rows=122 loops=1)
>>         ->  Hash  (cost=10.83..10.83 rows=5 width=24) (actual time=0.017..0.017 rows=1 loops=1)
>>               ->  Index Scan using name_float_lfm_idx on namemaster mtchsrcprjfs3  (cost=0.00..10.83 rows=5 width=24) (actual time=0.012..0.013 rows=1 loops=1)
>>                     Index Cond: (name_float_lfm = 'DWS'::text)
>>   ->  Bitmap Heap Scan on transaction_details idatrndtl  (cost=31.85..121.60 rows=1407 width=22) (actual time=2.864..12.060 rows=20153 loops=1)
>>         Recheck Cond: (idatrndtl.ida_trans_match_source_id = mtchsrcprj3.id)
>>         ->  Bitmap Index Scan on transaction_details_ida_trans_match_source_id  (cost=0.00..31.50 rows=1407 width=0) (actual time=2.696..2.696 rows=20153 loops=1)
>>               Index Cond: (idatrndtl.ida_trans_match_source_id = mtchsrcprj3.id)
>
>> The first frustration is that I can't get the transaction details scan
>> to get any more accurate. It thinks it will find 1407 records; instead
>> it finds 20,153. Then for whatever reason it thinks that a join between
>> 1 record and 1407 records will return 1 record. This is mainly what I
>> can't understand. Why does it think it will only get one record in
>> response when it's a left join?
>
> I don't see any left join there ...
>
>> PG 8.2.4 on Linux kernel 2.6.9 x64
>
> The first thing you should do is update to 8.2.6; we've fixed a fair
> number of problems since then that were fallout from the outer-join
> planning rewrite in 8.2. If it still doesn't work very well, please
> post the pg_stats rows for the join columns involved
> (idatrndtl.ida_trans_match_source_id and mtchsrcprj3.id). (You do have
> up-to-date ANALYZE stats for both of those tables, right?)
> regards, tom lane

I know it's somewhat premature as we haven't had a chance to do the update yet, but here is what I did with the statistics on the current version, for chuckles and grins, just to see if it would make a difference in the plan.

# alter table project alter column id set statistics 1000;
ALTER TABLE
# analyze project;
ANALYZE
# alter table transaction_details alter column ida_trans_match_source_id set statistics 1000;
ALTER TABLE
# analyze transaction_details;
ANALYZE
# select * from pg_stats where (tablename='project' and attname='id') or (tablename='transaction_details' and attname='ida_trans_match_source_id');

schemaname        | public
tablename         | project
attname           | id
null_frac         | 0
avg_width         | 4
n_distinct        | -1
most_common_vals  |
most_common_freqs |
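One thing worth reading off that output: pg_stats.n_distinct uses a sign convention (per the PostgreSQL documentation) where a positive value is an absolute count of distinct values and a negative value is the negated ratio of distinct values to total rows, so the -1 for project.id means the column is unique. A small helper to decode it (row counts here are illustrative):

```python
def distinct_count(n_distinct: float, total_rows: int) -> int:
    """Decode pg_stats.n_distinct: negative values are -(distinct/rowcount)."""
    if n_distinct < 0:
        return int(round(-n_distinct * total_rows))
    return int(n_distinct)

# project.id: n_distinct = -1 -> unique, one distinct value per row.
print(distinct_count(-1, 122))   # 122

# transaction_details.ida_trans_match_source_id: a fixed count of 69,
# expected to stay stable as the table grows.
print(distinct_count(69, 186_500))
```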
Re: [PERFORM] mis-estimate in nested query causes slow runtimes
On 2/11/08, Tom Lane [EMAIL PROTECTED] wrote:
> Chris Kratz [EMAIL PROTECTED] writes:
>> Nested Loop  (cost=42.74..161.76 rows=1 width=38) (actual time=2.932..27.772 rows=20153 loops=1)
>>   ->  Hash Join  (cost=10.89..22.58 rows=1 width=24) (actual time=0.065..0.134 rows=1 loops=1)
>>         Hash Cond: (mtchsrcprj3.funding_source_id = mtchsrcprjfs3.nameid)
>>         ->  Seq Scan on project mtchsrcprj3  (cost=0.00..11.22 rows=122 width=8) (actual time=0.002..0.054 rows=122 loops=1)
>>         ->  Hash  (cost=10.83..10.83 rows=5 width=24) (actual time=0.017..0.017 rows=1 loops=1)
>>               ->  Index Scan using name_float_lfm_idx on namemaster mtchsrcprjfs3  (cost=0.00..10.83 rows=5 width=24) (actual time=0.012..0.013 rows=1 loops=1)
>>                     Index Cond: (name_float_lfm = 'DWS'::text)
>>   ->  Bitmap Heap Scan on transaction_details idatrndtl  (cost=31.85..121.60 rows=1407 width=22) (actual time=2.864..12.060 rows=20153 loops=1)
>>         Recheck Cond: (idatrndtl.ida_trans_match_source_id = mtchsrcprj3.id)
>>         ->  Bitmap Index Scan on transaction_details_ida_trans_match_source_id  (cost=0.00..31.50 rows=1407 width=0) (actual time=2.696..2.696 rows=20153 loops=1)
>>               Index Cond: (idatrndtl.ida_trans_match_source_id = mtchsrcprj3.id)
>
>> The first frustration is that I can't get the transaction details scan
>> to get any more accurate. It thinks it will find 1407 records; instead
>> it finds 20,153. Then for whatever reason it thinks that a join between
>> 1 record and 1407 records will return 1 record. This is mainly what I
>> can't understand. Why does it think it will only get one record in
>> response when it's a left join?
>
> I don't see any left join there ...
>
>> PG 8.2.4 on Linux kernel 2.6.9 x64
>
> The first thing you should do is update to 8.2.6; we've fixed a fair
> number of problems since then that were fallout from the outer-join
> planning rewrite in 8.2. If it still doesn't work very well, please
> post the pg_stats rows for the join columns involved
> (idatrndtl.ida_trans_match_source_id and mtchsrcprj3.id). (You do have
> up-to-date ANALYZE stats for both of those tables, right?)
> regards, tom lane

Thanks Tom, we will try the upgrade and see if that makes a difference.

-Chris
[PERFORM] mis-estimate in nested query causes slow runtimes
Hello,

I've been wrestling with a complex query for another developer for a while today. The problem consistently seems to be a mis-estimation of the number of rows resulting from a join. This causes the query early on to think it's only going to be processing 1 row, and so it chooses nested loops much of the way up the chain. I've adjusted statistics targets on some of the columns with no increase in the accuracy of the estimates. I've analyzed the tables in question (autovac is running).

If I turn off nested loops, the query runs in 1.5 seconds. Otherwise it takes about 37 s. With other criteria in the where clause it can take many minutes to return.

Here is a subset of the explain analyze that I'm wrestling with. The entire explain is at the end of the email.

Nested Loop  (cost=42.74..161.76 rows=1 width=38) (actual time=2.932..27.772 rows=20153 loops=1)
  ->  Hash Join  (cost=10.89..22.58 rows=1 width=24) (actual time=0.065..0.134 rows=1 loops=1)
        Hash Cond: (mtchsrcprj3.funding_source_id = mtchsrcprjfs3.nameid)
        ->  Seq Scan on project mtchsrcprj3  (cost=0.00..11.22 rows=122 width=8) (actual time=0.002..0.054 rows=122 loops=1)
        ->  Hash  (cost=10.83..10.83 rows=5 width=24) (actual time=0.017..0.017 rows=1 loops=1)
              ->  Index Scan using name_float_lfm_idx on namemaster mtchsrcprjfs3  (cost=0.00..10.83 rows=5 width=24) (actual time=0.012..0.013 rows=1 loops=1)
                    Index Cond: (name_float_lfm = 'DWS'::text)
  ->  Bitmap Heap Scan on transaction_details idatrndtl  (cost=31.85..121.60 rows=1407 width=22) (actual time=2.864..12.060 rows=20153 loops=1)
        Recheck Cond: (idatrndtl.ida_trans_match_source_id = mtchsrcprj3.id)
        ->  Bitmap Index Scan on transaction_details_ida_trans_match_source_id  (cost=0.00..31.50 rows=1407 width=0) (actual time=2.696..2.696 rows=20153 loops=1)
              Index Cond: (idatrndtl.ida_trans_match_source_id = mtchsrcprj3.id)

The first frustration is that I can't get the transaction details scan to get any more accurate. It thinks it will find 1407 records; instead it finds 20,153.
Then for whatever reason it thinks that a join between 1 record and 1407 records will return 1 record. This is mainly what I can't understand. Why does it think it will only get one record in response when it's a left join?

The thing is that we've had this happen a number of times recently with complex nested queries. Most of the time things will run very quickly, but an early mis-estimation by the planner causes it to use nested loops exclusively when hash joins would be more appropriate. Is there anything I can do to improve this short of set enable_nestloop=off?

PG 8.2.4 on Linux kernel 2.6.9 x64

-Chris

--- Full explain analyze ---

Group  (cost=336.76..336.82 rows=1 width=328) (actual time=36620.831..36621.176 rows=248 loops=1)
  ->  Sort  (cost=336.76..336.76 rows=1 width=328) (actual time=36620.828..36620.888 rows=248 loops=1)
        Sort Key: county, fullname_last_first_mdl, ((((CASE WHEN (COALESCE(fullname_last_first_mdl, '0'::text) = '0'::text) THEN ''::text ELSE COALESCE(fullname_last_first_mdl, '0'::text) END || ' '::text) || '-'::text) || ' '::text) || CASE WHEN (COALESCE(ssn, '0'::text) = '0'::text) THEN ''::text ELSE COALESCE(ssn, '0'::text) END), system_name_id, ssn, ida_account_id, ida_account_match_source_funding_source_name_float_lfm, ida_account_status, vs_query_27453_212267, vs_query_27453_212252, vs_query_27453_212253, vs_query_27453_212254, vs_query_27453_212255, (COALESCE(vs_query_27453_212267, 0::numeric) + COALESCE(vs_query_27453_212255, 0::numeric))
        ->  Subquery Scan foo  (cost=336.72..336.75 rows=1 width=328) (actual time=36614.750..36615.319 rows=248 loops=1)
              ->  Sort  (cost=336.72..336.72 rows=1 width=255) (actual time=36614.737..36614.798 rows=248 loops=1)
                    Sort Key: cou.validvalue, dem.name_float_lfm
                    ->  Nested Loop Left Join  (cost=194.80..336.71 rows=1 width=255) (actual time=506.599..36611.702 rows=248 loops=1)
                          ->  Nested Loop Left Join  (cost=194.80..332.90 rows=1 width=242) (actual time=506.566..36606.528 rows=248 loops=1)
                                Join Filter: (acc.id = qry27453.ida_account_id)
                                ->  Nested Loop  (cost=30.16..168.13 rows=1 width=82) (actual time=0.461..27.079 rows=248 loops=1)
                                      ->  Nested Loop  (cost=30.16..167.85 rows=1 width=90) (actual time=0.453..25.133 rows=248 loops=1)
                                            ->  Nested Loop  (cost=30.16..165.94 rows=1 width=77) (actual time=0.441..19.687 rows=970 loops=1)
                                                  ->  Nested Loop  (cost=30.16..162.90 rows=1 width=40) (actual
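Regarding the enable_nestloop=off escape hatch mentioned above: rather than disabling nested loops for the whole session or server, the setting can be scoped to a single transaction with SET LOCAL, which reverts automatically at commit or rollback. A sketch (the problem query itself is elided):

```sql
BEGIN;
SET LOCAL enable_nestloop = off;  -- in effect only until COMMIT/ROLLBACK
-- run the problem query here
COMMIT;
```

This keeps the blunt instrument contained to the one query that needs it while the planner handles everything else normally.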
Re: [PERFORM] mis-estimate in nested query causes slow runtimes
Chris Kratz [EMAIL PROTECTED] writes:
> Nested Loop  (cost=42.74..161.76 rows=1 width=38) (actual time=2.932..27.772 rows=20153 loops=1)
>   ->  Hash Join  (cost=10.89..22.58 rows=1 width=24) (actual time=0.065..0.134 rows=1 loops=1)
>         Hash Cond: (mtchsrcprj3.funding_source_id = mtchsrcprjfs3.nameid)
>         ->  Seq Scan on project mtchsrcprj3  (cost=0.00..11.22 rows=122 width=8) (actual time=0.002..0.054 rows=122 loops=1)
>         ->  Hash  (cost=10.83..10.83 rows=5 width=24) (actual time=0.017..0.017 rows=1 loops=1)
>               ->  Index Scan using name_float_lfm_idx on namemaster mtchsrcprjfs3  (cost=0.00..10.83 rows=5 width=24) (actual time=0.012..0.013 rows=1 loops=1)
>                     Index Cond: (name_float_lfm = 'DWS'::text)
>   ->  Bitmap Heap Scan on transaction_details idatrndtl  (cost=31.85..121.60 rows=1407 width=22) (actual time=2.864..12.060 rows=20153 loops=1)
>         Recheck Cond: (idatrndtl.ida_trans_match_source_id = mtchsrcprj3.id)
>         ->  Bitmap Index Scan on transaction_details_ida_trans_match_source_id  (cost=0.00..31.50 rows=1407 width=0) (actual time=2.696..2.696 rows=20153 loops=1)
>               Index Cond: (idatrndtl.ida_trans_match_source_id = mtchsrcprj3.id)

> The first frustration is that I can't get the transaction details scan
> to get any more accurate. It thinks it will find 1407 records; instead
> it finds 20,153. Then for whatever reason it thinks that a join between
> 1 record and 1407 records will return 1 record. This is mainly what I
> can't understand. Why does it think it will only get one record in
> response when it's a left join?

I don't see any left join there ...

> PG 8.2.4 on Linux kernel 2.6.9 x64

The first thing you should do is update to 8.2.6; we've fixed a fair number of problems since then that were fallout from the outer-join planning rewrite in 8.2.

If it still doesn't work very well, please post the pg_stats rows for the join columns involved (idatrndtl.ida_trans_match_source_id and mtchsrcprj3.id). (You do have up-to-date ANALYZE stats for both of those tables, right?)
regards, tom lane