[PERFORM] Query plan - now what?
Well, now that I have the plan for my slow-running query, what do I do? Where should I focus my attention?

Thanks.
-David

Hash Join  (cost=16620.59..22331.88 rows=40133 width=266) (actual time=118773.28..580889.01 rows=57076 loops=1)
  ->  Hash Join  (cost=16619.49..21628.48 rows=40133 width=249) (actual time=118771.29..535709.47 rows=57076 loops=1)
        ->  Hash Join  (cost=16618.41..20724.39 rows=40133 width=240) (actual time=118768.04..432327.82 rows=57076 loops=1)
              ->  Hash Join  (cost=16617.34..19920.66 rows=40133 width=223) (actual time=118764.67..340333.78 rows=57076 loops=1)
                    ->  Hash Join  (cost=16616.14..19217.14 rows=40133 width=214) (actual time=118761.38..258978.81 rows=57076 loops=1)
                          ->  Merge Join  (cost=16615.07..18413.42 rows=40133 width=205) (actual time=118758.74..187180.55 rows=57076 loops=1)
                                ->  Index Scan using grf_grf_id_idx on giraffes  (cost=0.00..1115.61 rows=53874 width=8) (actual time=2.37..6802.38 rows=57077 loops=1)
                                ->  Sort  (cost=16615.07..16615.07 rows=18554 width=197) (actual time=118755.11..120261.06 rows=59416 loops=1)
                                      ->  Hash Join  (cost=8126.08..14152.54 rows=18554 width=197) (actual time=50615.72..109853.71 rows=16310 loops=1)
                                            ->  Hash Join  (cost=8124.39..12690.30 rows=24907 width=179) (actual time=50607.36..86868.58 rows=15815 loops=1)
                                                  ->  Hash Join  (cost=249.26..2375.23 rows=24907 width=131) (actual time=23476.42..35107.80 rows=16310 loops=1)
                                                        ->  Nested Loop  (cost=248.21..1938.31 rows=24907 width=118) (actual time=23474.70..28155.13 rows=16310 loops=1)
                                                              ->  Seq Scan on zebras  (cost=0.00..1.01 rows=1 width=14) (actual time=0.64..0.72 rows=1 loops=1)
                                                              ->  Materialize  (cost=1688.23..1688.23 rows=24907 width=104) (actual time=23473.77..23834.26 rows=16310 loops=1)
                                                                    ->  Hash Join  (cost=248.21..1688.23 rows=24907 width=104) (actual time=1199.26..23059.92 rows=16310 loops=1)
                                                                          ->  Seq Scan on frogs  (cost=0.00..755.07 rows=24907 width=83) (actual time=0.53..4629.58 rows=25702 loops=1)
                                                                          ->  Hash  (cost=225.57..225.57 rows=9057 width=21) (actual time=1198.01..1198.01 rows=0 loops=1)
                                                                                ->  Seq Scan on tigers  (cost=0.00..225.57 rows=9057 width=21) (actual time=0.39..892.67 rows=9927 loops=1)
                                                        ->  Hash  (cost=1.04..1.04 rows=4 width=13) (actual time=1.07..1.07 rows=0 loops=1)
                                                              ->  Seq Scan on deers  (cost=0.00..1.04 rows=4 width=13) (actual time=0.64..0.95 rows=4 loops=1)
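For readers new to EXPLAIN ANALYZE, here is a hedged sketch of how a plan like the one above is produced and what to compare in it. The table and column names are stand-ins, since the real query and schema were not posted:

```sql
-- EXPLAIN ANALYZE actually executes the query and annotates each
-- plan node with real row counts and timings, so run it with care.
EXPLAIN ANALYZE
SELECT *
FROM frogs f
JOIN tigers t ON t.tiger_id = f.tiger_id;  -- hypothetical join key

-- In each node of the output, compare:
--   rows=... inside (cost=...)         the planner's ESTIMATE
--   rows=... inside (actual time=...)  what really happened
-- "cost" is in abstract planner units, while "actual time" is in
-- milliseconds; the two numbers are never directly comparable.
```

In the plan above, the top-level estimate (rows=40133) and the actual result (rows=57076) are within a factor of two of each other, while most of the elapsed time accumulates in the Sort and Merge Join near the top of the tree.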
Re: [PERFORM] Query plan - now what?
David Shadovitz [EMAIL PROTECTED] writes:
> Well, now that I have the plan for my slow-running query, what do I do?

This is not very informative when you didn't show us the query nor the table schemas (column datatypes and the existence of indexes are the important parts). I have a feeling that you might be well advised to fold the multiple tables into one "animals" table, but there's not enough info here to make that recommendation for sure.

BTW, what did you do with this, print and OCR it? It's full of the most bizarre typos ... mostly "l" for "1", but others too ...

			regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?
http://archives.postgresql.org
Re: [PERFORM] Query plan - now what?
> This is not very informative when you didn't show us the query nor the table schemas.
> BTW, what did you do with this, print and OCR it?

Tom,
I work in a classified environment, so I had to sanitize the query plan, print it, and OCR it. I spent a lot of time fixing typos, but I guess at midnight my eyes missed some. This hassle is why I posted neither the query nor the schema. The database is normalized, though my use of animal names of course masks this.

If you think that you or anyone else would invest the time, I could post more info. I will also try Shridhar's suggestions on statistics_target and enable_hash_join.

Thanks.
-David
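The two knobs David mentions can be exercised roughly as follows; a minimal sketch, assuming a hypothetical table "frogs" with join column "tiger_id". Note that the run-time setting is spelled enable_hashjoin, without the second underscore:

```sql
-- Raise the per-column statistics target so ANALYZE samples more
-- values, then re-ANALYZE so the planner sees the richer histogram.
ALTER TABLE frogs ALTER COLUMN tiger_id SET STATISTICS 100;
ANALYZE frogs;

-- Disable hash joins for this session only, to see what plan the
-- planner falls back to (e.g. merge or nested-loop joins) and
-- whether it is any faster.
SET enable_hashjoin = off;
-- ...re-run the slow query under EXPLAIN ANALYZE here...
SET enable_hashjoin = on;
```

The enable_* settings are diagnostic tools rather than fixes: if the forced plan wins, that points at a costing or statistics problem to chase down, not at leaving the setting off in production.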
Re: [PERFORM] Query plan - now what?
David Shadovitz [EMAIL PROTECTED] writes:
> If you think that you or anyone else would invest the time, I could post more info.

I doubt you will get any useful help if you don't post more info.

> I will also try Shridhar's suggestions on statistics_target and enable_hash_join.

It seemed to me that the row estimates were not so far off that I would call it a statistical failure; you can try increasing the stats target, but I'm not hopeful about that. My guess is that you will have to look to revising either the query or the whole database structure (ie, merging tables). We'll need the info I asked for before we can make any recommendations, though.

			regards, tom lane
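Tom's "merging tables" suggestion could look something like the following. This is purely illustrative: the real schema was never posted, and every table and column name here is made up:

```sql
-- Fold several structurally similar per-species tables into one,
-- with a discriminator column recording which table a row came from.
CREATE TABLE animals (
    animal_id integer PRIMARY KEY,
    species   text NOT NULL,   -- 'zebra', 'tiger', 'frog', ...
    name      text
);

INSERT INTO animals (animal_id, species, name)
SELECT zebra_id, 'zebra', name FROM zebras;
-- ...repeat for the other species tables...

CREATE INDEX animals_species_idx ON animals (species);
```

The payoff is that a chain of multi-way joins can collapse into simple filters on one table; the cost is changing every query and constraint that referenced the old tables.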