[PERFORM] Query plan - now what?
Well, now that I have the plan for my slow-running query, what do I do? Where should I focus my attention? Thanks. -David

Hash Join (cost=16620.59..22331.88 rows=40133 width=266) (actual time=118773.28..580889.01 rows=57076 loops=1)
  ->  Hash Join (cost=16619.49..21628.48 rows=40133 width=249) (actual time=118771.29..535709.47 rows=57076 loops=1)
    ->  Hash Join (cost=16618.41..20724.39 rows=40133 width=240) (actual time=118768.04..432327.82 rows=57076 loops=1)
      ->  Hash Join (cost=16617.34..19920.66 rows=40133 width=223) (actual time=118764.67..340333.78 rows=57076 loops=1)
        ->  Hash Join (cost=16616.14..19217.14 rows=40133 width=214) (actual time=118761.38..258978.81 rows=57076 loops=1)
          ->  Merge Join (cost=16615.07..18413.42 rows=40133 width=205) (actual time=118758.74..187180.55 rows=57076 loops=1)
            ->  Index Scan using grf_grf_id_idx on giraffes (cost=0.00..1115.61 rows=53874 width=8) (actual time=2.37..6802.38 rows=57077 loops=1)
            ->  Sort (cost=16615.07..16615.07 rows=18554 width=197) (actual time=118755.11..120261.06 rows=59416 loops=1)
              ->  Hash Join (cost=8126.08..14152.54 rows=18554 width=197) (actual time=50615.72..109853.71 rows=16310 loops=1)
                ->  Hash Join (cost=8124.39..12690.30 rows=24907 width=179) (actual time=50607.36..86868.58 rows=16310 loops=1)
                  ->  Hash Join (cost=249.26..2375.23 rows=24907 width=131) (actual time=23476.42..35107.80 rows=16310 loops=1)
                    ->  Nested Loop (cost=248.21..1938.31 rows=24907 width=118) (actual time=23474.70..28155.13 rows=16310 loops=1)
                      ->  Seq Scan on zebras (cost=0.00..1.01 rows=1 width=14) (actual time=0.64..0.72 rows=1 loops=1)
                      ->  Materialize (cost=1688.23..1688.23 rows=24907 width=104) (actual time=23473.77..23834.26 rows=16310 loops=1)
                        ->  Hash Join (cost=248.21..1688.23 rows=24907 width=104) (actual time=1199.26..23059.92 rows=16310 loops=1)
                          ->  Seq Scan on frogs (cost=0.00..755.07 rows=24907 width=83) (actual time=0.53..4629.58 rows=25702 loops=1)
                          ->  Hash (cost=225.57..225.57 rows=9057 width=21) (actual time=1198.01..1198.01 rows=0 loops=1)
                            ->  Seq Scan on tigers (cost=0.00..225.57 rows=9057 width=21) (actual time=0.39..892.67 rows=9927 loops=1)
                    ->  Hash (cost=1.04..1.04 rows=4 width=13) (actual time=1.07..1.07 rows=0 loops=1)
                      ->  Seq Scan on deers (cost=0.00..1.04 rows=4 width=13) (actual time=0.64..0.95 rows=4 loops=1)
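A useful first step with any EXPLAIN ANALYZE output is to compare the planner's estimated rows against the actual rows at each node, and to find the node where actual time first balloons. A minimal sketch of that workflow; the table and column names below are invented, since the real query was never posted:

```sql
-- Hypothetical query shape -- the real query and schema were sanitized away.
-- Run the query under EXPLAIN ANALYZE, then look for nodes where estimated
-- rows and actual rows diverge badly, or where most of the actual time goes.
EXPLAIN ANALYZE
SELECT f.*
FROM frogs f
JOIN tigers t ON t.tiger_id = f.tiger_id
JOIN deers  d ON d.deer_id  = f.deer_id;
```

In the plan above, the estimates (~40133 rows) are not wildly far from the actuals (~57076), so the slowness is more likely in the join strategy or table layout than in the statistics.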
Re: [PERFORM] Query plan - now what?
David Shadovitz [EMAIL PROTECTED] writes:
> Well, now that I have the plan for my slow-running query, what do I do?

This is not very informative when you didn't show us the query nor the table schemas (column datatypes and the existence of indexes are the important parts). I have a feeling that you might be well advised to fold the multiple tables into one "animals" table, but there's not enough info here to make that recommendation for sure.

BTW, what did you do with this, print and OCR it? It's full of the most bizarre typos ... mostly "l" for "1", but others too ...

regards, tom lane

---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [PERFORM] Query plan - now what?
> This is not very informative when you didn't show us the query nor the table schemas...
> BTW, what did you do with this, print and OCR it?

Tom, I work in a classified environment, so I had to sanitize the query plan, print it, and OCR it. I spent a lot of time fixing typos, but I guess at midnight my eyes missed some. This hassle is why I posted neither the query nor the schema. The database is normalized, though my use of animal names of course masks this. If you think that you or anyone else would invest the time, I could post more info. I will also try Shridhar's suggestions on statistics_target and enable_hash_join.

Thanks. -David
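For reference, both of Shridhar's suggestions can be tried with plain SQL. The table and column names below are placeholders (the real schema was not posted), and note the actual GUC spelling is `enable_hashjoin`:

```sql
-- Raise the statistics target on a join column, then refresh statistics:
ALTER TABLE frogs ALTER COLUMN tiger_id SET STATISTICS 100;
ANALYZE frogs;

-- Disable hash joins for this session only, to see what plan the
-- planner falls back to:
SET enable_hashjoin = off;
-- re-run the slow query under EXPLAIN ANALYZE here, then restore:
SET enable_hashjoin = on;
```

Since `SET` only affects the current session, this is a safe way to experiment without touching other users' queries.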
Re: [PERFORM] Query plan - now what?
David Shadovitz [EMAIL PROTECTED] writes:
> If you think that you or anyone else would invest the time, I could post more info.

I doubt you will get any useful help if you don't post more info.

> I will also try Shridhar's suggestions on statistics_target and enable_hash_join.

It seemed to me that the row estimates were not so far off that I would call it a statistical failure; you can try increasing the stats target, but I'm not hopeful about that. My guess is that you will have to look at revising either the query or the whole database structure (i.e., merging tables). We'll need the info I asked for before we can make any recommendations, though.

regards, tom lane
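Tom's "merging tables" idea might look something like the following; the schema here is pure guesswork, since the real one was never posted:

```sql
-- Illustrative only: fold several near-identical per-species tables into
-- one table distinguished by a species column, so queries need fewer joins.
CREATE TABLE animals (
    animal_id  serial PRIMARY KEY,
    species    text NOT NULL,      -- 'frog', 'tiger', 'deer', ...
    name       text,
    weight_kg  numeric
);

INSERT INTO animals (species, name, weight_kg)
SELECT 'frog', name, weight_kg FROM frogs;
-- repeat for the other per-species tables, then rewrite the query to
-- filter on species instead of joining table after table.
```

Each join the plan no longer has to perform removes one of the Hash Join levels seen in the posted plan.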
[PERFORM] Performance related to size of tables
Hi everyone,

I found that performance gets worse as the size of a given table increases. For example, I've just run some scripts shown at http://www.potentialtech.com/wmoran/postgresql.php I understand that those scripts are designed to observe the behavior of PostgreSQL under different filesystems. However, since they generate a lot of I/O activity, I think they can also be used to adjust some configuration parameters. To that end, I increased the number of tuples inserted in the initial table to 200 and 300. What I saw is that the running time goes from 3 min. to 11 min.

My question is: is it possible to use that test to tune some parameters? If so, what parameters should I change to get shorter running times?

Thanks a lot, Nestor
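For an I/O-heavy test like this, the usual starting knobs live in postgresql.conf. The values below are only illustrative starting points (the right numbers depend on available RAM), and names vary across versions, e.g. `sort_mem` became `work_mem` in 8.0:

```
# postgresql.conf fragment -- illustrative starting points only
shared_buffers = 10000         # 8 kB pages, ~80 MB
sort_mem = 8192                # kB available per sort operation
checkpoint_segments = 16       # fewer checkpoints during bulk inserts
wal_buffers = 64
effective_cache_size = 100000  # 8 kB pages of expected OS file cache
```

Re-running the same script after each change, as the benchmark page does, is a reasonable way to see which parameter actually moves the running time.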
[PERFORM] Update on putting WAL on ramdisk/
Some arbitrary data processing job:

WAL on single drive:   7.990 rec/s
WAL on 2nd IDE drive:  8.329 rec/s
WAL on tmpfs:         13.172 rec/s

A huge jump in performance, but a bit scary having a WAL that can disappear at any time. I'm going to work up an rsync script and do some power-off experiments to see how badly it gets mangled. This could be a good method, though, when you're dumping and restoring an entire DB: make a tmpfs mount, restore, shut down the DB, and then copy the WAL back to the HD.

I checked out the SanDisk IDE FlashDrives. They have a write-cycle life of 2 million, so I'll explore more expensive solid-state drives.
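For the dump-and-restore case, the procedure sketched above might look like this. The paths and tmpfs size are assumptions, and the usual warning applies: a WAL on tmpfs is gone after a power loss, so this is only sane for a load you can redo from scratch:

```
# Illustrative only -- adjust the PGDATA path and tmpfs size to taste.
pg_ctl stop -D /var/lib/pgsql/data
mount -t tmpfs -o size=512M tmpfs /mnt/walfs
mv /var/lib/pgsql/data/pg_xlog /mnt/walfs/pg_xlog
ln -s /mnt/walfs/pg_xlog /var/lib/pgsql/data/pg_xlog
pg_ctl start -D /var/lib/pgsql/data
# ...run the restore, stop the server, then move pg_xlog back to real
# disk and remove the symlink before resuming normal operation.
```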
Re: [PERFORM] Update on putting WAL on ramdisk/
> WAL on single drive:   7.990 rec/s
> WAL on 2nd IDE drive:  8.329 rec/s
> WAL on tmpfs:         13.172 rec/s
> A huge jump in performance but a bit scary having a WAL that can disappear at any time. I'm going to work up an rsync script and do some power-off experiments to see how badly it gets mangled.

Surely this is just equivalent to disabling fsync? If you put a WAL on a volatile file system, there's not a whole lot of point in having one at all.

Russ Garrett [EMAIL PROTECTED] http://last.fm
Re: [PERFORM] Update on putting WAL on ramdisk/
Russell Garrett wrote:
>> WAL on single drive:   7.990 rec/s
>> WAL on 2nd IDE drive:  8.329 rec/s
>> WAL on tmpfs:         13.172 rec/s
>> A huge jump in performance but a bit scary having a WAL that can disappear at any time. I'm going to work up an rsync script and do some power-off experiments to see how badly it gets mangled.
> Surely this is just equivalent to disabling fsync? If you put a WAL on a volatile file system, there's not a whole lot of point in having one at all.

These tests were all with fsync off. And no, it's not equivalent to fsync off, since the WAL is always written immediately regardless of the fsync setting.
Re: [PERFORM] [HACKERS] fsync method checking
Manfred Spraul [EMAIL PROTECTED] writes:
> One advantage of a separate write and fsync call is better performance for the writes that are triggered within AdvanceXLInsertBuffer: I'm not sure how often that's necessary, but it's a write while holding both the WALWriteLock and WALInsertLock. If every write contains an implicit sync, that call would be much more expensive than necessary.

Ideally that path isn't taken very often. But I'm currently having a discussion off-list with a CMU student who seems to be seeing a case where it happens a lot. (She reports that both WALWriteLock and WALInsertLock are causes of a lot of process blockages, which seems to mean that a lot of the WAL I/O is being done with both held, which would have to mean that AdvanceXLInsertBuffer is doing the I/O. More when we figure out what's going on exactly...)

regards, tom lane
Re: [PERFORM] [ADMIN] ODBC Driver generates a too big windows swap file and
On Fri, 12 Dec 2003, Rhaoni Chiu Pereira wrote:

Hi, is there a switch in your pgsql/ODBC connector to enable cursors? If so, try turning that on.
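In the psqlODBC driver, cursor behavior is controlled by DSN options; an odbc.ini fragment might look like this (option names as in psqlODBC; the exact paths and DSN details are placeholders):

```
; odbc.ini fragment -- illustrative DSN settings
[PostgreSQL]
Driver          = /usr/lib/psqlodbc.so
Servername      = localhost
Database        = mydb
UseDeclareFetch = 1    ; read rows through a cursor instead of buffering all
Fetch           = 100  ; rows fetched per cursor round-trip
```

This keeps the driver from materializing the entire result set in client memory, which is what inflates the Windows swap file.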