[PERFORM] DB response during DB dump
Hi, everybody!

I'm having problems backing up one of my PostgreSQL 8.1.2 installations. When I run a DB backup, all queries begin to run very slow =( The database only grows in size (~20 GB today), and the number of transactions increases every month. A year ago such a slowdown was acceptable, but today it is not. I found out that pg_dump dramatically increases HDD I/O, and because of this most queries run slower. My application using this DB server is time-critical, so any kind of slowdown is unacceptable.

I've written a Perl script to limit pg_dump output bandwidth, a simple traffic shaper, which runs as:

pg_dumpall -c -U postgres | limit_bandwidth.pl | bzip2 > pgsql_dump.bz2

The limit_bandwidth.pl script limits pipe output to a 4 MB/sec rate, which seems to be OK. Is there any other solution to avoid this problem?

--
Evgeny Gridasov
Software Engineer
I-Free, Russia

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [PERFORM] DB response during DB dump
Evgeny Gridasov wrote:
> I've written a perl script to limit pg_dump output bandwidth, a simple
> traffic shaper ...
> Is there any other solution to avoid this problem?

That's an interesting solution, and I'd guess people might like to see it posted to the list if it's not too big.

Also, there's no reason you have to dump from the same machine; you can do so over the network, which should reduce activity a little bit.

Basically, though, it sounds like you either need more disk I/O or a different approach. Have you looked into using PITR log-shipping or replication (e.g. Slony) to have an off-machine backup?

--
Richard Huxton
Archonet Ltd
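For reference, the PITR log-shipping suggested above was already available in 8.1: you enable continuous WAL archiving and take a base backup once, after which only WAL segments are shipped. A minimal sketch, assuming a backup volume mounted at /mnt/backup (that path is hypothetical):

```
# postgresql.conf (PostgreSQL 8.1): enable continuous WAL archiving
archive_command = 'cp %p /mnt/backup/wal/%f'
```

After reloading the config, take a base backup of the data directory between SELECT pg_start_backup('label'); and SELECT pg_stop_backup();, then keep shipping the archived WAL files to the spare machine. Since the ongoing cost is copying 16 MB WAL segments as they fill, the backup I/O is spread over time instead of hitting the disks all at once the way pg_dump does.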
Re: [PERFORM] DB response during DB dump
Richard Huxton <dev@archonet.com> writes:
> Evgeny Gridasov wrote:
>> I've written a perl script to limit pg_dump output bandwidth, ...
>> Is there any other solution to avoid this problem?
> That's an interesting solution, and I'd guess people might like to see
> it posted to the list if it's not too big.

Years ago there was some experimentation with dump-rate throttling logic inside pg_dump itself --- there's still a comment about it in pg_dump.c. The experiment didn't seem very successful, which is why it never got to be a permanent feature. I'm curious to know why this Perl script is doing a better job than we were able to do inside pg_dump.

regards, tom lane
Re: [PERFORM] DB response during DB dump
All I was trying to achieve is to limit the I/O rate caused by pg_dump. The script is a very simple pipe rate limiter and nothing more: it reads input, but outputs data at no more than the specified rate. I guess it helps because even if pg_dump could output data at 20 MB/sec, the script won't read it any faster than it writes. The pipe buffer is not infinite, so pg_dump's output rate (and with it the hard disk reads) ends up almost equal to the input rate of my Perl script.

On Wed, 25 Jan 2006 11:21:58 -0500 Tom Lane [EMAIL PROTECTED] wrote:
> Years ago there was some experimentation with dump-rate throttling
> logic inside pg_dump itself --- there's still a comment about it in
> pg_dump.c. The experiment didn't seem very successful, which is why it
> never got to be a permanent feature. I'm curious to know why this perl
> script is doing a better job than we were able to do inside pg_dump.

--
Evgeny Gridasov
Software Engineer
I-Free, Russia
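The backpressure mechanism described above can be sketched in a few lines. This is a hypothetical reimplementation in Python, not Evgeny's actual limit_bandwidth.pl (the default rate and chunk size are assumptions):

```python
import sys
import time


def limit_pipe(rate_bytes=4 * 1024 * 1024, chunk=64 * 1024,
               src=None, dst=None):
    """Copy src to dst, keeping the average rate at or below rate_bytes/sec."""
    src = src if src is not None else sys.stdin.buffer
    dst = dst if dst is not None else sys.stdout.buffer
    start = time.monotonic()
    sent = 0
    while True:
        data = src.read(chunk)
        if not data:  # EOF
            break
        dst.write(data)
        sent += len(data)
        # If we are ahead of schedule, sleep until the average
        # transfer rate falls back to the target.
        expected = sent / rate_bytes
        elapsed = time.monotonic() - start
        if expected > elapsed:
            time.sleep(expected - elapsed)
    dst.flush()


if __name__ == "__main__":
    limit_pipe()
```

Used as `pg_dumpall -c -U postgres | python limit_pipe.py | bzip2 > pgsql_dump.bz2`, the filter never reads faster than it is allowed to write; once the kernel's finite pipe buffer fills, pg_dump blocks in write(), which is exactly what throttles its disk reads.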
Re: [PERFORM] DB response during DB dump
OK, it's VERY simple =) Here: http://deepcore.i-free.ru/simple_shaper.pl

I could dump to a spare machine, but I don't have one. The current DB server is 2x Xeon / 4 GB RAM / RAID 10 (4 SCSI HDDs). Performance is excellent, except during backups. I wanted to set up some kind of replication, but it's useless: I don't have a spare machine now; maybe in the future...

On Wed, 25 Jan 2006 12:44:45 +0000 Richard Huxton <dev@archonet.com> wrote:
> That's an interesting solution, and I'd guess people might like to see
> it posted to the list if it's not too big. Also, there's no reason you
> have to dump from the same machine, you can do so over the network
> which should reduce activity a little bit. Basically though, it sounds
> like you either need more disk I/O or a different approach. Have you
> looked into using PITR log-shipping or replication (e.g. slony) to
> have an off-machine backup?

--
Evgeny Gridasov
Software Engineer
I-Free, Russia
[PERFORM] PostgreSQL Solaris packages now in beta
With big thanks to Josh Berkus and Devrim Gunduz, I'm happy to announce that Sun has just released a Solaris distribution of PostgreSQL 8.1.2 with ready-to-install packages for both SPARC and x86. These packages are currently in beta, and we expect to reach FCS in 2-3 weeks. The packages, along with an install guide, are available for download at http://pgfoundry.org/projects/solarispackages/

We have tightly integrated PostgreSQL with Solaris in a manner similar to the Linux distributions available on postgresql.org. In fact, the directory structures are identical. Starting with Solaris 10 Update 2, PostgreSQL will be distributed with every copy of Solaris, via download and physical media.

We welcome any and all feedback on this PostgreSQL Solaris distribution. Please subscribe to the [EMAIL PROTECTED] mailing list to give us feedback: http://pgfoundry.org/mail/?group_id=163

BTW, I'm a senior engineer at Sun Microsystems, recently working with the PostgreSQL community (namely Josh Berkus, Devrim Gunduz, and Gavin Sherry) on the Solaris Packages Project at PgFoundry, PostgreSQL performance optimization on Solaris, and leveraging Solaris 10 capabilities (e.g. DTrace) specifically for PostgreSQL. I'll be posting a Solaris performance tuning guide in a few weeks.

Regards,
Robert Lor