Re: [PERFORM] Restore performance?
> Well, your pg_dump command lost your BLOBs since the plain text
> format doesn't support them.

Well, no.. they are stored as BYTEA, not Large Objects. They are encoded
in ASCII in the pg_dump output.

As a side note: the plain text dump format in 8.1 supports LOBs.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

               http://archives.postgresql.org
Re: [PERFORM] Restore performance?
> Well, your pg_dump command lost your BLOBs since the plain text
> format doesn't support them.

Well, no.. they are stored as BYTEA, not Large Objects. They are encoded
in ASCII in the pg_dump output.

> But once you use the -Fc format on your dump and enable blob backups,
> you can speed up reloads by increasing your checkpoint segments to a big
> number like 256 and the checkpoint timeout to something like 10 minutes.
> All other normal tuning parameters should be what you plan
> to use for your normal operations, too.

Thanks.

Jesper
--
Jesper Krogh
Re: [PERFORM] Restore performance?
On Apr 10, 2006, at 3:55 AM, Jesper Krogh wrote:

> I'd run pg_dump | gzip > sqldump.gz on the old system. That took
> about 30 hours and gave me a 90GB zipped file. Running
>   cat sqldump.gz | gunzip | psql
> into the 8.1 database seems to take about the same time. Are there
> any tricks I can use to speed this dump+restore process up? The
> database contains quite a lot of BLOBs, thus the size.

Well, your pg_dump command lost your BLOBs since the plain text format
doesn't support them.

But once you use the -Fc format on your dump and enable blob backups,
you can speed up reloads by increasing your checkpoint segments to a
big number like 256 and the checkpoint timeout to something like 10
minutes. All other normal tuning parameters should be what you plan to
use for your normal operations, too.
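For reference, the reload settings suggested above map onto postgresql.conf
roughly like this (8.1-era parameter names; the values are the ones from this
thread, not universal recommendations, and should be reverted after the
restore; the database name in the commented commands is hypothetical):

```
# postgresql.conf -- temporary overrides for a bulk reload (PostgreSQL 8.1)
checkpoint_segments = 256   # "a big number like 256"
checkpoint_timeout = 600    # seconds, i.e. about 10 minutes

# Dump/restore side (sketch): -Fc selects the custom format, -b includes blobs
#   pg_dump -Fc -b mydb > mydb.dump
#   pg_restore -d mydb mydb.dump
```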
Re: [PERFORM] Restore performance?
> I'd run pg_dump | gzip > sqldump.gz on the old system.

If the source and destination databases are on different machines, you
can pipe pg_dump on the source machine to pg_restore on the destination
machine by using netcat. If you only have 100 Mbps ethernet, compressing
the data will be faster. If you have Gb Ethernet, maybe you don't need
to compress, but it doesn't hurt to test.

Use pg_restore instead of psql, and use a recent version of pg_dump
which can generate dumps in the latest format. If you need fast
compression, use gzip -1 or even lzop, which is incredibly fast.

Turn off fsync during the restore and set maintenance_work_mem to use
most of your available RAM for index creation. I think that creating
foreign key constraints uses large joins; it might be good to up
work_mem also.

Check the speed of your disks with dd beforehand. You might get a
surprise. Maybe you can also play with the bgwriter and checkpoint
parameters.
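The dd check mentioned above can be as simple as timing a sequential write of
a scratch file (a minimal sketch; the path and 10 MB size are arbitrary, and a
real test should use a file larger than RAM so the page cache doesn't flatter
the numbers):

```shell
# Rough sequential-write check with dd; GNU dd prints throughput on stderr.
SCRATCH=/tmp/dd_scratch.bin
dd if=/dev/zero of="$SCRATCH" bs=8k count=1280   # 8 kB * 1280 = 10 MB
sync                       # make sure the data actually reaches the disk
ls -l "$SCRATCH"           # 10485760 bytes
```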
Re: [PERFORM] Restore performance?
Rajesh Kumar Mallah wrote:
> 4. fsync can also be turned off while loading huge dataset,
> but seek others' comments too (and study the docs) as i am not sure
> about the reliability. i think it can make a lot of difference.

Also be sure to increase maintenance_work_mem so that index creation
goes faster.

--
Alvaro Herrera                      http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
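Put together, the two load-time overrides in this exchange look roughly like
this in postgresql.conf (values are illustrative; fsync = off is unsafe
outside the reload window, since a crash while it is off can corrupt the
cluster, so revert it afterwards):

```
# postgresql.conf -- while loading only (PostgreSQL 8.1; integer values in kB)
fsync = off                      # no crash safety: only during the reload
maintenance_work_mem = 1048576   # 1 GB for index builds; size to your RAM
```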
Re: [PERFORM] Restore performance?
4. fsync can also be turned off while loading huge dataset,
but seek others' comments too (and study the docs) as i am not sure
about the reliability. i think it can make a lot of difference.

On 4/10/06, Jesper Krogh <[EMAIL PROTECTED]> wrote:
> Rajesh Kumar Mallah wrote:
> >> I'd run pg_dump | gzip > sqldump.gz on the old system. That took about
> >> 30 hours and gave me a 90GB zipped file. Running
> >>   cat sqldump.gz | gunzip | psql
> >> into the 8.1 database seems to take about the same time. Are there
> >> any tricks I can use to speed this dump+restore process up?
> >
> > was the last restore successful?
> > if so why do you want to repeat?
>
> "about the same time" == Estimated guess from restoring a few tables
>
> I was running a testrun, without disabling updates to the production
> database; the real run is scheduled for easter where there hopefully
> are no users on the system. So I need to repeat, I'm just trying to
> get a feeling about how much time I need to allocate for the operation.
>
> > 1. run new version of postgres on a different port and pipe pg_dump
> > to psql; this may save the CPU time of compression, there is no need
> > for a temporary dump file.
> >
> > pg_dump | /path/to/psql813 -p 54XX newdb
>
> I'll do that. It is a completely different machine anyway.
>
> > 2. use new version of pg_dump to dump the old database, as the new
> > version is supposed to be wiser.
>
> Check.
>
> > 3. make sure you are trapping the restore errors properly
> > psql newdb 2>&1 | cat | tee err works for me.
>
> That's noted.
>
> --
> Jesper Krogh, [EMAIL PROTECTED]
Re: [PERFORM] Restore performance?
Rajesh Kumar Mallah wrote:
>> I'd run pg_dump | gzip > sqldump.gz on the old system. That took about
>> 30 hours and gave me a 90GB zipped file. Running
>>   cat sqldump.gz | gunzip | psql
>> into the 8.1 database seems to take about the same time. Are there
>> any tricks I can use to speed this dump+restore process up?
>
> was the last restore successful?
> if so why do you want to repeat?

"about the same time" == Estimated guess from restoring a few tables

I was running a testrun, without disabling updates to the production
database; the real run is scheduled for easter where there hopefully
are no users on the system. So I need to repeat, I'm just trying to
get a feeling about how much time I need to allocate for the operation.

> 1. run new version of postgres on a different port and pipe pg_dump
> to psql; this may save the CPU time of compression, there is no need
> for a temporary dump file.
>
> pg_dump | /path/to/psql813 -p 54XX newdb

I'll do that. It is a completely different machine anyway.

> 2. use new version of pg_dump to dump the old database, as the new
> version is supposed to be wiser.

Check.

> 3. make sure you are trapping the restore errors properly
> psql newdb 2>&1 | cat | tee err works for me.

That's noted.

--
Jesper Krogh, [EMAIL PROTECTED]
Re: [PERFORM] Restore performance?
sorry for the post, i didn't see the other replies until after posting.

On 4/10/06, Rajesh Kumar Mallah <[EMAIL PROTECTED]> wrote:
> On 4/10/06, Jesper Krogh <[EMAIL PROTECTED]> wrote:
> > Hi
> >
> > I'm currently upgrading a PostgreSQL 7.3.2 database to 8.1.
> >
> > I'd run pg_dump | gzip > sqldump.gz on the old system. That took about
> > 30 hours and gave me a 90GB zipped file. Running
> >   cat sqldump.gz | gunzip | psql
> > into the 8.1 database seems to take about the same time. Are there
> > any tricks I can use to speed this dump+restore process up?
>
> was the last restore successful?
> if so why do you want to repeat?
>
> some tips
>
> 1. run new version of postgres on a different port and pipe pg_dump
> to psql; this may save the CPU time of compression, there is no need
> for a temporary dump file.
>
> pg_dump | /path/to/psql813 -p 54XX newdb
>
> 2. use new version of pg_dump to dump the old database, as the new
> version is supposed to be wiser.
>
> 3. make sure you are trapping the restore errors properly
> psql newdb 2>&1 | cat | tee err works for me.
>
> > The database contains quite a lot of BLOBs, thus the size.
> >
> > Jesper
> > --
> > ./Jesper Krogh, [EMAIL PROTECTED], Jabber ID: [EMAIL PROTECTED]
Re: [PERFORM] Restore performance?
On 4/10/06, Jesper Krogh <[EMAIL PROTECTED]> wrote:
> Hi
>
> I'm currently upgrading a PostgreSQL 7.3.2 database to 8.1.
>
> I'd run pg_dump | gzip > sqldump.gz on the old system. That took about
> 30 hours and gave me a 90GB zipped file. Running
>   cat sqldump.gz | gunzip | psql
> into the 8.1 database seems to take about the same time. Are there
> any tricks I can use to speed this dump+restore process up?

was the last restore successful?
if so why do you want to repeat?

some tips

1. run new version of postgres on a different port and pipe pg_dump to
psql; this may save the CPU time of compression, there is no need for a
temporary dump file.

pg_dump | /path/to/psql813 -p 54XX newdb

2. use new version of pg_dump to dump the old database, as the new
version is supposed to be wiser.

3. make sure you are trapping the restore errors properly
psql newdb 2>&1 | cat | tee err works for me.

> The database contains quite a lot of BLOBs, thus the size.
>
> Jesper
> --
> ./Jesper Krogh, [EMAIL PROTECTED], Jabber ID: [EMAIL PROTECTED]
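Tip 3 above can be tried without a live database: the essential part is the
`2>&1 | tee` redirection, shown here with a stand-in function (hypothetical
name) in place of `psql newdb`:

```shell
# Capture both normal output and errors of a restore-style command in a file.
# restore_stand_in is a hypothetical substitute for "psql newdb".
restore_stand_in() {
    echo "CREATE TABLE"                        # ordinary psql output (stdout)
    echo "ERROR: relation already exists" >&2  # an error worth logging
}
restore_stand_in 2>&1 | tee /tmp/restore.err   # show output AND keep a copy
grep -c '^ERROR' /tmp/restore.err              # count errors afterwards (1 here)
```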
Re: [PERFORM] Restore performance?
"Jesper Krogh" <[EMAIL PROTECTED]> writes: > gzip does not seem to be the bottleneck, on restore is psql the nr. 1 > consumer on cpu-time. Hm. We've seen some situations where readline mistakenly decides that the input is interactive and wastes lots of cycles doing useless processing (like keeping history). Try "psql -n" and see if that helps. regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [PERFORM] Restore performance?
> I'd run pg_dump | gzip > sqldump.gz on the old system. That took about
> 30 hours and gave me a 90GB zipped file. Running
>   cat sqldump.gz | gunzip | psql
> into the 8.1 database seems to take about the same time. Are there
> any tricks I can use to speed this dump+restore process up?
>
> The database contains quite a lot of BLOBs, thus the size.

You could try Slony - it can do almost-zero-downtime upgrades.

Greetings
Marcin Mank
Re: [PERFORM] Restore performance?
> If they both took the same amount of time, then you are almost certainly
> bottlenecked on gzip.
>
> Try a faster CPU or use "gzip --fast".

gzip does not seem to be the bottleneck; on restore, psql is the nr. 1
consumer of cpu-time.

Jesper

Sorry for the double post.
--
Jesper Krogh
Re: [PERFORM] Restore performance?
Jesper Krogh wrote:
> Hi
>
> I'm currently upgrading a PostgreSQL 7.3.2 database to 8.1.
>
> I'd run pg_dump | gzip > sqldump.gz on the old system. That took about
> 30 hours and gave me a 90GB zipped file. Running
>   cat sqldump.gz | gunzip | psql
> into the 8.1 database seems to take about the same time. Are there
> any tricks I can use to speed this dump+restore process up?

If you can have both database systems up at the same time, you could
pg_dump | psql.

Regards,
Andreas
Re: [PERFORM] Restore performance?
Jesper,

If they both took the same amount of time, then you are almost
certainly bottlenecked on gzip.

Try a faster CPU or use "gzip --fast".

- Luke

From: [EMAIL PROTECTED] on behalf of Jesper Krogh
Sent: Mon 4/10/2006 12:55 AM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Restore performance?

Hi

I'm currently upgrading a PostgreSQL 7.3.2 database to 8.1.

I'd run pg_dump | gzip > sqldump.gz on the old system. That took about
30 hours and gave me a 90GB zipped file. Running
  cat sqldump.gz | gunzip | psql
into the 8.1 database seems to take about the same time. Are there any
tricks I can use to speed this dump+restore process up?

The database contains quite a lot of BLOBs, thus the size.

Jesper
--
./Jesper Krogh, [EMAIL PROTECTED], Jabber ID: [EMAIL PROTECTED]
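gzip --fast is the same as gzip -1; its CPU/size trade-off versus the default
level (-6) is easy to measure on a sample of your own dump. A minimal sketch
on generated stand-in data (file names, row text, and the 50,000-line size
are arbitrary):

```shell
# Compare gzip default (-6) with --fast (-1) on compressible sample data.
SAMPLE=/tmp/gzip_sample.txt
# Repetitive text stands in for a plain-text SQL dump (~2 MB).
yes "INSERT INTO t VALUES (42, 'some row data');" | head -n 50000 > "$SAMPLE"

gzip -6 -c "$SAMPLE" > "$SAMPLE.default.gz"   # default compression level
gzip -1 -c "$SAMPLE" > "$SAMPLE.fast.gz"      # --fast: less CPU, larger file

ls -l "$SAMPLE" "$SAMPLE.default.gz" "$SAMPLE.fast.gz"

# Both must decompress back to the original byte-for-byte.
gunzip -c "$SAMPLE.fast.gz" | cmp -s - "$SAMPLE" && echo "roundtrip ok"
```

Timing each gzip invocation with `time` on a real dump sample tells you
whether the lower level actually moves the bottleneck on your hardware.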
[PERFORM] Restore performance?
Hi

I'm currently upgrading a PostgreSQL 7.3.2 database to 8.1.

I'd run pg_dump | gzip > sqldump.gz on the old system. That took about
30 hours and gave me a 90GB zipped file. Running

  cat sqldump.gz | gunzip | psql

into the 8.1 database seems to take about the same time. Are there any
tricks I can use to speed this dump+restore process up?

The database contains quite a lot of BLOBs, thus the size.

Jesper
--
./Jesper Krogh, [EMAIL PROTECTED], Jabber ID: [EMAIL PROTECTED]