On 04/05/2012 12:32 PM, Joachim Wieland wrote:
> So here's a pg_dump benchmark from a real-world database, as requested
> earlier. This is a ~750 GB 9.0.6 database, and the backup was done
> over the internal network from a different machine. Both machines run
> Linux.
>
> I am attaching a chart that shows the table size distribution of the
> largest tables and the overall pg_dump runtime. The resulting (zlib
> compressed) dump directory was 28 GB.
>
> Here are the raw numbers:
>
> -Fc dump
> real 168m58.005s
> user 146m29.175s
> sys 7m1.113s
>
> -j 2
> real 90m6.152s
> user 155m23.887s
> sys 15m15.521s
>
> -j 3
> real 61m5.787s
> user 155m33.118s
> sys 13m24.618s
>
> -j 4
> real 44m16.757s
> user 155m25.917s
> sys 13m13.599s
>
> -j 6
> real 36m11.743s
> user 156m30.794s
> sys 12m39.029s
>
> -j 8
> real 36m16.662s
> user 154m37.495s
> sys 11m47.141s
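As a quick sanity check on the scaling, here is a small Python sketch (not from the original mail) that computes speedup and parallel efficiency from the wall-clock ("real") times quoted above, taking the serial -Fc run as the baseline. It makes the plateau visible: -j 6 and -j 8 finish in essentially the same time.

```python
# Speedup of parallel pg_dump (-j N) relative to the serial -Fc run,
# using the wall-clock times reported in the quoted benchmark.

def seconds(minutes, secs):
    """Convert a `time`-style XmY.ZZZs value to seconds."""
    return minutes * 60 + secs

serial = seconds(168, 58.005)      # -Fc baseline
parallel = {
    2: seconds(90, 6.152),
    3: seconds(61, 5.787),
    4: seconds(44, 16.757),
    6: seconds(36, 11.743),
    8: seconds(36, 16.662),
}

for jobs, real in sorted(parallel.items()):
    speedup = serial / real
    # Efficiency = speedup per worker; it collapses once the run
    # is bound by something other than CPU (disk, network, or one
    # very large table dominating the schedule).
    print(f"-j {jobs}: speedup {speedup:.2f}x, "
          f"efficiency {speedup / jobs:.0%}")
```

Note that scaling near-linearly up to -j 4 and then flattening is consistent with either a shared-resource bottleneck or with the largest tables setting a floor on the wall-clock time, which is exactly what the follow-up questions below probe.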
Interesting numbers. Any details on the network speed between the boxes,
the number of cores, the uncompressed size of the dump, and what the
apparent bottleneck was?


Stefan