I have found, on 7.3.4, a _massive_ performance difference when restoring without indices - on a 25 million row table, from 8 hours down to <1 hour!
I've found the best way is to do this (there may be a script somewhere that automates this; a rough sketch of the commands follows the list):
- do a --schema-only restore to create the tables
- manually drop the indices using psql
- do a --data-only restore, also using --disable-triggers
- manually recreate the indices.
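Roughly like this, as an untested sketch - it assumes a full (schema + data) custom-format dump, and the dump file, table and index names below are just placeholders, so substitute your own:

pg_restore -d testdb -U postgres --schema-only ./mydb.pgdump
psql -d testdb -U postgres -c "DROP INDEX my_table_fk_idx;"    # repeat for each index
pg_restore -d testdb -U postgres --data-only --disable-triggers ./mydb.pgdump
psql -d testdb -U postgres -c "CREATE INDEX my_table_fk_idx ON my_table (fk_col);"

(Note that indexes backing pk/unique constraints can't be dropped with DROP INDEX; those have to go via ALTER TABLE ... DROP CONSTRAINT and be re-added the same way.)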
IIRC, it also helps to turn off fsync while the restore runs.
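A sketch only: in postgresql.conf in your data directory, set

fsync = off

and restart the postmaster. Turn it back on once the restore has finished - with fsync off a crash mid-load can corrupt the data.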
Hope that helps,
Shane
On 12 Nov 2003, at 16:55, ow wrote:
Hi,
Trying to restore a table that has about 80 million records. The database was
dumped and restored according to the following procedure:
1) dump the db, data only
time /usr/local/pgsql/bin/pg_dump -abf ./pgsql.7.4.rc1.pgdump.Z --format=c
--compress=6 -U postgres testdb
2) create db schema from a separate file, including table structures, constraints, indexes
3) edit restore order to satisfy the constraints
4) restore the db
time /usr/local/pgsql/bin/pg_restore -d testdb -U postgres -a ./pgsql.7.4.rc1.pgdump.Z -L ./restoreOrder.txt --verbose
pg_restore has been running for 14 hours now and there does not appear to be any end in sight. Meanwhile, postmaster is slowly eating away at the memory; it now has 46% of all available memory, with about 900MB on swap. HD activity is non-stop.
In retrospect, I guess, the table with 80M records could've been created
without indexes (it has 3: pk & ak constraints and an fk index) to speed up the
pg_restore ... but then I'm not sure whether creating the indexes afterwards
would've been much faster. Anything I'm doing wrong?
Thanks