Hi Steve,

Very interested to hear about your setup, as I have a similar one (back end to a mail server/spam scanner), although under a much lighter load at the moment. My database is only just touching a GB, so nothing near the scale of yours!

I use a file-system-level backup, and am currently testing PITR continuous recovery onto a hot-standby server. Tarring the database directory currently takes about a minute (at 1 GB), so as you can estimate, it'd be about three hours for yours.

My future plan, for when my database grows larger, is to use WAL logging: take a base backup on a Sunday morning (our quietest time), ship it to the hot standby once a week, and start the standby off in recovery mode (using the rolling-WAL script I'm testing now). Then, throughout the week, send the WAL files from the live box down to the standby as they become available, where they get processed on arrival. These files are 16 MB each (I believe this can be changed). The beauty of all this is that it doesn't require the database to be taken off-line or slowed down.
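To give an idea of what I mean by the shipping step, here's a rough Python sketch. It isn't my actual rolling-WAL script, and the standby hostname and directories are just placeholders; the idea is that the live box's archive_command calls it with the path and file name of each completed segment:

#!/usr/bin/env python
# Rough sketch of a WAL-shipping step (not my actual script).
# Meant to be wired in as the archive_command on the live server, e.g.
#   archive_command = '/usr/local/bin/wal_ship.py "%p" "%f"'
# The standby hostname and directories below are placeholders.

import subprocess
import sys

STANDBY = "standby.example.com"              # placeholder standby host
INCOMING = "/var/lib/pgsql/wal_incoming"     # placeholder directory the standby watches

def ship(wal_path, wal_name):
    # Copy to a temporary name first, then rename on the standby, so the
    # standby never tries to replay a half-transferred segment.
    part = "%s/%s.part" % (INCOMING, wal_name)
    final = "%s/%s" % (INCOMING, wal_name)
    if subprocess.call(["scp", "-q", wal_path, "%s:%s" % (STANDBY, part)]) != 0:
        return 1
    return subprocess.call(["ssh", STANDBY, "mv %s %s" % (part, final)])

if __name__ == "__main__":
    if len(sys.argv) != 3:
        sys.stderr.write("usage: wal_ship.py <wal-path> <wal-filename>\n")
        sys.exit(2)
    # PostgreSQL treats a zero exit status as "archived successfully";
    # anything else makes it keep the segment and retry later.
    sys.exit(ship(sys.argv[1], sys.argv[2]))

On the standby side, the restore_command in recovery.conf (or a small watcher script) just waits for each file to appear in that directory and hands it back to the recovering server.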
believe it’d be okay for 7.4 but don’t quote me on it. Regards Andy From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On
Behalf Of Steve Burrows I am struggling to find an
acceptable way of backing up a PostgreSQL 7.4 database. |