On Fri, 19 Jun 2009 09:43:28 -0600
torrez wrote:
> Hello,
> I'm implementing WAL archiving and PITR on my production DB.
> I've set up my TAR, WAL archives, and pg_xlog all to be stored on a
> separate disk from my DB.
> I'm at the point where I'm running 'SELECT pg_start_backup('xxx');'.
>
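For reference, the full low-level sequence being set up there looks roughly
like this. A minimal sketch: the archive path /backup/base.tar and the label
'xxx' are placeholders, and the data directory /pbo/pod is taken from the tar
command quoted below.

    psql -c "SELECT pg_start_backup('xxx');"   # put the cluster into backup mode
    tar -cf /backup/base.tar /pbo/pod          # copy the data directory while in backup mode
    psql -c "SELECT pg_stop_backup();"         # end backup mode and archive the final WAL segment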
On Friday 19 June 2009 torrez wrote:
> time tar -czf /pbo/podbackuprecovery/tars/pod-backup-${CURRDATE}.tar.gz \
>     /pbo/pod > /pbo/podbackuprecovery/pitr_logs/backup-tar-log-${CURRDATE}.log 2>&1
If you have a multi-core/multi-CPU machine, try to use pbzip2 (parallel
bzip2), which can use all available cores.
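With GNU tar that can be wired in through --use-compress-program instead of
-z. A sketch reusing the paths from the command above (this assumes GNU tar
and that pbzip2 is on the PATH; pbzip2 uses all cores unless limited with -p):

    time tar -cf /pbo/podbackuprecovery/tars/pod-backup-${CURRDATE}.tar.bz2 \
        --use-compress-program=pbzip2 /pbo/pod \
        > /pbo/podbackuprecovery/pitr_logs/backup-tar-log-${CURRDATE}.log 2>&1

The pipe form (tar -c /pbo/pod | pbzip2 > ...) works with any tar, not just
GNU tar.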
torrez wrote:
> The problem is that this tar took just over 25 hours to complete. I
> expected this to be a long process, since my DB is about 100 gigs,
> but 25 hours seems a bit too long. Does anyone have any ideas how to
> cut down on this time?
Don't gzip it online?
--
Alvaro
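That is, drop the -z from the tar so the copy itself finishes quickly, and
compress the archive afterwards (or on another machine), outside the backup
window. A minimal sketch with the same paths as above:

    # uncompressed copy while the cluster is in backup mode; CPU is no longer the bottleneck
    time tar -cf /pbo/podbackuprecovery/tars/pod-backup-${CURRDATE}.tar /pbo/pod
    # compress later, after pg_stop_backup(), when the time pressure is off
    gzip /pbo/podbackuprecovery/tars/pod-backup-${CURRDATE}.tar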