Re: [BackupPC-users] 38GB file backup is hanging backuppc (more info and more questions)

2009-03-02 Thread John Rouillard
On Mon, Mar 02, 2009 at 08:45:22AM +0200, Brad C wrote:
 Hi John,
 
 On Mon, Feb 16, 2009 at 8:42 PM, John Rouillard rouilj-backu...@renesys.com
  wrote:
 
  Hi Craig:
 
  On Thu, Feb 12, 2009 at 11:24:28PM -0800, Craig Barratt wrote:
   Tony writes:
I missed the original post, but  I run rsync with the --whole-file
option, but I still get RStmp files, is that not supposed to happen?
  
   RStmp is a temporary file used to store the uncompressed pool file,
   which is needed for the rsync algorithm.  It's only used for larger
   files - smaller files are uncompressed in memory.
  
   RStmp is independent of --whole-file.
 
  What about when there is no prior file? I have explicitly deleted the
  original file from the prior backup and I still get an RStmp file. Is
  it just filled with zeros or something?
 
  I agree with Tony that the new file is created very slowly. I have
  14GB of the 38GB file transferred and the latest backup attempt has
  been running since:
 
 2009-02-12 19:58
 
  so that is very slow indeed.
 
  Is there some data I can get to try to figure out where the bottleneck
  is? Since Tony said he sees the same issues using --whole-file I guess
  that won't solve my problem either.

 I'm having the identical issue. I moved from pure rsync, which wasn't
 causing any problem before that I could see.
 Also with large database files (12GB).

If you run cmp -l between two copies of the database file (yesterday's
and today's, for example), how many bytes are different?

  cmp -l yesterday.db today.db | wc -l

will give you the number of differing bytes.
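
(A minimal sketch of that comparison in case yesterday's copy only
survives inside the BackupPC pc/ tree: the pooled file is stored
compressed, so it has to be expanded with BackupPC_zcat first. The
install path, the backup number 326 and the client-side path below are
assumptions reconstructed from the mangled names quoted later in this
thread.)

  # expand the pooled copy, then count differing bytes against the live file
  /usr/local/BackupPC/bin/BackupPC_zcat \
      /backup-pc/pc/ldap01.bos1.renesys.com/326/f%2fdata%2fbak/ffedora-ds/fcurrent/fuserRoot/fid2entry.db4 \
      > /tmp/id2entry.yesterday.db4
  cmp -l /tmp/id2entry.yesterday.db4 /data/bak/fedora-ds/current/userRoot/id2entry.db4 | wc -l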

 Strangely enough, if I clear the backup manually and then set it to full

How do you manually clear the backup? Do you delete the whole
/backuppc-root/pc/hostname/ tree, or just the backed-up copy of the
database file?

 it backs up in 47 minutes flat,
 otherwise it could sit for days. Not sure what I should try. I could
 script it to remove the files before every backup, but that would be a
 shortcut rather than resolving the problem.

My backup finally finished after 4 days and 15 hours. Now I am
incrementally backing up the file without issue (it has about 100MB of
byte differences in steady state mode). My guess is the rsync perl
module is having issues with large changes in files. I never did try
the --whole-file mode.

My guess is that something is wacky in the rsync Perl module used by
BackupPC, since, like you, I could rsync the file using rsync(1) in less
than two hours, whether I was rsyncing to a non-existent file or to an
old copy of the database.

So I am afraid I don't have any good ideas here.

-- 
-- rouilj

John Rouillard
System Administrator
Renesys Corporation
603-244-9084 (cell)
603-643-9300 x 111



Re: [BackupPC-users] 38GB file backup is hanging backuppc (more info and more questions)

2009-03-01 Thread Brad C
Hi John,

I'm having the identical issue. I moved from pure rsync, which wasn't
causing any problem before that I could see.
Also with large database files (12GB). Strangely enough, if I clear the
backup manually and then set it to full, it backs up in 47 minutes flat;
otherwise it could sit for days. Not sure what I should try. I could
script it to remove the files before every backup, but that would be a
shortcut rather than resolving the problem.

Kind Regards




On Mon, Feb 16, 2009 at 8:42 PM, John Rouillard rouilj-backu...@renesys.com
 wrote:

 Hi Craig:

 On Thu, Feb 12, 2009 at 11:24:28PM -0800, Craig Barratt wrote:
  Tony writes:
   I missed the original post, but  I run rsync with the --whole-file
   option, but I still get RStmp files, is that not supposed to happen?
 
  RStmp is a temporary file used to store the uncompressed pool file,
  which is needed for the rsync algorithm.  It's only used for larger
  files - smaller files are uncompressed in memory.
 
  RStmp is independent of --whole-file.

 What about when there is no prior file? I have explicitly deleted the
 original file from the prior backup and I still get an RStmp file. Is
 it just filled with zeros or something?

 I agree with Tony that the new file is created very slowly. I have
 14GB of the 38GB file transferred and the latest backup attempt has
 been running since:

2009-02-12 19:58

 so that is very slow indeed.

 Is there some data I can get to try to figure out where the bottleneck
 is? Since Tony said he sees the same issues using --whole-file I guess
 that won't solve my problem either.

 --
-- rouilj

 John Rouillard
 System Administrator
 Renesys Corporation
 603-244-9084 (cell)
 603-643-9300 x 111




Re: [BackupPC-users] 38GB file backup is hanging backuppc (more info and more questions)

2009-02-16 Thread John Rouillard
Hi Craig:

On Thu, Feb 12, 2009 at 11:24:28PM -0800, Craig Barratt wrote:
 Tony writes:
  I missed the original post, but  I run rsync with the --whole-file
  option, but I still get RStmp files, is that not supposed to happen?
 
 RStmp is a temporary file used to store the uncompressed pool file,
 which is needed for the rsync algorithm.  It's only used for larger
 files - smaller files are uncompressed in memory.
 
 RStmp is independent of --whole-file.

What about when there is no prior file? I have explicitly deleted the
original file from the prior backup and I still get an RStmp file. Is
it just filled with zeros or something?

I agree with Tony that the new file is created very slowly. I have
14GB of the 38GB file transferred and the latest backup attempt has
been running since:

2009-02-12 19:58

so that is very slow indeed.

Is there some data I can get to try to figure out where the bottleneck
is? Since Tony said he sees the same issues using --whole-file I guess
that won't solve my problem either.

-- 
-- rouilj

John Rouillard
System Administrator
Renesys Corporation
603-244-9084 (cell)
603-643-9300 x 111



Re: [BackupPC-users] 38GB file backup is hanging backuppc (more info and more questions)

2009-02-15 Thread dan
Since you are already doing a database dump and backing that up, why not
split the file up into chunks? If you use something like rar or 7z you can
split the compressed files up nicely; you could also do some magic with tar
and gzip. This would save rsync the headache of md4/md5 checksums on such a
large file, which is likely where the issue is. If you break the file up into
700MB chunks or something you will be in good shape.
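
(A rough sketch of the chunking idea, with made-up file names; splitting
the raw dump keeps each piece a manageable size, and leaving it
uncompressed means unchanged chunks still rsync cheaply between runs:)

  # break the dump into 700MB pieces before BackupPC sees it
  split -b 700M /data/bak/bigdump.db /data/bak/bigdump.db.part-
  # restore later by concatenating the pieces in order
  cat /data/bak/bigdump.db.part-* > /restore/bigdump.db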

On Fri, Feb 13, 2009 at 12:24 AM, Craig Barratt 
cbarr...@users.sourceforge.net wrote:

 Tony writes:

  I missed the original post, but  I run rsync with the --whole-file
  option, but I still get RStmp files, is that not supposed to happen?

 RStmp is a temporary file used to store the uncompressed pool file,
 which is needed for the rsync algorithm.  It's only used for larger
 files - smaller files are uncompressed in memory.

 RStmp is independent of --whole-file.

 Craig




Re: [BackupPC-users] 38GB file backup is hanging backuppc (more info and more questions)

2009-02-12 Thread Pedro M. S. Oliveira
Hi!
I had this trouble some time ago while backing up VMware virtual machines (the
files were extremely large, about 120GB) and rsync would behave just as you
describe. I had other, smaller VMs of about 10GB and those worked perfectly
with rsync.
I did some research, and from what I found the clues pointed to the rsync
protocol itself when transferring large files.
I changed the transfer method from rsync to tar and since then I have had no
trouble.
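
(For reference, the switch Pedro describes is a one-line per-host
override in BackupPC's config; a minimal sketch, assuming the stock
tar-over-ssh client command is left as shipped:)

  # per-host config override: use tar instead of rsync for this host
  $Conf{XferMethod} = 'tar';
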
Cheers 
Pedro  

On Wednesday 11 February 2009 22:41:03 John Rouillard wrote:
 Hi all:
 
 Following up with more info with the hope that somebody has a clue as
 to what is killing this backup.
 
 On Fri, Feb 06, 2009 at 03:41:40PM +, John Rouillard wrote:
  I am backing up a 38GB file daily (database dump). There were some
  changes on the database server, so I started a new full dump. After
  two days (40+ hours) it had still not completed. This is over a GB
  network link. I restarted the full dump thinking it had gotten hung.
  [...]
  What is the RStmp file? That one grew pretty quickly to its current
  size given the start time of the backup (18:08 on feb 5). If I run
  file (1) against that file it identifies it as
 
 From another email, it looks like it is the uncompressed prior copy of
 the file that is currently being transferred.
 
   
 http://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg05836.html
 
 Now because of another comment, I tried to disable the use of RStmp
 and force BackupPC to do the equivalent of the --whole-file where it
 just copies the file across without applying the rsync
 differential/incremental algorithm. I executed a full backup after
 moving the file in the prior backup aside. So I moved 
 
   
 /backup-pc/pc/ldap01.bos1.renesys.com/326/f%2fdata%2fbak/ffedora-ds/fcurrent/fuserRoot/fid2entry.db4
 
 to 
 
   
 /backup-pc/pc/ldap01.bos1.renesys.com/326/f%2fdata%2fbak/ffedora-ds/fcurrent/fuserRoot/fid2entry.db4.orig
 
 I claim this should make rsync diff against a non-existent file and
 thus just copy the entire file, but I still see in the lsof output for
 the BackupPC process:
 
   BackupPC_ 18683 backup  8u   REG  9,2  36878598144  48203250
   /backup-pc/pc/ldap01.bos1.renesys.com/new/f%2fdata%2fbak/RStmp
 
 and there is only 1 file that is that large (36878598144 bytes) under
 that volume and it is the id2entry.db4 file.
 
 So what did I miss?
 
 Does BackupPC search more than the prior backup? I verified that run
 327 (which is the partial copy) doesn't have any copy of:
 
   f%2fdata%2fbak/ffedora-ds/fcurrent/fuserRoot/fid2entry.db4
 
 in its tree. So where is the RStmp file coming from?
 
 At this point it's been running about 24 hours and it has transferred
 only 10GB of the 30GB file.
 
 This is ridiculously slow. If my math is right, I should expect a
 36GByte file at a 1Mbit/sec rate (which is about what cacti is showing
 as a steady state throughput on the ethernet port) to transfer in:
 
   36 GBytes * 1024 * 8 = 294,912 Mbits; 294,912 s at 1 Mbit/s ~= 3.4 days
 
 so it will take 3 and a half days at this rate. This is with both
 systems having a lot of idle time and < 1% wait state.
 
 If I run an rsync on the BackupPC server to copy the file from the
 same client, I get something closer to 10MBytes/second
 (80Mbits/sec). Which provides a full copy in a bit over an hour.
 
 Also, one other thing that I noticed: BackupPC in its
 $Conf{RsyncArgs} setting uses:
 
 
 '--block-size=2048',
 
 which is described in the rsync man page as:
 
-B, --block-size=BLOCKSIZE
   This forces the block size used in the rsync algorithm to a
   fixed value.  It is normally selected based on the size of
   each file being updated.  See the technical report for
   details.
 
 is it possible to increase this, or will the Perl rsync library break?
 It would be preferable not to specify it at all, allowing the remote
 rsync to set the block size based on the file size.
 
 I see this in lib/BackupPC/Xfer/RsyncFileIO.pm:
 
   sub csumStart
   {
 my($fio, $f, $needMD4, $defBlkSize, $phase) = @_;
 
  $defBlkSize ||= $fio->{blockSize};
 
 which makes it look like it can handle a different block size, but
 this could be totally unrelated.
 
 I could easily see a 2k block size slowing down the transfer of a
 large file if each block has to be summed.
 
 Anybody with any ideas?
 

-- 
--
Pedro M. S. Oliveira
IT Consultant 
Email: pmsolive...@gmail.com  
URL:   http://pedro.linux-geex.com
Cellular: +351 96 5867227
--

Re: [BackupPC-users] 38GB file backup is hanging backuppc (more info and more questions)

2009-02-12 Thread Tony Schreiner

 the file that is currently being transferred.

 http://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg05836.html

 Now because of another comment, I tried to disable the use of RStmp
 and force BackupPC to do the equivalent of the --whole-file where it
 just copies the file across without applying the rsync
 differential/incremental algorithm. I executed a full backup after
 moving the file in the prior backup aside. So I moved



I missed the original post, but  I run rsync with the --whole-file  
option, but I still get RStmp files, is that not supposed to happen?


And the writing to the new file in the  new subdirectory is  
sometimes very, very slow even though the RStmp file was filled  
reasonably fast.


Tony Schreiner
Boston College



Re: [BackupPC-users] 38GB file backup is hanging backuppc (more info and more questions)

2009-02-12 Thread Craig Barratt
Tony writes:

 I missed the original post, but  I run rsync with the --whole-file
 option, but I still get RStmp files, is that not supposed to happen?

RStmp is a temporary file used to store the uncompressed pool file,
which is needed for the rsync algorithm.  It's only used for larger
files - smaller files are uncompressed in memory.

RStmp is independent of --whole-file.

Craig



Re: [BackupPC-users] 38GB file backup is hanging backuppc (more info and more questions)

2009-02-11 Thread John Rouillard
Hi all:

Following up with more info with the hope that somebody has a clue as
to what is killing this backup.

On Fri, Feb 06, 2009 at 03:41:40PM +, John Rouillard wrote:
 I am backing up a 38GB file daily (database dump). There were some
 changes on the database server, so I started a new full dump. After
 two days (40+ hours) it had still not completed. This is over a GB
 network link. I restarted the full dump thinking it had gotten hung.
 [...]
 What is the RStmp file? That one grew pretty quickly to its current
 size given the start time of the backup (18:08 on feb 5). If I run
 file (1) against that file it identifies it as

From another email, it looks like it is the uncompressed prior copy of
the file that is currently being transferred.

  http://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg05836.html

Now because of another comment, I tried to disable the use of RStmp
and force BackupPC to do the equivalent of the --whole-file where it
just copies the file across without applying the rsync
differential/incremental algorithm. I executed a full backup after
moving the file in the prior backup aside. So I moved 

  
/backup-pc/pc/ldap01.bos1.renesys.com/326/f%2fdata%2fbak/ffedora-ds/fcurrent/fuserRoot/fid2entry.db4

to 

  
/backup-pc/pc/ldap01.bos1.renesys.com/326/f%2fdata%2fbak/ffedora-ds/fcurrent/fuserRoot/fid2entry.db4.orig

I claim this should make rsync diff against a non-existent file and
thus just copy the entire file, but I still see in the lsof output for
the BackupPC process:

  BackupPC_ 18683 backup  8u   REG  9,2  36878598144  48203250
  /backup-pc/pc/ldap01.bos1.renesys.com/new/f%2fdata%2fbak/RStmp

and there is only 1 file that is that large (36878598144 bytes) under
that volume and it is the id2entry.db4 file.

So what did I miss?

Does BackupPC search more than the prior backup? I verified that run
327 (which is the partial copy) doesn't have any copy of:

  f%2fdata%2fbak/ffedora-ds/fcurrent/fuserRoot/fid2entry.db4

in its tree. So where is the RStmp file coming from?

At this point it's been running about 24 hours and it has transferred
only 10GB of the 30GB file.

This is ridiculously slow. If my math is right, I should expect a
36GByte file at a 1Mbit/sec rate (which is about what cacti is showing
as a steady state throughput on the ethernet port) to transfer in:

  36 GBytes * 1024 * 8 = 294,912 Mbits; 294,912 s at 1 Mbit/s ~= 3.4 days

so it will take 3 and a half days at this rate. This is with both
systems having a lot of idle time and < 1% wait state.

If I run an rsync on the BackupPC server to copy the file from the
same client, I get something closer to 10MBytes/second
(80Mbits/sec). Which provides a full copy in a bit over an hour.
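
(The kind of manual baseline run meant here, as a hedged sketch; the
client-side path is an assumption reconstructed from the mangled pc/
names above:)

  # pull the file directly with stock rsync over ssh and time it
  time rsync -av -e ssh \
      ldap01.bos1.renesys.com:/data/bak/fedora-ds/current/userRoot/id2entry.db4 \
      /tmp/id2entry.db4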

Also, one other thing that I noticed: BackupPC in its
$Conf{RsyncArgs} setting uses:


'--block-size=2048',

which is described in the rsync man page as:

   -B, --block-size=BLOCKSIZE
  This forces the block size used in the rsync algorithm to a
  fixed value.  It is normally selected based on the size of
  each file being updated.  See the technical report for
  details.

is it possible to increase this, or will the Perl rsync library break?
It would be preferable not to specify it at all, allowing the remote
rsync to set the block size based on the file size.

I see this in lib/BackupPC/Xfer/RsyncFileIO.pm:

  sub csumStart
  {
my($fio, $f, $needMD4, $defBlkSize, $phase) = @_;

    $defBlkSize ||= $fio->{blockSize};

which makes it look like it can handle a different block size, but
this could be totally unrelated.

I could easily see a 2k block size slowing down the transfer of a
large file if each block has to be summed.
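
(One way to test that theory, as an untested sketch of a per-host config
override; 65536 is an arbitrary larger value, and whether File::RsyncP
copes with it is exactly the open question above:)

  # replace the fixed 2k block size with a larger one for this host only
  $Conf{RsyncArgs} = [
      (grep { $_ ne '--block-size=2048' } @{$Conf{RsyncArgs}}),
      '--block-size=65536',
  ];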

Anybody with any ideas?

-- 
-- rouilj

John Rouillard
System Administrator
Renesys Corporation
603-244-9084 (cell)
603-643-9300 x 111



[BackupPC-users] 38GB file backup is hanging backuppc.

2009-02-06 Thread John Rouillard
Hi all:

Having a problem with backuppc running over ssh with rsync.

I am backing up a 38GB file daily (database dump). There were some
changes on the database server, so I started a new full dump. After
two days (40+ hours) it had still not completed. This is over a GB
network link. I restarted the full dump thinking it had gotten hung.

I tried to find the partly written file on the backup server, and
found some very odd things.

The BackupPC_dump perl process (pid 16784) is showing this sequence in
strace:

  alarm(172800)   = 172800
  select(16, [11], NULL, [11], NULL)  = 1 (in [11])
  read(11, BjGq8Ax1SfiCcrVqaRuPzV5js\n /c3By..., 65536) = 8188
  alarm(172800)   = 172800
  alarm(172800)   = 172800
  alarm(172800)   = 172800
  select(16, [11], NULL, [11], NULL)  = 1 (in [11])
  read(11, \374\17\0\7tQsU3/6Vsd/9z\n 5/k/8AH6xrT/X..., 65536) = 12288
  alarm(172800)   = 172800
  alarm(172800)   = 172800
  alarm(172800)   = 172800
  alarm(172800)   = 172800
  alarm(172800)   = 172800
  alarm(172800)   = 172800
  select(16, [11], NULL, [11], NULL)  = 1 (in [11])
  read(11, \374\17\0\7HvnkYehOv7k\n Dz+T/vuo5EevevG..., 65536) = 12288
  alarm(172800)   = 172800
  alarm(172800)   = 172800
  alarm(172800)   = 172800

lsof on that process shows the following file descriptors:

  BackupPC_ 16784 backup    0w   CHR  1,3          1937 /dev/null
  BackupPC_ 16784 backup    1w  FIFO  0,6     846787883 pipe
  BackupPC_ 16784 backup    2w  FIFO  0,6     846787883 pipe
  BackupPC_ 16784 backup    3u  IPv4  846787907      TCP
 backup:59406->ldap:ldaps (ESTABLISHED)
  BackupPC_ 16784 backup    4w   REG  9,2         2171  47693861
 /backup-pc/pc/ldap01.bos1.renesys.com/LOG.022009
  BackupPC_ 16784 backup    5w   REG  9,2            0  47695023
 /backup-pc/pc/ldap01.bos1.renesys.com/XferLOG.z
  BackupPC_ 16784 backup    6w   REG  9,2         2228  47695267
 /backup-pc/pc/ldap01.bos1.renesys.com/NewFileList
  BackupPC_ 16784 backup    7w   REG  9,2   8634251394  36864007
 /backup-pc/pc/ldap01.bos1.renesys.com/new/f%2fdata%2fbak/ffedora-ds/fcurrent/fuserRoot/fid2entry.db4
  BackupPC_ 16784 backup    8u   REG  9,2  37303115776  34717713
 /backup-pc/pc/ldap01.bos1.renesys.com/new/f%2fdata%2fbak/RStmp
  BackupPC_ 16784 backup    9u  unix  0xf6f12c80  846791450 socket
  BackupPC_ 16784 backup   11u  unix  0xf6835c80  846791422 socket

ls -l of the file on file descriptor 7: 

   -rw-r- 1 backup dev 8634251394 Feb  6 15:10
 
/backup-pc/pc/ldap01.bos1.renesys.com/new/f%2fdata%2fbak/ffedora-ds/fcurrent/fuserRoot/fid2entry.db4

while the one on fd 8 shows:

   -rw-r- 1 backup dev 37303115776 Feb  5 18:34
 /backup-pc/pc/ldap01.bos1.renesys.com/new/f%2fdata%2fbak/RStmp

What is the RStmp file? That one grew pretty quickly to its current
size given the start time of the backup (18:08 on feb 5). If I run
file (1) against that file it identifies it as

  /backup-pc/pc/ldap01.bos1.renesys.com/new/f%2fdata%2fbak/RStmp:
  Berkeley DB (Btree, version 9, native byte-order)

While file on the fid2entry.db4 file shows:

  
/backup-pc/pc/ldap01.bos1.renesys.com/new/f%2fdata%2fbak/ffedora-ds/fcurrent/fuserRoot/fid2entry.db4:
 data

I could almost believe that I am looking at the actual rsynced data file
(in RStmp), which is a little shorter than the source file, possibly due
to sparse-file compression or something that makes it a bit shorter by
suppressing blocks of nulls. But running a cmp -l between the source
file and the RStmp file shows differences before the end of the RStmp
file.

The backup server has 8GB of memory available, is almost 100% idle and
I am seeing no I/O wait in top. I don't see the perl/BackupPC_dump
process showing up in the top output either.  The server is running
centos 5.2 on an ext3 filesystem.

The rsync on the host being backed up looks like:

  write(1, \374\17\0\7, 4)  = 4
  select(2, NULL, [1], [1], {60, 0})  = 1 (out [1], left {60, 0})
  write(1,  fLv/BPD4v6neeJr74fazcy31hf2r3em..., 4092) = 4092
  select(2, NULL, [1], [1], {60, 0})  = 1 (out [1], left {60, 0})
  write(1, \374\17\0\7, 4)  = 4
  select(2, NULL, [1], [1], {60, 0})  = 1 (out [1], left {60, 0})
  write(1, \266.\27\0\1\0\346\37\0\7dn: nsuniqueid=4ed5424..., 4092) =
  4092
  select(2, NULL, [1], [1], {60, 0})  = 1 (out [1], left {60, 0})
  write(1, \374\17\0\7, 4)  = 4
  select(2, NULL, [1], [1], {60, 0})  = 1 (out [1], left {60, 0})
  write(1, DkmvilSqKpyWd+x9aqkHHmjJHg3x28V/..., 4092) = 4092
  select(2, NULL, [1], [1], {60, 0})  = 0 (Timeout)
  

Re: [BackupPC-users] 38GB file backup is hanging backuppc.

2009-02-06 Thread John Rouillard
On Fri, Feb 06, 2009 at 08:04:57AM -0800, Michael L. Barrow wrote in a
non-list copied email:
In my experience, rsync doesn't play nice with really big files for
incremental updates. I recommend that you have a unique filename so
that you basically end up with a full backup each time around.

In this case I am backing up on the same LAN; in production it will
be backed up offsite across a slow WAN, so compression and the rsync
algorithm will be key.

It was working well enough before the last major database change, so I
concur that large differences in a large file appear to be causing
issues, but it looks like the problem is on the BackupPC end.

Hmm, maybe if I change the rsync command for that host to include
--whole-file I can get a new baseline. Does anybody know if the rsync
perl library used by BackupPC can handle whole-file mode?
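
(If it is worth trying, the hedged sketch below is how the per-host
rsync arguments would be extended; whether File::RsyncP honours
--whole-file is exactly the open question here:)

  # per-host config override: ask rsync to skip the delta algorithm
  push @{$Conf{RsyncArgs}}, '--whole-file';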

Usually the incremental changes between the files are under 100MB of
the 36GB file, which makes daily backups possible.

Using just plain rsync (2.x) on both sides across the WAN works fine
and results in a huge speedup.

But thanks for the caution. That gives me another avenue to test at
least, although it may result in having to dump BackupPC.

--
  -- rouilj

John Rouillard
System Administrator
Renesys Corporation
603-244-9084 (cell)
603-643-9300 x 111
