Re: [BackupPC-users] speed up backups

2010-06-06 Thread Ralf Gross
Les Mikesell schrieb:
 Ralf Gross wrote:
   
  the RsyncP man page tells me this:
  
  http://search.cpan.org/~cbarratt/File-RsyncP-0.68/lib/File/RsyncP.pm
  
  File::RsyncP does not compute file deltas (ie: it behaves as though
  --whole-file is specified) or implement exclude or include options
  when sending files. File::RsyncP does handle file deltas and exclude
  and include options when receiving files.
  
  
  Thus no need to try the --whole-file option.
 
 That's when sending files - as in doing a restore.  When doing backups RsyncP
 is on the receiving side and a stock rsync is sending - and will do deltas.
 Whether or not it is a win to compute deltas probably depends on the
 relationship between available bandwidth and CPU, but it might be worth a try.
 I'd guess --whole-file might generally be a win on files with random changes
 but not on growing logfiles where the deltas are all past the end of the
 previous copy.

the --whole-file option didn't help. The second full backup since
changing to rsync has finished now, and it took 600 min. less than the
last couple of full backups before. On the other hand, the incremental
backups now need 3-4 h, in contrast to 60-80 min. before.

I'll stay with rsync for now, maybe with a longer interval between the
full backups. rsync should catch moved/deleted files.

Ralf

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] speed up backups

2010-06-01 Thread Carl Wilhelm Soderstrom
On 05/30 04:12 , Michael Stowe wrote:
 It's generally slower, so I'm going to go with no on this one.

From the tests I've seen, ext2 is notably faster than ext3. Not doing
journaling does make a difference.

If you want to speed up ext3, you can put the journal on a separate device
(even an SSD). This may make a measurable difference, but I haven't tried it
myself.
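For anyone who wants to try this, the usual procedure looks roughly like the
sketch below. The device names (/dev/sdb1 for the SSD journal partition,
/dev/sda1 for the backup filesystem) and the mount point are placeholders;
adjust them for your setup.

```shell
# Sketch only: device names are placeholders, and the filesystem
# must be unmounted while the journal is changed.
umount /var/lib/backuppc
# Turn the SSD partition into a dedicated external journal device
mke2fs -O journal_dev /dev/sdb1
# Drop the internal journal, then attach the external one
tune2fs -O ^has_journal /dev/sda1
tune2fs -j -J device=/dev/sdb1 /dev/sda1
mount /dev/sda1 /var/lib/backuppc
```

Note that the journal device's block size must match the filesystem's, so you
may need to pass matching -b options to mke2fs.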

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



Re: [BackupPC-users] speed up backups

2010-06-01 Thread Les Mikesell
Carl Wilhelm Soderstrom wrote:
 On 05/30 04:12 , Michael Stowe wrote:
 It's generally slower, so I'm going to go with no on this one.
 
From the tests I've seen, ext2 is notably faster than ext3. Not doing
 journaling does make a difference.
 
 If you wanted to speed up ext3; you can put the journal on a separate device
 (even an SSD). This may make a measurable difference; but I haven't tried it
 myself.

The killer with ext2 is that after a crash you have to do a complete fsck
before coming back up, which will take hours on a large BackupPC filesystem.
With ext3 the journal normally takes care of this.

-- 
   Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] speed up backups

2010-06-01 Thread Chris Dennis
On 01/06/10 14:41, Les Mikesell wrote:
 Carl Wilhelm Soderstrom wrote:
 On 05/30 04:12 , Michael Stowe wrote:
 It's generally slower, so I'm going to go with no on this one.

  From the tests I've seen, ext2 is notably faster than ext3. Not doing
 journaling does make a difference.

 If you wanted to speed up ext3; you can put the journal on a separate device
 (even an SSD). This may make a measurable difference; but I haven't tried it
 myself.

 The killer with ext2 is that if you crash you have to do a complete fsck
 before coming back up which will take hours on a large backuppc filesystem.
 With ext3 normally the journal takes care of this.


That's what I suspected.  I might give ext2 a try once the effects of my 
recently-enabled checksum caching have kicked in.

cheers

Chris

-- 
Chris Dennis  cgden...@btinternet.com
Fordingbridge, Hampshire, UK



Re: [BackupPC-users] speed up backups

2010-05-31 Thread Ralf Gross
Les Mikesell schrieb:
 Ralf Gross wrote:
  Ok, the first rsync full backup (488) completed. It took 500min. longer than
  the last tar full backup (482).
  
  Backup  Type  Filled  Level  Start Date  Duration/mins  Age/days
  482     full  yes     0      5/19 02:05  3223.2         11.5
  483     incr  no      1      5/21 07:49    89.6          9.2
  484     incr  no      2      5/22 03:05   136.4          8.4
  485     incr  no      3      5/23 03:05   119.1          7.4
  486     incr  no      4      5/24 03:05   111.4          6.4
  487     incr  no      1      5/25 03:05   165.9          5.4
  488     full  yes     0      5/26 21:00  3744.2          3.7
  489     incr  no      1      5/29 12:15   394.1          1.1
  490     incr  no      2      5/30 03:05   190.8          0.4
  
  I'm not sure if the checksum caching will compensate this in after the 3rd
  backup. Anything else I could do to tune rsync?
  
 
 You could force a full to start on Friday evening so weekly scheduling will 
 keep 
 the full runs on weekends if they take more than a night to complete.  
 Depending 
 on how much daily change you have, you might want to set incremental levels 
 for 
 the intermediate runs.

I use both BackupPC and Bacula for backups; I once lost a complete BackupPC
pool/filesystem to a defective RAID controller, so I need two backup
windows. But doing a full backup only once every 2 weeks sounds like a
reasonable option.

What I don't quite understand is why the incremental backups also take much
longer than before.
 
 A more extreme change would be to edit Rsync.pm to not add the --ignore-times
 option on fulls.  I haven't needed this myself yet but I think it would make
 a big difference in speed - at the expense of not checking files for unlikely
 but possible differences.

Hm, I think I'll leave this option as it is. In the list archives I
found some posts about the --whole-file option, but no definitive
answer on whether RsyncP supports it and whether it's useful at all.


Ralf
 



Re: [BackupPC-users] speed up backups

2010-05-31 Thread Les Mikesell
Ralf Gross wrote:

 
 What I not quite understand is that inc. backup also take much longer
 than before.

A tar incremental looks only at the timestamps on the target directory (and
will miss new or moved files with old timestamps).  Rsync compares directory
entries with the last full run to catch any new files regardless of timestamps
and to note any deletions.  And it still transfers the entire directory tree
before starting.  If you have a very large number of small files that never
change, you might improve things by moving them into larger zip or tar
archives.
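As a concrete illustration of that last suggestion (the directory name
'static-data' is made up), bundling many small static files into one archive
means rsync only has to compare a single large file instead of thousands of
directory entries:

```shell
# Bundle a directory of small, rarely-changing files into one tar archive
# so the backup tracks one file instead of many.
mkdir -p static-data
printf 'alpha' > static-data/one.txt
printf 'beta'  > static-data/two.txt
tar -czf static-data.tgz static-data
# List what ended up in the archive
tar -tzf static-data.tgz
```

After this, only static-data.tgz needs to be in the backup set; the original
directory can be excluded.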

-- 
   Les Mikesell
lesmikes...@gmail.com




Re: [BackupPC-users] speed up backups

2010-05-31 Thread Les Mikesell
Ralf Gross wrote:
  
 the RsyncP man page tells me this:
 
 http://search.cpan.org/~cbarratt/File-RsyncP-0.68/lib/File/RsyncP.pm
 
 File::RsyncP does not compute file deltas (ie: it behaves as though
 --whole-file is specified) or implement exclude or include options
 when sending files. File::RsyncP does handle file deltas and exclude
 and include options when receiving files.
 
 
 Thus no need to try the --whole-file option.

That's when sending files - as in doing a restore.  When doing backups RsyncP
is on the receiving side and a stock rsync is sending - and will do deltas.
Whether or not it is a win to compute deltas probably depends on the
relationship between available bandwidth and CPU, but it might be worth a try.
I'd guess --whole-file might generally be a win on files with random changes
but not on growing logfiles where the deltas are all past the end of the
previous copy.
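For anyone who wants to experiment, one way to try this is to append
--whole-file to the rsync arguments in config.pl. This is only a sketch,
not a tested recommendation; whether RsyncP forwards the option usefully is
exactly the open question in this thread:

```perl
# In config.pl, after $Conf{RsyncArgs} has been defined:
# append --whole-file so the sending rsync skips delta computation.
# Remove it again if backups get slower.
push @{$Conf{RsyncArgs}}, '--whole-file';
```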

-- 
Les Mikesell
lesmikes...@gmail.com




Re: [BackupPC-users] speed up backups

2010-05-30 Thread Ralf Gross
Ralf Gross schrieb:
 Les Mikesell schrieb:
  On 5/26/2010 3:41 PM, Ralf Gross wrote:
   Ralf Gross schrieb:
   write(1, N\2\0\7\5\3lvs\r\0\0\0\r\0\0\0lvmiopversion8\5..., 594) = 594
   select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
   select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
   select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
   select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
   select(1, [0], [], NULL, {60, 0}
  
  
  smells like a time out, but I don't know where. I found a couple of messages
  with similar output in the list archives, but none of them had a solution yet.
  
   *grr*
  
   I only traced the Xfer PID, not the PID. BackupPC_dump seems to be
   active and comparing the file list with the pool and I see high cpu
   load.
  
   I'm sure that I hadn't seen that, as I aborted the backup before.
   Now I'll have to wait until tomorrow morning...
  
  Until the 2nd full completes, the server side has to uncompress the
  stored copy to compute the checksums on existing files.  And there may be
  some quirk about switching from tar to rsync that I've forgotten.  Maybe
  the 1st run will add the checksum cache for files you already have.
 
 
 The full rsync is still running since 5/26 21:00. I'll report back when
 it's done.


Ok, the first rsync full backup (488) completed. It took 500 min. longer than
the last tar full backup (482).

Backup  Type  Filled  Level  Start Date  Duration/mins  Age/days
482     full  yes     0      5/19 02:05  3223.2         11.5
483     incr  no      1      5/21 07:49    89.6          9.2
484     incr  no      2      5/22 03:05   136.4          8.4
485     incr  no      3      5/23 03:05   119.1          7.4
486     incr  no      4      5/24 03:05   111.4          6.4
487     incr  no      1      5/25 03:05   165.9          5.4
488     full  yes     0      5/26 21:00  3744.2          3.7
489     incr  no      1      5/29 12:15   394.1          1.1
490     incr  no      2      5/30 03:05   190.8          0.4

I'm not sure if the checksum caching will compensate for this after the 3rd
backup. Anything else I could do to tune rsync?

Ralf




Re: [BackupPC-users] speed up backups

2010-05-30 Thread Les Mikesell
Ralf Gross wrote:
 Ralf Gross schrieb:
 Les Mikesell schrieb:
 On 5/26/2010 3:41 PM, Ralf Gross wrote:
 Ralf Gross schrieb:
 write(1, N\2\0\7\5\3lvs\r\0\0\0\r\0\0\0lvmiopversion8\5..., 594) = 594
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0}


 smells like a time out, but I don't know where. I found a couple of messages
 with similar output in the list archives, but none of them had a solution yet.
 *grr*

 I only traced the Xfer PID, not the PID. BackupPC_dump seems to be
 active and comparing the file list with the pool and I see high cpu
 load.

 I'm sure that I hadn't seen that, as I aborted the backup before.
 Now I'll have to wait until tomorrow morning...
 Until the 2nd full completes, the server side has to uncompress the
 stored copy to compute the checksums on existing files.  And there may be
 some quirk about switching from tar to rsync that I've forgotten.  Maybe
 the 1st run will add the checksum cache for files you already have.

 The full rsync is still running since 5/26 21:00. I'll report back when
 it's done.
 
 
 Ok, the first rsync full backup (488) completed. It took 500 min. longer than
 the last tar full backup (482).
 
 Backup  Type  Filled  Level  Start Date  Duration/mins  Age/days
 482     full  yes     0      5/19 02:05  3223.2         11.5
 483     incr  no      1      5/21 07:49    89.6          9.2
 484     incr  no      2      5/22 03:05   136.4          8.4
 485     incr  no      3      5/23 03:05   119.1          7.4
 486     incr  no      4      5/24 03:05   111.4          6.4
 487     incr  no      1      5/25 03:05   165.9          5.4
 488     full  yes     0      5/26 21:00  3744.2          3.7
 489     incr  no      1      5/29 12:15   394.1          1.1
 490     incr  no      2      5/30 03:05   190.8          0.4
 
 I'm not sure if the checksum caching will compensate for this after the 3rd
 backup. Anything else I could do to tune rsync?
 

You could force a full to start on Friday evening so weekly scheduling will
keep the full runs on weekends if they take more than a night to complete.
Depending on how much daily change you have, you might want to set incremental
levels for the intermediate runs.
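In config.pl terms, the scheduling ideas above might look like the following
sketch. The values are illustrative, not recommendations:

```perl
# Sketch: fulls roughly every two weeks, daily multi-level incrementals.
$Conf{FullPeriod} = 13.97;               # ~2 weeks between full backups
$Conf{IncrPeriod} = 0.97;                # daily incrementals
$Conf{IncrLevels} = [1, 2, 3, 4, 5, 6];  # deepen the level between fulls
```

The .97 values are the usual BackupPC idiom for "just under a day/two weeks"
so each run can start slightly earlier in the schedule window.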

A more extreme change would be to edit Rsync.pm to not add the --ignore-times 
option on fulls.  I haven't needed this myself yet but I think it would make a 
big difference in speed - at the expense of not checking files for unlikely but 
possible differences.

-- 
   Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] speed up backups

2010-05-30 Thread Chris Dennis
On 27/05/10 12:17, Sorin Srbu wrote:
 -Original Message-
 From: Tyler J. Wagner [mailto:ty...@tolaris.com]
 Sent: Thursday, May 27, 2010 1:00 PM
 To: backuppc-users@lists.sourceforge.net; sorin.s...@orgfarm.uu.se
 Subject: Re: [BackupPC-users] speed up backups

 On Thursday 27 May 2010 11:13:29 Sorin Srbu wrote:
 Looked into file system optimization again, while waiting for a backup to
 finish. Seems like noatime is a recommended setting in /etc/fstab. If I
 already have defaults set for the backup-array mount, and add noatime,
 should I still keep defaults? Or is it implied that the defaults are not
 to be used anymore and therefore one can omit it in /etc/fstab?

 My file system-fu is not that strong... 8-/

 Replace "defaults" with "noatime", or better, "noatime,nodiratime".

 Then just remount, you don't need to reboot:

 mount -o remount /

 Replace / with the appropriate filesystem. I have a separate /var and
 /var/local for backuppc.

 Thanks. Just what I suspected.

 Nodiratime is new to me though. I can't say I've come upon it before. I'll
 try it at home first. 8-)

Will using ext2 instead of ext3 speed up backuppc?

cheers

Chris
-- 
Chris Dennis  cgden...@btinternet.com
Fordingbridge, Hampshire, UK



Re: [BackupPC-users] speed up backups

2010-05-30 Thread Michael Stowe

It's generally slower, so I'm going to go with no on this one.

If the filesystem is a bottleneck, you'd be better off with XFS or JFS.

 Will using ext2 instead of ext3 speed up backuppc?

 cheers

 Chris
 --
 Chris Dennis  cgden...@btinternet.com
 Fordingbridge, Hampshire, UK




Re: [BackupPC-users] speed up backups

2010-05-28 Thread Ralf Gross
Les Mikesell schrieb:
 On 5/26/2010 3:41 PM, Ralf Gross wrote:
  Ralf Gross schrieb:
  write(1, N\2\0\7\5\3lvs\r\0\0\0\r\0\0\0lvmiopversion8\5..., 594) = 594
  select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
  select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
  select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
  select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
  select(1, [0], [], NULL, {60, 0}
 
 
  smells like a time out, but I don't know where. I found a couple of messages
  with similar output in the list archives, but none of them had a solution yet.
 
  *grr*
 
  I only traced the Xfer PID, not the PID. BackupPC_dump seems to be
  active and comparing the file list with the pool and I see high cpu
  load.
 
  I'm sure that I hadn't seen that, as I aborted the backup before.
  Now I'll have to wait until tomorrow morning...
 
 Until the 2nd full completes, the server side has to uncompress the
 stored copy to compute the checksums on existing files.  And there may be
 some quirk about switching from tar to rsync that I've forgotten.  Maybe
 the 1st run will add the checksum cache for files you already have.


The full rsync is still running since 5/26 21:00. I'll report back when
it's done.

Ralf



Re: [BackupPC-users] speed up backups

2010-05-27 Thread Sorin Srbu
-Original Message-
From: Steve [mailto:lepe...@gmail.com]
Sent: Wednesday, May 26, 2010 5:33 PM
To: sorin.s...@orgfarm.uu.se; General list for user discussion, questions
and
support
Subject: Re: [BackupPC-users] speed up backups

it's in the documentation - click on documentation over on the left of
the web interface and search for checksum;
the section is rsync checksum caching

Looked into file system optimization again, while waiting for a backup to
finish. Seems like noatime is a recommended setting in /etc/fstab. If I
already have defaults set for the backup-array mount, and add noatime,
should I still keep defaults? Or is it implied that the defaults are not
to be used anymore and therefore one can omit it in /etc/fstab?

My file system-fu is not that strong... 8-/
-- 
/Sorin




Re: [BackupPC-users] speed up backups

2010-05-27 Thread Tyler J. Wagner
On Thursday 27 May 2010 11:13:29 Sorin Srbu wrote:
 Looked into file system optimization again, while waiting for a backup to
 finish. Seems like noatime is a recommended setting in /etc/fstab. If I
 already have defaults set for the backup-array mount, and add noatime,
 should I still keep defaults? Or is it implied that the defaults are not
 to be used anymore and therefore one can omit it in /etc/fstab?
 
 My file system-fu is not that strong... 8-/

Replace "defaults" with "noatime", or better, "noatime,nodiratime".

Then just remount, you don't need to reboot:

mount -o remount /

Replace / with the appropriate filesystem. I have a separate /var and 
/var/local for backuppc.
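As a sketch (the device /dev/md0 and the mount point are placeholders for
your backup array), the corresponding /etc/fstab entry and remount would be:

```shell
# Example /etc/fstab line, with 'defaults' replaced by 'noatime':
#   /dev/md0  /var/lib/backuppc  ext3  noatime  0  2
# Then apply it without rebooting:
mount -o remount /var/lib/backuppc
```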

Regards,
Tyler

-- 
Any lawyer worth his salt will tell the suspect in no uncertain terms
to make no statement to police under any circumstances.
   -- Justice Robert Jackson



Re: [BackupPC-users] speed up backups

2010-05-27 Thread Sorin Srbu
-Original Message-
From: Tyler J. Wagner [mailto:ty...@tolaris.com]
Sent: Thursday, May 27, 2010 1:00 PM
To: backuppc-users@lists.sourceforge.net; sorin.s...@orgfarm.uu.se
Subject: Re: [BackupPC-users] speed up backups

On Thursday 27 May 2010 11:13:29 Sorin Srbu wrote:
 Looked into file system optimization again, while waiting for a backup to
 finish. Seems like noatime is a recommended setting in /etc/fstab. If I
 already have defaults set for the backup-array mount, and add noatime,
 should I still keep defaults? Or is it implied that the defaults are not
 to be used anymore and therefore one can omit it in /etc/fstab?

 My file system-fu is not that strong... 8-/

Replace "defaults" with "noatime", or better, "noatime,nodiratime".

Then just remount, you don't need to reboot:

mount -o remount /

Replace / with the appropriate filesystem. I have a separate /var and
/var/local for backuppc.

Thanks. Just what I suspected. 

Nodiratime is new to me though. I can't say I've come upon it before. I'll
try it at home first. 8-)

-- 
/Sorin




Re: [BackupPC-users] speed up backups

2010-05-27 Thread Leen Besselink
On Thu, May 27, 2010 at 01:17:49PM +0200, Sorin Srbu wrote:
 -Original Message-
 From: Tyler J. Wagner [mailto:ty...@tolaris.com]
 Sent: Thursday, May 27, 2010 1:00 PM
 To: backuppc-users@lists.sourceforge.net; sorin.s...@orgfarm.uu.se
 Subject: Re: [BackupPC-users] speed up backups
 
 On Thursday 27 May 2010 11:13:29 Sorin Srbu wrote:
  Looked into file system optimization again, while waiting for a backup to
  finish. Seems like noatime is a recommended setting in /etc/fstab. If I
  already have defaults set for the backup-array mount, and add noatime,
  should I still keep defaults? Or is it implied that the defaults are not
  to be used anymore and therefore one can omit it in /etc/fstab?
 
  My file system-fu is not that strong... 8-/
 
 Replace "defaults" with "noatime", or better, "noatime,nodiratime".
 
 Then just remount, you don't need to reboot:
 
 mount -o remount /
 
 Replace / with the appropriate filesystem. I have a separate /var and
 /var/local for backuppc.
 
 Thanks. Just what I suspected. 
 
 Nodiratime is new to me though. I can't say I've come upon it before. I'll
 try it at home first. 8-)
 

It was new to me too, but I see the following mentioned in several places:

noatime is a superset of nodiratime

That would imply that setting nodiratime along with noatime doesn't add
anything.

 -- 
 /Sorin







Re: [BackupPC-users] speed up backups

2010-05-27 Thread Tyler J. Wagner
On Thursday 27 May 2010 16:25:23 Leen Besselink wrote:
 It was for me to, but I see the following mentioned in several places:
 
 noatime is a superset of nodiratime
 
 That would imply, setting nodiratime with noatime doesn't add anything.

You are right!  Thanks for that.

http://lwn.net/Articles/245002/

Regards,
Tyler

-- 
I strongly reject any conceptual scheme that places our options on a
line, and holds that the only alternative to a pair of extreme positions
lies somewhere between them. More fruitful perspectives often require that
we step off the line to a site outside the dichotomy.
   -- Stephen Jay Gould



Re: [BackupPC-users] speed up backups

2010-05-26 Thread Pedro M. S. Oliveira
Hello,
Have you tried the rsync method? It should be way faster than tar.
Cheers,
Pedro
On Wednesday 26 May 2010 08:27:04 Ralf Gross wrote:
 Hi,
 
 I'm using BackupPC without major problems for a few years now. Our
 main fileserver has now reached 3.3 TB, and it takes 2 days (18 MB/s)
 to do a full backup with the tar method.
 
 I'd like to find out if there is something I can do to speed up the full
 backups without changing the hardware.
 
 The file server and the BackupPC server are connected by Gigabit Ethernet
 links. The BackupPC pool is on a hardware RAID device (RAID6) with 1 GB
 cache. The setup of the file server is a bit complex with drbd/lvm.
 
 Both servers are running Debian with xfs as the filesystem.
 
 I'd have expected high iowait numbers during backup, either on the file
 server or the BackupPC side. But the numbers look ok, not too much iowait.
 
 
 BackupPC server:
 
 04:35:01  CPU  %user %nice   %system   %iowait%steal %idle
 04:45:01  all  37.97  0.00  7.29 10.86  0.00 43.88
 04:55:01  all  35.87  0.00  5.32  9.11  0.00 49.70
 05:05:01  all  34.69  0.00  4.96  9.91  0.00 50.44
 05:15:02  all  41.40  0.00  5.88  6.22  0.00 46.50
 05:25:01  all  44.81  0.00  6.14  4.18  0.00 44.87
 05:35:01  all  39.34  0.00  5.41  8.41  0.00 46.84
 05:45:02  all  44.90  0.00  6.84  3.59  0.00 44.68
 05:55:02  all  32.12  0.00  5.87  5.90  0.00 56.12
 06:05:01  all  34.23  0.00  6.32  6.28  0.00 53.17
 06:15:02  all  30.66  0.00  6.01  7.31  0.00 56.01
 06:25:01  all  18.76  0.00  3.69  7.38  0.00 70.17
 06:35:03  all  22.08  0.00  5.27  6.61  0.00 66.04
 06:45:01  all  39.50  0.00 11.54  0.37  0.00 48.59
 06:55:01  all  37.16  0.00  9.91  2.18  0.00 50.75
 07:05:02  all  24.52  0.00  4.99  8.99  0.00 61.50
 07:15:01  all  11.46  0.00  2.65 13.12  0.00 72.77
 07:25:02  all  11.65  0.00  3.16 11.16  0.00 74.03
 07:35:01  all  25.32  0.00  5.48  7.22  0.00 61.97
 07:45:01  all  26.68  0.00  6.71  6.99  0.00 59.62
 07:55:02  all  29.74  0.00  5.80  4.08  0.00 60.38
 08:05:01  all  42.30  0.00  6.34  3.49  0.00 47.87
 08:15:01  all  18.21  0.00  4.26 21.14  0.00 56.39
 08:25:01  all  25.73  0.00  5.32 20.17  0.00 48.78
 08:35:01  all  34.94  0.00  6.42  5.61  0.00 53.03
 08:45:02  all  26.25  0.00  5.00 10.71  0.00 58.04
 08:55:01  all  48.16  0.00  8.18  0.53  0.00 43.14
 09:05:01  all  44.54  0.00  7.10  2.25  0.00 46.11
 Average:  all  29.70  0.00  5.48  8.36  0.00 56.46
 
 file server:
 
 06:25:01  CPU  %user %nice   %system   %iowait%steal %idle
 06:35:01  all   5.91  0.10  7.25  5.71  0.00 81.03
 06:45:01  all   6.08  0.00  9.49  4.91  0.00 79.52
 06:55:01  all   5.71  0.00  8.72  3.91  0.00 81.65
 07:05:01  all   5.67  0.00  6.89  4.16  0.00 83.28
 07:15:01  all   5.59  0.00  5.72  5.89  0.00 82.79
 07:25:01  all   5.24  0.00  9.86  6.27  0.00 78.62
 07:35:01  all   5.91  0.00 13.82  5.10  0.00 75.17
 07:45:01  all   5.10  0.00  7.54  4.82  0.00 82.55
 07:55:01  all   4.35  0.00  6.69  3.53  0.00 85.43
 08:05:01  all   1.77  0.00  3.57  2.17  0.00 92.49
 08:15:01  all   1.81  0.00  2.44  3.73  0.00 92.02
 08:25:01  all   2.10  0.00  4.75  3.08  0.00 90.07
 08:35:01  all   2.10  0.00  6.37  3.60  0.00 87.92
 08:45:01  all   2.31  0.00  5.09  3.46  0.00 89.15
 08:55:01  all   2.05  0.00  4.06  1.34  0.00 92.56
 09:05:01  all   2.00  0.00  3.63  2.31  0.00 92.06
 Average:  all   2.50  0.00  3.84  2.91  0.00 90.75
 
 
 After all I've read, switching to rsync instead of tar doesn't seem to be a
 better choice.
 
 Disk I/O on the file server doesn't seem to be the bottleneck either. I can
 boost the disk I/O during backup with other tools (dd, cat, bonnie++) to more
 than 50 MB/s.
 
 Any ideas if I can tune my BackupPC settings to speed things up?
 
 Ralf
 

Re: [BackupPC-users] speed up backups

2010-05-26 Thread Ralf Gross
Pedro M. S. Oliveira schrieb:

 Have you tried the rsync method, it should be way faster than tar.

I think rsync is most useful with servers that have a slow network
connection. But network speed is not the problem here; more precisely, I
don't know exactly what the real bottleneck is.

Ralf



Re: [BackupPC-users] speed up backups

2010-05-26 Thread Les Mikesell
Ralf Gross wrote:
 
 
 After all I've read, switching to rsync instead of tar doesn't seem to be a
 better choice.

After the 1st 2 fulls, rsync should be better if you have enabled checksum
caching.  You do need plenty of RAM to hold the directory listing if you have
a large number of files.

 Disk I/O on the file server doesn't seem to be the bottleneck either. I can
 boost the disk I/O during backup with other tools (dd, cat, bonnie++) to more
 than 50 MB/s.

If you have a lot of small files, the access pattern and speed won't match
benchmark tests.  And these days, 50 MB/s sounds kind of slow to deal with
terabytes.  Even with rsync you'll have to read all of the files on the
target side during full runs.

 Any ideas if I can tune my BackupPC settings to speed things up?

Throwing RAM at the server is likely to help.

-- 
   Les Mikesell
lesmikes...@gmail.com




Re: [BackupPC-users] speed up backups

2010-05-26 Thread Sorin Srbu
-Original Message-
From: Les Mikesell [mailto:lesmikes...@gmail.com]
Sent: Wednesday, May 26, 2010 2:55 PM
To: General list for user discussion, questions and support
Subject: Re: [BackupPC-users] speed up backups

After the 1st 2 fulls, rsync should be better if you have enabled checksum
caching.  You do need plenty of RAM to hold the directory listing if you
have a large number of files.

That was the checksum= 31thousandsomething to be added somewhere. I need to
find that mail in the archives...

-- 
/Sorin




Re: [BackupPC-users] speed up backups

2010-05-26 Thread Tyler J. Wagner


On Wednesday 26 May 2010 14:34:40 Sorin Srbu wrote:
 -Original Message-
 From: Les Mikesell [mailto:lesmikes...@gmail.com]
 Sent: Wednesday, May 26, 2010 2:55 PM
 To: General list for user discussion, questions and support
 Subject: Re: [BackupPC-users] speed up backups
 
 After the 1st 2 fulls, rsync should be better if you have enabled checksum
 caching.  You do need plenty of RAM to hold the directory listing if you
 have a large number of files.
 
 That was the checksum= 31thousandsomething to be added somewhere. I need to
 find that mail in the archives...
 

Add to RsyncArgs and RsyncRestoreArgs:

--checksum-seed=32761

The best thing about BackupPC is that all help is available from the web 
interface.
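In config.pl that amounts to something like the sketch below. The surrounding
arguments shown are the stock BackupPC 3.x defaults and may differ in your
version; checksum caching also requires a File::RsyncP built with
checksum-seed support:

```perl
# Sketch: enable rsync checksum caching by adding the seed to the
# backup arguments...
$Conf{RsyncArgs} = [
    '--numeric-ids', '--perms', '--owner', '--group', '-D',
    '--links', '--hard-links', '--times', '--block-size=2048',
    '--recursive',
    '--checksum-seed=32761',   # enables checksum caching
];
# ...and append the same seed to the restore arguments as well.
push @{$Conf{RsyncRestoreArgs}}, '--checksum-seed=32761';
```

The cached checksums only start paying off after the second full backup, once
they have been recorded for the existing pool files.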

Regards,
Tyler

-- 
In a time of universal deceit, telling the truth is a revolutionary act.
   -- George Orwell



Re: [BackupPC-users] speed up backups

2010-05-26 Thread Sorin Srbu
-Original Message-
From: Tyler J. Wagner [mailto:ty...@tolaris.com]
Sent: Wednesday, May 26, 2010 4:19 PM
To: backuppc-users@lists.sourceforge.net; sorin.s...@orgfarm.uu.se
Subject: Re: [BackupPC-users] speed up backups



On Wednesday 26 May 2010 14:34:40 Sorin Srbu wrote:
 -Original Message-
 From: Les Mikesell [mailto:lesmikes...@gmail.com]
 Sent: Wednesday, May 26, 2010 2:55 PM
 To: General list for user discussion, questions and support
 Subject: Re: [BackupPC-users] speed up backups
 
 After the 1st 2 fulls, rsync should be better if you have enabled
checksum
 caching.  You do need plenty of RAM to hold the directory listing if you
 have a large number of files.

 That was the checksum= 31thousandsomething to be added somewhere. I need
 to find that mail in the archives...


Add to RsyncArgs and RsyncRestoreArgs:

--checksum-seed=32761

The best thing about BackupPC is that all help is available from the web
interface.

Yupp, I know. Found it a few minutes ago. Thanks though. ;-)

-- 
/Sorin




Re: [BackupPC-users] speed up backups

2010-05-26 Thread Ralf Gross
Tyler J. Wagner schrieb:
 
 
 On Wednesday 26 May 2010 14:34:40 Sorin Srbu wrote:
  -Original Message-
  From: Les Mikesell [mailto:lesmikes...@gmail.com]
  Sent: Wednesday, May 26, 2010 2:55 PM
  To: General list for user discussion, questions and support
  Subject: Re: [BackupPC-users] speed up backups
  
  After the 1st 2 fulls, rsync should be better if you have enabled checksum
  caching.  You do need plenty of RAM to hold the directory listing if you
  
  have a
  
  large number of files.
  
  That was the checksum= 31thousandsomething to be added somewhere. I need to
  find that mail in the archives...
  
 
 Add to RsyncArgs and RsyncRestoreArgs:
 
 --checksum-seed=32761
 
 The best thing about BackupPC is that all help is available from the web 
 interface.

Ok, I'll give it a shot. Changed the Xfer method to rsync, updated to
rsync 3.0.2 (I know I will not benefit much because of BackupPC's own
rsync perl module) and added the --checksum-seed option.

Right now the rsync process is consuming 370 MB and is still growing.
The file server has 7,000,000 files. Let's see what happens...
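That growth is roughly what a common rule of thumb predicts for pre-3.x rsync, which holds the entire file list in memory. A quick sketch; the ~100 bytes per entry figure is an assumption, not a measured constant:

```python
# Back-of-envelope estimate of rsync's file-list memory footprint.
# Assumption: ~100 bytes per file-list entry. The real figure varies with
# path lengths and rsync version; rsync >= 3.0 talking to another 3.x peer
# uses incremental recursion and needs far less.
def estimated_filelist_mb(num_files, bytes_per_entry=100):
    return num_files * bytes_per_entry / (1024 * 1024)

print(round(estimated_filelist_mb(7_000_000)))  # roughly 668 (MB)
```

That is the same order of magnitude as the 370+ MB observed above.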

Ralf



Re: [BackupPC-users] speed up backups

2010-05-26 Thread Steve
It's in the documentation: click on Documentation on the left of the
web interface and search for checksum; the section is rsync checksum caching.
evets

On Wed, May 26, 2010 at 9:34 AM, Sorin Srbu sorin.s...@orgfarm.uu.se wrote:
-Original Message-
From: Les Mikesell [mailto:lesmikes...@gmail.com]
Sent: Wednesday, May 26, 2010 2:55 PM
To: General list for user discussion, questions and support
Subject: Re: [BackupPC-users] speed up backups

After the 1st 2 fulls, rsync should be better if you have enabled checksum
caching.  You do need plenty of RAM to hold the directory listing if you
 have a
large number of files.

 That was the checksum= 31thousandsomething to be added somewhere. I need to
 find that mail in the archives...

 --
 /Sorin

 --


 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:    http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/





-- 
It turns out there is considerable overlap between the smartest bears
and the dumbest tourists.



Re: [BackupPC-users] speed up backups

2010-05-26 Thread Les Mikesell
On 5/26/2010 10:23 AM, Ralf Gross wrote:
 Tyler J. Wagner schrieb:


 On Wednesday 26 May 2010 14:34:40 Sorin Srbu wrote:
 -Original Message-
 From: Les Mikesell [mailto:lesmikes...@gmail.com]
 Sent: Wednesday, May 26, 2010 2:55 PM
 To: General list for user discussion, questions and support
 Subject: Re: [BackupPC-users] speed up backups

 After the 1st 2 fulls, rsync should be better if you have enabled checksum
 caching.  You do need plenty of RAM to hold the directory listing if you

 have a

 large number of files.

 That was the checksum= 31thousandsomething to be added somewhere. I need to
 find that mail in the archives...


 Add to RsyncArgs and RsyncRestoreArgs:

 --checksum-seed=32761

 The best thing about BackupPC is that all help is available from the web
 interface.

 Ok, I give it a shot. Changed the Xfer method to rsync, updated to
 rsync 3.0.2 (I know I will not benefit much because of BackupPC's own
 rsync perl module) and added the --checksum-seed option.

 Right now the rsync process is consuming 370 MB and is still growing.
 The file server has 7.000.000 files. Let's see what happens...

If the directory layout makes it reasonable to split that into several 
different runs it would help, especially if you are short on RAM.
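One way to express that split in config.pl is to list subtrees as separate rsync shares, so each run only has to build a file list for its own subtree. The paths below are placeholders, not taken from this thread:

```perl
# Hypothetical layout -- substitute the file server's real top-level
# directories. Each share is transferred (and retried) independently,
# which keeps the per-run rsync file list smaller.
$Conf{RsyncShareName} = [
    '/export/projects',
    '/export/home',
    '/export/scratch',
];
```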

-- 
   Les Mikesell
lesmikes...@gmail.com




Re: [BackupPC-users] speed up backups

2010-05-26 Thread Ralf Gross
Ralf Gross schrieb:
 Ralf Gross schrieb:
  Tyler J. Wagner schrieb:
   
   
   On Wednesday 26 May 2010 14:34:40 Sorin Srbu wrote:
-Original Message-
From: Les Mikesell [mailto:lesmikes...@gmail.com]
Sent: Wednesday, May 26, 2010 2:55 PM
To: General list for user discussion, questions and support
Subject: Re: [BackupPC-users] speed up backups

After the 1st 2 fulls, rsync should be better if you have enabled 
checksum
caching.  You do need plenty of RAM to hold the directory listing if 
you

have a

large number of files.

That was the checksum= 31thousandsomething to be added somewhere. I 
need to
find that mail in the archives...

   
   Add to RsyncArgs and RsyncRestoreArgs:
   
   --checksum-seed=32761
   
   The best thing about BackupPC is that all help is available from the web 
   interface.
  
  Ok, I give it a shot. Changed the Xfer method to rsync, updated to
  rsync 3.0.2 (I know I will not benefit much because of BackupPC's own
  rsync perl module) and added the --checksum-seed option.
  
  Right now the rsync process is consuming 370 MB and is still growing.
  The file server has 7.000.000 files. Let's see what happens...
 
 Hm, after 45 minutes the memory usage stopped growing at 540 MB. But now I
 don't see any activity at all. Neither on the file server nor on the BackupPC
 server.  Nothing in the BackupPC log since the start of the backup. Nothing in
 the NewFileList file.
 
 On the file server I get this info with lsof:
 
 # lsof | grep rsync
 rsync 31139  root  cwd   DIR   104,1     4096        2 /
 rsync 31139  root  rtd   DIR   104,1     4096        2 /
 rsync 31139  root  txt   REG   104,1   384304   571696 /usr/bin/rsync
 rsync 31139  root  mem   REG     0,0        0          [heap] (stat: No such file or directory)
 rsync 31139  root  mem   REG   104,1    97928  2026757 /lib/ld-2.3.6.so
 rsync 31139  root  mem   REG   104,1    26088  2023687 /lib/libacl.so.1.1.0
 rsync 31139  root  mem   REG   104,1    31784  2023965 /lib/libpopt.so.0.0.0
 rsync 31139  root  mem   REG   104,1  1286104  2026779 /lib/libc-2.3.6.so
 rsync 31139  root  mem   REG   104,1    15568  2023689 /lib/libattr.so.1.1.0
 rsync 31139  root    0u unix 0x81020f1ffc80  29056104 socket
 rsync 31139  root    1u unix 0x81020f1ffc80  29056104 socket
 rsync 31139  root    2u unix 0x81020f1ff380  29056106 socket
 
 
 strace on the BackupPC server (BackupPC_dump process):
 
 $strace -f -p 11972
 Process 11972 attached - interrupt to quit
 select(8, [7], NULL, [7], NULL
 
 
 strace on the file server (rsync):
 
 $strace -f -p 31139
 Process 31139 attached - interrupt to quit
 select(1, [0], [], NULL, {11, 972000}
 

next try

# strace -e trace=\!file -f -p 12795

[60 minutes later]

fstat(3, {st_mode=S_IFDIR|0755, st_size=12288, ...}) = 0
fcntl(3, F_SETFD, FD_CLOEXEC)   = 0
getdents64(3, /* 126 entries */, 4096)  = 4096
select(2, NULL, [1], [1], {60, 0})  = 1 (out [1], left {60, 0})
write(1, \374\17\0\7, 4)  = 4
select(2, NULL, [1], [1], {60, 0})  = 1 (out [1], left {60, 0})
write(1, he/pagelinks\6\0\0\0\35\216\333H:K\ttext_html..., 4092) = 4092
getdents64(3, /* 85 entries */, 4096)   = 2752
select(2, NULL, [1], [1], {60, 0})  = 1 (out [1], left {60, 0})
write(1, \374\17\0\7, 4)  = 4
select(2, NULL, [1], [1], {60, 0})  = 1 (out [1], left {60, 0})
write(1, rsion:\5\2ip\7\0\0\0\33I\246E\7\0\0\0/bin/ip8\5\6..., 4092) = 4092
getdents64(3, /* 0 entries */, 4096)= 0
close(3)= 0
fstat(3, {st_mode=S_IFDIR|0700, st_size=16384, ...}) = 0
fcntl(3, F_SETFD, FD_CLOEXEC)   = 0
getdents64(3, /* 2 entries */, 4096)= 48
getdents64(3, /* 0 entries */, 4096)= 0
close(3)= 0
fstat(3, {st_mode=S_IFDIR|0755, st_size=0, ...}) = 0
fcntl(3, F_SETFD, FD_CLOEXEC)   = 0
getdents64(3, /* 11 entries */, 4096)   = 320
getdents64(3, /* 0 entries */, 4096)= 0
close(3)= 0
mmap(NULL, 29110272, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2ac342e63000
munmap(0x2ac342e63000, 29110272)= 0
select(2, NULL, [1], [1], {60, 0})  = 1 (out [1], left {60, 0})
write(1, N\2\0\7\5\3lvs\r\0\0\0\r\0\0\0lvmiopversion8\5..., 594) = 594
select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
select(1, [0], [], NULL, {60, 0}


smells like a time out, but I don't know where. I found a couple of messages
with similar output in the list archives, but none of them had a solution yet.

Re: [BackupPC-users] speed up backups

2010-05-26 Thread Ralf Gross
Ralf Gross schrieb:
 write(1, N\2\0\7\5\3lvs\r\0\0\0\r\0\0\0lvmiopversion8\5..., 594) = 594
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0}
 
 
 smells like a time out, but I don't know where. I found a couple of messages
 with similar output in the list archives, but none of them had a solution yet.

*grr*

I only traced the Xfer PID, not the PID. BackupPC_dump seems to be
active and comparing the file list with the pool and I see high cpu
load.

I'm sure that I hadn't seen that because I aborted the backup before.
Now I'll have to wait until tomorrow morning...

Ralf 



Re: [BackupPC-users] speed up backups

2010-05-26 Thread Les Mikesell
On 5/26/2010 3:12 PM, Ralf Gross wrote:
 
 smells like a time out, but I don't know where. I found a couple of messages
 with similar output in the list archives, but none of them had a solution yet.


If it is repeatable, you might try to start the strace sooner so you can 
see what happens before things stop.  The server side wasn't swapping 
memory was it?

-- 
   Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] speed up backups

2010-05-26 Thread Les Mikesell
On 5/26/2010 3:41 PM, Ralf Gross wrote:
 Ralf Gross schrieb:
 write(1, N\2\0\7\5\3lvs\r\0\0\0\r\0\0\0lvmiopversion8\5..., 594) = 594
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0})   = 0 (Timeout)
 select(1, [0], [], NULL, {60, 0}


 smells like a time out, but I don't know where. I found a couple of messages
 with similar output in the list archives, but none of them had a solution 
 yet.

 *grr*

 I only traced the Xfer PID, not the PID. BackupPC_dump seems to be
 active and comparing the file list with the pool and I see high cpu
 load.

 I'm sure that I hadn't seen that because I aborted the backup before.
 Now I'll have to wait until tomorrow morning...

Until the 2nd full completes, the server side has to uncompress the 
stored copy to compute the checksums on existing files.  And there may be 
some quirk about switching from tar to rsync that I've forgotten.  Maybe 
the 1st run will add the checksum cache for files you already have.

-- 
   Les Mikesell
lesmikes...@gmail.com
