Hi,

Jason B wrote on 20.02.2007 at 21:28:43 [[BackupPC-users] Backing up large 
directories times out with signal=ALRM or PIPE]:
> I've run into a bit of
> difficulty backing up a large directory tree that has me not being
> able to do a successful backup in over a month now. I'm attempting to
> back up about 70GB over the Internet with a 1 MB/sec connection (the

if you really mean 8 Mbit/s, your backup will need about 20 hours to
complete, meaning $Conf{ClientTimeout} will need to be at least 72000
seconds (if you meant 128 KB/s, it's obviously 8 times as much). Setting it
to this value or higher is no problem. It just means that if a backup
happens to get stuck somehow, BackupPC will need that long to recover,
possibly blocking other backups for that time due to $Conf{MaxBackups}.
That may or may not be a problem for you in the long run, so you'll
probably want to adjust it once you've got a feeling for how long your
backups take in the worst case.
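
For reference, here's the arithmetic and what the setting could look like
in config.pl or the per-host config file (the value below assumes 1 MB/s
and is only a sketch; adjust it to your measured throughput):

    # 70 GB at 1 MB/s (8 Mbit/s): 70 * 1024 MB / 1 MB/s = 71680 s,
    # i.e. roughly 20 hours. At 128 KB/s it would be about 8 times
    # that, around 573440 s.
    $Conf{ClientTimeout} = 72000;    # seconds; err on the high side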

> time it takes doesn't really bother me, just want to do a full backup
> and then run incrementals all  the time).

You don't really want to do that, for various reasons:

1.) An incremental is based on the last full backup (or the last
    incremental of a lower level, to be exact). That means everything
    changed since the last full backup will be transferred on each
    incremental - more data from day to day.
2.) In contrast to this, an rsync(d) full backup will also transfer only
    the files changed since the last full backup (i.e. ideally not more
    than an incremental), but it will give you a new reference point,
    meaning future incrementals transfer less data.
3.) Rsync(d) full backups go to more trouble to determine what has changed,
    meaning they're more expensive in terms of CPU time and disk I/O, but
    they'll catch changes incrementals may have missed. That means they're
    vital every now and then, supposing you want a meaningful backup of
    your data (see the sketch after this list).
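
So do keep scheduling periodic fulls. As a sketch only, the two settings
that control the schedule look like this in config.pl (the numbers below
are just an example, giving roughly weekly fulls with daily incrementals
in between):

    $Conf{FullPeriod} = 6.97;    # a full backup about once a week
    $Conf{IncrPeriod} = 0.97;    # incrementals on the days in between

With rsync(d), the weekly full then costs little more bandwidth than an
incremental would, for the reasons above.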

> The tree is approximately like this:
> 
> - top level 1
> - articles
>   - dir 1
>     - subdirs 1 through 9
>   - dir 2
>     - subdirs 1 through 9
>   etc until dir 9 (same subdir structure)
> - images
>   - dir 1
>     - subdirs 1 through 9
>   - dir 2
>     - subdirs 1 through 9
>   etc until dir 9 (same subdir structure)
> - top level 4
> 
> There are (on average) 5,000 files per directory (about 230,000 files
> in total).

Jason Hughes explained how to incrementally transfer such a structure using
$Conf{BackupFilesExclude}. The important thing is that you need successful
backups to avoid re-transferring data, even if these backups at first
comprise only part of your target data. It might be enough to split the
process into two parts by first excluding half of your top-level directories
and then removing the excludes for the second run. You might even be able
to transfer everything at once by simply adjusting your $Conf{ClientTimeout}.
If in doubt, set the value way too high rather than slightly too low. You
can always adjust it after your first successful backup.
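
As an illustration only (the path is taken from your listing, and the
exclude may need adjusting to your share layout), the first run could
leave out one of the big top-level trees:

    # first run: exclude the images tree (path relative to the share root)
    $Conf{BackupFilesExclude} = ['/images'];
    # once that backup has succeeded, drop the exclude (or set it to [])
    # and back up again to pick up the remaining data

Once both halves have gone through successfully, later backups only need
to transfer changes, so the timeout pressure largely goes away.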

Regards,
Holger
