Hi,

Jason B wrote on 20.02.2007 at 20:28:59 [Re[2]: [BackupPC-users] Backing up 
large directories times out with signal=ALRM or PIPE]:
> > [...] $Conf{ClientTimeout} will need to be at least 72000 [...]
> 
> I see. I must've been misunderstanding the meaning of that setting -
> my original impression was that it is the maximum time it would wait,
> if nothing is happening, before it times out - I assumed that if
> files are being transferred, that is sufficient activity for it to
> keep resetting that timer. [...]

that is the way it would ideally work. Unfortunately it's not that easy to
implement, because the instance (i.e. process) *watching* the transfer is not
the one *doing* the transfer. The tar and smb transfer methods are a bit
"better" than rsync(d) in that the alarm is reset whenever (informational)
output from the tar command is received. That is not much of an advantage,
though: you then depend on the transfer time of the largest file instead of
the total backup, and file sizes probably vary more than total backup sizes
do.
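
For reference, that's a setting in the per-host (or global) config.pl; a
minimal sketch, with 72000 simply being the number quoted above rather than
a recommendation:

  # Timeout in seconds. With rsync(d) it is not reset during the transfer,
  # so it needs to cover your longest (full) backup from start to finish.
  $Conf{ClientTimeout} = 72000;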

> > You don't really want to do that, for various reasons.
> 
> Would you suggest, in that case, to lower the frequency of
> incrementals, and raise the frequency of full backups? I was going on
> the idea of doing an incremental once every 2 days or so, and a full
> backup once a month (because of the size of the data and the
> persistent timeouts).

Well, you *wrote* you wanted no full backups at all. Whether one month is a
good interval for full backups or not really depends on your data, the
changes, your bandwidth, and your requirements. If you require an exact
backup that is at most a week old (meaning no missed changes are acceptable),
then you'll need a weekly full. If the same files change every day, your
incrementals won't grow as much as if different files change every day. If
the time a backup takes is unimportant, as long as it finishes within 24
hours, you can probably get away with longer intervals between full backups.
If bandwidth is more expensive than server load, you'll need shorter
intervals. You'll have to work out for yourself which interval best fits
your needs. I was just saying: "no fulls and only incrementals" won't work.
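
As a rough config.pl sketch (the values just mirror the examples above -
weekly fulls, an incremental every two days - they're not a recommendation):

  # Both values are in days; keeping them slightly under a whole number,
  # as the BackupPC defaults (6.97/0.97) do, keeps the schedule from
  # drifting to later in the day.
  $Conf{FullPeriod} = 6.97;   # automatic full roughly once a week
  $Conf{IncrPeriod} = 1.97;   # automatic incremental roughly every two days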

You can always configure monthly (automatic) full backups and then start one
by hand after a week. See how long that takes. Start the next one after a
further two weeks. See how long an interval you can get away with. Or watch
how long your incrementals are taking. BackupPC gives you a lot of
flexibility here.

Concerning the incremental backups: if you need (or want) a backup every two
days, then you should do one every two days. If that turns out to be too
expensive in terms of network bandwidth, you'll have to change something.
Doing *each backup as a full backup* (using rsync(d)!) will probably minimize
network utilisation, at the expense of (much!) higher server load. Again:
there's no "one size fits all" answer.

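If you wanted to try the "every backup is a full" route, one way to sketch it
in config.pl would be to schedule fulls at the short interval and push
automatic incrementals out of reach (illustrative values only):

  $Conf{FullPeriod}  = 1.97;    # a full roughly every two days
  $Conf{IncrPeriod}  = 10000;   # automatic incrementals effectively never run
  $Conf{FullKeepCnt} = 4;       # adjust retention to what your pool can hold
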
> > Jason Hughes explained how to incrementally transfer such a structure using
> > $Conf{BackupFilesExclude}. The important thing is that you need successful
> > backups to avoid re-transferring data, even if these backups at first
> > comprise only part of your target data. [...]
> 
> What I currently have is a rsyncd share for about 10 - 12 different
> subdirectories (I drilled down a level with the expectation that
> splitting into separate shares might help with the timeouts; I have
> not considered the possibility of backing up separately, though).
> By that token, I would imagine that I just comment out the shares I
> don't need at present, and re-activate them once the backups are done,
> right? And once I've gone through the entire tree, just enable them
> all and hope for the best?

I'm not sure I understand you correctly.

The important thing seems to be: define your share as you ultimately want it
to be. Exclude parts of it at first (with $Conf{BackupFilesExclude}) to get
a successful backup. Altering $Conf{BackupFilesExclude} will not change your
backup definition, i.e. it will simply appear as if the share started off
with a few files and quickly grew from backup to backup. You can start a full
backup by hand every hour (after changing $Conf{BackupFilesExclude}) to get
your share populated; there's no need to wait for your monthly full :). Each
successful full backup (with fewer files excluded) brings you nearer to your
goal. Every future full backup will be similar to the last of these steps:
most of the files are already on the backup server, and only a few percent
need to be transferred.
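
As a sketch of what that could look like in the host's config.pl (these are
successive states of the same setting, one full backup per step; the share
and directory names are made up for illustration):

  # Step 1: define the share as you ultimately want it, but exclude most of it
  $Conf{RsyncShareName}     = ['data'];
  $Conf{BackupFilesExclude} = {
      'data' => ['/projects', '/archive', '/media'],
  };

  # Step 2: after a successful full, drop one exclude and start the next full
  $Conf{BackupFilesExclude} = {
      'data' => ['/archive', '/media'],
  };

  # ...and so on, until the exclude list contains only what you never want
  # backed up (or nothing at all).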


If, in contrast, you start out with several distinct shares, you'll either
have to keep it that way forever, or re-transfer files, or do some other
magic to move them around within the backups and hope everything goes well.
That's certainly possible, but it's not easy. Using $Conf{BackupFilesExclude}
is easy, and you can't do much wrong with it, as long as you eventually end
up excluding nothing you actually want backed up.

Regards,
Holger
