Hi,
Jesse Proudman wrote on 02.05.2007 at 08:46:55 [Re: [BackupPC-users] Making
Nodes In A Cluster Backup At Different Times]:
> On May 2, 2007, at 4:34 AM, Holger Parplies wrote:
> > Jesse Proudman wrote:
> >> What's the best way to make sure that two specific nodes are not
> >> being backed up at the same time?
> >
> > set $Conf{MaxBackups} to 1?
> Well that'd be the easy way, but if we do that it will take forever
> for our network to be backed up... Any other ideas?
ah, you want the hard way. Ok.
There's no $Conf{DontBackupThisHostWhileOneOfTheseIsBeingBackedUp} which
you could set to an array of host names. You'll have to invent a mechanism
yourself. I'd probably do something like this:
1. Change $Conf{PingCmd} to a script that does some form of locking. If
the host is allowed to proceed, because no host that would block it is
currently being backed up, execute the real ping command and pass its
output and return code back to BackupPC. Note that you'll have to check
the ping's result too, or your locking won't work: you'd assume $host is
now being backed up even though the ping failed, blocking the others
(and the lock would never be removed). Note also that the PingCmd may be
used again by BackupPC after starting the backup but before ending it
(probably only in error cases, but you still want correct diagnostics,
don't you?). So, if your locking mechanism determines that it is $host
itself that is currently being backed up, it should just call the real
ping command.
2. Use $Conf{DumpPostUserCmd} to tell your locking mechanism that the
backup for $host is finished (a rough config sketch follows below).
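In config.pl that might look roughly like this (the script names and the
'group1' argument are made up for the example; $host is substituted by
BackupPC as usual):

    # Hypothetical wrapper scripts -- pick your own names and paths.
    $Conf{PingCmd}         = '/usr/local/bin/pinglock.pl group1 $host';
    $Conf{DumpPostUserCmd} = '/usr/local/bin/pingunlock.pl group1 $host';

where pinglock.pl implements the locking described below and pingunlock.pl
simply removes the lock file if it belongs to $host.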
Your locking mechanism might simply use a file storing a host name. If the
file doesn't exist, go ahead. If the file exists and contains the correct
host name, go ahead, too. Otherwise, a conflicting backup is running. You
might check the time stamp of the file to avoid being indefinitely blocked
by a stale file.
After the backup finishes, just remove the lock file.
You should be careful when creating the file, because more than one instance
of the PingCmd could be running at the same time. Use something that will
atomically create a file only if it does not yet exist.
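Completely untested, but a Perl sketch of such a pinglock.pl could look
roughly like this (the lock file location, the 12-hour staleness limit and
the ping path are assumptions you would adapt):

    #!/usr/bin/perl
    # pinglock.pl <group> <host> -- hypothetical PingCmd wrapper (sketch)
    use strict;
    use warnings;
    use Fcntl qw(O_CREAT O_EXCL O_WRONLY);

    my ($group, $host) = @ARGV;
    my $lockfile = "/var/run/backuppc/lock.$group";  # assumed location
    my $stale    = 12 * 3600;                        # locks older than 12h are stale
    my @ping     = ('/bin/ping', '-c', '1', $host);  # the real ping command

    # Who holds the lock right now (if anyone)?
    my $holder = '';
    if (open(my $fh, '<', $lockfile)) {
        my $line = <$fh>;
        close($fh);
        $holder = defined($line) ? $line : '';
        chomp($holder);
    }

    # Throw away a stale lock rather than being blocked forever.
    my @st = stat($lockfile);
    if ($holder ne '' && @st && $st[9] < time() - $stale) {
        unlink($lockfile);
        $holder = '';
    }

    if ($holder eq '') {
        # Take the lock atomically (O_EXCL fails if the file appeared in
        # the meantime); if we lose the race, re-read the winner.
        if (sysopen(my $fh, $lockfile, O_CREAT | O_EXCL | O_WRONLY)) {
            print {$fh} "$host\n";
            close($fh);
            $holder = $host;
        } elsif (open(my $fh, '<', $lockfile)) {
            my $line = <$fh>;
            close($fh);
            $holder = defined($line) ? $line : '';
            chomp($holder);
        }
    }

    if ($holder eq $host) {
        # We hold the lock (or are being re-pinged during our own backup):
        # run the real ping, passing its output and exit code to BackupPC.
        system(@ping);
        my $rc = $? >> 8;
        # A failed ping means the backup won't start (or is about to
        # fail), so give the lock back.
        unlink($lockfile) if $rc != 0;
        exit $rc;
    }

    # Another host in the group is being backed up: simulate 'ping failed'.
    exit 1;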
What should happen to a host that is not allowed to proceed (simulated
'ping failed') is that it is simply retried on the next scheduled wakeup.
Obviously, its ping counts will get messed up. But, presuming your backups
usually start around the same hour every day for any one host, the hosts
should drift out of sync automatically (which is what you want). And the
MaxBackups slot remains available for other hosts (i.e. a "blocked" host
should not count towards $Conf{MaxBackups}).
I'd be worried about stale lock files. You could check lock file validity
by looking in `ps -C BackupPC_dump` for a process with the correct
parameters. Or put a PID into the lock file, instead of or along with the
host name, and check for the existence of that process.
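Going the PID route, the staleness check could be as small as this (the
'host pid' lock file format is my invention, and kill(0, ...) only tells
you that some process with that PID exists, not that it is still the right
BackupPC_dump):

    # Sketch: drop the lock if the recorded PID no longer exists.
    use strict;
    use warnings;

    my $lockfile = '/var/run/backuppc/lock.group1';   # assumed path
    if (open(my $fh, '<', $lockfile)) {
        my $line = <$fh>;
        close($fh);
        my ($lock_host, $lock_pid) = split(' ', defined($line) ? $line : '');
        # kill 0 sends no signal; it just reports whether the PID exists
        # (and whether we would be allowed to signal it).
        unlink($lockfile) if $lock_pid && !kill(0, $lock_pid);
    }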
PingCmd is also used for a restore, so (without the 'ps' check) restores
will be treated the same as backups by the locking. That means you'll have
to set RestorePostUserCmd too, so the lock is released after a restore.
You can use this for any number of groups of hosts (hosts that are not
allowed to run simultaneously) by simply using a different lock file for
each group.
Presuming it works, that is. It is, of course, completely untested :-).
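For the groups, hypothetical per-host overrides might look like this (the
group names 'db' and 'web' are just examples):

    # per-host config for the database servers (must not overlap each other):
    $Conf{PingCmd}         = '/usr/local/bin/pinglock.pl db $host';
    $Conf{DumpPostUserCmd} = '/usr/local/bin/pingunlock.pl db $host';

    # per-host config for the web servers:
    $Conf{PingCmd}         = '/usr/local/bin/pinglock.pl web $host';
    $Conf{DumpPostUserCmd} = '/usr/local/bin/pingunlock.pl web $host';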
I just read the suggestion to use BlackoutPeriods. That sounds like a much
better idea to me (though it will not prevent concurrent backups if one was
manually initiated from the web interface, and it won't account for backups
that extend into their BlackoutPeriods).
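If you go that route, a per-host override would look something like this
(standard $Conf{BlackoutPeriods} syntax; hours may be fractional, weekDays
counts 0 = Sunday, and the period here is just an example):

    # Keep this host from being dumped while its partner's nightly backup
    # (say, 01:00 - 04:00) is expected to be running.
    $Conf{BlackoutPeriods} = [
        {
            hourBegin => 1.0,
            hourEnd   => 4.0,
            weekDays  => [0, 1, 2, 3, 4, 5, 6],
        },
    ];

Keep in mind that blackouts only kick in once a host has had
$Conf{BlackoutGoodCnt} consecutive good pings.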
Regards,
Holger