John Drescher wrote:
> > I think if I were implementing a feature like that, I'd do it via a
> > Retry directive which would take two arguments: the maximum number of
> > retries before failing the job altogether, and the retry interval. The
> > directive would be usable within a Job or JobDefs record.
> >
> > With such a feature, suppose I have a client 'foo' which may well have
> > been shut down before the time comes for it to be backed up, but which
> > has wake-on-LAN enabled. I could modify its Job definition like this:
> >
> > Job {
> >     Name = "Foo Save"
> >     JobDefs = Backup
> >     Client = foo
> >     FileSet = "Foo Full Set"
> >     Schedule = Night
> >     Retry = "2,10"   # allow up to 2 retries at 10-minute intervals
> > }
> >
> It does currently retry (unless you have a RunBefore job that fails),
> but I am unsure whether you can set the number of retries or the
> interval. I believe you can set the total time before it gives up. I
> just use concurrency and allow other jobs to proceed while it is trying
> to contact unavailable clients. Eventually these jobs either run or
> terminate.
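To make the proposed semantics concrete, here is a rough sketch of what a
Retry = "2,10" setting would amount to. This is not actual Bacula code; the
function and parameter names are hypothetical:

```python
import time

def run_with_retries(start_job, retries=2, interval_minutes=10):
    """Attempt a job, retrying up to `retries` times at fixed intervals.

    Mirrors the proposed Retry = "2,10" semantics: one initial attempt
    plus up to 2 retries, 10 minutes apart, before failing the job.
    `start_job` is a callable returning True on success.
    """
    attempts = 1 + retries
    for attempt in range(attempts):
        if start_job():
            return True                        # job ran successfully
        if attempt < attempts - 1:
            time.sleep(interval_minutes * 60)  # wait before the next try
    return False                               # give up: fail the job
```

A wake-on-LAN packet could be sent before each attempt, which is what makes
the interval useful for a client that needs time to boot.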
I use a checkhost RunBefore script that pings and pokes the client to see
whether it is (a) alive and (b) accepting connections on the client port.
It returns a failure if either of these checks fails.
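For anyone wanting to do the same, a minimal sketch of such a check in
Python might look like the following. The helper names are my own, 9102 is
Bacula's default File Daemon port, and the ping flags assume GNU/Linux ping
(BSD ping uses different timeout options):

```python
import socket
import subprocess

def host_alive(host, timeout=2):
    """(a) Is the host alive?  One ping with a short timeout.
    (-W is the GNU/Linux ping timeout flag.)"""
    return subprocess.call(
        ["ping", "-c", "1", "-W", str(timeout), host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0

def port_open(host, port, timeout=2):
    """(b) Is the client port accepting connections?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_host(host, port=9102, timeout=2):
    """RunBefore-style check: succeed only if both checks pass."""
    return host_alive(host, timeout) and port_open(host, port, timeout)
```

Wrapped in a small main() that exits nonzero on failure, this would cause
the job to fail early rather than hang waiting for an unreachable client.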
--
Phil Stracchino, CDK#2 ICBM: 43.5607, -71.355
Renaissance Man, Unix ronin, Perl hacker, Free Stater
[EMAIL PROTECTED] [EMAIL PROTECTED]
It's not the years, it's the mileage.
_______________________________________________
Bacula-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/bacula-devel