On Tuesday 22 December 2009 13:07:00 Alan Brown wrote:
> On Mon, 21 Dec 2009, Kern Sibbald wrote:
> > I am not convinced that any small tweak such as you are suggesting will
> > resolve your problems, which is why we implemented a new set of
> > directives.
>
> I know the reasoning behind the new set of directives and we're using
> them; however, the tweak would go a long way towards resolving these issues.
>
> > If these are not adequate, I would like to know why.
>
> I'm a big fan of dealing with all possible failure modes (a belt-and-braces
> approach). The reality is that when job concurrency is set to 1 (the
> default), having a job only check for escalation as it actually starts up
> avoids issues such as the ones I have described.
>
> Part of the philosophy here includes running an incremental backup
> immediately before commencing a full or differential, so that in the event
> of a failure I am still able to restore to "today's version" of the backup
> rather than yesterday's.
>
> Because of that, I don't like to use the duplicate-job weeder: in the
> event that the incremental's start is delayed beyond the nominal start
> time of the differential or full backup, the incremental gets cancelled.
>
> For sites with small backups (or small numbers of them) the philosophy may
> be different, but I've had situations where a panicked user came to me
> needing a file restored while a full backup was running, a file that would
> have been missed had the incremental not been run.
>
> I hope that makes some sense.
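As best I can tell, the schedule being described looks something like the
following sketch (resource names and times are invented for illustration;
the Run syntax is standard bacula-dir.conf):

  Schedule {
    Name = "NightlyCycle"
    # A quick Incremental just before the monthly Full, so a restore can
    # still reach "today's" files if the Full fails or runs long.
    Run = Level=Incremental 1st sun at 22:30
    Run = Level=Full 1st sun at 23:05
    Run = Level=Differential 2nd-5th sun at 23:05
    Run = Level=Incremental mon-sat at 23:05
  }

and the complaint is that if the 22:30 Incremental is still queued when the
Full is submitted at 23:05, the duplicate-job handling cancels it.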
No, I am sorry, it doesn't make any sense to me. It sounds like you are
counting on some tight sequencing of jobs, and you are assuming that a full
backup will fail, or that a user will delete a file that changed since
yesterday, before the full can run. In any case, what you want changed is
not clear to me. Running an incremental immediately before a full sounds
more like you are asking for CDP (continuous data protection), something we
do not currently offer.

When a backup job starts running, it goes through a number of steps, roughly
the following:

1. Determine what level it is going to run at (Full, Incremental, ...)
2. Apply any pool overrides that may change the level
3. Apply duplicate job restrictions
4. Notify the SD and FD that a job is starting
5. Allocate SD resources (the job may have to wait)
6. The FD sends data to the SD

I don't see that changing the order will make any significant difference,
except possibly in some very unusual cases. Unless someone can convince me
that there is some fatal flaw in the above, I am not too inclined to change
it.

Best regards,

Kern
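P.S. For concreteness, the duplicate restrictions of step 3 are driven by
directives along these lines (a sketch only; the rest of the Job resource is
omitted, and the job name is invented):

  Job {
    Name = "BigServerBackup"
    Type = Backup
    Maximum Concurrent Jobs = 1
    # Client, FileSet, Schedule, Storage, Pool, Messages omitted.
    Allow Duplicate Jobs = no       # step 3: the new job is checked against
                                    # queued and running jobs of the same name
    Cancel Queued Duplicates = yes  # a still-queued duplicate is cancelled
                                    # here, before it ever reaches step 5
  }

With the current ordering, that check happens when a job is submitted, not
when it finally obtains SD resources at step 5; deferring it to that point
is, as far as I can tell, the change being requested.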
