Kern Sibbald wrote:
> On Monday 28 December 2009 19:57:24 Phil Stracchino wrote:
>> Since I got no response from the users list:
>>
>> When starting a job from the console and modifying the job parameters,
>> the 'modify Pool' operation overrides the 'Pool' directive in the job
>> definition.  It clearly SHOULD also override the 'Full Backup Pool',
>> 'Differential Backup Pool' etc. directives.
>
> I am not convinced that the manual pool override was ever supposed or intended
> to override the other directives you mention.
If that is the case, then one cannot override the Pool for a manually-run
job whose definition specifies level-specific backup pools.  Since pool
overrides in the schedule are now deprecated because of the problems they
caused, that means users must choose between being able to specify
different Pools for different levels, and being able to override the pool
when running jobs manually from the console.

I have just run a test job with the Full/Differential/Incremental Backup
Pool directives in the JobDefs commented out, and it honored the console
modifications and ran to tape exactly as it was supposed to.  This seems
to confirm the issue: 'modify Pool' on a console-run job overrides the
Pool directive, but the level-specific Pool override directives override
the Pool manually specified in the console.  Thus, one cannot override
the destination Pool for a manually-started job whose Job resource
specifies per-level Pools.  I would have to consider this a bug.

> Since we do not have the job report, it is a bit hard to tell what really
> happened.

I can provide full reports on an example job, if you can tell me exactly
what you need.  The excerpt I included was just sufficient to show that
the console modifications to the job were ignored when the job actually
ran.  If the Full/Differential/Incremental Backup Pool directives for the
job are disabled, then the console Pool/Storage modifications are honored.
(A sketch of the kind of configuration involved is at the end of this
message.)

Side notes:  This also raises another issue.  Since the Storage used for
the job is apparently selected automatically based on the Pool, why does
the Console have a 'modify Storage' capability?  I find it difficult to
think of a likely real-world configuration in which multiple Storage
daemons would share the same Pool, and yet in which it would matter which
Storage was actually used for a job.

The one scenario I can imagine in which multiple Storage devices
accessible to the same Director and clients would reasonably share a Pool
is the case in which the backup pool is a large SAN capable of
transferring data at a higher rate than any single SD can achieve, and
multiple SDs run in parallel on the same storage pool to get higher
aggregate throughput.  In such a case, jobs would probably be best
allocated on a "first available Storage daemon for the specified Pool"
basis (a rough sketch of what I mean follows).  Do we support this?
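For reference, the general shape of configuration that triggers the
behavior is roughly the following; the resource and pool names are
placeholders for illustration, not my actual setup:

  # Placeholder JobDefs showing the conflicting directives.
  JobDefs {
    Name = "DefaultBackup"
    Type = Backup
    Level = Incremental
    Messages = Standard
    Storage = tape-sd                      # placeholder Storage resource
    Pool = Default                         # overridden by 'modify Pool'
    Full Backup Pool = MonthlyFull         # but these three still win
    Differential Backup Pool = WeeklyDiff  # over the console override
    Incremental Backup Pool = DailyInc     # when the job actually runs
  }

Commenting out the three level-specific lines is exactly the test
described above; with them gone, the console's Pool and Storage
modifications take effect as expected.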
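And to make the shared-pool scenario concrete, this is roughly what I
have in mind on the Director side; hostnames, passwords, and resource
names are all invented for the sake of the example:

  # Two SDs whose devices sit on the same SAN filesystem and share
  # one Media Type, so volumes in SANPool are usable by either one.
  Storage {
    Name = san-sd-1
    Address = sd1.example.com
    SDPort = 9103
    Password = "secret"
    Device = SANFileStorage
    Media Type = SANFile
  }

  Storage {
    Name = san-sd-2
    Address = sd2.example.com
    SDPort = 9103
    Password = "secret"
    Device = SANFileStorage
    Media Type = SANFile
  }

  Pool {
    Name = SANPool
    Pool Type = Backup
  }

The question is whether the Director could then hand a job to whichever
of san-sd-1 or san-sd-2 is free, rather than to the single Storage named
in the Job.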
-- 
  Phil Stracchino, CDK#2          DoD#299792458   ICBM: 43.5607, -71.355
  [email protected]     [email protected]     [email protected]
  Renaissance Man, Unix ronin, Perl hacker, Free Stater
                 It's not the years, it's the mileage.