On Wednesday 18 May 2005 22:01, Sean O'Grady wrote:
> Hi,
>
> After seeing two people respond saying that this was feasible and
> checking what Wilson had in his config against mine I did a little more
> digging. (Thanks Arno, your e-mail came in as I was writing this and
> confirmed the info about Pools. Also I'm running 1.36.3. )
>
> I believe I have sorted out what my issue with this is. As I didn't post
> my complete configs and only the ones that I thought would be relevant I
> ended up only giving half the picture. What was missing was that there
> is another set of Pool tapes and different Jobs that run using these
> Pools (that also do data spooling) at the same time as the Jobs I showed
> before.
>
> Looking at src/dird/jobq.c I see the following which hopefully Kern or
> someone else in touch with the code can enlighten a bit more for me.
>
>  >SNIP
>
> if (njcr->store == jcr->store && njcr->pool != jcr->pool) {
>       skip_this_jcr = true;
>       break;
> }

Two jobs that use the same Storage resource but different Pools cannot run 
simultaneously, because (with the 1.36 implementation) that would require one 
tape drive to have two different Volumes mounted at the same time.
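
A minimal annotated reading of the test quoted above (the comments are my 
interpretation, using the Pool names from your configs only as an example):

   /* compare this job (jcr) against another job (njcr) */
   if (njcr->store == jcr->store &&   /* same Storage resource, i.e. the
                                       * same drive (polaris-sd) ...     */
       njcr->pool  != jcr->pool) {    /* ... but a different Pool, e.g.
                                       * MobinetDailyPool vs.
                                       * MobinetMonthlyPool              */
      skip_this_jcr = true;           /* so do not run the two together  */
      break;
   }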

>
>  >SNIP
>
> This says to me that as long as the Pools of the Jobs being queued
> match, the Jobs will all run concurrently. Jobs with mismatching Pools,
> however, will instead queue and wait for the storage device to free up
> as previous jobs complete.
>
> It's probably not this simple, but some behaviour equivalent to ...
>
> if (njcr->store == jcr->store && njcr->pool != jcr->pool &&
> njcr->spool_data != true) {
>       skip_this_jcr = true;
>       break;
> }
>
> ... should allow Jobs with different Pools that have spooling on to
> queue. To ensure that the Jobs complete, the storage daemon and the
> director would need some further checks, so that -
>
> 1) when spooling from the client completes, check whether the Storage
> device is available for append
> 2) if the Storage device is available, check whether a Volume from a Pool
> suitable for this Job is currently loaded (if not, load one)
> 3) when the Job completes, check the status of the queued Jobs, grab the
> next Job whose spooling is complete, and go to 2) again
>
> My question now changes to "Is there a way for Jobs that use different
> Pools to run concurrently, as long as the Job definitions are set to Spool
> Data", as outlined in the example above (or something similar)?

Use two different Storage resources, so that each one can have a different 
Volume from a different Pool mounted.
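
As a sketch only (this assumes a second physical drive is available; the 
resource and device names below are placeholders, not taken from your 
configs), the Director side could look like:

Storage {
  Name = polaris-sd-drive1
  Address = "****"
  SDPort = 9103
  Password = "****"
  Device = "Drive-1"
  Media Type = DLTIV
  Maximum Concurrent Jobs = 10
}

Storage {
  Name = polaris-sd-drive2
  Address = "****"
  SDPort = 9103
  Password = "****"
  Device = "Drive-2"
  Media Type = DLTIV
  Maximum Concurrent Jobs = 10
}

with matching Device { Name = "Drive-1" ... } and Device { Name = "Drive-2" 
... } resources in bacula-sd.conf, and each group of Jobs pointed at a 
different Storage resource. Each drive then keeps its own Volume mounted, so 
Jobs bound to different Pools no longer block each other on a single drive.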

>
> Or of course maybe Bacula can already handle this and I'm just missing it
> :)

I believe that you are just missing it, but it is not so obvious.

>
> Thanks,
> Sean
>
> Arno Lehmann wrote:
> > Hi.
> >
> > Sean O'Grady wrote:
> >> Well it's good to know that Bacula will do what I need!
> >>
> >> Guess now I need to determine what I've done wrong in my configs ...
> >>
> >> I'm short-forming all the config information to reduce the size of the
> >> e-mail, but I can post my full configs if necessary. Wherever I have
> >> "Maximum Concurrent Jobs", I've posted that section of the config.
> >> If there is something else besides "Maximum Concurrent Jobs" needed in
> >> the configs to get this behaviour to happen and I'm missing it, please
> >> let me know.
> >
> > The short form is ok :-)
> >
> > Now, after reading through it I actually don't see any reason why only
> > one job at a time is run.
> >
> > Perhaps someone else can...
> >
> > Still, I have some questions.
> > First, which version of Bacula do you use?
> > Then, do you perhaps use job overrides concerning the pools or the
> > priorities in your schedule?
> > And, finally, are all the jobs scheduled to run at the same level, e.g.
> > full, and do they actually do so? Perhaps you have a job running at Full
> > level, and the others are scheduled to run incremental, so they have to
> > wait for the right media (of pool DailyPool).
> >
> > Arno
> >
> >> Any suggestions appreciated!
> >>
> >> Sean
> >>
> >> In bacula-dir.conf ...
> >>
> >> Director {
> >>  Name = mobinet-dir1
> >>  DIRport = 9101                # where we listen for UA connections
> >>  QueryFile = "/etc/bacula/query.sql"
> >>  WorkingDirectory = "/data/bacula/working"
> >>  PidDirectory = "/var/run"
> >>  Maximum Concurrent Jobs = 10
> >>  Password = "****"         # Console password
> >>  Messages = Daemon
> >> }
> >>
> >> JobDefs {
> >>   Name = "MobinetDef"
> >>   Storage = polaris-sd
> >>   Schedule = "Mobinet-Cycle"
> >>   Type = Backup
> >>   Max Start Delay = 32400 # 9 hours
> >>   Max Run Time = 14400 # 4 hours
> >>   Rerun Failed Levels = yes
> >>   Maximum Concurrent Jobs = 5
> >>   Reschedule On Error = yes
> >>   Reschedule Interval = 3600
> >>   Reschedule Times = 2
> >>   Priority = 10
> >>   Messages = Standard
> >>   Pool = Default
> >>   Incremental Backup Pool = MobinetDailyPool
> >>   Differential Backup Pool = MobinetWeeklyPool
> >>   Full Backup Pool = MobinetMonthlyPool
> >>   SpoolData = yes
> >> }
> >>
> >> JobDefs {
> >>   Name = "SiriusWebDef"
> >>   Storage = polaris-sd
> >>   Schedule = "SiriusWeb-Cycle"
> >>   Type = Backup
> >>   Max Start Delay = 32400 # 9 hours
> >>   Max Run Time = 14400 # 4 hours
> >>   Rerun Failed Levels = yes
> >>   Maximum Concurrent Jobs = 5
> >>   Reschedule On Error = yes
> >>   Reschedule Interval = 3600
> >>   Reschedule Times = 2
> >>   Priority = 10
> >>   Messages = Standard
> >>   Pool = Default
> >>   Incremental Backup Pool = MobinetDailyPool
> >>   Differential Backup Pool = MobinetWeeklyPool
> >>   Full Backup Pool = MobinetMonthlyPool
> >>   SpoolData = yes
> >> }
> >>
> >> Storage {
> >>  Name = polaris-sd
> >>  Address = "****"
> >>  SDPort = 9103
> >>  Password = "****"
> >>  Device = "PowerVault 122T VS80"
> >>  Media Type = DLTIV
> >>  Maximum Concurrent Jobs = 10
> >> }
> >>
> >> In bacula-sd.conf
> >>
> >> Storage {                             # definition of myself
> >>  Name = polaris-sd
> >>  SDPort = 9103                  # Director's port
> >>  WorkingDirectory = "/data/bacula/working"
> >>  Pid Directory = "/var/run"
> >>  Maximum Concurrent Jobs = 10
> >> }
> >>
> >> Device {
> >>   Name = "PowerVault 122T VS80"
> >>   Media Type = DLTIV
> >>   Archive Device = /dev/nst0
> >>   Changer Device = /dev/sg1
> >>   Changer Command = "/etc/bacula/mtx-changer %c %o %S %a"
> >>   AutoChanger = yes
> >>   AutomaticMount = yes               # when device opened, read it
> >>   AlwaysOpen = yes
> >>   LabelMedia = no
> >>   Spool Directory = /data/bacula/spool
> >>   Maximum Spool Size = 14G
> >> }
> >>
> >> In bacula-fd.conf on all the clients
> >>
> >> FileDaemon {                          # this is me
> >>  Name = polaris-mobinet-ca
> >>  FDport = 9102                  # where we listen for the director
> >>  WorkingDirectory = /data/bacula/working
> >>  Pid Directory = /var/run
> >>  Maximum Concurrent Jobs = 10
> >> }
> >>
> >> Arno Lehmann wrote:
> >>> Hello,
> >>>
> >>> Sean O'Grady wrote:
> >>> ...
> >>>
> >>>> As an alternative which would be even better - All 5 Jobs start @
> >>>> 23:00 spooling data from the client, the first Job to complete the
> >>>> spooling from the client starts writing to the Storage Device.
> >>>> Remaining Jobs queue for the Storage Device as it becomes available
> >>>> and as their spooling completes.
> >>>>
> >>>> Instead what I'm seeing is while the first job executes the
> >>>> additional jobs all have a status of "is waiting on max Storage
> >>>> jobs" and will not begin spooling their data until that first Job
> >>>> has spooled->despooled->written to the Storage Device.
> >>>>
> >>>> My question, of course, is "is this possible" - to have Concurrent
> >>>> Jobs running and spooling in one of the scenarios above (or another
> >>>> I'm missing)?
> >>>
> >>> Well, I guess that this must be a setup problem on your side - after
> >>> all, this is what I'm doing here and it works (apart from very few
> >>> cases where jobs are held that *could* start, but I couldn't find out
> >>> why yet).
> >>>
> >>> From your description, I assume that you forgot to set "Maximum
> >>> Concurrent Jobs" in all the necessary places, namely in the storage
> >>> definitions.
> >>>
> >>> I noticed that the same message is printed when the director has to
> >>> wait for a client, though. (This is not yet confirmed; I noticed it
> >>> only yesterday and couldn't verify it yet.)
> >>>
> >>>> If so I'll send out more details of my config to see if anyone can
> >>>> point out what I'm doing wrong.
> >>>
> >>> First, verify the settings you have - there are directives in the
> >>> client's config, the sd config, and the director configuration where
> >>> you need to apply the right settings for your setup.
> >>>
> >>> Arno
> >>>
> >>>> Thanks,
> >>>> Sean
> >>>>
> >>>> --
> >>>> Sean O'Grady
> >>>> System Administrator
> >>>> Sheridan College
> >>>> Oakville, Ontario
> >>>>
> >>>>

-- 
Best regards,

Kern

  (">
  /\
  V_V


