Finally, I came up with this:
Pool {
  Name = [...]
  @|"sh -c 'echo Storage = Storage_0$(shuf -i 1-4 -n 1)'"
}

This randomly assigns one of the 4 devices, which is exactly what I was
looking for. The syntax is strange, but it works very well. It can even be
improved by writing the device count to a file ("echo 4 > PARALLELJOBS")
and reading that "variable" within the shuf command.

Hope you get the idea (having a pool per client and still running jobs in
parallel).
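For reference, a complete per-client pool might then look roughly like
this (everything except the @| line is an illustrative sketch, not taken
from my actual config; as far as I understand, the piped command is
evaluated when the Director reads its configuration):

```
Pool {
  Name = server1-fd-Pool
  Pool Type = Backup
  # Pipe directive: run the command and use its stdout as config text.
  # Here it emits "Storage = Storage_0N" with N chosen randomly from 1-4.
  @|"sh -c 'echo Storage = Storage_0$(shuf -i 1-4 -n 1)'"
}
```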


Regards, Bruno Marsal
office +49-7053-9380604 | mobile +49-176-67393223 | Schlehenweg 1, 75387
Neubulach, Germany

On Fri, Jul 24, 2015 at 3:00 PM, Bruno Marsal <[email protected]> wrote:

> On Friday, July 10, 2015 at 7:43:45 AM UTC+2, Bruno Friedmann wrote:
> > On Thursday 09 July 2015 18.13:16 Bruno Marsal wrote:
> > > Hi, consider following setup:
> > > * Storage daemon configured with archive device: /mnt/backup with media
> > > type: file
> > > * Storage daemon connected via 10Gbps Ethernet
> > > * Many jobs. For example: Backup server1, Backup server2, ...
> > > Backup server70
> > > * Each server has its own volume pool: server1-fd-Pool,
> > > server2-fd-Pool, ... server70-fd-Pool
> > >
> > > This configuration worked great, but the problem was that the jobs
> > > ran consecutively and could sometimes take several days (in the case
> > > of full backups). Looking for a way to run the backup jobs in
> > > parallel, I created 4 Device definitions on the storage daemon and
> > > 4 Storage definitions on the director:
> > > On Storage daemon:
> > > * Device { Name = Backup_01; Archive device = /mnt/backup; media type = file; [...]}
> > > * Device { Name = Backup_02; Archive device = /mnt/backup; media type = file; [...]}
> > > * Device { Name = Backup_03; Archive device = /mnt/backup; media type = file; [...]}
> > > * Device { Name = Backup_04; Archive device = /mnt/backup; media type = file; [...]}
> > > On Director:
> > > * Storage { Name = Storage_01; Device = Backup_01; [...]} ...
> > > * Storage { Name = Storage_04; Device = Backup_04; [...]}
> > >
> > > The pools were randomly configured to use one of Storage_01,
> > > Storage_02, Storage_03, or Storage_04 --> now 4 jobs run in
> > > parallel, which is almost perfect.
> > >
> > > The problem with this configuration: whenever a new server is added
> > > to be backed up, we must manually choose a storage (01-04), which is
> > > far from perfect.
> > > Is there a way to configure/modify the job scheduler so that it
> > > assigns the storages to scheduled jobs in turn? Could this be done
> > > using plugins?
> > > Is there some other, nicer way to more or less automatically balance
> > > the jobs across the available storages?
> > >
> > > Bruno
> > >
> >
> > What you're looking for is "spooling": you could have one storage and
> > one device, all jobs would be spooled together, and the spooled data
> > would then be written to the device.
> >
> >
> > --
> >
> > Bruno Friedmann
> > Ioda-Net Sàrl www.ioda-net.ch
> >
> >  openSUSE Member & Board, fsfe fellowship
> >  GPG KEY : D5C9B751C4653227
> >  irc: tigerfoot
>
> Thank you for the answer. I will consider using a single pool for all
> servers/jobs.
>
> I had hoped there was a way to comfortably manage a pool per server
> while still keeping them on a single file server and being able to run
> backups in parallel. It seems there is no easy way to modify the
> scheduler to fit my needs, at least none I am aware of.
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
For more options, visit https://groups.google.com/d/optout.
