I've been running Bacula for about six years now to back up four sites to disk, 
and it's been very reliable. I have a single Director in one site and separate 
SDs in each of the four sites. I back up about 440 clients (Linux and Windows 
servers, in this case) spread across the four sites. Full backups begin Friday 
night and run through the weekend. Then I take differentials throughout the 
work week (M-Th).
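
For context, the schedule is essentially the standard weekly cycle, something along 
these lines (simplified; the resource name and times here are illustrative rather than 
my exact config):

Schedule {
  Name = "WeeklyCycle"
  Run = Full fri at 21:00                # full backups kick off Friday night
  Run = Differential mon-thu at 21:00    # differentials Monday through Thursday
}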

Recently, one of our sites started generating messages that operator intervention is 
needed because a job is "waiting to reserve a device." I've read through some previous 
mailing list posts on this issue as well as the relevant areas of the Bacula 
documentation, and I'm hoping that increasing Maximum Concurrent Jobs might help. 
However, this directive exists in multiple places with different values, and I'm not 
sure which one(s) I should update. FWIW, there is plenty of free space on the SD, 
about 8 TB currently.
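
For reference, when this happens, the standard bconsole commands for seeing what is 
queued on the Director and what the SD thinks it is doing are the following (FortMill2 
is the Storage resource name from my bacula-dir.conf, shown further down):

*status dir
*status storage=FortMill2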

Below are the places where that directive appears and the values I currently have set 
for that site.

In bacula-sd.conf on the SD, it is set to 20 under Storage:

Storage {                             # definition of myself
  Name = bacmedia02-fm.internal.shutterfly.com-sd
  SDPort = 9103                  # SD's listening port
  WorkingDirectory = "/var/bacula"
  Pid Directory = "/var/run"
  Plugin Directory = "/usr/lib64"
  Maximum Concurrent Jobs = 20
}

Also in bacula-sd.conf on the SD, it is set to 5 under Device (the Autochanger 
resource it belongs to is included for context):

Autochanger {
  Name = FileChgr1
  Device = FileChgr1-Dev1, FileChgr1-Dev2
  Changer Command = ""
  Changer Device = /dev/null
}

Device {
  Name = FileChgr1-Dev1
  Media Type = File1
  Archive Device = /data
  LabelMedia = yes;                   # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;               # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
  Maximum Concurrent Jobs = 5
  Autochanger = yes
}

In bacula-dir.conf on the Director, it is set to 20 under Director:

Director {                            # define myself
  Name = bacdirector01-lv.internal.shutterfly.com-dir
  DIRport = 9101                # where we listen for UA connections
  QueryFile = "/etc/bacula/query.sql"
  WorkingDirectory = "/var/bacula"
  PidDirectory = "/var/run"
  Maximum Concurrent Jobs = 20
  Password = "<redacted>"         # Console password
  Messages = Daemon
}

Also, in bacula-dir.conf on the Director, it is set to 10 under Storage:

Storage {                             # the SD at the Fort Mill site
  Name = FortMill2
  SDPort = 9103
  Address = bacmedia02-fm.internal.shutterfly.com
  Password = <redacted>
  Device = FileChgr1
  Media Type = File1
  Maximum Concurrent Jobs = 10
  Autochanger = yes
  Allow Compression = yes
}

It is also set to 10 in bacula-dir.conf on the Director, under Autochanger:

Autochanger {
  Name = File1
# Do not use "localhost" here
  Address = bacdirector01-lv.internal.shutterfly.com  # N.B. Use a fully qualified name here
  SDPort = 9103
  Password = "<redacted>"
  Device = FileChgr1
  Media Type = File1
  Maximum Concurrent Jobs = 10        # run up to 10 jobs at the same time
  Autochanger = File1                 # point to ourself
}

And, finally, it is set to 20 in bacula-fd.conf on the clients (this is the 
default, not something I set):

FileDaemon {                          # this is me
  Name = jumphost01-fm.internal.shutterfly.com-fd
  FDport = 9102                  # where we listen for the director
  WorkingDirectory = /opt/bacula/working
  Pid Directory = /var/run
  Maximum Concurrent Jobs = 20
  Plugin Directory = /usr/lib64
}

Does anyone have any advice on which places I should (or should not) try 
increasing the value? I'm not sure which values take precedence and how 
increasing certain ones might impact things. I realize that increasing the 
number of concurrent jobs will increase the disk I/O on the SD, and I'm OK with 
that.
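
Just so my reasoning is visible (and please correct me if this is wrong), my current 
understanding of how these limits stack for that site is roughly:

  Director (bacula-dir.conf)            20   global cap across all running jobs
  Storage FortMill2 (bacula-dir.conf)   10   cap for jobs using that Storage resource
  Storage (bacula-sd.conf)              20   cap for the whole SD
  Device (bacula-sd.conf)                5   per device, two devices in FileChgr1

  effective ceiling at that site = min(20, 10, 20, 2 x 5) = 10 concurrent jobs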

Thanks!

--Shawn