Hello,
What you describe sounds very strange. However, I cannot answer your
question:
"Trying to reuse a schedule definition causes backup sets to
be incorrectly named. Is that a correct deduction?"
because Bacula has no concept of naming a backup set. In fact,
there is no concept of a backup set at all. Put more simply, I
cannot understand what is not working for you.
To the best of my knowledge, there is never any problem
referencing a single Schedule resource definition from multiple
Jobs, unless your Jobs have unique requirements and you are using
"overrides" in the Schedule resource. Those overrides (such as
specifying a specific Storage resource) may not be appropriate for
all Jobs.
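For illustration, here is a minimal sketch (all resource names are
hypothetical) of two Jobs referencing the same Schedule, with the
Pool given in each Job resource rather than as a Pool= override on
the Schedule's Run lines:

Schedule {
  Name = "NightlyCycle"
  Run = Level=Full 1st sun at 23:05
  Run = Level=Incremental mon-sat at 23:05
}

Job {
  Name = "BackupShare1"
  Type = Backup
  Client = debian-fd
  FileSet = "Share1Set"
  Schedule = "NightlyCycle"   # shared Schedule
  Pool = Share1Pool           # Pool set per Job, not in the Schedule
  Storage = BackupNasDir1
  Messages = Standard
}

Job {
  Name = "BackupShare2"
  Type = Backup
  Client = debian-fd
  FileSet = "Share2Set"
  Schedule = "NightlyCycle"   # same Schedule, reused unchanged
  Pool = Share2Pool
  Storage = BackupNasDir2
  Messages = Standard
}

A Pool= override placed on a Run line would instead apply to every
Job that references that Schedule, which is the situation where a
shared Schedule can become inappropriate for some Jobs.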
Best regards,
Kern
On 05/12/2018 02:03 PM, Chris Wilkinson wrote:
Many thanks for those tips. I was almost there! I have it
running nicely now.
One thing that tripped me up was that the schedule definition
requires that the volume pool used by the job be explicitly set.
That seems to mean that each job must have its own uniquely
named schedule(s) defined, even though it may be identical to
another. Trying to reuse a schedule definition causes backup
sets to be incorrectly named.
Is that a correct deduction?
Chris
On 5/9/2018 5:37 PM, Chris Wilkinson wrote:
I am experimenting with using Bacula to back up several (m)
CIFS shares on one NAS box to (n) sub-directories of a CIFS
share on another. Neither NAS is able to run a client directly,
as they are commercial, locked-down boxes.
My configuration is a Debian server and two non-identical NAS
boxes. They are on the same subnet.
My first try at this mounts the shares of each NAS on the
Debian server. I define devices pointing to the backup NAS
mounts (bacula-sd.conf):
Device { # 1
  ...
  Archive Device = /mnt/backup_nas/dir_1
  ...
}

Device { # n
  ...
  Archive Device = /mnt/backup_nas/dir_n
  ...
}
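For completeness, the mounts referred to above could come from
/etc/fstab entries along these lines (the share names, mountpoints,
and credentials file are hypothetical):

# single backup share; dir_1 .. dir_n are sub-directories of this mount
//backup-nas/backup  /mnt/backup_nas        cifs  credentials=/root/.smbcred,uid=bacula,gid=bacula,_netdev  0  0
# one entry per data share to be read during backups
//data-nas/share_1   /mnt/data_nas/share_1  cifs  credentials=/root/.smbcred,_netdev  0  0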
I define n Storage {...} sections
(bacula-dir.conf) pointing to these devices.
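For illustration, one such Storage resource might look like the
sketch below; the names, password, and Media Type are hypothetical,
and Device and Media Type must match the corresponding Device
resource in bacula-sd.conf:

Storage {
  Name = BackupNasDir1
  Address = debian-server     # host running bacula-sd, which mounts /mnt/backup_nas
  SDPort = 9103
  Password = "sd-password"    # must match the Director resource in bacula-sd.conf
  Device = BackupNasDev1      # Name of Device #1 in bacula-sd.conf
  Media Type = File1          # must match that Device's Media Type
}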
I define m Client sections (bacula-dir.conf) pointing to the
various shares on the data NAS. I am not sure if I need to
define m FileDaemon {...} sections (bacula-fd.conf) to make the
link to the clients.
You do not need m FileDaemons, but you do need to mount
the data_nas shares. The Debian server, in addition to
mounting the backup_nas shares that it will write to,
will also need to mount the m data_nas shares that it
will read from. Then you can either define m Job
sections in bacula-dir.conf, where each job specifies
one of the data_nas mountpoints, or alternatively, in
the Job section that backs up the Debian server itself,
add a File= line for each of the m data_nas mountpoints.
Bacula will not, by default, descend into other mounted
filesystems, so each filesystem to be backed up must have its
mountpoint listed on a File= line in the Job that backs it up.
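As a sketch of the second option above (all names are hypothetical),
the File= lines live in the FileSet referenced by the Job, one line
per data_nas mountpoint, since the default onefs = yes stops Bacula
at mount-point boundaries:

FileSet {
  Name = "DataNasSet"
  Include {
    Options {
      signature = MD5
      onefs = yes             # default: do not cross into other mounted filesystems
    }
    File = /mnt/data_nas/share_1
    File = /mnt/data_nas/share_2
    # ... one File= line for each of the m data_nas mountpoints
  }
}

Job {
  Name = "BackupDataNas"
  Type = Backup
  Client = debian-fd          # the single FD running on the Debian server
  FileSet = "DataNasSet"
  Schedule = "NightlyCycle"
  Storage = BackupNasDir1
  Pool = Default
  Messages = Standard
}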
This is probably not a scenario that
Bacula was designed for. Is this a viable
approach?