Re: [Bacula-users] Nas to Nas backup

2018-05-13 Thread Kern Sibbald

  
  
Hello,
Each Device has a single directory associated with it, so in general,
if you are writing volumes into different directories you will need
different MediaTypes. Though Pools are very useful, in most cases
there is no strict requirement for them. For a Storage to find a
particular volume, it must know the MediaType, and the MediaType must
be associated with a Device that can read the particular directory
where the Volumes are located.
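As an illustration of that relationship, a minimal bacula-sd.conf
sketch (hypothetical names, not from the poster's configuration): two
Device resources writing to different directories, each carrying its
own Media Type, which a Director-side Storage resource must then
match:

Device {
  Name = FileDev1
  Media Type = File1              # a unique Media Type per directory
  Archive Device = /mnt/backup_nas/dir_1
  Device Type = File
  LabelMedia = yes                # let Bacula label file volumes itself
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
}

Device {
  Name = FileDev2
  Media Type = File2              # different directory, different Media Type
  Archive Device = /mnt/backup_nas/dir_2
  Device Type = File
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
}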
Best regards,
Kern



Re: [Bacula-users] Nas to Nas backup

2018-05-13 Thread Chris Wilkinson
Apologies, my terminology is off. I automatically prefix the files
written during backup with "job_name". These are written to sub-dirs
of the storage device (a mounted CIFS share) that are also named
"job_name". This set of files in the sub-dir is what I meant by a
backup set named "job_name".

With this scheme it is necessary to assign a uniquely named volume
pool to each job. This is where the necessity to define a schedule
per job arises.

Chris
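A rough bacula-dir.conf sketch of that scheme (all names
hypothetical): each Job references its own uniquely named Pool, and
the Pool's Label Format supplies the per-job prefix on the volumes:

Pool {
  Name = "job1-pool"
  Pool Type = Backup
  Label Format = "job1-"    # Bacula appends a number: job1-0001, job1-0002, ...
  Volume Retention = 30 days
}

Job {
  Name = "job1"
  Type = Backup
  Client = debian-fd        # hypothetical client name
  FileSet = "job1-fileset"
  Pool = "job1-pool"        # the uniquely named pool for this job
  Storage = "backup-nas-dir1"
  Schedule = "NightlyCycle"
  Messages = Standard
}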



Re: [Bacula-users] Nas to Nas backup

2018-05-13 Thread Kern Sibbald

  
  
Hello,

What you say sounds very strange. However, I cannot answer your
question:

    "Trying to reuse a schedule definition causes backup sets to be
    incorrectly named. Is that a correct deduction?"

because Bacula has no concept of naming a backup set. In fact, there
is no concept of a backup set at all. In simpler terms, I cannot
understand what is not working for you.

To the best of my knowledge, there is never any problem referencing a
single Schedule resource definition from multiple Jobs, unless your
Jobs have unique requirements and you are using "overrides" in the
Schedule resource. Those overrides (such as specifying a specific
Storage resource) may not be appropriate for all Jobs.
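For illustration (hypothetical names), the difference between a
Schedule that carries a Pool override and one that can be shared:

# A Schedule with per-job overrides: the Pool= override ties it to one
# Job's pool, so it cannot be shared safely across Jobs.
Schedule {
  Name = "Job1Cycle"
  Run = Level=Full Pool=job1-pool 1st sun at 23:05
  Run = Level=Incremental Pool=job1-pool mon-sat at 23:05
}

# Without overrides, one Schedule can be referenced by many Jobs; each
# Job then names its own Pool directly.
Schedule {
  Name = "NightlyCycle"
  Run = Level=Full 1st sun at 23:05
  Run = Level=Incremental mon-sat at 23:05
}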

Best regards,
Kern



Re: [Bacula-users] Nas to Nas backup

2018-05-12 Thread Chris Wilkinson
Many thanks for those tips. I was almost there! I have it running nicely
now.

One thing that tripped me up was that the schedule definition requires
that the volume pool used by the job be explicitly set. That seems to
mean that each job must have its own uniquely named schedule(s)
defined, even though it may be identical to another. Trying to reuse a
schedule definition causes backup sets to be incorrectly named.

Is that a correct deduction?

Chris




Re: [Bacula-users] Nas to Nas backup

2018-05-10 Thread Josh Fisher


On 5/9/2018 5:37 PM, Chris Wilkinson wrote:
I am experimenting with using Bacula to back up several (m) CIFS
shares on one NAS box to (n) sub-directories of a CIFS share on
another. Neither NAS is able to run a client directly, as they are
commercial, locked-down boxes.

My configuration is a Debian server and two non-identical NAS boxes.
They are on the same subnet.

My first try at this mounts the shares of each NAS on Debian. I
define devices pointing to the backup NAS mounts (bacula-sd.conf).


Device {   #1
...
Archive Device = /mnt/backup_nas/dir_1
...
}

Device {  #n
...
Archive Device = /mnt/backup_nas/dir_n
...
}

I define n Storage {...} sections (bacula-dir.conf) pointing to these 
devices.
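For illustration, one such Director-side Storage resource might look
like this (hypothetical names and password); its Device and Media
Type must match a Device resource in bacula-sd.conf:

Storage {
  Name = "backup-nas-dir1"
  Address = debian-server.example.lan   # where bacula-sd runs
  SDPort = 9103
  Password = "sd-password"              # must match the SD's Director resource
  Device = "FileDev1"                   # Name of the Device in bacula-sd.conf
  Media Type = File1                    # must match that Device's Media Type
}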


I define m clients (bacula-dir.conf) pointing to the various shares
on the data NAS. I am not sure if I need to define m FileDaemon {...}
(bacula-fd.conf) sections to make the link to the clients.


You do not need m FileDaemons, but you do need to mount the data_nas 
shares. The Debian server, in addition to mounting the backup_nas shares 
that it will write to, will also need to mount the m data_nas shares 
that it will read from. Then you can either define m Job sections in 
bacula-dir.conf, where each job specifies one of the data_nas 
mountpoints, or alternatively, in the Job section that backs up the 
Debian server itself, add a File= line for each of the m data_nas 
mountpoints. Bacula will not by default descend into other mounted 
filesystems. All of the filesystems to be backed up must be specified by 
a File= line in the Job section that specifies its mountpoint.
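A hedged sketch of that second approach (hypothetical names; in
Bacula's syntax the File = lines live in the FileSet that the Job
references): one File = line per mounted data_nas share, with onefs
left at its default of yes, which is what stops Bacula from crossing
into other mounted filesystems on its own:

FileSet {
  Name = "data-nas-shares"
  Include {
    Options {
      signature = MD5
      onefs = yes                 # default: do not descend into other filesystems
    }
    File = /mnt/data_nas/share_1  # one File = line per mounted share
    File = /mnt/data_nas/share_2
    File = /mnt/data_nas/share_m
  }
}

Job {
  Name = "backup-data-nas"
  Type = Backup
  Client = debian-fd              # the one FD, on the Debian server
  FileSet = "data-nas-shares"
  Storage = "backup-nas-dir1"
  Pool = "job1-pool"
  Schedule = "NightlyCycle"
  Messages = Standard
}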





This is probably not a scenario that Bacula was designed for. Is this 
a viable approach?



