Re: [Bacula-users] Incremental backup and interruption due to no free LTO

2021-10-08 Thread neumeise1
Hello Samuel,
I also wrote my answers under the text blocks and summarized them at the
end to make things a little clearer, because the email is rather lengthy.
If you want to reply, you can leave everything out except the summary.
That should keep the email much shorter and make it easier for other
people to read. Thank you.

> On Thu., Sep. 30, 2021, 02:23,  wrote:
> > > 
> > > I mean that my goal is to save every new file, once and permanently.
> > > If on Monday I have file1 on my NAS, I want it to be saved on tape.
> > > On Tuesday I add file2: I want it to be saved on tape.
> > > On Wednesday, file1 is deleted from the NAS: it's a mistake, and I still
> > > want to keep file1 forever on tape (and be able to restore it).
> > > Every file that has existed once on my NAS must be saved permanently on
> > > a tape.
> > > 
> > Okay, I understand it like this:
> > You have done one full backup at the beginning. After that, you are doing
> > incremental backups every night to save every new file. If a tape is full,
> > it gets packed away as an archive and is never rewritten? Right?
> > Your primary goal is to save your current data and archive "deleted" files
> > forever?
> 
> Yes ! Exactly.

Okay!

> > I don't use tapes, but I think if you do incremental backups and you want
> > to restore something, you need to insert a large part of the tapes because
> > Bacula needs to read them. (I'm not sure about that.)
> > If Bacula has to do this, you will have a huge problem if you want to
> > restore a file in, let's say, 10 years.
>
> Not really. I can do a restore job, searching by filename. If the tape is
> not in the library, Bacula asks me to put it in... I've tested this
> procedure a few times, it works.

Okay, I trust you on this.

> > And to be honest, I really don't like the idea of doing incremental
> > backups endlessly without differential and full backups in between
> > (I wrote more about that later).
> > 
> > > Let me show you my (simplified) configuration:
> > >
> > > I mounted (NFS) my first NAS on, say, /mnt/NAS1/
> > > My fileset is:
> > >
> > >     FileSet {
> > >       Name = "NAS1"
> > >       File = /mnt/NAS1
> > >     }
> > >
> > > My job is:
> > >
> > >     Job {
> > >       Name = "BackupNAS1"
> > >       JobDefs = "DefaultJob"
> > >       Level = Incremental
> > >       FileSet = "NAS1"
> > >       # Accurate = yes  # Not clear what I should do here. Setting it to
> > >       # yes seemed to add many unwanted files - probably moved/renamed
> > >       # files?
> > >       Pool = BACKUP1
> > >       Storage = ScalarI3-BACKUP1  # this is my tape library
> > >       Schedule = NAS1Daily        # run every day
> > >     }
> > >
> > > with:
> > >
> > >     JobDefs {
> > >       Name = "DefaultJob"
> > >       Type = Backup
> > >       Level = Incremental
> > >       Client = lto8-fd
> > >       FileSet = "Test File Set"
> > >       Messages = Standard
> > >       SpoolAttributes = yes
> > >       Priority = 10
> > >       Write Bootstrap = "/var/lib/bacula/%c.bsr"
> > >     }
> > >
> > > My pool is:
> > >
> > >     Pool {
> > >       Name = BACKUP1
> > >       Pool Type = Backup
> > >       Recycle = no
> > >       AutoPrune = no
> > >       Volume Retention = 100 years
> > >       Job Retention = 100 years
> > >       Maximum Volume Bytes = 0
> > >       Maximum Volumes = 1000
> > >       Storage = ScalarI3-BACKUP1
> > >       Next Pool = BACKUP1
> > >     }
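One side note on the commented-out Accurate directive in the Job above: as far as I understand it, with Accurate = yes the File daemon compares the current file list against the catalog, so deleted files can be detected, but moved or renamed files then look like new files and get backed up again, which would explain the "unwanted files" you saw. A sketch, illustrative only:

```conf
# Sketch, not from the original configuration: the Accurate directive
# trades catalog comparison work for deleted/renamed-file tracking.
# With yes, renamed or moved files are re-backed-up as new files; for
# an append-only archive setup this mainly costs extra tape.
Job {
  Name = "BackupNAS1"
  JobDefs = "DefaultJob"
  Accurate = no   # set to yes only if you need deleted-file tracking
}
```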
> > 
> > To your .conf:
> > - Under JobDefs "DefaultJob" you declare FileSet = "Test File Set", while
> >   the job itself declares FileSet = "NAS1". If "NAS1" is your standard
> >   fileset, set it in the JobDefs, or omit the FileSet there entirely. As
> >   it stands, it is a little confusing.
> OK 
> > - You use the "Next Pool" resource in your Pool. The documentation states
> >   it belongs under Schedule > Run > Next Pool. Either way, it describes a
> >   migration job. I think that's not what you want to do?
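For illustration only, Next Pool given as a Run override in a Schedule might look like the sketch below. The schedule name matches the Job in the thread, but the target pool name and time are hypothetical, and recent Bacula versions also accept Next Pool directly in a Pool resource, so check the Director documentation for yours:

```conf
# Sketch, not from the original configuration: Next Pool supplied as a
# Run override instead of in the Pool resource. BACKUP1-Consolidated is
# a hypothetical target pool.
Schedule {
  Name = "NAS1Daily"
  Run = Level=Incremental Pool=BACKUP1 NextPool=BACKUP1-Consolidated daily at 23:05
}
```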
> I had tried a "virtual backup", so that all my incremental jobs merge into
> one periodically. I thought it was only virtual, only dealing with the
> catalog data, but it seems I can do that only by recreating a whole bunch
> of volumes.
> I have hundreds of terabytes of data and I don't want to do that! So I keep
> the incremental jobs running. Leaving aside my current problem, it's
> convenient for what I need...

Okay, I noted that you tried "virtual backups". As far as I know, a "virtual
full backup" is something where Bacula reads the incremental and differential
backups and the last full backup, and constructs a new full backup out of
them without sending all of the data over the network again. See:
https://www.baculasystems.com/incremental-backup-software/ This site states:
"[...]Virtual Full" in Bacula terminology). With this technique Bacula's
software calculates a new full backup from all differential and incremental
backups that followed the initial full backup, without the requirement of
another full data transfer over the network." I also took note that you have
a lot of data to manage.
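To make the trade-off concrete, here is a rough sketch of how a periodic Virtual Full is usually wired up. Names beyond those already in the thread (BACKUP1-Full, NAS1Monthly) are hypothetical, and note that the consolidation reads the existing volumes and writes new ones, so with hundreds of terabytes it is indeed a heavy operation:

```conf
# Sketch only - BACKUP1-Full and NAS1Monthly are hypothetical names.
Pool {
  Name = BACKUP1
  Pool Type = Backup
  Next Pool = BACKUP1-Full   # consolidated full backups land here
  Storage = ScalarI3-BACKUP1
}

Schedule {
  Name = "NAS1Monthly"
  # VirtualFull reads the last full plus the following incrementals from
  # the existing volumes and writes a new full - it is not catalog-only.
  Run = Level=VirtualFull first sun at 03:00
}
```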
> > If I were in your place, I would do it differently (assuming I got your
> > primary goal right, that you want to save your current data and archive
> > "dele

Re: [Bacula-users] Storage Daemon stopped with NFS mounted storage

2021-10-08 Thread Josh Fisher


On 10/8/21 9:26 AM, Shaligram Bhagat, Yateen (Nokia - IN/Bangalore) wrote:


Thanks John,

Is it advisable that I run bacula-sd on the remote ZFS-based filer
with RAID-configured disks?




It depends. The client-SD connection is by its nature a network hog. If 
everything is on the same network, then the SD writing to an NFS share or 
iSCSI device doubles the network load. If there is a SAN, that is not the 
case, although the sequential write speed of the RAID may be much greater 
than the network throughput. In general, I would say yes. But if you keep 
the SD where it is, I would still recommend iSCSI instead of NFS.



At the moment bacula-dir & bacula-sd run on a single host. The disk 
space from the filer is used through NFS mounts on the bacula host.


Yateen

*From:*Josh Fisher 
*Sent:* Monday, October 4, 2021 8:27 PM
*To:* bacula-users@lists.sourceforge.net
*Subject:* Re: [Bacula-users] Storage Daemon stopped with NFS mounted 
storage


On 10/2/21 2:52 AM, Shaligram Bhagat, Yateen (Nokia - IN/Bangalore) wrote:

Hi All,

We are using Bacula 9.4.4 with PostgreSQL for disk-based backup.

Disk space is available to Bacula storage daemon as an NFS mount
from a remote ZFS based filer that has RAID configured disks.

Recently one of the disks in the RAID array failed, degrading the
remote ZFS pool.

With NFS, file system caching is on the server hosting the ZFS 
filesystem. Additionally, there is data and metadata caching on the 
client. Data updates are asynchronous, but metadata updates are 
synchronous. Due to the synchronous metadata updates, both data and 
metadata updates persist across an NFS client failure. However, they do 
not persist across an NFS server failure, and that is what happened 
here, I think, although it is not clear why a single disk failure in a 
RAID array would cause an NFS failure.
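If the NFS mount is kept, the client-side failure behavior can at least be pinned down with mount options; a hypothetical /etc/fstab line follows (host, paths, and option values are illustrative, not from the thread - see nfs(5) for your platform):

```conf
# Hypothetical entry: 'hard' makes the client retry indefinitely during
# a server outage instead of returning I/O errors, and 'sync' forces
# synchronous data writes (safer, slower).
filer:/tank/bacula  /mnt/bacula-storage  nfs  hard,sync,vers=4.1  0  0
```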


In short, iSCSI will be less troublesome for use with Bacula SD, since 
the Bacula SD machine will be the only client using the share anyway.


Later we observed the Bacula storage daemon in a stopped state.

The question is: can the disturbance on the NFS-mounted disk (from
the remote ZFS-based filer) make bacula-sd stop?

If you mean bacula-sd crashed, then no, it should not crash if one of 
its storage devices fails.


Thanks

Yateen




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users

