Re: [Bacula-users] Bacula Restore Session

2021-11-30 Thread Josip Deanovic
On Tuesday 2021-11-30 10:27:08 Josip Deanovic wrote:
> On Tuesday 2021-11-30 15:29:42 Gary R. Schmidt wrote:
> > Also, be aware that you have created a completely new file system, all
> > the inodes will be different, and the access/modify/change/birth times
> > on all the directories will be set to when the new directory was
> > created  - i.e. when it started to be populated by the restore.
> > 
> > And everything will have different dev and rdev values.
> 
> The documentation says that Bacula decides which files to back up for
> Incremental and Differential backups by comparing the change (st_ctime)
> and modification (st_mtime) times of the file to the time the last
> backup completed.
> 
> Could it be that the "Accurate" option changes that behavior when used
> in the Job resource?
> 
> If that is the case then it could be controlled by the "Accurate" option
> used in the FileSet resource.

Just did a simple test.
What changes during the restore is the change time (st_ctime).
That's why Bacula decides to back up restored files with the next
Incremental backup.

There is an option, "mtimeonly=yes", which could be used in the FileSet's
Options block. This would solve the problem Neil currently has to deal with.

I would suggest removing that option once it is no longer needed,
as it could have undesirable effects in the long run.

Adding this option to the FileSet resource, or removing it, will not
trigger promotion of an Incremental job to a Full unless the Include or
Exclude lists are changed as well.
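
For example, the relevant part of the FileSet might look roughly like this
(untested sketch; the name, path and signature are only placeholders to be
adapted to the existing configuration):

  FileSet {
    Name = "zfs-array-fs"            # placeholder name
    Include {
      Options {
        signature = MD5
        mtimeonly = yes              # ignore st_ctime, compare st_mtime only
      }
      File = /tank/restored-data     # placeholder path
    }
  }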


Regards!

-- 
Josip Deanovic




Re: [Bacula-users] Bacula Restore Session

2021-11-30 Thread Josip Deanovic
On Tuesday 2021-11-30 15:29:42 Gary R. Schmidt wrote:
> Also, be aware that you have created a completely new file system, all 
> the inodes will be different, and the access/modify/change/birth times 
> on all the directories will be set to when the new directory was
> created  - i.e. when it started to be populated by the restore.
> 
> And everything will have different dev and rdev values.

The documentation says that Bacula decides which files to back up for
Incremental and Differential backups by comparing the change (st_ctime)
and modification (st_mtime) times of the file to the time the last
backup completed.

Could it be that the "Accurate" option changes that behavior when used
in the Job resource?

If that is the case then it could be controlled by the "Accurate" option
used in the FileSet resource.


Regards!

-- 
Josip Deanovic




Re: [Bacula-users] Bacula Restore Session

2021-11-29 Thread Gary R. Schmidt

On 30/11/2021 15:13, Bill Arlofski via Bacula-users wrote:

> On 11/29/21 12:07, Neil Balchin wrote:
>> Recently, I had to reconfigure a large ZFS disk array.  Prior to the
>> reconfiguration, it was shared to my bacula-fd instance across NFS and I was
>> doing daily incrementals. The pool was configured with a scratch pool of
>> blank tapes and I kept the backups without any pruning or recycling, really
>> more of an archive than a backup.
>>
>> After reconfiguring the disk system I ran a restore job and all my files
>> went back into place with permissions perfectly intact.
>>
>> 3 days later the scheduled backup job started running and, instead of
>> recognizing the existing files, it ran the backup like a Full, as if the
>> files had never been backed up before.
>>
>> I have to do this 4 more times on different disk arrays. How can I avoid
>> re-backing up all the files I've just restored?
>>
>> Neil B
>
> Hello Neil,
>
> First, before running a new job after the restore, or when editing a FileSet,
> I recommend always using the bconsole commands:
>
> * @tall /tmp/file_list.txt  (opens a log of bconsole i/o)
>
> Note: If @tall does not work, use the older @tee command.
>
> * estimate listing job=xxx level=incremental  (or differential if that is
> what you need)
>
> * @tall  (with no filename, closes the log and stops logging bconsole i/o)
>
> Then open that file in your favorite editor (vim of course :) and have a look.
>
> This is so that you can see what Bacula thinks it needs to back up. This can
> save you a lot of time (and media if writing to tapes).
>
> Next, what you have described "should not happen"™  :)
>
> If possible, I would check to make sure that the attributes (ownership,
> permissions, etc.) of the restored files match what was backed up.  The Linux
> 'stat' command is great for this.  If you can run this on some files after
> they are backed up and before the array is re-built, you might be able to
> identify what is happening.
>
> Then, you can add "Accurate = yes" to your backup Job, and the "accurate = ..."
> setting in your FileSet's Options{} block to tell Bacula which attributes to
> compare when deciding if a file needs to be backed up.
>
> See the "accurate=" option in the "FileSet Resource" section of the Main
> Manual for full information.
>
> We have seen cases where a customer had a script that ran each Saturday which
> (much to their surprise) 'touched' every file in a directory tree, so the
> next day's Incremental backed up every file as if it were a Full, because the
> modification or change time (I forget which it was) had changed, and that is
> one of the default things Bacula looks at when determining if a file needs to
> be backed up.
>
> I think the default options (if not specified) are 'pcms' - I can never
> remember this for sure though. :)

Also, be aware that you have created a completely new file system, all 
the inodes will be different, and the access/modify/change/birth times 
on all the directories will be set to when the new directory was created 
- i.e. when it started to be populated by the restore.


And everything will have different dev and rdev values.

Since it's a new file system, it all gets backed up anew.

Cheers,
GaryB-)




Re: [Bacula-users] Bacula Restore Session

2021-11-29 Thread Bill Arlofski via Bacula-users
On 11/29/21 12:07, Neil Balchin wrote:
> Recently, I had to reconfigure a large ZFS disk array.  Prior to the
> reconfiguration, it was shared to my bacula-fd instance across NFS and I was
> doing daily incrementals. The pool was configured with a scratch pool of
> blank tapes and I kept the backups without any pruning or recycling, really
> more of an archive than a backup.
>
> After reconfiguring the disk system I ran a restore job and all my files went
> back into place with permissions perfectly intact.
>
> 3 days later the scheduled backup job started running and, instead of
> recognizing the existing files, it ran the backup like a Full, as if the
> files had never been backed up before.
>
> I have to do this 4 more times on different disk arrays. How can I avoid
> re-backing up all the files I've just restored?
>
>
> Neil B

Hello Neil,

First, before running a new job after the restore, or when editing a FileSet, I 
recommend always using the bconsole commands:

* @tall /tmp/file_list.txt  (opens a log of bconsole i/o)

Note: If @tall does not work, use the older @tee command.

* estimate listing job=xxx level=incremental  (or differential if that is what 
you need)

* @tall  (with no filename, closes the log and stops logging bconsole i/o)

Then open that file in your favorite editor (vim of course :) and have a look.

This is so that you can see what Bacula thinks it needs to back up. This can 
save you a lot of time (and media if writing to tapes).
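
A session might look roughly like this (the job name is only a placeholder):

  *@tall /tmp/file_list.txt
  *estimate listing job=Backup-zfs-array level=incremental
  *@tall

Afterwards, a quick "wc -l /tmp/file_list.txt" from a shell gives a rough idea 
of how much Bacula would pick up before anything is written to tape.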


Next, what you have described "should not happen"™  :)

If possible, I would check to make sure that the attributes (ownership, 
permissions, etc.) of the restored files match what was backed up.  The Linux 
'stat' command is great for this.  If you can run this on some files after 
they are backed up and before the array is re-built, you might be able to 
identify what is happening.
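
For example (the paths are hypothetical; any file that exists before the 
rebuild will do):

  # before the array is rebuilt
  stat /tank/data/somefile > /tmp/somefile.before

  # after the restore
  stat /tank/data/somefile > /tmp/somefile.after
  diff /tmp/somefile.before /tmp/somefile.after

As noted elsewhere in this thread, on a file restored onto a freshly created 
file system you should expect at least the Device, Inode and Change fields to 
differ.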

Then, you can add "Accurate = yes" to your backup Job, and the "accurate = ..." 
setting in your FileSet's Options{} block to tell Bacula which attributes to 
compare when deciding if a file needs to be backed up.

See the "accurate=" section in the "FileSet Resource" section of the 
Main Manual for full information on this option.
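
As a rough sketch (the resource names are placeholders, and the option letters 
should be checked against the "accurate=" documentation before relying on them):

  # in the existing Job resource:
  Job {
    Name = "Backup-zfs-array"      # placeholder
    Accurate = yes
    # ... Client, FileSet, Storage, Pool, etc. as already configured ...
  }

  # in the FileSet's Options block:
  Options {
    signature = MD5
    accurate = ms                  # e.g. compare mtime and size only, not ctime
  }

Leaving 'c' (st_ctime) out of the comparison should keep Bacula from re-backing 
up files whose only change since the restore is a new change time.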

We have seen cases where a customer had a script that ran each Saturday which 
(much to their surprise) 'touched' every file in a directory tree, so the next 
day's Incremental backed up every file as if it were a Full, because the 
modification or change time (I forget which it was) had changed, and that is 
one of the default things Bacula looks at when determining if a file needs to 
be backed up.

I think the default options (if not specified) are 'pcms' - I can never 
remember this for sure though. :)


Hope this helps!


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com





[Bacula-users] Bacula Restore Session

2021-11-29 Thread Neil Balchin
Recently, I had to reconfigure a large ZFS disk array.  Prior to the 
reconfiguration, it was shared to my bacula-fd instance across NFS and I was 
doing daily incrementals. The pool was configured with a scratch pool of blank 
tapes and I kept the backups without any pruning or recycling, really more of 
an archive than a backup.

After reconfiguring the disk system I ran a restore job and all my files went 
back into place with permissions perfectly intact.

3 days later the scheduled backup job started running and, instead of 
recognizing the existing files, it ran the backup like a Full, as if the files 
had never been backed up before.

I have to do this 4 more times on different disk arrays. How can I avoid 
re-backing up all the files I've just restored?


Neil B



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users