I think there is a subtle bug with the 'o' fileset accurate option, so you
could try accurate = ms5 to see whether that fixes it (yes, performance will
be degraded).

Also, note that the mtimeonly option is ignored when you use accurate = yes in
the job (unless you include 'M' in the fileset accurate options).
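
For reference, the suggested change applied to the quoted Options clause would
look something like this (a sketch only; the other directives are kept exactly
as in your fileset):

```
Options {
  signature = MD5
  accurate = ms5    # drop the suspect 'o'; compare mtime (m), size (s), MD5 (5)
  mtimeonly = yes   # only honored with accurate = yes if 'M' is added to the accurate options
  onefs = no
}
```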

The output of 

show fileset=CHMPROD

might be useful to check that your globalFilesetExclude.inc doesn't have any
bad side effects.
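
To double-check which lstat fields really differ between the two catalog
entries, you can decode the two strings yourself. The sketch below
re-implements Bacula's base64-style token encoding in Python; the alphabet and
the field order are inferred from the .bvfs_decode_lstat output quoted below,
not taken from the official source, so treat it as illustrative:

```python
# Sketch of Bacula's base64-like lstat token encoding (inferred from the
# .bvfs_decode_lstat output in the quoted mail, not the official code).
B64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def b64_to_int(tok):
    """Decode one space-separated Bacula base64 token to an integer."""
    n = 0
    for ch in tok:
        n = n * 64 + B64.index(ch)
    return n

# Field order as it appears to map onto the decoded output below.
FIELDS = ["st_dev", "st_ino", "st_mode", "st_nlink", "st_uid", "st_gid",
          "st_rdev", "st_size", "st_blksize", "st_blocks",
          "st_atime", "st_mtime", "st_ctime", "LinkFI"]

def decode_lstat(lstat):
    """Decode a whole lstat string into a field -> value dict."""
    toks = lstat.split()
    return dict(zip(FIELDS, (b64_to_int(t) for t in toks)))

old = decode_lstat("BA HSH IH/ B QC Bk A UcP/w gAA CjiA Bn1jG+ Bb4uFA BmPdi8 A A C")
new = decode_lstat("BY Jmz IH/ B QC Bk A UcP/w gAA CjiA BowAeW Bb4uFA Bo3QSx A A C")

# Fields that differ between the two catalog entries:
diff = [k for k in FIELDS if old[k] != new[k]]
print(diff)  # -> ['st_dev', 'st_ino', 'st_atime', 'st_ctime']
```

As expected, st_mtime, st_size, and the rest are identical; only the device,
inode, atime, and ctime changed with the volume move.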

__Martin


>>>>> On Wed, 15 Oct 2025 18:58:31 +0200, Samuel Zaslavsky said:
> 
> Hello everyone,
> 
> Some tricky problem here — let me try to explain as briefly as possible.
> 
> I have a job and a fileset as below:
> 
> Job {
>   Name = "CHMPROD"
>   FileSet = "CHMPROD"
>   JobDefs = NewJob
>   Level = Incremental
>   accurate = yes
>   Schedule = IncrementalForEver
> }
> 
> FileSet {
>   Name = "CHMPROD"
>   Ignore FileSet Changes = yes
>   Include {
>     @/opt/bacula/etc/bacula-dir/globalFilesetExclude.inc
>     Options {
>       signature = MD5
>       accurate = mso5
>       mtimeonly = yes
>       onefs = no
>     }
>     File = "/mnt/BIGNAS1/W4STORAGE1/CHMPROD"
>   }
> }
> /mnt/BIGNAS1/W4STORAGE1/CHMPROD is an NFS mount from a NAS.
> The job is incremental, run every night.
> 
> On the NAS, I moved CHMPROD from one volume to another.
> I did everything so that /mnt/BIGNAS1/W4STORAGE1/CHMPROD looks the same as
> before: path is unchanged, files are unchanged — only some changes in inode
> or so, of course...
> But one file (named 'remontage clip H.264.mp4') was backed up again after
> the move from one volume to another.
> 
> If I run:
> SELECT * FROM "file" WHERE "filename" = 'remontage clip H.264.mp4'
> I get
> fileid=4310935  fileindex=48  jobid=2070   pathid=60831  deltaseq=0  markid=0
>   filename='remontage clip H.264.mp4'
>   lstat='BA HSH IH/ B QC Bk A UcP/w gAA CjiA Bn1jG+ Bb4uFA BmPdi8 A A C'
>   md5='B+W+bb8HrWfN75LLslzwHg'
> fileid=7161684  fileindex=1   jobid=30768  pathid=60831  deltaseq=0  markid=0
>   filename='remontage clip H.264.mp4'
>   lstat='BY Jmz IH/ B QC Bk A UcP/w gAA CjiA BowAeW Bb4uFA Bo3QSx A A C'
>   md5='B+W+bb8HrWfN75LLslzwHg'
> 
> jobid 2070 is the first (full) job, jobid 30768 is the job from the day
> after the volume move.
> 
> Let’s dig a little bit in bconsole:
> 
> *.bvfs_decode_lstat lstat="BA HSH IH/ B QC Bk A UcP/w gAA CjiA Bn1jG+
> Bb4uFA BmPdi8 A A C"
> st_nlink=1
> st_mode=33279
> perm=-rwxrwxrwx
> st_uid=1026
> st_gid=100
> st_size=342949872
> st_blocks=669824
> st_ino=29831
> st_ctime=1715329212
> st_mtime=1541595456
> st_atime=1742090686
> st_dev=64
> LinkFI=0
> 
> 
> *.bvfs_decode_lstat lstat="BY Jmz IH/ B QC Bk A UcP/w gAA CjiA BowAeW
> Bb4uFA Bo3QSx A A C"
> st_nlink=1
> st_mode=33279
> perm=-rwxrwxrwx
> st_uid=1026
> st_gid=100
> st_size=342949872
> st_blocks=669824
> st_ino=39347
> st_ctime=1759315121
> st_mtime=1541595456
> st_atime=1757415318
> st_dev=88
> LinkFI=0
> 
> 
> So the difference between the two "file" is in st_ino, st_ctime, st_atime,
> and st_dev.
> In my job definition, I added mtimeonly = yes, so st_ctime and st_atime
> shouldn’t be taken into account.
> I also added accurate = mso5, so I guess inode or device shouldn’t either.
> My goal here is to save files only once, and back them up again only if
> the content (size or MD5...) has really changed.
> 
> So according to my (limited) understanding, nothing should have been backed
> up again… (?)
> 
> The questions are now:
> 
> - Have I done or understood something wrong?
> - Why was this file backed up again?
> - Why weren’t the other files in the same directory?
> - How can I ensure files are not backed up again in similar cases?
> 
> In fact, I moved several folders (each corresponding to a job like CHMPROD)
> inside the NAS, and I got very inconsistent results — but most of the time,
> most files are backed up again.
> I intend to move hundreds of terabytes, so I need to understand how to do
> it without re-backing up such a huge amount of data.
> 
> Thanks a lot for your help!!
> 
> Sam
> 


_______________________________________________
Bacula-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/bacula-users
