Hi Martin,

On Thu, Oct 16, 2025 at 18:48, Martin Simmons <[email protected]>
wrote:

> I think there is some subtle bug with the 'o' fileset accurate option, so
> you could try accurate = ms5 to see if that fixes it (yes, performance
> will be degraded).
>

That would mean that when a file is moved and/or renamed, it would be backed
up again, right?
I would prefer not to :)

> Also, note that the mtimeonly option is ignored when you use accurate = yes
> in the job (unless you include 'M' in the fileset accurate options).
>

I don't understand exactly what the "M" option does.
The doc says "M Look mtime/ctime like normal incremental backup",
but accurate looks at those values anyway, right?
Can you explain this a little bit more?
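
Just to check my understanding of your suggestion: would combining 'M' with
mtimeonly mean an Options block like the one below? (This is only my guess
at what you mean; the other directives are copied from my existing fileset.)

```
Options {
  signature = MD5
  accurate = Mmso5   # guess: 'M' added to my existing mso5 string
  mtimeonly = yes
  onefs = no
}
```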

>
> The output of
>
> show fileset=CHMPROD
>
> might be useful to check that your globalFilesetExclude.inc doesn't have
> any
> bad side effects.
>
I guess it's OK?
Here's the output:

FileSet: name=CHMPROD IgnoreFileSetChanges=1
      O e
      WD /*/.Spotlight-V100
      WD /*/.Trashes
      WD /*/.fseventsd
      WD /*/.TemporaryItems
      WD /*/.DocumentRevisions-V100
      WD /*/.AppleDouble
      WD /*/@eaDir
      WD /*/#recycle
      WD /*/@Recycle
      WD /*/@SynoResource
      WF /*/.DS_Store
      WF /*/._*
      WF /*/.apdisk
      WF /*/Icon?
      WF /*/Thumbs.db
      WF /*/desktop.ini
      WF /*/~$*
      WF *.tmp
      WF *.swp
      N
      O MCmso5:mf
      N
      I /mnt/BIGNAS1/W4STORAGE1/CHMPROD
      N
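
By the way, while investigating I sketched a quick decoder for the lstat
strings stored in the catalog, to cross-check the .bvfs_decode_lstat output
further down this thread. This is only my reconstruction of the encoding (my
assumptions: space-separated fields, each an unsigned integer written in
base 64 with the standard A-Za-z0-9+/ alphabet, most-significant digit
first, no padding; the field order is inferred from the bvfs output). It
reproduces st_ino=29831, st_dev=64, and st_size=342949872 for the first
file:

```python
# Sketch of a decoder for the lstat strings shown by ".bvfs_decode_lstat".
# Assumption: standard base64 alphabet, MSB-first digits, no padding.
B64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def decode_field(token: str) -> int:
    """Decode one space-separated token as a base-64 unsigned integer."""
    value = 0
    for ch in token:
        value = value * 64 + B64.index(ch)
    return value

# Field order inferred from the bvfs output below; later tokens
# (LinkFI, flags, ...) are ignored by zip().
FIELDS = ("st_dev", "st_ino", "st_mode", "st_nlink", "st_uid", "st_gid",
          "st_rdev", "st_size", "st_blksize", "st_blocks",
          "st_atime", "st_mtime", "st_ctime")

def decode_lstat(lstat: str) -> dict:
    """Map each field name to its decoded integer value."""
    return {name: decode_field(tok)
            for name, tok in zip(FIELDS, lstat.split())}

# The lstat string of the file as stored by the first (full) job:
stat = decode_lstat("BA HSH IH/ B QC Bk A UcP/w gAA CjiA "
                    "Bn1jG+ Bb4uFA BmPdi8 A A C")
print(stat["st_ino"], stat["st_dev"], stat["st_size"])
```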


> __Martin
>
>
Many thanks for your help, Martin!

Samuel



>
> >>>>> On Wed, 15 Oct 2025 18:58:31 +0200, Samuel Zaslavsky said:
> >
> > Hello everyone,
> >
> > Some tricky problem here — let me try to explain as briefly as possible.
> >
> > I have a job and a fileset as below:
> >
> > Job {
> >   Name = "CHMPROD"
> >   FileSet = "CHMPROD"
> >   JobDefs = NewJob
> >   Level = Incremental
> >   Accurate = yes
> >   Schedule = IncrementalForEver
> > }
> >
> > FileSet {
> >   Name = "CHMPROD"
> >   IgnoreFileSetChanges = yes
> >   Include {
> >     @/opt/bacula/etc/bacula-dir/globalFilesetExclude.inc
> >     Options {
> >       signature = MD5
> >       accurate = mso5
> >       mtimeonly = yes
> >       onefs = no
> >     }
> >     File = "/mnt/BIGNAS1/W4STORAGE1/CHMPROD"
> >   }
> > }
> > /mnt/BIGNAS1/W4STORAGE1/CHMPROD is an NFS mount from a NAS.
> > The job is incremental, run every night.
> >
> > On the NAS, I moved CHMPROD from one volume to another.
> > I did everything so that /mnt/BIGNAS1/W4STORAGE1/CHMPROD looks the same
> > as before: the path is unchanged, the files are unchanged — only the
> > inodes and such changed, of course...
> > But one file (named 'remontage clip H.264.mp4') was backed up again
> > after the move from one volume to another.
> >
> > If I run:
> > SELECT * FROM "file" WHERE "filename" = 'remontage clip H.264.mp4'
> > I get
> > fileid fileindex jobid pathid filename deltaseq markid lstat md5
> > 4310935 48 2070 60831 remontage clip H.264.mp4 0 0 BA HSH IH/ B QC Bk A
> > UcP/w gAA CjiA Bn1jG+ Bb4uFA BmPdi8 A A C B+W+bb8HrWfN75LLslzwHg
> > 7161684 1 30768 60831 remontage clip H.264.mp4 0 0 BY Jmz IH/ B QC Bk A
> > UcP/w gAA CjiA BowAeW Bb4uFA Bo3QSx A A C B+W+bb8HrWfN75LLslzwHg
> >
> > jobid 2070 is the first (full) job, jobid 30768 is the job from the day
> > after the volume move.
> >
> > Let's dig a little deeper in bconsole:
> >
> > *.bvfs_decode_lstat lstat="BA HSH IH/ B QC Bk A UcP/w gAA CjiA Bn1jG+
> > Bb4uFA BmPdi8 A A C"
> > st_nlink=1
> > st_mode=33279
> > perm=-rwxrwxrwx
> > st_uid=1026
> > st_gid=100
> > st_size=342949872
> > st_blocks=669824
> > st_ino=29831
> > st_ctime=1715329212
> > st_mtime=1541595456
> > st_atime=1742090686
> > st_dev=64
> > LinkFI=0
> >
> >
> > *.bvfs_decode_lstat lstat="BY Jmz IH/ B QC Bk A UcP/w gAA CjiA BowAeW
> > Bb4uFA Bo3QSx A A C"
> > st_nlink=1
> > st_mode=33279
> > perm=-rwxrwxrwx
> > st_uid=1026
> > st_gid=100
> > st_size=342949872
> > st_blocks=669824
> > st_ino=39347
> > st_ctime=1759315121
> > st_mtime=1541595456
> > st_atime=1757415318
> > st_dev=88
> > LinkFI=0
> >
> >
> > So the difference between the two "file" rows is in st_ino, st_ctime,
> > st_atime, and st_dev.
> > In my job definition, I added mtimeonly = yes, so st_ctime and st_atime
> > shouldn't be taken into account.
> > I also added accurate = mso5, so I guess the inode and device shouldn't
> > be either.
> > My goal here is to save files only once, and back them up again only if
> > the content (size or MD5...) has really changed.
> >
> > So according to my (limited) understanding, nothing should have been
> > backed up again… (?)
> >
> > The questions are now:
> >
> > - Have I done or understood something wrong?
> > - Why was this file backed up again?
> > - Why weren’t the other files in the same directory?
> > - How can I ensure files are not backed up again in similar cases?
> >
> > In fact, I moved several folders (each corresponding to a job like
> > CHMPROD) inside the NAS, and I got very inconsistent results — but most
> > of the time, most files are backed up again.
> > I intend to move hundreds of terabytes, so I need to understand how to
> > do it without re-backing up such a huge amount of data.
> >
> > Thanks a lot for your help!!
> >
> > Sam
> >
>
_______________________________________________
Bacula-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/bacula-users
