It's all V4.
It's not a capacity problem.
It's a performance problem.
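Just to address the inode question from the quoted message below: a
quick check along these lines (or simply df -i on the pool filesystem)
is enough to rule that out; the path is only a placeholder for wherever
the pool actually lives.

    import os

    POOL = "/var/lib/backuppc"  # placeholder; point this at your actual TopDir/pool

    st = os.statvfs(POOL)
    print(f"free space : {st.f_bavail * st.f_frsize / 1e9:.1f} GB")
    print(f"free inodes: {st.f_ffree} of {st.f_files}")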
While trying to figure out why it is so slow, I looked at what it is
actually doing, which brought me back to what I was initially asking:
I'm not sure where the content in backup 2846 is coming from before it
gets deleted again (because of the exclusions).
Anyway "new entries in metadata" (represented by files) means handling
millions of files and directories as well.
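To make that concrete, here is a toy sketch (deliberately not
BackupPC's real metadata format, just an illustration): even when a
backup is "copied" purely as small per-file records, the loop still
runs once per file, so millions of files mean millions of filesystem
operations.

    import os

    def clone_as_metadata(prev_root: str, new_root: str) -> int:
        """Toy 'reference copy': one tiny record per file, no data copied."""
        entries = 0
        for dirpath, _dirs, files in os.walk(prev_root):
            rel = os.path.relpath(dirpath, prev_root)
            os.makedirs(os.path.join(new_root, rel), exist_ok=True)
            # One small write per file: no data is duplicated, but the
            # loop still runs once per file.
            with open(os.path.join(new_root, rel, "attrib.txt"), "w") as attrib:
                for name in files:
                    size = os.path.getsize(os.path.join(dirpath, name))
                    attrib.write(f"{name}\t{size}\n")
                    entries += 1
        return entries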
But again, that was not the question. The question is why the new
backup set is being populated with entries that are then deleted again
right afterwards (because of the exclusion list).
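Roughly, the two orderings look like this (an illustrative sketch of
the pattern I think I'm seeing, not BackupPC's actual code): the first
variant creates and then deletes every excluded entry, the second never
creates them at all.

    from fnmatch import fnmatch

    def populate_then_prune(prev_entries, excludes):
        # What the new backup appears to do: take everything from the
        # previous backup, then walk it again and delete excluded paths.
        new = dict(prev_entries)                      # millions of entries
        for path in list(new):
            if any(fnmatch(path, pat) for pat in excludes):
                del new[path]                         # second pass, per entry
        return new

    def exclude_while_populating(prev_entries, excludes):
        # What I would have expected: never create the excluded entries.
        return {path: meta for path, meta in prev_entries.items()
                if not any(fnmatch(path, pat) for pat in excludes)}

    entries = {f"/data/cache/file{i}": None for i in range(5)}
    entries["/data/keep.txt"] = None
    print(populate_then_prune(entries, ["/data/cache/*"]))
    print(exclude_while_populating(entries, ["/data/cache/*"]))

Both calls end up with only /data/keep.txt, but only the second avoids
creating and re-scanning the excluded entries in the first place.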
On 2023-10-19 13:17, Guillermo Rozas wrote:
> Are you using V3 or V4? According to my understanding, this step
> should require barely extra space. In V3 the "copy" is hard-linking
> and in V4 it's just new entries in metadata files, in neither case an
> actual copy of the files in the pool is done (because of
> deduplication). Maybe you're running out of inodes?
>
>> However 2846 seems to be populated with the content from 2845
>> (including the stuff I have excluded).
>> It looks like it's first copying all the stuff from 2845 (even the
>> excluded path) and then later tries to remove it again from 2846.
>> Which is also taking forever in the original example as it's a
>> directory tree with millions of files.
>> Also in the original example the disk isn't large enough, so we're
>> not even making it to that stage.
>
> Best regards,
> Guillermo
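For anyone following along, the "barely extra space" point above is
easy to demonstrate in isolation (a generic sketch, nothing
BackupPC-specific): a hard link adds a directory entry and bumps the
link count, but allocates no new data blocks.

    import os, tempfile

    with tempfile.TemporaryDirectory() as d:
        original = os.path.join(d, "pool_file")
        link = os.path.join(d, "backup_entry")
        with open(original, "wb") as f:
            f.write(b"x" * 1024 * 1024)       # 1 MiB of "pool" data

        os.link(original, link)               # the V3-style "copy"

        st = os.stat(original)
        print("link count:", st.st_nlink)     # 2: one inode, two names
        print("same inode:", os.path.samefile(original, link))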
_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/