Bill,
My previous statement wasn't correct. In V4, each directory in a backup
tree consumes 2 inodes, one for the directory and the other for the (empty)
attrib file. In V3, each directory in a backup tree consumes 1 inode for
the directory, and everything else is hardlinked, including the attrib
file.
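If you want to check this on a live system, something like the following
should show it (a rough sketch; $TOPDIR, HOST, and NNN are placeholders for
your top-level directory, a host name, and a backup number):

    # V4: one inode per directory plus one for its attrib file
    find $TOPDIR/pc/HOST/NNN -type d | wc -l       # directories
    find $TOPDIR/pc/HOST/NNN -name attrib | wc -l  # attrib files, one per dir

    # V3: only files with a link count of 1 consume an inode beyond the
    # pool's; everything hardlinked into the pool shares the pool's inode
    find $TOPDIR/pc/HOST/NNN -type f -links 1 | wc -l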
So when you migrate a V3 backup, the number of inodes needed to store the
backup trees will double, as you observe. The pool inode usage shouldn't
change much, but with lots of backups the backup-tree inodes dominate.
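As a rough sanity check (illustrative only; it ignores the per-backup
refCnt files and other small overhead, and assumes the standard $TOPDIR
layout), the post-migration usage should come out near:

    # expected V4 inode usage: 2 inodes per backup-tree directory + pool files
    DIRS=$(find $TOPDIR/pc -type d | wc -l)
    POOL=$(find $TOPDIR/cpool -type f | wc -l)
    echo "expected inodes ~ $(( 2 * DIRS + POOL ))"

which you can compare against df -i.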
In a new V4 installation the inode usage will be somewhat lower, since V4
incrementals don't store the entire backup tree (only directories that
contain changes get created). In a series of backups where the directory
contents change every backup, including the pool file, V4 will use 3 inodes
per backup directory (directory, attrib file, pool file), while V3 will use
2 (directory, plus the {attrib, pool} link). So V4's backup-tree inode
usage is 1.5x - 2x that of V3.
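To put made-up numbers on that: with 10,000 directories all changing every
backup, V4 uses 10,000 x 3 = 30,000 inodes per backup versus 10,000 x 2 =
20,000 for V3 (1.5x); when only the tree itself is recreated and nothing
inside changes, it's 10,000 x 2 = 20,000 versus 10,000 x 1 = 10,000 (2x).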
I'll add a mention of the inode usage to the documentation.
Craig
On Wed, Mar 29, 2017 at 6:27 PM, Bill Broadley <b...@broadley.org> wrote:
> On 03/29/2017 06:13 PM, Craig Barratt wrote:
> > Bill,
> >
> > Sure, I agree that multiple hardlinks only consume one inode. Each v3
> > pool file (when first encountered in a v3 backup) should get moved to
> > the new v4 pool, so that shouldn't increase the number of inodes. The
> > per-directory backup storage in v4 should be more efficient; I'd
> > expect one less inode per v4 directory. v4 does add some reference
> > count files per backup (128), but that's a rounding error.
> >
> > Can you look in the V3 pool? E.g., is $TOPDIR/cpool/0/0/0 empty?
>
> Yes:
> root@fs1:/backuppc/cpool/0/0/0# ls -al
> total 116
> drwxr-x--- 2 backuppc backuppc 110592 Mar 28 01:01 .
> drwxr-x--- 18 backuppc backuppc 4096 Jan 24 2013 ..
> root@fs1:/backuppc/cpool/0/0/0#
>
>
> > It could be that it didn't get cleaned if you turned off the V3 pool
> > before BackupPC_nightly ran the next time. If so, I'd expect the old
> > v3 pool to be full of v3 attrib files, each with one link (i.e., no
> > longer used).
>
> Currently:
> root@fs1:~# df -i /backuppc/
> Filesystem Inodes IUsed IFree IUse% Mounted on
> /dev/md3 238071808 53427304 184644504 23% /backuppc
>
> It was around 11% with V3; it's now 23%, which still agrees with the
> plot I made.
>
> I've not added hosts, changed the number of incremental or full backups,
> or made any other changes that should increase the inode count.
>
> As each backup (like #1405) was migrated, its directory would be renamed
> to .old, migrated, and then removed. So there would be a steep increase
> in inodes, then a drop, but never back to the original number.
>
> I have two BackupPC servers, each with a different pool of clients; both
> went from approximately 11% of the filesystem's inodes in use to 22%.