Tapio Lehtonen wrote on 2018-05-15 14:14:36 +0300 [[BackupPC-users] Out of 
inodes, how to recover?]:
> Backing up a single desktop computer, BackupPC host ran out of
> inodes at 15 million inodes used. Reading old messages from this
> list I was surprised to learn BackupPC uses only one (or two?)
> inodes per backed up file, no matter how many hard links are used.

I believe the details depend on the BackupPC version you are using (4.x
vs. 3.x).

> I am thus at a loss to explain how come 15 million inodes is not
> enough for backing up this single desktop computer.

An explanation will only help if something is actually going wrong - which
might be the case (perhaps something is being backed up that should have
been excluded?).

It may be worth noting, though, that each *directory* in each backup will
also use up one inode, and possibly another one for an attrib file (in 3.x,
attrib files are pooled; in 4.x, I believe they aren't, but I'm not sure).
And it's more like "one inode per backed-up version of each file", so
quickly changing data together with a large number of retained backups
might be an explanation for the high inode count.
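If you want to see where the inodes are actually going, the standard tools
are enough. A generic sketch - TOPDIR below is an assumption, point it at
your real pool location:

```shell
# TOPDIR is an assumed location -- adjust to your actual pool path.
TOPDIR=${TOPDIR:-.}

# Free vs. used inodes on the file system holding the pool:
df -i "$TOPDIR"

# Every file, directory and attrib file in a backup tree costs one
# inode, so a tree's inode footprint is roughly its entry count:
find "$TOPDIR" | wc -l
```

Running the find over a single $TopDir/pc/host/num tree tells you how many
inodes that one backup ties up, which should make it obvious whether 15
million is plausible for your retention settings.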

> [...]
> Question: The host is already out of inodes on backuppc partition,
> can it still remove old backups now obsolete since lower
> FullKeepCnt?

For 3.x, I can only imagine that you might need some few free inodes
for log files, the server unix domain socket, and a backup of a
'backups' file. Aside from that, expiring a backup is just deleting
files and directories - and for the directories, you immediately
regain a free inode. If nothing else helps, you can get yourself back
into business by deleting (part of) a directory tree that BackupPC
is supposed to expire. That will delete some directories and thus
free up inodes. For the actual expiration, BackupPC will look at some
meta data and then recursively delete the tree structure without
further inspecting attrib files, so it won't even notice that part of
the tree is already missing. To be safe, start somewhere below the root
and choose the correct backup (in particular, not a full backup with
dependent incrementals):

        rm -r $TopDir/pc/host/num/f%2fmy%2fshare%2fname/ffoo

Again, you might not even need to worry about this. It may just work
by itself.

For 4.x, that is a good question. I could imagine that the reverse delta
storage might cause problems on a full file system (whether storage- or
inode-wise), but I'm only guessing here. Deleting the oldest backups (i.e.
not intermediate ones) is probably safe. If I remember correctly, 4.x also
ships a BackupPC_backupDelete tool that knows how to merge the deltas;
that would be safer than deleting by hand.

> [...]
> From what I learned from reading discussions, copying the pool to
> larger disk with more inodes is not feasible.

That is not true. It will take time, and it may require some thought.
cp, tar, rsync et al. may or may not work for you; BackupPC_copyPool likely
will, but it is rather experimental, as in "not widely tested" (please ask
me if you're interested). All of that concerns 3.x. A 4.x pool should be
easily copied with the usual tools without any problems.

> [...]
> So it is time to
> start planning a new backuppc host and use lower bytes-per-inode or
> use a filesystem with dynamic inodes.

Whether for a new host or just a new file system to copy the existing pool
to - I would recommend dynamic inode allocation, if only to avoid running
into the same problem again at some later point in time.
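For reference, here is how that choice looks at mkfs time - the device
name is a placeholder, and the exact bytes-per-inode ratio is something
you'd tune to your typical file sizes:

```shell
# Illustrative only -- /dev/sdX1 is a placeholder device.
# XFS allocates inodes dynamically, so it cannot run out of inodes
# while free space remains:
#   mkfs.xfs /dev/sdX1
# ext4 inodes are fixed at mkfs time; -i lowers bytes-per-inode
# (here: one inode per 4 KiB of space):
#   mkfs.ext4 -i 4096 /dev/sdX1

# Before copying the pool over, sanity-check the inode budget of the
# new file system (MOUNT is an assumed mount point):
MOUNT=${MOUNT:-.}
df -i "$MOUNT"
```

The df -i check is worth doing before the copy rather than after: if the
IFree column is not comfortably above your current inode usage, the new
file system will just reproduce the problem.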

Hope that helps.


BackupPC-users mailing list
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
