On Tue, 21 Mar 2006 23:41:22 -0800, Hans Reiser <[EMAIL PROTECTED]> wrote:

>  It may be that we need to port some of
> the block allocation optimizations from V3 to V4 (Jeff's work) to help
> with 90% full filesystems.

Speaking of that, I've read about a localized performance problem of
reiserfs 3 on backuppc's mailing list (reiserfs is otherwise similar in
performance to xfs for that task). I wonder if it was ever reported
to you, as suggested on that list...

http://sourceforge.net/mailarchive/message.php?msg_id=8646808

My understanding is that backuppc is hitting reiserfs3's hard-link
worst case.

Backuppc creates a huge pool of all versions of all files from all
backups, compressed, organized using MD5 hashing (handling collisions
of course), and hardlinked from their different backup views. [Some
metadata is stored separately, so that several files with the same
content but different metadata can still be shared on disk. But I
digress.] At night, a sweeping process removes backups that are too
old (according to user policy), and may check whether some more
background sharing/compression can be done.
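To make the pooling scheme concrete, here is a minimal sketch of the idea in Python. The function names, the two-character fan-out layout, and the collision handling (omitted) are my assumptions, not backuppc's actual code:

```python
import hashlib
import os

def pool_path(pool, digest):
    # Fan out by the first hex digits so no single directory grows huge
    # (the exact layout backuppc uses is an assumption here).
    return os.path.join(pool, digest[:2], digest)

def add_to_pool(pool, backup_file):
    # Hash the file content; real backuppc also handles MD5 collisions,
    # which this sketch omits.
    with open(backup_file, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    target = pool_path(pool, digest)
    os.makedirs(os.path.dirname(target), exist_ok=True)
    if not os.path.exists(target):
        # First time we see this content: link it into the pool.
        os.link(backup_file, target)
    else:
        # Duplicate content: replace the backup's copy with a hard link
        # to the pooled inode, so the data is stored only once.
        os.remove(backup_file)
        os.link(target, backup_file)
    return target
```

The effect is that every backup view is just a tree of directory entries pointing at shared pool inodes, which is exactly what stresses the allocator when old views are swept away.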

If I remember correctly, v3 places directory entries and their
corresponding inodes next to each other on disk. When hardlinks are
created, new directory entries are created, pointing to the same
inode. If the first directory entry is removed, the inode may no
longer be stored near any of the entries pointing to it.

Since backuppc is routinely removing directory entries in FIFO order,
this is almost guaranteed to happen every time. Hence a very bad
inode distribution on disk after some time...
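The mechanism can be shown with plain POSIX link semantics (this only demonstrates that the inode outlives its original entry; the on-disk placement consequence is specific to reiserfs v3 and not observable from this sketch):

```python
import os
import tempfile

d = tempfile.mkdtemp()
first = os.path.join(d, "backup.0")   # oldest backup view
later = os.path.join(d, "backup.1")   # newer backup view

with open(first, "w") as f:
    f.write("pooled content")
os.link(first, later)   # second directory entry, same inode

ino = os.stat(first).st_ino
os.remove(first)        # FIFO expiry drops the oldest entry first

# The inode survives, now reachable only through the newer entry --
# but on reiserfs v3 its blocks were allocated near the entry that
# was just removed, so locality is lost.
assert os.stat(later).st_ino == ino
assert os.stat(later).st_nlink == 1
```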

I don't know what xfs does exactly (blocks of preallocated inodes?),
but it does better in this case.

Hope it helps,
Pierre.
