On Tue, 2006-03-07 at 11:10, David Brown wrote:

> The depth isn't really the issue.  It is that they are created under one
> tree, and hardlinked to another tree.  The normal FS optimization of
> putting the inodes of files in a given directory near each other breaks
> down, and the directories in the pool end up containing files with
> widely scattered inode numbers.
> 
> Just running a 'du' on my pool takes several seconds for each leaf
> directory, very heavily thrashing the drive.

If it hurts, don't do it.  The only operation in BackupPC that
traverses directories is the nightly run that removes expired
links, and it only has to go through the pool.  Most operations
look things up by name.
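
For context on why lookups don't need a traversal: pool entries are
addressed by a digest of the file contents, so checking for a match
means computing a hash and testing one short, fixed path.  A minimal
sketch of that idea in Python (the md5 digest, pool location, and
three-level fan-out here are illustrative, not BackupPC's exact
scheme):

  import hashlib
  import os

  POOL_ROOT = "/var/lib/backuppc/pool"   # hypothetical pool location

  def pool_path(data: bytes) -> str:
      # Map file contents to a pool path via a content digest.
      digest = hashlib.md5(data).hexdigest()
      # Fan out on the first hex digits so each directory stays small.
      return os.path.join(POOL_ROOT, digest[0], digest[1], digest[2], digest)

  def already_pooled(data: bytes) -> bool:
      # Existence check by name -- no directory walk required.
      return os.path.exists(pool_path(data))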

> I still say it is going to be a lot easier to change how backuppc works
> than it is going to be to find a filesystem that will deal with this very
> unusual use case well.

All you'll do by trying is lose the atomic nature of the hardlinks.
You aren't ever going to have the data at the same moment you know all
of its names, which is what you'd need to store them close together.
Just throw in lots of RAM and let caching do the best it can.
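
To make the atomicity point concrete: the data is written once, and
each additional reference to it is just another hardlink created at
the moment a backup needs it -- there is never a point where all the
eventual names are known up front.  A rough sketch of that
write-then-link pattern (the paths and helper name are made up, not
BackupPC's actual code):

  import os

  def store_with_pooling(tmp_file: str, pool_file: str, backup_file: str) -> None:
      # Paths are illustrative; real BackupPC naming is more involved.
      if os.path.exists(pool_file):
          # Content already pooled: add another name for the same inode.
          os.link(pool_file, backup_file)
          os.unlink(tmp_file)
      else:
          # New content: move into place, then link a second name into the pool.
          os.rename(tmp_file, backup_file)   # atomic on the same filesystem
          os.link(backup_file, pool_file)    # second name, same data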

-- 
  Les Mikesell
   [EMAIL PROTECTED]



