On Tue, Sep 1, 2009 at 12:05 AM, Les Mikesell <[email protected]> wrote:

> Jim Leonard wrote:
> > Les Mikesell wrote:
> >> With backuppc the issue is not so much fragmentation within a file as
> >> the distance between the directory entry, the inode, and the file
> >> content.  When creating a new file, filesystems generally attempt to
> >> allocate these close to each other, but when you link an existing file
> >> into a new directory, that obviously can't be done so you end up with a
> >> lot of long seeks when you try to traverse directories picking up the
> >> inode info.
> >
> > For some filesystem implementations, this is true.  For others, it is
> > not, due to judicious use of caching, preloading, and lookahead.
>
> Why would any filesystem 'judiciously' cache things for unlikely use
> patterns?
>

Specifically, to serialize reads and writes.  ZFS does this heavily.  Put
simply: why make 10 trips when you can wait 1 second and make 1 trip?
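
To make that concrete, here is a toy sketch of the coalescing idea in
Python (hypothetical names, nothing to do with ZFS internals): queue up
many small writes and flush them in a single system call, which is the
same trade ZFS makes with its transaction groups.

    import os

    # Toy write coalescing: buffer many small writes, then flush them
    # in one os.write() call -- one trip to the disk instead of many.
    # Illustration of the batching principle only, not ZFS code.
    class BatchedWriter:
        def __init__(self, path, flush_bytes=1 << 20):
            self.fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
            self.buf = []
            self.size = 0
            self.flush_bytes = flush_bytes

        def write(self, data):
            self.buf.append(data)
            self.size += len(data)
            if self.size >= self.flush_bytes:
                self.flush()

        def flush(self):
            if self.buf:
                os.write(self.fd, b"".join(self.buf))
                self.buf, self.size = [], 0

        def close(self):
            self.flush()
            os.close(self.fd)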

I think that BackupPC will naturally 'fragment' files but not blocks.  It is
true that when a backup is first written, its files are likely to be laid out
close together, but as that backup expires while the hardlinks remain, the
surviving files are naturally no longer contiguous with each other.  The only
fix is to read and re-write the files to the end of the disk, and then read
and write them back to the beginning.  Again, this is file fragmentation, not
block fragmentation.
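
If anyone wants to experiment, here is a minimal sketch of that
read-and-rewrite idea (a hypothetical helper, untested on a real pool):
rewriting through the same inode -- truncate, then write the data back --
keeps every hardlink intact while giving the allocator a chance to hand
the file contiguous blocks again.  There is no crash safety and it holds
the whole file in memory, so treat it as an illustration only.

    import os

    def rewrite_in_place(path):
        # Read the file, truncate it, and write the same bytes back
        # through the same inode.  All hardlinks still point at this
        # inode, so none of them break; the filesystem simply gets a
        # chance to re-allocate the data blocks contiguously.
        with open(path, "r+b") as f:
            data = f.read()
            f.seek(0)
            f.truncate()
            f.write(data)
            f.flush()
            os.fsync(f.fileno())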