After studying the code of ffs_reallocblks() for a while, it occurs to me
that the on-the-fly defragmentation of an FFS file (done on a per-file
basis) only takes place at the end of the file, and only when the
previous logical blocks have all been laid out contiguously on the disk
(see also cluster_write()).  These seem like significant limitations of
the FFS defragger.  I wonder: if the file was not allocated contiguously
when it was first created, how can it find contiguous space later, unless
a lot of files are deleted in the meantime?
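If my reading is right, the precondition could be sketched like this (a
minimal illustration, not the actual kernel code; the function and array
names are mine, not from sys/ufs/ffs):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Simplified sketch of the condition I believe the reallocation path
 * requires: the file's previous logical blocks must already map to
 * consecutive disk addresses before clustering/reallocation is tried.
 * `daddr` is a hypothetical array of disk block numbers for logical
 * blocks 0..n-1 of a file.
 */
static int
blocks_are_contiguous(const long *daddr, size_t n)
{
	for (size_t i = 1; i < n; i++)
		if (daddr[i] != daddr[i - 1] + 1)
			return 0;	/* gap found: no contiguous run */
	return 1;			/* contiguous: candidate for clustering */
}
```

So a single hole anywhere in the earlier blocks would disqualify the file,
which is why the limitation looks severe to me.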

I hope someone can confirm or correct my understanding.  It would be even
better if someone could suggest a way to improve defragmentation, if the
FFS defragger really is this limited.

BTW, if I copy all the files from a filesystem to a freshly created
filesystem, will the files be stored more contiguously?  Why?

Any help or suggestion is appreciated.

-Zhihui



