On Thursday 16 August 2007 14:03:56 you wrote:
> On Thu, 2007-08-16 at 13:44 +0200, Bernd Schubert wrote:
> > Hi,
> >
> > on our test-cluster I managed to get a very fragmented filesystem
> > (e2fsck reports about 9%); mb_history returns a cr value of 5. The
> > documentation says that "3 – fs is very fragmented (worst at
> > resource consumption)", but it doesn't tell what 5 means. May I
> > assume 5 is even worse than 3?
> >
> > So now the question is what to do about this fragmentation. On our
> > test cluster with 100% garbage this is not a problem, but on a
> > customer system it certainly is. There was an old ext2 defrag
> > program; would it be possible to extend it to support journal and
> > extents data? Where can I find some documentation about the ldiskfs
> > format (ext3 + extents)? I would like to know which blocks need to
> > be moved.
>
> Is the filesystem filled with lots of small files? Are you working
> with the latest mballoc? It is supposed to be very good at avoiding
> fragmentation.

Yes and yes :) This is lustre-1.6.1 and mballoc3 is enabled. However, I
accidentally enabled extents debugging (#define AGRESSIVE_TEST in
ext3_extents.h). Overnight fsstress was running (using the same tests
the citi.umich.edu NFSv4 developers use); it creates, deletes, etc. a
lot of small files. This, combined with the extents debugging, seems to
be the cause of the fragmentation. I will run fsstress again tonight on
a fresh filesystem; let's see tomorrow how it looks. Anyway, I'm glad
to have found a reproducible way to create fragmentation :)
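For reference, the reason that define hurts so much: it artificially
caps how many extent records fit into one extent-tree block, so the
tree splits constantly and data gets scattered across the disk. Below
is a minimal standalone sketch of the effect, not the actual kernel
code; the 12-byte header and record sizes come from the on-disk ext3
extents format, and the cap of 6 entries is the value I have seen in
later mainline ext4, so the ldiskfs patch may use a different number:

#include <stdio.h>

/* On-disk ext3 extents format: 12-byte extent header, 12-byte record. */
#define EXT_HDR_SIZE     12
#define EXT_RECORD_SIZE  12

/* How many extent records one tree block can hold; with the debug
 * define set, the kernel caps this at a handful of entries. */
static int ext_space_block(int blocksize, int aggressive_test)
{
        int size = (blocksize - EXT_HDR_SIZE) / EXT_RECORD_SIZE;

        if (aggressive_test && size > 6)
                size = 6;       /* artificially tiny, debug only */
        return size;
}

int main(void)
{
        printf("4k block, normal: %d extent records\n",
               ext_space_block(4096, 0));
        printf("4k block, debug:  %d extent records\n",
               ext_space_block(4096, 1));
        return 0;
}

So a file that would normally be described by a single tree block needs
dozens of them, and every split means a fresh block allocation
somewhere else on the disk.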
> To know the on-disk layout of the blocks belonging to a specific
> file, you can use the FIEMAP patches available in bug 10555 (use the
> ext3 patch and filefrag).
>
> CFS is also working on a free-space defragmenter (bug 10827) which
> should be able to solve such problems.

Thanks, going to look into this.
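For the archives: filefrag with that patch essentially issues one ioctl
per file and prints one line per extent, so many extents on a small
file means fragmentation. Below is a sketch using the FIEMAP interface
as it was later merged into mainline (FS_IOC_FIEMAP from linux/fs.h,
struct fiemap from linux/fiemap.h); the patch attached to bug 10555 may
use different names and constants:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

int main(int argc, char **argv)
{
        struct fiemap *fm;
        int fd, i;

        if (argc != 2) {
                fprintf(stderr, "usage: %s <file>\n", argv[0]);
                return 1;
        }
        fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* Room for up to 64 extent records in a single call. */
        fm = calloc(1, sizeof(*fm) + 64 * sizeof(struct fiemap_extent));
        if (!fm)
                return 1;
        fm->fm_start = 0;
        fm->fm_length = ~0ULL;          /* map the whole file */
        fm->fm_extent_count = 64;

        if (ioctl(fd, FS_IOC_FIEMAP, fm) < 0) {
                perror("FS_IOC_FIEMAP");
                return 1;
        }

        for (i = 0; i < (int)fm->fm_mapped_extents; i++)
                printf("extent %d: logical %llu, physical %llu, length %llu\n",
                       i,
                       (unsigned long long)fm->fm_extents[i].fe_logical,
                       (unsigned long long)fm->fm_extents[i].fe_physical,
                       (unsigned long long)fm->fm_extents[i].fe_length);

        free(fm);
        close(fd);
        return 0;
}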
Thanks again,
Bernd

--
Bernd Schubert
Q-Leap Networks GmbH

_______________________________________________
Lustre-discuss mailing list
[email protected]
https://mail.clusterfs.com/mailman/listinfo/lustre-discuss