On Thu, 2009-03-19 at 14:33 -0400, Aaron Everett wrote:
> Hello all,

Hi,
> We are running 1.6.6 with a shared mgs/mdt and 3 ost's. We run a set
> of tests that write heavily, then we review the results and delete the
> data. Usually the load is evenly spread across all 3 ost's. I noticed
> this afternoon that the load does not seem to be distributed.

Striping, file count, and file size all affect OST distribution. Are any
of the data involved striped? Are you writing very few large files
before you measure distribution?

> OST0000 has a load of 50+ with iowait of around 10%
>
> OST0001 has a load of <1 with >99% idle
>
> OST0002 has a load of <1 with >99% idle

What does lfs df say before and after a test that produces the above
results? Does it bear out even use amongst the OSTs before and after
the test?

> df confirms the lopsided writes:

lfs df [-i] from a client is usually more illustrative of use. As I say
above, if you can quiesce the filesystem for the test, do an
lfs df; lfs df -i before the test and again after. Assuming you were
successful in quiescing, you should see the change to the OSTs that
your test effected.

> OST0000:
>
> Filesystem            Size  Used Avail Use% Mounted on
>
> /dev/sdb1             1.2T  602G  544G   53% /mnt/fortefs/ost0

What's important is what it looked like before the test too. For all we
can tell from what you've posted so far, your test could have, for
example, written a single object (i.e. file) of nearly 300G.

b.
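In case it helps, here is a sketch of the checks suggested above, run
from a client. The mount point /mnt/lustre and the test paths are
placeholders for your own; note that lfs setstripe option syntax may
differ slightly on 1.6.x, so check lfs help setstripe on your version.

```shell
# Per-OST space and object (inode) usage; run before and after the
# test to see which OSTs the writes actually landed on:
lfs df /mnt/lustre
lfs df -i /mnt/lustre

# Check whether a given file (or a directory's default) is striped,
# and over how many OSTs its objects are spread:
lfs getstripe /mnt/lustre/testdir/output.dat

# If the test writes a few very large files, a directory default
# stripe count of -1 (stripe across every OST) spreads each file's
# objects over all OSTs instead of landing it on one:
lfs setstripe -c -1 /mnt/lustre/testdir
```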
_______________________________________________
Lustre-discuss mailing list
[email protected]
http://lists.lustre.org/mailman/listinfo/lustre-discuss
