On Thu, 2009-01-22 at 12:11 -0600, Chad Kerner wrote:
> Hello,
>
> We are running lustre 1.6.6 and are seeing a weird error on space
> usage. The filesystem is not anywhere near full, but writes are failing
> if they hit OST 23.
>
> If I do an lfs setstripe -i 23 chad, and then do
> # dd if=/dev/zero of=chad
> dd: writing to `chad': No space left on device
> 26+0 records in
> 25+0 records out
> #
>
> The actual device is fairly full.
> # df /lustre/home/ost_h_24
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/ost_h_24 564172088 509452368  26061444  96% /lustre/home/ost_h_24
At 4% free, unless you have changed the "reserved space" on the OSTs'
filesystems (see the ops manual), you are into the space that a normal
user is not allowed to write to (and gets ENOSPC when they try). By
default, 5% of every device is reserved for root.

That said, you really are running that OST quite full. Historically
(I'm not sure if this still applies -- maybe one of our ext3 experts can
comment), if you run an ext3 filesystem more than about 80% full, you
start to see performance degradation.

Are you getting any ENOSPC (-28) errors other than by forcing a write
to that full OST? I ask because, unless directed to use a particular
OST (as your example does), the MDS should avoid allocating objects on
a full OST. If it isn't avoiding it, that's a bug.

b.
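For what it's worth, a sketch of how one might inspect and adjust the
reserved-block percentage on an ext3-backed OST. The device path is
taken from Chad's df output above; the tune2fs invocations assume the
OST is unmounted or that you accept changing a live filesystem, and the
final line just redoes the "percent free" arithmetic from his numbers.

```shell
# Show total and root-reserved block counts on the OST's backing device
# (device path borrowed from the df output above; adjust for your OST).
tune2fs -l /dev/mapper/ost_h_24 | grep -E 'Block count|Reserved block count'

# Lower the root-reserved space from the default 5% to 1%, freeing
# roughly 4% of the device for normal users (run as root).
tune2fs -m 1 /dev/mapper/ost_h_24

# Sanity check: percent of the device actually free, computed from df's
# 1K-block figures (564172088 total, 26061444 available in this case).
echo '564172088 26061444' | awk '{ printf "%.1f%% free\n", 100 * $2 / $1 }'
```

With ~4.6% free and 5% reserved, that df output is exactly consistent
with ordinary users getting ENOSPC while df still shows space available.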
_______________________________________________
Lustre-discuss mailing list
[email protected]
http://lists.lustre.org/mailman/listinfo/lustre-discuss
