Andreas Dilger <[EMAIL PROTECTED]> writes:
> On Nov 24, 2006 16:12 +0100, Goswin von Brederlow wrote:
>> I'm running Lustre 1.4.6 on a vanilla 2.6.15.7 kernel and am trying to
>> decipher some Lustre error messages. The setup has 4 servers with two
>> 2TB OSTs each and a default of 4 stripes per file. The filesystem is
>> 83% full, so there is over 1TB of free space.
>
> Try "lfs df" on a client (if that exists in 1.4.6, not sure), or
> alternatively "grep '[0-9]' /proc/fs/lustre/osc/*/kbytes*", to see the
> free space per OST.
>
> Also check "lfs df -i" or "grep '[0-9]' /proc/fs/lustre/osc/*/files*" to
> see free inodes per OST.
There is sufficient space now. The OSTs differ slightly in size, but
none has less than 70G free. Plenty of inodes too.
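
For the record, checking that boils down to something like this (a quick
sketch only; the per-OST /proc file name kbytesfree is from memory and
may differ between Lustre versions):

  # free space in GB per OST, as seen from a client
  for f in /proc/fs/lustre/osc/*/kbytesfree; do
      echo "$(basename $(dirname "$f")): $(( $(cat "$f") / 1048576 )) GB free"
  done

plus "lfs df -i" for the inode counts.
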
Our guess, given your info, is that at the time of the error a big job
must have been using up all the space, and that when it failed it
cleaned up and freed the space again.
>> On the server I get:
>>
>> [612824.958378] LustreError:
>> 17495:0:(lov_request.c:621:lov_update_create_set()) error creating fid
>> 0x6a79400 sub-object on OST idx 5/4: rc = -28
>
> /usr/include/asm/errno.h says -28 is "No space left on device", and the
> message reports that OST idx 5 is the one out of space.
Thanks. So the rc values really are just negated errno codes. I wasn't
sure about that, given the amount of free space left in total.
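
For anyone decoding these later, a quick way to translate such an rc
(a minimal sketch, assuming a Linux client with Python available):

  python -c 'import os; print(os.strerror(28))'

which prints "No space left on device" here.
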
Regards,
Goswin
_______________________________________________
Lustre-discuss mailing list
[email protected]
https://mail.clusterfs.com/mailman/listinfo/lustre-discuss