Hello,


RHEL4
Kernel 2.6.9-67.0.22smp
Lustre-1.6.6

Lustre MDS report following error:
Jan 22 15:20:40 mds01.beowulf.cluster kernel: LustreError: 24680:0:(lov_request.c:692:lov_update_create_set()) error creating fid 0xeb79c9d sub-object on OST idx 4/1: rc = -28
Which I translate as: one of the OSTs (index 4/1) is full and has no space left on the device (-28 is ENOSPC).
The OSS seems consistent with that and reports:
Jan 22 15:21:15 storage08.beowulf.cluster kernel: LustreError: 23507:0:(filter_io_26.c:721:filter_commitrw_write()) error starting transaction: rc = -30
Which I translate as: a client tried to write to an existing file but could not, because the file system is read-only (-30 is EROFS).
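For reference, those negative rc values are ordinary Linux errno codes with the sign flipped; a quick sketch for decoding them with Python's standard errno module:

```python
import errno
import os

# Lustre logs errors as negative errno values; negate to look them up.
for rc in (-28, -30):
    code = -rc
    print(code, errno.errorcode[code], os.strerror(code))
# 28 ENOSPC "No space left on device"
# 30 EROFS  "Read-only file system"
```

This confirms the two translations above: rc = -28 is ENOSPC and rc = -30 is EROFS.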
The OST device is still mounted with the rw option:
/etc/mtab contains line - /dev/dm-8 /mnt/ddn-data/ost7 lustre rw 0 0
I tried to create new objects on this OST (by explicitly striping to it with `lfs setstripe -c1 -i4 testdir`), but it seems that Lustre automatically skips it and goes to the next one.
I think Lustre has decided that OST idx 4 is full and is preventing clients from writing to it, allowing read-only access only; is that correct?


Now the main question is why Lustre thinks that OST(idx4) is full?
The df command on the OSS mounting this OST says otherwise:

df -h
/dev/dm-8             3.6T  3.0T  465G  87% /mnt/ddn-data/ost7

df -i
/dev/dm-8            244195328 1055690 243139638    1% /mnt/ddn-data/ost7

So it seems that there are still 465 GB and plenty of inodes left on that OST.
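A rough cross-check of the df figures (a sketch, using the numbers from the output above; df's Use% is only approximate because of reserved blocks):

```python
# Figures from the df -h output above: 3.6T total, 465G available.
total_gib = 3.6 * 1024  # 3.6 TiB expressed in GiB
free_gib = 465

# Percentage used, rounded as df would display it.
used_pct = round(100 * (1 - free_gib / total_gib))
print(used_pct)  # ~87, matching the 87% df reports
```

So the numbers are self-consistent: roughly 13% of the blocks are genuinely free, which makes a plain out-of-space condition hard to explain.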
Is it possible that this OST has many orphaned objects which take up all the available space?
Is there a way to reclaim that space?

Regards,

Wojciech

--
tunefs.lustre --print /dev/dm-8
checking for existing Lustre data: found CONFIGS/mountdata
Reading CONFIGS/mountdata

Target:     ddn_data-OST0004
Index:      4
Lustre FS:  ddn_data
Mount type: ldiskfs
Flags:      0x2
              (OST )
Persistent mount opts: errors=remount-ro,extents,mballoc



_______________________________________________
Lustre-discuss mailing list
[email protected]
http://lists.lustre.org/mailman/listinfo/lustre-discuss
