Thanks very much for this suggestion. The new fsck is not dying - it quickly passed the point where the original fscks barfed (using e2fsprogs-1.41.10.sun2).
But the new fsck seems to be going extremely slowly - it ran all night and is still running. This is very abnormal, as fscks on the OSTs in this filesystem usually take on the order of 30 minutes. I'd like to understand better what fsck is doing at this point. It seems to be spending a lot of time in Pass 1D, cloning multiply-claimed blocks, but there has been no output from fsck in many hours now.

1) fsck.ext4 is using 100% of a 2.2GHz core. The fsck has been CPU-bound for a long time (many hours). We're not used to seeing this.

2) Using iostat, I can see the I/O rates are very low (tens of KB/s read and write).

3) Using strace, I can see a pattern of read()/lseek()/write()/lseek() being repeated over and over. I guess this should not be surprising if fsck is really cloning multiply-claimed blocks.

4) Using pstack, I can see fsck.ext4 is in ext2fs_block_iterate2() - it looks like a lot of time is being spent in ext2fs_new_block().

I'd like to understand what fsck is doing that takes so much CPU. The OST was pretty full (~90%)... Is it computationally expensive to clone multiply-claimed blocks on a filesystem this full? I'm also wondering whether I should let this continue or not.

I appended a bit of the strace output. From the offset argument to the lseek() calls, it looks like data is being copied from one side of the spindles to the other(?).

Thanks,
Craig Prescott
UF HPC Center

Sample strace output:

...
read(3, "\313R\354\222\205%\16\227\221,\226\35\317\22\331,0\312\262\330\252\314wI\2\345^\305\222d\273$"..., 4096) = 4096
lseek(3, 36574076928, SEEK_SET) = 36574076928
write(3, "\35z\354 \252\370\24\317\323\236VL]NF;\335\303\16w&\n\312\236F\0\3664RK\366\304"..., 4096) = 4096
lseek(3, 7424726908928, SEEK_SET) = 7424726908928
...

Colin Faber wrote:
> Hi,
>
> Try upgrading to the latest e2fsprogs package.
> 1.41.12.2
>
> -cf
>
>
> On 12/01/2010 03:20 PM, Craig Prescott wrote:
>> I forgot to add - our affected OSS is running Lustre 1.8.4, and
>> e2fsprogs-1.41.10.sun2-0redhat. `uname -r` gives
>>
>> 2.6.18-194.3.1.0.1.el5_lustre.1.8.4
>>
>> Thanks,
>> Craig Prescott
>> UF HPC Center
>>
>>
>> Craig Prescott wrote:
>>> Hi,
>>>
>>> We are trying to fsck an OST that was not unmounted
>>> cleanly. But fsck is dying with this error after making some
>>> corrections:
>>>
>>> [r...@xxxxxx tmp]# fsck -f -y /dev/arc1-lv2/OST0003
>>> ...
>>> High 16 bits of extent/index block set
>>> CLEARED.
>>> Inode 306602015 has an invalid extent node (blk 512, lblk 641536)
>>> Clear? yes
>>>
>>> Warning... fsck.ext4 for device /dev/arc1-lv2/OST0003 exited with
>>> signal 11.
>>>
>>> It is repeatable.
>>>
>>> So we are stuck. We need to fsck our OST, but fsck is dying. Can
>>> anyone give us some advice on how to proceed?
>>>
>>> Thanks,
>>> Craig Prescott
>>> UF HPC Center
>>> _______________________________________________
>>> Lustre-discuss mailing list
>>> [email protected]
>>> http://lists.lustre.org/mailman/listinfo/lustre-discuss
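For what it's worth, a toy model of why Pass 1D cloning can become CPU-bound on a nearly full filesystem. This is only an illustration of one plausible mechanism, not the actual e2fsprogs code: the assumption here is that each cloned block triggers a roughly linear scan of the block bitmap for a free bit, so as the free fraction shrinks, the expected scan length per allocation grows sharply.

```python
# Toy model (an assumption, not real e2fsprogs internals): free-block
# search by linear bitmap scan, as a sketch of why cloning many
# multiply-claimed blocks on a ~90% full filesystem burns CPU.
import random

def new_block(bitmap, goal):
    """Scan linearly from 'goal' for the first clear bit (simplified)."""
    n = len(bitmap)
    steps = 0
    for i in range(n):
        idx = (goal + i) % n
        steps += 1
        if not bitmap[idx]:
            return idx, steps
    return None, steps   # no free block found

def clone_cost(total_blocks, used_fraction, clones, seed=42):
    """Total bitmap-scan steps to allocate 'clones' new blocks."""
    random.seed(seed)
    bitmap = [random.random() < used_fraction for _ in range(total_blocks)]
    total_steps = 0
    goal = 0
    for _ in range(clones):
        idx, steps = new_block(bitmap, goal)
        total_steps += steps
        if idx is None:
            break
        bitmap[idx] = True   # the newly cloned block is now in use
        goal = idx           # next search starts near the last allocation
    return total_steps

# Scan cost per allocation grows roughly like 1/(free fraction):
for frac in (0.5, 0.9, 0.99):
    print(frac, clone_cost(100_000, frac, 1_000))
```

If something like this is what ext2fs_new_block() is doing under the hood, a 90%-full OST with many blocks to clone would spend most of its time walking mostly-set bitmap words in memory, which would match the 100% CPU and near-idle disks you are seeing.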
