I have a file system with some missing/corrupt blocks. However, running hdfs
fsck -delete also fails with errors. How do I get around this?
Thanks
John
[hdfs@metallica yarn]$ hdfs fsck -delete /rpdm/tmp/ProjectTemp_461_40/TempFolder_4/data00012_00.dld
Connecting to namenode via
...
        (SelectChannelEndPoint.java:410)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
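(Before reaching for -delete, it can help to first enumerate exactly which files the NameNode considers corrupt. A rough sketch using the stock Apache Hadoop fsck flags; the path "/" below is illustrative, not taken from the original report:)

```shell
# List only the corrupt file blocks under the given path.
hdfs fsck / -list-corruptfileblocks

# For more detail on a specific file: show its blocks and replica locations.
hdfs fsck /path/to/suspect/file -files -blocks -locations
```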
From: John Lilley [mailto:john.lil...@redpoint.net]
Sent: Tuesday, March 04, 2014 6:08 AM
To: user@hadoop.apache.org
Subject: Need help: fsck FAILs, refuses to clean up corrupt fs
Ah... found the answer. I had to manually leave safe mode to delete the
corrupt files.
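(For anyone hitting the same error: the NameNode refuses deletions while it is in safe mode, so the sequence that worked can be sketched roughly as follows. This assumes the standard hdfs CLI run as the HDFS superuser; the "/" path is illustrative:)

```shell
# Check whether the NameNode is currently in safe mode (read-only).
hdfs dfsadmin -safemode get

# Manually leave safe mode so deletions are permitted.
hdfs dfsadmin -safemode leave

# Re-run fsck with -delete; it should now be able to remove the corrupt files.
hdfs fsck / -delete
```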
John
From: John Lilley [mailto:john.lil...@redpoint.net]
Sent: Tuesday, March 04, 2014 9:33 AM
To: user@hadoop.apache.org
Subject: RE: Need help: fsck FAILs, refuses to clean up corrupt fs
More information:
        (SelectChannelEndPoint.java:410)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)