The NameNode stays in safe mode because it cannot reach the reported-block
ratio threshold, since some files are corrupted.
You can run hadoop dfsadmin -safemode leave
to leave safe mode first, and then hadoop fsck / -move or hadoop fsck / -delete
to move or delete the inconsistent files.
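For example, a minimal recovery sequence might look like the following (this
assumes the hadoop binary is on your PATH and you run it as the HDFS superuser;
-move salvages what it can into /lost+found, while -delete discards the
corrupt files outright):

```shell
# Force the NameNode out of safe mode so the filesystem becomes writable.
hadoop dfsadmin -safemode leave

# Move the blocks of corrupt files into /lost+found for later inspection.
hadoop fsck / -move

# Or, if the data is expendable, delete the corrupt files instead:
# hadoop fsck / -delete

# Re-run fsck to confirm the filesystem is reported HEALTHY again.
hadoop fsck /
```

Note that these commands require a running HDFS cluster, and -delete is
irreversible, so prefer -move unless you are sure the files are disposable.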

-Yifeng

On Jan 10, 2012, at 6:24 PM, V_sriram wrote:

> 
> I am using hadoop 0.20.append and hbase 0.90.0. I uploaded some data into
> Hbase and then killed HMaster and Namenode for evaluation purposes. After
> this I added some more data to Hbase, and I could see it in the hbase
> shell.
> 
> Now when I started the Namenode, I am facing problems. The log says that the
> name node is in safe mode and I am not able to add or delete the contents as
> it is in Safemode.
> 
> Also when I just ran
> 
> ./bin/hadoop fsck /
> 
> I get,
> 
> ............Status: HEALTHY
>  Total size: 12034 B (Total open files size: 4762 B)
>  Total dirs: 22
>  Total files: 12 (Files currently being written: 3)
>  Total blocks (validated): 12 (avg. block size 1002 B) (Total open file blocks (not validated): 3)
>  Minimally replicated blocks: 12 (100.0 %)
>  Over-replicated blocks: 0 (0.0 %)
>  Under-replicated blocks: 0 (0.0 %)
>  Mis-replicated blocks: 0 (0.0 %)
>  Default replication factor: 3
>  Average block replication: 3.0
>  Corrupt blocks: 0
>  Missing replicas: 0 (0.0 %)
>  Number of data-nodes: 3
>  Number of racks: 1
> 
> The filesystem under path '/' is HEALTHY
> 
> But when I run ./bin/hadoop fsck / -openforwrite
> 
> I get,
> 
>  Total size: 16796 B
>  Total dirs: 22
>  Total files: 15
>  Total blocks (validated): 15 (avg. block size 1119 B)
> 
>   CORRUPT FILES: 2
> 
>  Minimally replicated blocks: 13 (86.666664 %)
>  Over-replicated blocks: 0 (0.0 %)
>  Under-replicated blocks: 0 (0.0 %)
>  Mis-replicated blocks: 0 (0.0 %)
>  Default replication factor: 3
>  Average block replication: 2.6
>  Corrupt blocks: 0
>  Missing replicas: 0 (0.0 %)
>  Number of data-nodes: 3
>  Number of racks: 1
> 
> The filesystem under path '/' is CORRUPT
> 
> along with the info of corrupt blocks.
> 
> Also tried using
> 
> ./bin/hadoop fsck / -move
> 
> But even after that I get the same list of corrupt blocks. Any idea how
> to tackle this and recover my contents?
> 
> -- 
> View this message in context: 
> http://old.nabble.com/Hadoop-corrupt-blocks-after-killing-name-node----during-adding-Hbase-data-tp33109897p33109897.html
> Sent from the HBase User mailing list archive at Nabble.com.
> 
