I am using hadoop 0.20.append and hbase 0.90.0. I uploaded some data into
HBase and then killed the HMaster and Namenode for evaluation purposes.
After this I added some more data to HBase, and I could see it in the hbase
shell.

Now that I have restarted the Namenode, I am facing problems. The log says
that the namenode is in safe mode, and I am not able to add or delete
content while it stays in safe mode.
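
For what it's worth, I can confirm the state with dfsadmin, and as I
understand it the same tool can force the namenode out of safe mode, though
I assume that is risky while blocks are still unaccounted for:

./bin/hadoop dfsadmin -safemode get
./bin/hadoop dfsadmin -safemode leave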

Also, when I ran

./bin/hadoop fsck /

I get,

............Status: HEALTHY
 Total size:    12034 B (Total open files size: 4762 B)
 Total dirs:    22
 Total files:   12 (Files currently being written: 3)
 Total blocks (validated):      12 (avg. block size 1002 B) (Total open file blocks (not validated): 3)
 Minimally replicated blocks:   12 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    3
 Average block replication:     3.0
 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)
 Number of data-nodes:          3
 Number of racks:               1

The filesystem under path '/' is HEALTHY

But when I run

./bin/hadoop fsck / -openforwrite

I get,

 Total size:    16796 B
 Total dirs:    22
 Total files:   15
 Total blocks (validated):      15 (avg. block size 1119 B)

  CORRUPT FILES:        2

 Minimally replicated blocks:   13 (86.666664 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    3
 Average block replication:     2.6
 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)
 Number of data-nodes:          3
 Number of racks:               1

The filesystem under path '/' is CORRUPT

along with the details of the corrupt blocks.
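
If it helps with diagnosis, I believe fsck can also print each file with
its block IDs and datanode locations:

./bin/hadoop fsck / -openforwrite -files -blocks -locations

I assume the corrupt entries correspond to the files that were still open
for write when I killed the namenode.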

I also tried running

./bin/hadoop fsck / -move

but even after that, I get the same list of corrupt blocks. Any idea how I
can tackle this and recover my data?
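
I have not tried

./bin/hadoop fsck / -delete

yet, since as I understand it that removes the corrupt files outright
rather than recovering them.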
