Hello, yesterday my cluster of machines stopped, and since then the namenode
won't start. Each time I try to start it, I get the following error in
hadoop-nutch-namenode.log:
2007-12-05 07:44:26,649 ERROR dfs.NameNode - java.io.EOFException
        at org.apache.hadoop.io.ArrayWritable.readFields(ArrayWritable.java:90)
        at org.apache.hadoop.dfs.FSEditLog.loadFSEdits(FSEditLog.java:501)
        at org.apache.hadoop.dfs.FSImage.loadFSEdits(FSImage.java:733)
        at org.apache.hadoop.dfs.FSImage.loadFSImage(FSImage.java:620)
        at org.apache.hadoop.dfs.FSImage.recoverTransitionRead(FSImage.java:222)
        at org.apache.hadoop.dfs.FSDirectory.loadFSImage(FSDirectory.java:76)
        at org.apache.hadoop.dfs.FSNamesystem.<init>(FSNamesystem.java:221)
        at org.apache.hadoop.dfs.NameNode.init(NameNode.java:130)
        at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:168)
        at org.apache.hadoop.dfs.NameNode.createNameNode(NameNode.java:795)
        at org.apache.hadoop.dfs.NameNode.main(NameNode.java:804)
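For what it's worth, the trace suggests FSEditLog.loadFSEdits reached the end
of the edits file in the middle of a record, which is when
ArrayWritable.readFields would throw. Here is a minimal standalone sketch
(my own hypothetical demo, not Hadoop code) that reproduces the same
java.io.EOFException from a stream that was cut short mid-write:

import java.io.*;

public class EofDemo {
    public static void main(String[] args) throws IOException {
        // Write a record that claims more entries than were actually
        // flushed, mimicking an edits log truncated by a crash.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(3);            // header says 3 entries follow
        out.writeUTF("only one");   // ...but the write was cut short here
        byte[] truncated = buf.toByteArray();

        DataInputStream in =
            new DataInputStream(new ByteArrayInputStream(truncated));
        int count = in.readInt();
        for (int i = 0; i < count; i++) {
            // The second readUTF() hits end-of-stream and throws
            // java.io.EOFException, just like readFields does when the
            // edits file ends mid-record.
            System.out.println(in.readUTF());
        }
    }
}

So my guess is the namenode died in the middle of writing an edit and left a
truncated edits file behind.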
Any suggestions on that error? Does it mean that the namenode data is
corrupted?
I'm using Hadoop 0.15.
--
Karol Rybak
Programista / Programmer
Sekcja aplikacji / Applications section
Wyższa Szkoła Informatyki i Zarządzania / University of Internet Technology
and Management
+48(17)8661277