Hey guys,
I'm trying to get our NameNode back up and running after the disk filled up
last night. I've since freed up a lot of space, but the NameNode still
fails to start up:
2011-06-12 10:26:09,042 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStatusMBean
2011-06-12 10:26:09,083 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 614919
2011-06-12 10:26:22,293 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 17
2011-06-12 10:26:22,300 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 102029859 loaded in 13 seconds.
2011-06-12 10:26:22,510 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.lang.NumberFormatException: For input string: ""
    at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
    at java.lang.Integer.parseInt(Integer.java:468)
    at java.lang.Short.parseShort(Short.java:120)
    at java.lang.Short.parseShort(Short.java:78)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLog.readShort(FSEditLog.java:1269)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLog.loadFSEdits(FSEditLog.java:550)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSEdits(FSImage.java:992)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:812)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:364)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
Our dfs.name.dir is currently configured as follows:
<property>
  <name>dfs.name.dir</name>
  <value>/data1/hadoop/dfs/name,/data2/hadoop/dfs/name,/data3/hadoop/dfs/name,/data4/hadoop/dfs/name</value>
</property>
I've looked in each of those directories for an image/edits.new file, but
only the "edits" files exist.
Can anyone please guide me on the next step here to get this back up and
running?
Thanks!
Ryan