A few weeks ago I set up the secondary namenode to run on a different machine
as follows:

- On the NN, I put the hostname of the 2NN server in the slaves file.
- On the 2NN, I added the following to hadoop-site.xml:

<property>
  <name>dfs.http.address</name>
  <value>master001.com:50070</value>
  <description>
    The address and the base port where the dfs namenode web ui will listen on.
    If the port is 0 then the server will start on a free port.
  </description>
</property>

I've been seeing this error in the 2NN log:
2009-10-20 20:41:15,536 INFO org.apache.hadoop.dfs.NameNode.Secondary: Posted URL master001.com:50070putimage=1&port=50090&machine=127.0.0.1&token=-16:1244615693:0:1256083639000:1256083542234
2009-10-20 20:41:15,540 ERROR org.apache.hadoop.dfs.NameNode.Secondary: Exception in doCheckpoint:
2009-10-20 20:41:15,540 ERROR org.apache.hadoop.dfs.NameNode.Secondary: java.io.FileNotFoundException: http://master001.com:50070/getimage?putimage=1&port=50090&machine=127.0.0.1&token=-16:1244615693:0:1256083639000:1256083542234
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1303)
    at org.apache.hadoop.dfs.TransferFsImage.getFileClient(TransferFsImage.java:150)
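One thing I notice in the posted URL is machine=127.0.0.1, i.e. the 2NN seems to be advertising its loopback address back to the NN. There is also a dfs.secondary.http.address property in hadoop-default.xml for the 2NN's own HTTP server; I haven't set it, but in case it matters, setting it explicitly on the 2NN would look something like this (secondary001.com is just a placeholder for my 2NN's hostname):

<property>
  <name>dfs.secondary.http.address</name>
  <value>secondary001.com:50090</value>
  <description>
    The address and port the secondary namenode's http server
    listens on (the default binds to 0.0.0.0:50090).
  </description>
</property>

I haven't verified whether this changes the machine= value in the checkpoint URL, though.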


I'm stuck at this point and have no idea how to fix this :/ Does anyone have any ideas?
I'm using Hadoop 0.18.3, by the way.

thanks,
M
