I solved this problem. In hdfs-site.xml, I added the following config:
<property>
  <name>dfs.http.address</name>
  <value>namenode.host.address:50070</value>
  <description>
    The address and the base port where the dfs namenode web ui will listen on.
    If the port is 0 then the server will start on a free port.
  </description>
</property>

yibo820217 wrote:
> hi
> Here I chose one machine as the namenode, one machine as the secondary
> namenode, and one machine as a datanode.
> When I start up hadoop (bin/start-all.sh),
> there are some errors in the secondary namenode, like this:
>
> 2009-10-21 15:34:30,317 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Recovering storage directory /data1/hadoopfs/namesecondary from failed
> checkpoint.
> 2009-10-21 15:34:30,319 ERROR
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in
> doCheckpoint:
> 2009-10-21 15:34:30,320 ERROR
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode:
> java.net.ConnectException: Connection refused
>         at java.net.PlainSocketImpl.socketConnect(Native Method)
>         at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
>         at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:193)
>         at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
>         at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
>         at java.net.Socket.connect(Socket.java:519)
>         at java.net.Socket.connect(Socket.java:469)
>         at sun.net.NetworkClient.doConnect(NetworkClient.java:163)
>         at sun.net.www.http.HttpClient.openServer(HttpClient.java:394)
>         at sun.net.www.http.HttpClient.openServer(HttpClient.java:529)
>         at sun.net.www.http.HttpClient.<init>(HttpClient.java:233)
>         at sun.net.www.http.HttpClient.New(HttpClient.java:306)
>         at sun.net.www.http.HttpClient.New(HttpClient.java:323)
>         at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:837)
>         at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:778)
>         at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:703)
>         at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1026)
>         at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.getFileClient(TransferFsImage.java:151)
>         at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.downloadCheckpointFiles(SecondaryNameNode.java:256)
>         at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:313)
>         at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:225)
>         at java.lang.Thread.run(Thread.java:619)
>
> The log in the namenode looks like this:
>
> 2009-10-21 15:31:25,996 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from
> 100.207.100.33
> ........
>
> Can anybody tell me the reason?
>
> Thanks!

--
View this message in context: http://www.nabble.com/Exception-in-doCheckpoint%3A-Connection-refused-tp25987885p26022917.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.
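For anyone hitting the same stack trace: the "Connection refused" happens when the secondary namenode tries to download the checkpoint files over HTTP from the namenode's web UI port (50070 here) and cannot reach it. Before (or after) changing dfs.http.address, you can confirm reachability from the secondary namenode host with a plain TCP check. A minimal sketch — `namenode.host.address` is just a placeholder for your real namenode hostname:

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run on the secondary namenode host; substitute your namenode's
# hostname or IP for the placeholder below.
# if not can_connect("namenode.host.address", 50070):
#     print("namenode web UI port unreachable - doCheckpoint will fail")
```

If this returns False, check that the namenode process is up, that dfs.http.address is bound to an interface the secondary namenode can reach (not 127.0.0.1), and that no firewall is blocking the port.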
