Harsh,

That was part of the problem. I fixed hdfs-site.xml, and now I can connect to http://master:50070, but when I hit http://master:50070/getimage, I get a 401. Also, I'm now seeing this in the log on my 2NN:
2012-02-01 15:17:04,896 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Web server init done
2012-02-01 15:17:04,896 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Secondary Web-server up at: 0.0.0.0:50090
2012-02-01 15:17:04,896 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Secondary image servlet up at: 0.0.0.0:50090
2012-02-01 15:17:04,896 WARN org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Checkpoint Period :3600 secs (60 min)
2012-02-01 15:17:04,896 WARN org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Log Size Trigger :67108864 bytes (65536 KB)
2012-02-01 15:22:04,967 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint:
2012-02-01 15:22:04,968 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
    at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:211)
    at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
    at java.net.Socket.connect(Socket.java:529)
    at java.net.Socket.connect(Socket.java:478)
    at sun.net.NetworkClient.doConnect(NetworkClient.java:163)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:394)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:529)
    at sun.net.www.http.HttpClient.<init>(HttpClient.java:233)
    at sun.net.www.http.HttpClient.New(HttpClient.java:306)
    at sun.net.www.http.HttpClient.New(HttpClient.java:323)
    at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:970)
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:911)
    at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:836)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1172)
    at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.getFileClient(TransferFsImage.java:160)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$3.run(SecondaryNameNode.java:347)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$3.run(SecondaryNameNode.java:336)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.downloadCheckpointFiles(SecondaryNameNode.java:336)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:411)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:312)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:275)
    at java.lang.Thread.run(Thread.java:662)

-Gabe

On Jan 31, 2012, at 10:23 PM, Harsh J wrote:

> Gabriel,
>
> Do you get a download if you visit http://master:50070/getimage in
> your browser, or do you get a 404? It is likely that the URL threw a
> 404, which led to the FileNotFoundException. Possibly something wrong
> with the NN webapp server.
>
> On Wed, Feb 1, 2012 at 3:19 AM, Gabriel Rosendorf
> <grosend...@e3smartenergy.com> wrote:
>> I'm getting this recurring exception on my secondary NameNode.
>>
>> 2012-01-31 21:46:12,710 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
>> 2012-01-31 21:46:12,712 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hduser
>> 2012-01-31 21:46:12,713 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>> 2012-01-31 21:46:12,713 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
>> 2012-01-31 21:46:12,713 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
>> 2012-01-31 21:46:12,713 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
>> 2012-01-31 21:46:12,713 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
>> 2012-01-31 21:46:12,714 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 6
>> 2012-01-31 21:46:12,715 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
>> 2012-01-31 21:46:12,715 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /app/hadoop/dfs/namesecondary/current/edits of size 4 edits # 0 loaded in 0 seconds.
>> 2012-01-31 21:46:12,715 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
>> 2012-01-31 21:46:12,720 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 559 saved in 0 seconds.
>> 2012-01-31 21:46:12,735 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 559 saved in 0 seconds.
>> 2012-01-31 21:46:12,869 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Posted URL master:50070putimage=1&port=50090&machine=0.0.0.0&token=-31:2049933803:0:1328044870000:1328044562914
>> 2012-01-31 21:46:12,876 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint:
>> 2012-01-31 21:46:12,876 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: java.io.FileNotFoundException: http://master:50070/getimage?putimage=1&port=50090&machine=0.0.0.0&token=-31:2049933803:0:1328044870000:1328044562914
>>     at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1434)
>>     at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.getFileClient(TransferFsImage.java:160)
>>     at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.putFSImage(SecondaryNameNode.java:377)
>>     at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:418)
>>     at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:312)
>>     at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:275)
>>     at java.lang.Thread.run(Thread.java:662)
>>
>> Any ideas?
>>
>> Best,
>> Gabriel Rosendorf
>
> --
> Harsh J
> Customer Ops. Engineer
> Cloudera | http://tiny.cloudera.com/about
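
A note for anyone troubleshooting the same failure later: the `machine=0.0.0.0` in the Posted URL and the "Secondary image servlet up at: 0.0.0.0:50090" line suggest the 2NN is advertising its wildcard bind address, so the NameNode's putimage callback may be aimed at an unreachable address. A minimal hdfs-site.xml sketch for this vintage of Hadoop (1.x/0.20) that binds both HTTP servers to real hostnames; `master` and `checkpoint` are placeholder hostnames, not values taken from this thread:

```xml
<!-- On the NameNode: bind the HTTP server to a resolvable hostname,
     not 0.0.0.0, so the 2NN can reach /getimage. -->
<property>
  <name>dfs.http.address</name>
  <value>master:50070</value>
</property>

<!-- On the SecondaryNameNode: advertise a real hostname so the NN's
     putimage callback does not target 0.0.0.0. -->
<property>
  <name>dfs.secondary.http.address</name>
  <value>checkpoint:50090</value>
</property>
```

After changing either value, restart both daemons and watch the 2NN log for the next checkpoint attempt (every 3600 s by default, per the Checkpoint Period line above).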