[ https://issues.apache.org/jira/browse/HADOOP-5626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Todd Lipcon updated HADOOP-5626:
--------------------------------
Attachment: hadoop-5626.txt
Fixes the behavior described in this ticket. Modifications to the test improve its
speed (from 100 seconds to 30 seconds on my machine, where 0.0.0.0 lookup is very
slow) and also verify the new behavior (the test fails with the old behavior, as
noted in HADOOP-3694).
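For context, here is a minimal sketch of the intended behavior only; it is an illustration, not the attached patch, and the helper name and fallback logic are assumptions. The idea is to report the host configured in dfs.secondary.http.address rather than whatever InetAddress.getLocalHost() happens to return.
{code}
// Illustrative sketch only (not the attached patch): prefer the host configured
// in dfs.secondary.http.address over InetAddress.getLocalHost(), so the namenode
// is told an address it can actually connect back to.
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.net.NetUtils;

public class AdvertisedHostSketch {
  // Hypothetical helper; the real patch may structure this differently.
  static String advertisedHost(Configuration conf) throws UnknownHostException {
    String configured = conf.get("dfs.secondary.http.address", "0.0.0.0:50090");
    InetSocketAddress addr = NetUtils.createSocketAddr(configured);
    if (addr.getAddress() != null && !addr.getAddress().isAnyLocalAddress()) {
      // A concrete host was configured; report that one to the namenode.
      return addr.getAddress().getHostAddress();
    }
    // Wildcard (0.0.0.0) configured: fall back to the old local-host lookup.
    return InetAddress.getLocalHost().getHostAddress();
  }
}
{code}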
> SecondaryNamenode may report incorrect info host name
> -----------------------------------------------------
>
> Key: HADOOP-5626
> URL: https://issues.apache.org/jira/browse/HADOOP-5626
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Reporter: Carlos Valiente
> Priority: Minor
> Fix For: 0.21.0
>
> Attachments: HADOOP-5626.patch, hadoop-5626.txt
>
>
> I have set up {{dfs.secondary.http.address}} like this:
> {code}
> <property>
>   <name>dfs.secondary.http.address</name>
>   <value>secondary.example.com:50090</value>
> </property>
> {code}
> In my setup, {{secondary.example.com}} resolves to an IP address (say,
> 192.168.0.10) which is not the same as the local host's address (as returned
> by {{InetAddress.getLocalHost().getHostAddress()}}, say 192.168.0.1).
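> To make the mismatch concrete, here is a minimal standalone sketch (the hostname
> and addresses are the placeholders from above, not real values):
> {code}
> // Minimal illustration of the mismatch; the hostname is a placeholder.
> import java.net.InetAddress;
>
> public class AddressMismatch {
>     public static void main(String[] args) throws Exception {
>         // What dfs.secondary.http.address resolves to, e.g. 192.168.0.10
>         String configured = InetAddress.getByName("secondary.example.com").getHostAddress();
>         // What the secondary actually reports, e.g. 192.168.0.1
>         String reported = InetAddress.getLocalHost().getHostAddress();
>         System.out.println("configured=" + configured + ", reported=" + reported);
>         // When these differ, the namenode connects back to the wrong address.
>     }
> }
> {code}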
> In this situation, edit log related transfers fail. From the namenode log:
> {code}
> 2009-04-05 13:32:39,128 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 192.168.0.10
> 2009-04-05 13:32:39,168 WARN org.mortbay.log: /getimage: java.io.IOException: GetImage failed. java.net.ConnectException: Connection refused
> at java.net.PlainSocketImpl.socketConnect(Native Method)
> at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
> at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
> at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
> at java.net.Socket.connect(Socket.java:519)
> at java.net.Socket.connect(Socket.java:469)
> at sun.net.NetworkClient.doConnect(NetworkClient.java:163)
> at sun.net.www.http.HttpClient.openServer(HttpClient.java:394)
> at sun.net.www.http.HttpClient.openServer(HttpClient.java:529)
> at sun.net.www.http.HttpClient.<init>(HttpClient.java:233)
> at sun.net.www.http.HttpClient.New(HttpClient.java:306)
> at sun.net.www.http.HttpClient.New(HttpClient.java:323)
> at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:837)
> at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:778)
> at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:703)
> at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1026)
> at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.getFileClient(TransferFsImage.java:151)
> ...
> {code}
> From the secondary namenode log:
> {code}
> 2009-04-05 13:42:39,238 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint:
> 2009-04-05 13:42:39,238 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: java.io.FileNotFoundException: http://nn.example.com:50070/getimage?putimage=1&port=50090&machine=192.168.0.1&token=-19:1243068779:0:1238929357000:1238929031783
> at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1288)
> at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.getFileClient(TransferFsImage.java:151)
> at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.putFSImage(SecondaryNameNode.java:294)
> at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:333)
> at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:239)
> at java.lang.Thread.run(Thread.java:619)
> {code}
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.