[ https://issues.apache.org/jira/browse/HADOOP-5191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12678971#action_12678971 ]
Doug Cutting commented on HADOOP-5191:
--------------------------------------
This problem is independent of fs.default.name. That only needs to be set in
the provided test to start the namenode, and its setting is unrelated to the
failure.

The bug is that DistributedFileSystem calls NameNode#getUri() to create the
client FileSystem's URI. The client cannot know all of the addresses and names
of the namenode. Accesses to a namenode with different addresses and/or
hostnames should result in different DistributedFileSystem instances.
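To illustrate the mismatch, here is a minimal sketch of the scheme/authority comparison that produces the "Wrong FS" error; it is a simplified assumption for illustration, not the actual FileSystem.checkPath source:

import java.net.URI;

public class WrongFsCheckSketch {
  public static void main(String[] args) {
    // URI the DistributedFileSystem derived from the namenode (hostname form,
    // as reported on AIX/Solaris in this issue).
    URI fsUri = URI.create("hdfs://p520aix61.mydomain.com:9000");

    // URI the client supplied, using the namenode's IP address.
    URI pathUri = URI.create("hdfs://10.120.16.68:9000/testdata");

    // checkPath-style test: scheme and authority must match, so a
    // hostname-based filesystem URI rejects an IP-based path.
    boolean sameScheme = fsUri.getScheme().equalsIgnoreCase(pathUri.getScheme());
    boolean sameAuthority = fsUri.getAuthority().equalsIgnoreCase(pathUri.getAuthority());
    if (!(sameScheme && sameAuthority)) {
      // Mirrors the IllegalArgumentException seen in the stack trace below.
      throw new IllegalArgumentException("Wrong FS: " + pathUri + ", expected: " + fsUri);
    }
  }
}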
> After creation and startup of the hadoop namenode on AIX or Solaris, you will
> only be allowed to connect to the namenode via hostname but not IP.
> -------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: HADOOP-5191
> URL: https://issues.apache.org/jira/browse/HADOOP-5191
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.19.1
> Environment: AIX 6.1 or Solaris
> Reporter: Bill Habermaas
> Assignee: Bill Habermaas
> Priority: Minor
> Attachments: 5191-1.patch, hadoop-5191.patch
>
>
> After creation and startup of the hadoop namenode on AIX or Solaris, you will
> only be allowed to connect to the namenode via hostname but not IP.
> fs.default.name=hdfs://p520aix61.mydomain.com:9000
> The hostname of the box is p520aix61 and its IP is 10.120.16.68.
> If you use the URL "hdfs://10.120.16.68:9000" to connect to the namenode, the
> exception shown below occurs. You can only connect successfully if
> "hdfs://p520aix61.mydomain.com:9000" is used.
> Exception in thread "Thread-0" java.lang.IllegalArgumentException: Wrong FS: hdfs://10.120.16.68:9000/testdata, expected: hdfs://p520aix61.mydomain.com:9000
> at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:320)
> at org.apache.hadoop.dfs.DistributedFileSystem.checkPath(DistributedFileSystem.java:84)
> at org.apache.hadoop.dfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:122)
> at org.apache.hadoop.dfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:390)
> at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:667)
> at TestHadoopHDFS.run(TestHadoopHDFS.java:116)
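For reference, a sketch of the failing client call path, as suggested by the stack trace above; the provided test's source is not shown here, so the class body, configuration key usage, and path below are assumptions:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WrongFsRepro {
  public static void main(String[] args) throws Exception {
    // Namenode is running with fs.default.name=hdfs://p520aix61.mydomain.com:9000.
    Configuration conf = new Configuration();
    conf.set("fs.default.name", "hdfs://p520aix61.mydomain.com:9000");

    // Ask for the filesystem by IP address instead of hostname.
    FileSystem fs = FileSystem.get(URI.create("hdfs://10.120.16.68:9000"), conf);

    // exists() -> getFileStatus() -> getPathName() -> checkPath(), which on
    // AIX/Solaris throws IllegalArgumentException: Wrong FS ... because the
    // filesystem's URI comes back in hostname form.
    fs.exists(new Path("hdfs://10.120.16.68:9000/testdata"));
  }
}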