[ https://issues.apache.org/jira/browse/HADOOP-9617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13678643#comment-13678643 ]
Hadoop QA commented on HADOOP-9617:
-----------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12586843/HADOOP-9617.patch
against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:green}+1 tests included{color}. The patch appears to include 2 new
or modified test files.
{color:red}-1 javac{color}. The applied patch generated 1153 javac
compiler warnings (more than the trunk's current 1152 warnings).
{color:green}+1 javadoc{color}. The javadoc tool did not generate any
warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with
eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new
Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase
the total number of release audit warnings.
{color:green}+1 core tests{color}. The patch passed unit tests in
hadoop-hdfs-project/hadoop-hdfs.
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.
Test results:
https://builds.apache.org/job/PreCommit-HADOOP-Build/2624//testReport/
Javac warnings:
https://builds.apache.org/job/PreCommit-HADOOP-Build/2624//artifact/trunk/patchprocess/diffJavacWarnings.txt
Console output:
https://builds.apache.org/job/PreCommit-HADOOP-Build/2624//console
This message is automatically generated.
> HA HDFS client is too strict with validating URI authorities
> ------------------------------------------------------------
>
> Key: HADOOP-9617
> URL: https://issues.apache.org/jira/browse/HADOOP-9617
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs, ha
> Affects Versions: 2.0.5-alpha
> Reporter: Aaron T. Myers
> Assignee: Aaron T. Myers
> Attachments: HADOOP-9617.patch, HADOOP-9617.patch
>
>
> HADOOP-9150 changed the way FS URIs are handled to prevent attempted DNS
> resolution of logical URIs. This has the side effect of changing the way
> Paths are verified when passed to a FileSystem instance created with an
> authority that differs from the authority of the Path. Prior to
> HADOOP-9150, a default port would be added to whichever authority lacked
> one. Post HADOOP-9150, no default port is added. This means that a
> FileSystem instance created using the URI "hdfs://ha-logical-uri:8020"
> will no longer accept paths containing just the authority
> "hdfs://ha-logical-uri", and will throw an error like the following:
> {noformat}
> java.lang.IllegalArgumentException: Wrong FS:
> hdfs://ns1/user/hive/warehouse/sample_07/sample_07.csv, expected:
> hdfs://ns1:8020
> at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:625)
> at
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:173)
> at
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:249)
> at
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:82)
> {noformat}
> Though this is not necessarily incorrect behavior, it is a
> backward-incompatible change that at least breaks certain clients' ability to
> connect to an HA HDFS cluster.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira