[ https://issues.apache.org/jira/browse/HADOOP-12053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gera Shegalov updated HADOOP-12053:
-----------------------------------
    Attachment: HADOOP-12053.001.patch

I think this problem is quite general: DelegateToFileSystem does not delegate 
when determining the default port, so all the derived file systems are forced 
to override getUriDefaultPort. Attaching 001 with the suggested fix.
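
To sketch the idea (assuming the wrapped {{FileSystem}} lives in the {{fsImpl}} field of {{DelegateToFileSystem}}; the 0/-1 translation below is illustrative, see the attached 001 for the actual change):
{code}
// In DelegateToFileSystem: derive the default port from the wrapped
// FileSystem instead of forcing every subclass to override this.
@Override
public int getUriDefaultPort() {
  // FileSystem#getDefaultPort() returns 0 for "no default port", while
  // AbstractFileSystem expects -1, so translate between the conventions.
  int defaultPort = fsImpl.getDefaultPort();
  return defaultPort != 0 ? defaultPort : -1;
}
{code}
The same translation would also have to be applied to the defaultPort passed up to the AbstractFileSystem constructor, so that checkPath compares like with like.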

> HarFs default URI port should be 0 (not -1)
> -------------------------------------------
>
>                 Key: HADOOP-12053
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12053
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Brahma Reddy Battula
>            Assignee: Brahma Reddy Battula
>         Attachments: HADOOP-12053.001.patch
>
>
> HarFs overrides the {{getUriDefaultPort}} method of {{AbstractFileSystem}} 
> and returns -1. But -1 cannot pass the {{checkPath}} check when 
> {{fs.defaultFS}} is set without a port (like hdfs://hacluster).
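>
> For reference, the port comparison in {{AbstractFileSystem#checkPath}} is 
> roughly the following (a paraphrased sketch, not the exact source; the 
> inline values assume {{fs.defaultFS}}=hdfs://hacluster):
> {code}
> // Inside checkPath(Path path): DelegateToFileSystem builds this URI with
> // the wrapped FileSystem's default port (0 for HarFileSystem), while
> // HarFs reports -1 from getUriDefaultPort(), so the two never match.
> int thisPort = getUri().getPort();      // 0 for har://hdfs-hacluster
> int thatPort = path.toUri().getPort();  // -1: the path carries no port
> if (thatPort == -1) {
>   thatPort = getUriDefaultPort();       // -1 from HarFs
> }
> if (thisPort != thatPort) {             // 0 != -1 => "Wrong FS" error
>   throw new InvalidPathException("Wrong FS: " + path
>       + ", expected: " + getUri());
> }
> {code}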
> *Test Code:*
> {code}
> for (FileStatus file : files) {
>   String[] edges = file.getPath().getName().split("-");
>   if (applicationId.toString().compareTo(edges[0]) >= 0
>       && applicationId.toString().compareTo(edges[1]) <= 0) {
>     Path harPath = new Path("har://" + file.getPath().toUri().getPath());
>     harPath = harPath.getFileSystem(conf).makeQualified(harPath);
>     remoteAppDir = LogAggregationUtils.getRemoteAppLogDir(
>         harPath, applicationId, appOwner,
>         LogAggregationUtils.getRemoteNodeLogDirSuffix(conf));
>     // Fails here: the har path is rejected by checkPath because
>     // HarFs reports a default port of -1.
>     if (FileContext.getFileContext(remoteAppDir.toUri()).util()
>         .exists(remoteAppDir)) {
>       remoteDirSet.add(remoteAppDir);
>     }
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
