[
https://issues.apache.org/jira/browse/HADOOP-10813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14057800#comment-14057800
]
Steve Loughran commented on HADOOP-10813:
-----------------------------------------
Everyone would be reluctant to change which exceptions HDFS raises in today's
failure modes (or, more precisely, would want to do it carefully); that
UnknownHostException is clearly something relayed up from below.
If it is a specific "UnknownVolumeException" you are thinking of, that could be
something to add, so that if/when Hadoop adds volumes, the stack traces are
ready.
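As a rough sketch of what such an exception could look like: the class name
{{UnknownVolumeException}}, its fields, and its constructor below are all
assumptions for illustration, not part of any existing Hadoop API. It simply
extends {{java.io.IOException}} (as Hadoop filesystem exceptions generally do)
and carries the scheme and volume name so any HCFS implementation could raise
it uniformly instead of a transport-level {{UnknownHostException}}:

```java
import java.io.IOException;

// Hypothetical sketch, not part of Hadoop: a filesystem-agnostic error for
// "this URI names a volume/namespace that does not exist", which any HCFS
// implementation could throw instead of java.net.UnknownHostException.
class UnknownVolumeException extends IOException {
    private final String scheme;
    private final String volume;

    UnknownVolumeException(String scheme, String volume) {
        super("Unknown volume '" + volume + "' for filesystem scheme '"
                + scheme + "'");
        this.scheme = scheme;
        this.volume = volume;
    }

    String getScheme() { return scheme; }
    String getVolume() { return volume; }
}

public class UnknownVolumeDemo {
    public static void main(String[] args) {
        try {
            // e.g. resolving glusterfs://bugcheck/foo/bar when no
            // GlusterFS volume named "bugcheck" exists
            throw new UnknownVolumeException("glusterfs", "bugcheck");
        } catch (UnknownVolumeException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

A client could then catch one exception type whether the path was
{{hdfs://...}} or {{glusterfs://...}}, which is the uniform failure mode the
issue asks about.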
> Define general filesystem exceptions (usable by any HCFS)
> ---------------------------------------------------------
>
> Key: HADOOP-10813
> URL: https://issues.apache.org/jira/browse/HADOOP-10813
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs
> Affects Versions: 2.2.0
> Reporter: Martin Bukatovic
> Priority: Minor
>
> While Hadoop defines a filesystem API which makes it possible to use a
> different filesystem implementation than HDFS (aka HCFS), we are missing
> HCFS exceptions for some failures with respect to namenode federation.
> For namenode federation, one can specify a different namenode like this:
> {{hdfs://namenode_hostname/some/path}}. So when the given namenode doesn't
> exist, {{UnknownHostException}} is thrown:
> {noformat}
> $ hadoop fs -mkdir -p hdfs://bugcheck/foo/bar
> -mkdir: java.net.UnknownHostException: bugcheck
> Usage: hadoop fs [generic options] -mkdir [-p] <path> ...
> {noformat}
> This is OK for HDFS, but there are other Hadoop filesystems with different
> implementations, and raising {{UnknownHostException}} doesn't make sense for
> them. For example, the path {{glusterfs://bugcheck/foo/bar}} points to the
> file {{/foo/bar}} on a GlusterFS volume named {{bugcheck}}. That said, the
> meaning is the same as in HDFS: both the namenode hostname and the GlusterFS
> volume specify a different filesystem tree available to Hadoop.
> Would it make sense to define a general HCFS exception which would wrap such
> cases, so that it would be possible to fail in the same way when the given
> filesystem tree is not available/defined, no matter which Hadoop filesystem
> is used?
--
This message was sent by Atlassian JIRA
(v6.2#6252)