[
https://issues.apache.org/jira/browse/HDFS-13511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502815#comment-16502815
]
Hudson commented on HDFS-13511:
-------------------------------
FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #14370 (See
[https://builds.apache.org/job/Hadoop-trunk-Commit/14370/])
HDFS-13511. Provide specialized exception when block length cannot be (xiao:
rev 774c1f199e11d886d0c0a1069325f0284da35deb)
* (add)
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/CannotObtainBlockLengthException.java
* (edit)
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
> Provide specialized exception when block length cannot be obtained
> ------------------------------------------------------------------
>
> Key: HDFS-13511
> URL: https://issues.apache.org/jira/browse/HDFS-13511
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Ted Yu
> Assignee: Gabor Bota
> Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13511.001.patch, HDFS-13511.002.patch,
> HDFS-13511.003.patch
>
>
> In downstream project, I saw the following code:
> {code}
> FSDataInputStream inputStream = hdfs.open(new Path(path));
> ...
> if (options.getRecoverFailedOpen() && dfs != null &&
>     e.getMessage().toLowerCase()
>         .startsWith("cannot obtain block length for")) {
> {code}
> The above depends tightly on the following line in DFSInputStream#readBlockLength:
> {code}
> throw new IOException("Cannot obtain block length for " + locatedblock);
> {code}
> The check based on string matching is brittle in production deployments.
> After discussing with [[email protected]], a better approach is to introduce a
> specialized IOException, e.g. CannotObtainBlockLengthException, so that
> downstream projects don't have to rely on string matching.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]