[ https://issues.apache.org/jira/browse/HDFS-13511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16509988#comment-16509988 ]
Xiao Chen commented on HDFS-13511:
----------------------------------
Just backported to branch-3.1.
> Provide specialized exception when block length cannot be obtained
> ------------------------------------------------------------------
>
> Key: HDFS-13511
> URL: https://issues.apache.org/jira/browse/HDFS-13511
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Ted Yu
> Assignee: Gabor Bota
> Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HDFS-13511.001.patch, HDFS-13511.002.patch,
> HDFS-13511.003.patch
>
>
> In a downstream project, I saw the following code:
> {code}
> FSDataInputStream inputStream = hdfs.open(new Path(path));
> ...
> if (options.getRecoverFailedOpen() && dfs != null &&
>     e.getMessage().toLowerCase()
>         .startsWith("cannot obtain block length for")) {
> {code}
> The above tightly couples the downstream code to the following line in DFSInputStream#readBlockLength:
> {code}
> throw new IOException("Cannot obtain block length for " + locatedblock);
> {code}
> A check based on string matching is brittle in production deployments.
> After discussing with [[email protected]], a better approach is to introduce a
> specialized IOException, e.g. CannotObtainBlockLengthException, so that
> downstream projects don't have to rely on string matching.
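> As a minimal sketch of the idea (the attached patches may differ in naming,
> package, and constructor signature): the new exception extends IOException,
> so existing callers that catch IOException keep working, while downstream
> projects can catch the specific type. The LocatedBlock-typed constructor is
> an assumption for illustration.
> {code}
> import java.io.IOException;
> import org.apache.hadoop.hdfs.protocol.LocatedBlock;
>
> /** Sketch only; the actual patch may shape and place this differently. */
> public class CannotObtainBlockLengthException extends IOException {
>   public CannotObtainBlockLengthException(LocatedBlock locatedBlock) {
>     // Keep the existing message text so logs and diagnostics read the same.
>     super("Cannot obtain block length for " + locatedBlock);
>   }
> }
> {code}
> The throw site in DFSInputStream#readBlockLength would then become
> {code}
> throw new CannotObtainBlockLengthException(locatedblock);
> {code}
> and the downstream check can switch from message parsing to a type-based
> catch. Since the new exception is an IOException subclass, it must be caught
> before any broader IOException handler:
> {code}
> try (FSDataInputStream inputStream = hdfs.open(new Path(path))) {
>   // ... read from the stream
> } catch (CannotObtainBlockLengthException e) {
>   // Recover by exception type instead of parsing e.getMessage().
>   if (options.getRecoverFailedOpen() && dfs != null) {
>     // ... recovery logic
>   }
> }
> {code}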