[ https://issues.apache.org/jira/browse/HDFS-13511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500693#comment-16500693 ]

Gabor Bota commented on HDFS-13511:
-----------------------------------

Thanks [~xiaochen] for the review; I've uploaded patch v003 with the fix.

> Provide specialized exception when block length cannot be obtained
> ------------------------------------------------------------------
>
>                 Key: HDFS-13511
>                 URL: https://issues.apache.org/jira/browse/HDFS-13511
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Ted Yu
>            Assignee: Gabor Bota
>            Priority: Major
>         Attachments: HDFS-13511.001.patch, HDFS-13511.002.patch, 
> HDFS-13511.003.patch
>
>
> In a downstream project, I saw the following code:
> {code}
>         FSDataInputStream inputStream = hdfs.open(new Path(path));
> ...
>         if (options.getRecoverFailedOpen() && dfs != null
>             && e.getMessage().toLowerCase()
>                 .startsWith("cannot obtain block length for")) {
> {code}
> The above tightly depends on the following line in DFSInputStream#readBlockLength:
> {code}
>     throw new IOException("Cannot obtain block length for " + locatedblock);
> {code}
> The check based on string matching is brittle in production deployments.
> After discussing with [~ste...@apache.org], a better approach is to introduce a 
> specialized IOException, e.g. CannotObtainBlockLengthException, so that 
> downstream projects don't have to rely on string matching.
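> A minimal sketch of what such a specialized exception could look like (the 
> package, Javadoc, and constructor below are illustrative assumptions, not the 
> contents of the attached patches); DFSInputStream#readBlockLength would then 
> throw this type instead of a bare IOException:
> {code}
> package org.apache.hadoop.hdfs;
> 
> import java.io.IOException;
> 
> import org.apache.hadoop.hdfs.protocol.LocatedBlock;
> 
> /**
>  * Specialization of IOException thrown when the client cannot determine
>  * the length of a block, so that callers can catch the condition directly
>  * instead of matching on the exception message.
>  */
> public class CannotObtainBlockLengthException extends IOException {
>   private static final long serialVersionUID = 1L;
> 
>   public CannotObtainBlockLengthException(LocatedBlock locatedBlock) {
>     // Keep the existing message text so logs stay familiar.
>     super("Cannot obtain block length for " + locatedBlock);
>   }
> }
> {code}
> With a dedicated type to catch, the downstream check above could be reduced to 
> something like the following (recoverLease is a hypothetical helper in the 
> downstream project):
> {code}
>     try {
>       FSDataInputStream inputStream = hdfs.open(new Path(path));
>       inputStream.read();
>     } catch (CannotObtainBlockLengthException e) {
>       // The exception type identifies the case; no message parsing needed.
>       if (options.getRecoverFailedOpen() && dfs != null) {
>         recoverLease(path);
>       }
>     }
> {code}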


