[ https://issues.apache.org/jira/browse/HDFS-12619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16209165 ]
Wei-Chiu Chuang commented on HDFS-12619:
----------------------------------------
Committing the patch based on the +1s from [~xiaochen] and [~hanishakoneru].
> Do not catch and throw unchecked exceptions if IBRs fail to process
> -------------------------------------------------------------------
>
> Key: HDFS-12619
> URL: https://issues.apache.org/jira/browse/HDFS-12619
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: namenode
> Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha1
> Reporter: Wei-Chiu Chuang
> Assignee: Wei-Chiu Chuang
> Priority: Minor
> Attachments: HDFS-12619.001.patch
>
>
> HDFS-9198 added the following code
> {code:title=BlockManager#processIncrementalBlockReport}
>   public void processIncrementalBlockReport(final DatanodeID nodeID,
>       final StorageReceivedDeletedBlocks srdb) throws IOException {
>     ...
>     try {
>       processIncrementalBlockReport(node, srdb);
>     } catch (Exception ex) {
>       node.setForceRegistration(true);
>       throw ex;
>     }
>   }
> {code}
> In Apache Hadoop 2.7.x ~ 3.0, this snippet is accepted by the Java compiler.
> However, when I attempted to backport it to a CDH5.3 release (based on Apache
> Hadoop 2.5.0), the compiler complained that the exception was unhandled, because
> the method declares that it throws IOException rather than Exception (presumably
> because that branch is still built at a Java 6 source level, which lacks Java 7's
> precise-rethrow analysis).
> While the code compiles for Apache Hadoop 2.7.x ~ 3.0, I feel it is not good
> practice to catch an unchecked exception and then rethrow it. How about
> rewriting it with a finally block and a boolean flag instead, as sketched below?
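For illustration, here is a minimal sketch of the finally-block rewrite suggested in the description. It is hypothetical rather than the attached HDFS-12619.001.patch, and it keeps the same elision as the snippet above; the method and variable names are taken from that snippet.
{code:title=finally-block sketch (hypothetical)}
  public void processIncrementalBlockReport(final DatanodeID nodeID,
      final StorageReceivedDeletedBlocks srdb) throws IOException {
    ...
    // Track whether processing completed. Nothing unchecked is caught and
    // rethrown, so the declared "throws IOException" stays accurate.
    boolean successful = false;
    try {
      processIncrementalBlockReport(node, srdb);
      successful = true;
    } finally {
      if (!successful) {
        // Ask the DataNode to re-register, exactly as the catch block did.
        node.setForceRegistration(true);
      }
    }
  }
{code}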