Wei-Chiu Chuang created HDFS-12619:
--------------------------------------

             Summary: Do not catch and throw unchecked exceptions if IBRs fail to process
                 Key: HDFS-12619
                 URL: https://issues.apache.org/jira/browse/HDFS-12619
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: namenode
    Affects Versions: 3.0.0-alpha1, 2.7.3, 2.8.0
            Reporter: Wei-Chiu Chuang
            Assignee: Wei-Chiu Chuang
            Priority: Minor
HDFS-9198 added the following code:

{code:title=BlockManager#processIncrementalBlockReport}
public void processIncrementalBlockReport(final DatanodeID nodeID,
    final StorageReceivedDeletedBlocks srdb) throws IOException {
  assert namesystem.hasWriteLock();
  final DatanodeDescriptor node = datanodeManager.getDatanode(nodeID);
  if (node == null || !node.isRegistered()) {
    blockLog.warn("BLOCK* processIncrementalBlockReport"
        + " is received from dead or unregistered node {}", nodeID);
    throw new IOException(
        "Got incremental block report from unregistered or dead node");
  }
  try {
    processIncrementalBlockReport(node, srdb);
  } catch (Exception ex) {
    node.setForceRegistration(true);
    throw ex;
  }
}
{code}

In Apache Hadoop 2.7.x ~ 3.0, this snippet is accepted by the Java compiler: Java 7's precise-rethrow analysis sees that only an {{IOException}} (or an unchecked exception) can reach {{throw ex}}, even though {{ex}} is declared as {{Exception}}. However, when I attempted to backport it to a CDH5.3 release (based on Apache Hadoop 2.5.0), the compiler complains that the exception is unhandled, because the method declares {{throws IOException}} rather than {{throws Exception}}.

While the code compiles for Apache Hadoop 2.7.x ~ 3.0, I feel it is not good practice to catch a broad {{Exception}} only to rethrow it. How about rewriting it with a finally block and a boolean success flag?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
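For reference, the finally-plus-flag rewrite suggested above could look like the following minimal, self-contained sketch. The {{Node}} and {{processReport}} names here are placeholders standing in for the HDFS classes, not the actual Hadoop API; the point is only the control-flow pattern, which compiles under a plain {{throws IOException}} signature even without Java 7's precise rethrow.

{code:title=Sketch of the proposed finally-based rewrite (placeholder types)}
import java.io.IOException;

public class IbrSketch {
  // Placeholder for DatanodeDescriptor; not the Hadoop class.
  static class Node {
    boolean forceRegistration = false;
    void setForceRegistration(boolean v) { forceRegistration = v; }
  }

  // Placeholder for the inner processIncrementalBlockReport(node, srdb).
  static void processReport(boolean fail) throws IOException {
    if (fail) {
      throw new IOException("simulated IBR processing failure");
    }
  }

  // Equivalent behavior to catch-and-rethrow: on any failure (checked or
  // unchecked) the node is flagged for re-registration, but no broad
  // Exception is caught, so only "throws IOException" is needed.
  static void processIncrementalBlockReport(Node node, boolean fail)
      throws IOException {
    boolean success = false;
    try {
      processReport(fail);
      success = true;
    } finally {
      if (!success) {
        node.setForceRegistration(true);
      }
    }
  }

  public static void main(String[] args) throws IOException {
    Node bad = new Node();
    try {
      processIncrementalBlockReport(bad, true);
    } catch (IOException expected) {
      // Failure path: the exception still propagates to the caller.
    }
    System.out.println(bad.forceRegistration); // prints true

    Node ok = new Node();
    processIncrementalBlockReport(ok, false);
    System.out.println(ok.forceRegistration); // prints false
  }
}
{code}

The flag-in-finally form also covers {{Error}} and other {{Throwable}}s that {{catch (Exception ex)}} would miss, which may or may not be desirable depending on the intended semantics.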