[ https://issues.apache.org/jira/browse/HDFS-14774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926050#comment-16926050 ]

CR Hota commented on HDFS-14774:
--------------------------------

Hey [~jojochuang], 

Do you have any follow-up questions, or shall we close this?

> RBF: Improve RouterWebhdfsMethods#chooseDatanode() error handling
> -----------------------------------------------------------------
>
>                 Key: HDFS-14774
>                 URL: https://issues.apache.org/jira/browse/HDFS-14774
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Wei-Chiu Chuang
>            Assignee: CR Hota
>            Priority: Minor
>
>  HDFS-13972 added the following code:
> {code}
> try {
>       dns = rpcServer.getDatanodeReport(DatanodeReportType.LIVE);
>     } catch (IOException e) {
>       LOG.error("Cannot get the datanodes from the RPC server", e);
>     } finally {
>       // Reset ugi to remote user for remaining operations.
>       RouterRpcServer.resetCurrentUser();
>     }
>     HashSet<Node> excludes = new HashSet<Node>();
>     if (excludeDatanodes != null) {
>       Collection<String> collection =
>           getTrimmedStringCollection(excludeDatanodes);
>       for (DatanodeInfo dn : dns) {
>         if (collection.contains(dn.getName())) {
>           excludes.add(dn);
>         }
>       }
>     }
> {code}
> If {{rpcServer.getDatanodeReport()}} throws an exception, {{dns}} stays
> null, and the {{for}} loop below then throws a NullPointerException.
> Swallowing the exception doesn't look like the best way to handle it.
> Should the router retry upon exception? Does it retry automatically
> under the hood?
> [~crh] [~brahmareddy]
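
For reference, a minimal sketch of one way the null case could be closed
(illustrative only, not a committed patch; identifiers are as in the quoted
snippet): rethrowing from the catch block guarantees {{dns}} is non-null by
the time the exclude set is built.

{code}
// Hypothetical sketch, not a committed patch: rethrow rather than
// swallow, so dns can never be null when the exclude set is built below.
DatanodeInfo[] dns;
try {
  dns = rpcServer.getDatanodeReport(DatanodeReportType.LIVE);
} catch (IOException e) {
  LOG.error("Cannot get the datanodes from the RPC server", e);
  throw e;  // propagate so the caller can retry or fail the request
} finally {
  // Reset ugi to remote user for remaining operations.
  RouterRpcServer.resetCurrentUser();
}
// ... exclude handling as before, now guaranteed a non-null dns ...
{code}

Alternatively, keeping the log-and-continue behavior but initializing {{dns}}
to an empty array in the catch block would at least avoid the
NullPointerException; whether the router should retry instead is the open
question above.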


