[ https://issues.apache.org/jira/browse/HADOOP-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12625207#action_12625207 ]

Steve Loughran commented on HADOOP-4009:
----------------------------------------

You could search the code for the string "catch (RemoteException" and see what 
comes up. One of the real risks here is that it's the failure-mode code that is 
being changed, and that's always the code that doesn't get enough coverage, 
enough testing, or enough real-world use, because it's not until things start to 
go wrong in interesting ways that those paths get exercised. Which makes it 
harder to say "we've fixed everything" once this change goes in (which seems 
good, BTW).

One possibility: make the HDFS exception a subclass of RemoteException, with 
all the existing semantics. Old code may still work.
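A minimal sketch of that idea, with stand-in classes (the names here are illustrative, not the actual Hadoop/HDFS types): if the HDFS-specific exception extends RemoteException, existing catch (RemoteException e) blocks keep matching it, while new code can catch the more specific subclass.

```java
import java.io.IOException;

// Stand-in for org.apache.hadoop.ipc.RemoteException; sketch only.
class RemoteException extends IOException {
    RemoteException(String msg) { super(msg); }
}

// Hypothetical HDFS exception declared as a RemoteException subclass,
// so pre-existing catch (RemoteException e) handlers still match it.
class HdfsLeaseException extends RemoteException {
    HdfsLeaseException(String msg) { super(msg); }
}

public class CompatSketch {
    static String handle() {
        try {
            // New-style code throws the specific subclass...
            throw new HdfsLeaseException("lease expired");
        } catch (RemoteException e) {
            // ...but an old-style handler written against the superclass
            // continues to catch it, preserving existing semantics.
            return "caught as RemoteException: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(handle());
    }
}
```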

> Declare HDFS exceptions in the HDFS interface and also in class FileSystem 
> and rethrow the encapsulated exception
> -----------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-4009
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4009
>             Project: Hadoop Core
>          Issue Type: Sub-task
>            Reporter: Sanjay Radia
>
> Server-side exceptions are encapsulated in the remote exception (as the 
> class name and the error string).
> The client side and FileSystem do not declare or throw these 
> encapsulated exceptions.
> Work Items
>  * Declare the exceptions in FileSystem and the HDFS interface (but still as 
> subclasses of IOException)
>  * Rethrow the encapsulated exception when it is one of the declared 
> exceptions.
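The second work item could be sketched roughly as follows, again with illustrative stand-in types rather than the real Hadoop classes: the client compares the class name carried in the RemoteException against the declared exception types and, on a match, reconstructs and rethrows the original exception.

```java
import java.io.IOException;

// Minimal stand-in for org.apache.hadoop.ipc.RemoteException, which carries
// the server-side exception's class name alongside its message.
class RemoteException extends IOException {
    private final String className;
    RemoteException(String className, String msg) {
        super(msg);
        this.className = className;
    }
    String getClassName() { return className; }
}

// Hypothetical declared HDFS exception (illustrative name only).
class FileNotInHdfsException extends IOException {
    FileNotInHdfsException(String msg) { super(msg); }
}

public class UnwrapSketch {
    // If the encapsulated class is one of the declared exception types,
    // reconstruct the original exception; otherwise keep the wrapper.
    static IOException unwrap(RemoteException re, Class<?>... declared) {
        for (Class<?> c : declared) {
            if (c.getName().equals(re.getClassName())) {
                try {
                    return (IOException) c.getDeclaredConstructor(String.class)
                                          .newInstance(re.getMessage());
                } catch (ReflectiveOperationException e) {
                    return re; // cannot reconstruct; fall back to the wrapper
                }
            }
        }
        return re;
    }

    public static void main(String[] args) {
        RemoteException re = new RemoteException(
                FileNotInHdfsException.class.getName(), "no such file");
        IOException unwrapped = unwrap(re, FileNotInHdfsException.class);
        System.out.println(unwrapped.getClass().getSimpleName());
    }
}
```

A caller would then declare `throws FileNotInHdfsException` (or whatever the real declared types are) and throw the unwrapped exception directly, so callers see the specific type rather than the generic wrapper.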

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
