[ 
https://issues.apache.org/jira/browse/HADOOP-9107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13507701#comment-13507701
 ] 

Karthik Kambatla commented on HADOOP-9107:
------------------------------------------

From HADOOP-6221:
bq. I think a good tactic would be rather than trying to make the old RPC stack 
interruptible, focus on making Avro something that you can interrupt, so that 
going forward you can interrupt client programs trying to talk to unresponsive 
servers.

Steve, is there a reason for not making the old RPC stack interruptible?

I feel we should do both - what Hari is proposing here, and what HADOOP-6221 
addresses.
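For illustration, a minimal sketch (not Hadoop code; the class and method names are made up) of the interruptible pattern being proposed: propagate InterruptedException out of the monitor wait instead of swallowing it and deferring the interrupt flag, so a caller blocked on an unresponsive server can actually be interrupted.

```java
// Minimal sketch, assuming we want the wait loop to surface the
// interrupt immediately rather than re-set the flag after completion.
public class InterruptibleWait {
    private final Object lock = new Object();
    private boolean done = false;

    // Blocks until markDone() is called; throws InterruptedException
    // as soon as the waiting thread is interrupted.
    public void awaitDone() throws InterruptedException {
        synchronized (lock) {
            while (!done) {
                lock.wait(); // InterruptedException propagates to the caller
            }
        }
    }

    public void markDone() {
        synchronized (lock) {
            done = true;
            lock.notifyAll();
        }
    }

    // Demo: interrupt a waiter and report which path it took.
    static String runDemo() throws InterruptedException {
        InterruptibleWait w = new InterruptibleWait();
        final String[] result = new String[1];
        Thread t = new Thread(() -> {
            try {
                w.awaitDone();
                result[0] = "completed";
            } catch (InterruptedException ie) {
                result[0] = "interrupted"; // interrupt reaches the caller
            }
        });
        t.start();
        t.interrupt(); // wait() throws even if the interrupt lands first,
                       // because it checks the interrupt status on entry
        t.join();
        return result[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo());
    }
}
```

Contrast with the `call()` code quoted below, which catches InterruptedException inside the wait loop, keeps waiting, and only restores the interrupt flag after the call finishes.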
                
> Hadoop IPC client eats InterruptedException and sets interrupt on the thread 
> which is not documented
> ----------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-9107
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9107
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: ipc
>    Affects Versions: 1.1.0, 2.0.2-alpha
>            Reporter: Hari Shreedharan
>
> This code in Client.java looks fishy:
> {code}
>   public Writable call(RPC.RpcKind rpcKind, Writable rpcRequest,
>       ConnectionId remoteId) throws InterruptedException, IOException {
>     Call call = new Call(rpcKind, rpcRequest);
>     Connection connection = getConnection(remoteId, call);
>     connection.sendParam(call);                 // send the parameter
>     boolean interrupted = false;
>     synchronized (call) {
>       while (!call.done) {
>         try {
>           call.wait();                           // wait for the result
>         } catch (InterruptedException ie) {
>           // save the fact that we were interrupted
>           interrupted = true;
>         }
>       }
>       if (interrupted) {
>         // set the interrupt flag now that we are done waiting
>         Thread.currentThread().interrupt();
>       }
>       if (call.error != null) {
>         if (call.error instanceof RemoteException) {
>           call.error.fillInStackTrace();
>           throw call.error;
>         } else { // local exception
>           InetSocketAddress address = connection.getRemoteAddress();
>           throw NetUtils.wrapException(address.getHostName(),
>                   address.getPort(),
>                   NetUtils.getHostname(),
>                   0,
>                   call.error);
>         }
>       } else {
>         return call.getRpcResult();
>       }
>     }
>   }
> {code}
> Blocking calls are expected to throw InterruptedException when they are 
> interrupted. This method, however, keeps waiting on the call object even 
> after an interrupt: it catches the InterruptedException, records that it 
> occurred, and merely re-sets the interrupt flag once the call completes. It 
> neither throws InterruptedException nor documents that it sets the interrupt 
> status of the calling thread. If the thread is interrupted, this method 
> should throw InterruptedException regardless of whether the call succeeded. 
> This is a major issue for clients that do not call this method directly but 
> go through the HDFS client API to write to HDFS: the client may interrupt 
> the write due to a timeout, yet no InterruptedException is thrown. Any HDFS 
> client call can leave the thread's interrupt flag set, and this behavior is 
> not documented anywhere. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
