[ https://issues.apache.org/jira/browse/HDFS-6478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14065212#comment-14065212 ]

Hudson commented on HDFS-6478:
------------------------------

SUCCESS: Integrated in Hadoop-trunk-Commit #5899 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5899/])
HDFS-6478. RemoteException can't be retried properly for non-HA scenario. 
Contributed by Ming Ma. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1611410)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/NamenodeProtocolTranslatorPB.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestIsMethodSupported.java


> RemoteException can't be retried properly for non-HA scenario
> -------------------------------------------------------------
>
>                 Key: HDFS-6478
>                 URL: https://issues.apache.org/jira/browse/HDFS-6478
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Ming Ma
>            Assignee: Ming Ma
>             Fix For: 2.6.0
>
>         Attachments: HDFS-6478-2.patch, HDFS-6478-3.patch, HDFS-6478-4.patch, 
> HDFS-6478.patch
>
>
> For the HA case, the call stack is DFSClient -> RetryInvocationHandler -> 
> ClientNamenodeProtocolTranslatorPB -> ProtobufRpcEngine. ProtobufRpcEngine 
> throws ServiceException and expects the caller to unwrap it; 
> ClientNamenodeProtocolTranslatorPB is the component that takes care of that 
> (see the sketch after the stack trace below).
> {noformat}
>         at org.apache.hadoop.ipc.Client.call
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke
>         at com.sun.proxy.$Proxy26.getFileInfo
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo
>         at sun.reflect.GeneratedMethodAccessor24.invoke
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke
>         at java.lang.reflect.Method.invoke
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke
>         at com.sun.proxy.$Proxy27.getFileInfo
>         at org.apache.hadoop.hdfs.DFSClient.getFileInfo
>         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus
> {noformat}
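> Roughly, each translator method catches the ServiceException thrown by the 
> PB proxy and rethrows the wrapped exception. A minimal sketch of that 
> pattern (simplified; not the exact HDFS source, and the proto/helper names 
> are abbreviated):
> {noformat}
> // Sketch of the unwrap pattern used by the PB translators (simplified).
> public HdfsFileStatus getFileInfo(String src) throws IOException {
>   GetFileInfoRequestProto req =
>       GetFileInfoRequestProto.newBuilder().setSrc(src).build();
>   try {
>     GetFileInfoResponseProto res = rpcProxy.getFileInfo(null, req);
>     return res.hasFs() ? PBHelper.convert(res.getFs()) : null;
>   } catch (ServiceException e) {
>     // Unwrap so callers (e.g. RetryInvocationHandler in the HA case) see
>     // the underlying RemoteException rather than the ServiceException.
>     throw ProtobufHelper.getRemoteException(e);
>   }
> }
> {noformat}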
> However, for the non-HA case, the call stack is DFSClient -> 
> ClientNamenodeProtocolTranslatorPB -> RetryInvocationHandler -> 
> ProtobufRpcEngine. RetryInvocationHandler sees the ServiceException rather 
> than the wrapped RemoteException, so the call can't be retried properly.
> {noformat}
> at org.apache.hadoop.ipc.Client.call
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke
> at com.sun.proxy.$Proxy9.getListing
> at sun.reflect.NativeMethodAccessorImpl.invoke0
> at sun.reflect.NativeMethodAccessorImpl.invoke
> at sun.reflect.DelegatingMethodAccessorImpl.invoke
> at java.lang.reflect.Method.invoke
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke
> at com.sun.proxy.$Proxy9.getListing
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing
> at org.apache.hadoop.hdfs.DFSClient.listPaths
> {noformat}
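> The wrap order above comes from how the non-HA proxy is constructed. A 
> minimal sketch of the problematic order (simplified from NameNodeProxies; 
> nnAddr, conf and retryPolicy are assumed to be in scope, and the 
> RetryProxy.create(Class, T, RetryPolicy) overload is used for brevity):
> {noformat}
> // Simplified sketch of the current non-HA construction order (illustrative).
> ClientNamenodeProtocolPB rawProxy = RPC.getProxy(
>     ClientNamenodeProtocolPB.class,
>     RPC.getProtocolVersion(ClientNamenodeProtocolPB.class),
>     nnAddr, conf);
>
> // RetryProxy wraps the raw PB proxy *inside* the translator, so
> // RetryInvocationHandler sees the ServiceException thrown by
> // ProtobufRpcEngine before the translator can unwrap it.
> ClientNamenodeProtocolPB retryingPbProxy = (ClientNamenodeProtocolPB)
>     RetryProxy.create(ClientNamenodeProtocolPB.class, rawProxy, retryPolicy);
> return new ClientNamenodeProtocolTranslatorPB(retryingPbProxy);
> {noformat}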
> Perhaps we can fix it by having the NN proxy creation wrap 
> RetryInvocationHandler around ClientNamenodeProtocolTranslatorPB and the 
> other PB translators, instead of the current wrap order.
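> In sketch form, the proposed order would look something like this (again 
> simplified and illustrative; the actual change would touch NameNodeProxies 
> and the other translators listed above):
> {noformat}
> // Simplified sketch of the proposed order (illustrative only).
> // Build the translator first, so ServiceException is unwrapped into
> // RemoteException *before* the retry layer sees it, then wrap the
> // translator with RetryProxy at the ClientProtocol level.
> ClientProtocol translator = new ClientNamenodeProtocolTranslatorPB(rawProxy);
> return (ClientProtocol) RetryProxy.create(
>     ClientProtocol.class, translator, retryPolicy);
> {noformat}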


