[
https://issues.apache.org/jira/browse/HDFS-6478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14063283#comment-14063283
]
Hadoop QA commented on HDFS-6478:
---------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12655993/HDFS-6478-3.patch
against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:green}+1 tests included{color}. The patch appears to include 2 new
or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the
total number of javac compiler warnings.
{color:green}+1 javadoc{color}. There were no new javadoc warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with
eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new
Findbugs (version 2.0.3) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase
the total number of release audit warnings.
{color:red}-1 core tests{color}. The patch failed these unit tests in
hadoop-hdfs-project/hadoop-hdfs:
org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.
Test results:
https://builds.apache.org/job/PreCommit-HDFS-Build/7356//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7356//console
This message is automatically generated.
> RemoteException can't be retried properly for non-HA scenario
> -------------------------------------------------------------
>
> Key: HDFS-6478
> URL: https://issues.apache.org/jira/browse/HDFS-6478
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Ming Ma
> Assignee: Ming Ma
> Attachments: HDFS-6478-2.patch, HDFS-6478-3.patch, HDFS-6478.patch
>
>
> For the HA case, the call stack is DFSClient -> RetryInvocationHandler ->
> ClientNamenodeProtocolTranslatorPB -> ProtobufRpcEngine. ProtobufRpcEngine
> throws ServiceException and expects the caller to unwrap it;
> ClientNamenodeProtocolTranslatorPB is the component that takes care of that.
> {noformat}
> at org.apache.hadoop.ipc.Client.call
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke
> at com.sun.proxy.$Proxy26.getFileInfo
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo
> at sun.reflect.GeneratedMethodAccessor24.invoke
> at sun.reflect.DelegatingMethodAccessorImpl.invoke
> at java.lang.reflect.Method.invoke
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke
> at com.sun.proxy.$Proxy27.getFileInfo
> at org.apache.hadoop.hdfs.DFSClient.getFileInfo
> at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus
> {noformat}
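> In that HA stack the translator unwraps the ServiceException before RetryInvocationHandler ever sees the failure. A simplified sketch of that unwrap pattern, abbreviated from ClientNamenodeProtocolTranslatorPB#getFileInfo (not part of the patch):
> {noformat}
> // Simplified sketch of the existing unwrap pattern in
> // ClientNamenodeProtocolTranslatorPB (abbreviated, not the patch).
> @Override
> public HdfsFileStatus getFileInfo(String src) throws IOException {
>   GetFileInfoRequestProto req =
>       GetFileInfoRequestProto.newBuilder().setSrc(src).build();
>   try {
>     GetFileInfoResponseProto res = rpcProxy.getFileInfo(null, req);
>     return res.hasFs() ? PBHelper.convert(res.getFs()) : null;
>   } catch (ServiceException e) {
>     // Unwrap to the underlying IOException/RemoteException so the caller
>     // (RetryInvocationHandler in the HA stack) can classify and retry it.
>     throw ProtobufHelper.getRemoteException(e);
>   }
> }
> {noformat}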
> However, for the non-HA case, the call stack is DFSClient ->
> ClientNamenodeProtocolTranslatorPB -> RetryInvocationHandler ->
> ProtobufRpcEngine. RetryInvocationHandler receives the raw ServiceException
> instead of the unwrapped RemoteException, so the call can't be retried properly.
> {noformat}
> at org.apache.hadoop.ipc.Client.call
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke
> at com.sun.proxy.$Proxy9.getListing
> at sun.reflect.NativeMethodAccessorImpl.invoke0
> at sun.reflect.NativeMethodAccessorImpl.invoke
> at sun.reflect.DelegatingMethodAccessorImpl.invoke
> at java.lang.reflect.Method.invoke
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke
> at com.sun.proxy.$Proxy9.getListing
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing
> at org.apache.hadoop.hdfs.DFSClient.listPaths
> {noformat}
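> The retry handler decides whether to retry by classifying the exception type, and in this stack it gets the ServiceException before the translator can unwrap it. An illustrative snippet (the policy and exception choice are examples only, not from the patch) of why a type-keyed policy then falls through to failing:
> {noformat}
> // Illustrative only: a retry policy keyed on exception types cannot match a
> // cause that is still wrapped inside ServiceException.
> import java.util.HashMap;
> import java.util.Map;
> import java.util.concurrent.TimeUnit;
>
> import org.apache.hadoop.hdfs.server.namenode.SafeModeException;
> import org.apache.hadoop.io.retry.RetryPolicies;
> import org.apache.hadoop.io.retry.RetryPolicy;
>
> public class RetryClassificationSketch {
>   public static RetryPolicy buildPolicy() {
>     Map<Class<? extends Exception>, RetryPolicy> exceptionToPolicy =
>         new HashMap<Class<? extends Exception>, RetryPolicy>();
>     exceptionToPolicy.put(SafeModeException.class,
>         RetryPolicies.retryUpToMaximumCountWithFixedSleep(5, 1, TimeUnit.SECONDS));
>     // With the non-HA wrap order, the proxy throws ServiceException rather
>     // than a RemoteException naming SafeModeException, so the map never
>     // matches and the default TRY_ONCE_THEN_FAIL applies: no retry.
>     return RetryPolicies.retryByRemoteException(
>         RetryPolicies.TRY_ONCE_THEN_FAIL, exceptionToPolicy);
>   }
> }
> {noformat}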
> Perhaps we can fix it by having NN wrap RetryInvocationHandler around
> ClientNamenodeProtocolTranslatorPB and the other PB translators, instead of
> the current wrap order.
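> A minimal sketch of that proposed ordering for the non-HA proxy (the raw-proxy creation and defaultPolicy are placeholders here, not the actual patch):
> {noformat}
> // Proposed wrap order (sketch): put the translator inside the retry proxy so
> // ServiceException is unwrapped before RetryInvocationHandler classifies it.
> ClientNamenodeProtocolPB rawProxy = ...;  // plain ProtobufRpcEngine proxy
> ClientProtocol translator = new ClientNamenodeProtocolTranslatorPB(rawProxy);
> ClientProtocol retryingClient = (ClientProtocol) RetryProxy.create(
>     ClientProtocol.class, translator, defaultPolicy);
> {noformat}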
--
This message was sent by Atlassian JIRA
(v6.2#6252)