[
https://issues.apache.org/jira/browse/HDFS-5399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13891630#comment-13891630
]
Hadoop QA commented on HDFS-5399:
---------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12627004/HDFS-5399.001.patch
against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:green}+1 tests included{color}. The patch appears to include 1 new
or modified test file.
{color:green}+1 javac{color}. The applied patch does not increase the
total number of javac compiler warnings.
{color:red}-1 javadoc{color}. The javadoc tool appears to have generated 2
warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with
eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new
Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase
the total number of release audit warnings.
{color:red}-1 core tests{color}. The patch failed these unit tests in
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:
org.apache.hadoop.io.retry.TestFailoverProxy
org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer
The following test timeouts occurred in
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.
Test results:
https://builds.apache.org/job/PreCommit-HDFS-Build/6030//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6030//console
This message is automatically generated.
> Revisit SafeModeException and corresponding retry policies
> ----------------------------------------------------------
>
> Key: HDFS-5399
> URL: https://issues.apache.org/jira/browse/HDFS-5399
> Project: Hadoop HDFS
> Issue Type: Improvement
> Affects Versions: 2.3.0
> Reporter: Jing Zhao
> Assignee: Jing Zhao
> Attachments: HDFS-5399.000.patch, HDFS-5399.001.patch,
> HDFS-5399.003.patch, hdfs-5399.002.patch
>
>
> Currently for NN SafeMode, we have the following corresponding retry policies:
> # In a non-HA setup, for a certain API call ("create"), the client will
> retry if the NN is in SafeMode. Specifically, the client side's RPC adopts
> the MultipleLinearRandomRetry policy for a wrapped SafeModeException when
> retry is enabled.
> # In an HA setup, the client will retry if the NN is Active and in SafeMode.
> Specifically, the SafeModeException is wrapped as a RetriableException on
> the server side. The client side's RPC uses the FailoverOnNetworkExceptionRetry
> policy, which recognizes RetriableException (see HDFS-5291).
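The two retry paths above boil down to a single client-side decision. Below is a minimal, self-contained sketch of what recognizing RetriableException amounts to; the classes are stand-ins that only mirror the shape of the real org.apache.hadoop.ipc types, not the actual Hadoop implementation:

```java
// A hedged, self-contained sketch; these stand-in classes only mirror the
// shape of the org.apache.hadoop.ipc types and are not the real Hadoop code.
public class RetryDecisionSketch {

    /** Stand-in for the NameNode's SafeModeException. */
    public static class SafeModeException extends Exception {
        public SafeModeException(String msg) { super(msg); }
    }

    /** Stand-in for RetriableException, the server-side retriable wrapper. */
    public static class RetriableException extends Exception {
        public RetriableException(Throwable cause) { super(cause); }
    }

    /**
     * Mirrors the decision a policy like FailoverOnNetworkExceptionRetry
     * makes after HDFS-5291: retry when the server signalled a retriable
     * condition via the wrapper; otherwise fail the call immediately.
     */
    public static boolean shouldRetry(Exception fromServer) {
        return fromServer instanceof RetriableException;
    }

    public static void main(String[] args) {
        Exception retriable =
            new RetriableException(new SafeModeException("NN starting up"));
        Exception fatal = new SafeModeException("safe mode (no wrapper)");
        System.out.println(shouldRetry(retriable)); // true  -> client retries
        System.out.println(shouldRetry(fatal));     // false -> client fails fast
    }
}
```

The point of the wrapper is that the client never needs to know about SafeModeException at all; it only pattern-matches on the generic retriable signal.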
> There are several possible issues in the current implementation:
> # The NN SafeMode can be a "Manual" SafeMode (i.e., started by an administrator
> through the CLI), and clients may not want to retry on this type of SafeMode.
> # Clients may want to retry on other API calls in a non-HA setup.
> # We should have a single generic strategy to address the mapping between
> SafeMode and the retry policy for both HA and non-HA setups. A possible
> straightforward solution is to always wrap the SafeModeException in a
> RetriableException to indicate that the client should retry.
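The proposal in point 3, combined with the manual-SafeMode concern in point 1, could look roughly like the following server-side check. The class names and the manual flag are illustrative assumptions, not actual NameNode code:

```java
// Illustrative sketch only: the names and the "manual" flag are assumptions,
// not the actual NameNode implementation.
public class SafeModeWrapSketch {

    /** Stand-in for the NameNode's SafeModeException. */
    public static class SafeModeException extends Exception {
        public SafeModeException(String msg) { super(msg); }
    }

    /** Stand-in for the generic retriable wrapper the client recognizes. */
    public static class RetriableException extends Exception {
        public RetriableException(Throwable cause) { super(cause); }
    }

    /**
     * A check an RPC handler might run before mutating the namespace.
     * Automatic (startup) SafeMode is wrapped in RetriableException, so any
     * retry policy that recognizes the wrapper will retry, in both HA and
     * non-HA setups. Manually entered SafeMode is thrown unwrapped so
     * clients fail fast instead of spinning against an administrator-held
     * SafeMode.
     */
    public static void checkSafeMode(boolean inSafeMode, boolean manual)
            throws SafeModeException, RetriableException {
        if (!inSafeMode) {
            return; // normal operation
        }
        SafeModeException sme =
            new SafeModeException("Name node is in safe mode");
        if (manual) {
            throw sme;                      // no retry: admin must leave SafeMode
        }
        throw new RetriableException(sme);  // client retries per its RPC policy
    }
}
```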
--
This message was sent by Atlassian JIRA
(v6.1.5#6160)