[
https://issues.apache.org/jira/browse/HDFS-4257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947595#comment-13947595
]
Hadoop QA commented on HDFS-4257:
---------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12636854/h4257_20140325.patch
against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:green}+1 tests included{color}. The patch appears to include 1 new
or modified test file.
{color:green}+1 javac{color}. The applied patch does not increase the
total number of javac compiler warnings.
{color:red}-1 javadoc{color}. The javadoc tool appears to have generated 1
warning message.
See
https://builds.apache.org/job/PreCommit-HDFS-Build/6508//artifact/trunk/patchprocess/diffJavadocWarnings.txt
for details.
{color:green}+1 eclipse:eclipse{color}. The patch built with
eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new
Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase
the total number of release audit warnings.
{color:red}-1 core tests{color}. The patch failed these unit tests in
hadoop-hdfs-project/hadoop-hdfs:
org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.
Test results:
https://builds.apache.org/job/PreCommit-HDFS-Build/6508//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6508//console
This message is automatically generated.
> The ReplaceDatanodeOnFailure policies could have a forgiving option
> -------------------------------------------------------------------
>
> Key: HDFS-4257
> URL: https://issues.apache.org/jira/browse/HDFS-4257
> Project: Hadoop HDFS
> Issue Type: New Feature
> Components: hdfs-client
> Affects Versions: 2.0.2-alpha
> Reporter: Harsh J
> Assignee: Tsz Wo Nicholas Sze
> Priority: Minor
> Attachments: h4257_20140325.patch
>
>
> A similar question has previously come up in HDFS-3091 and friends, but the
> essential problem is: "Why can't I write to my cluster of 3 nodes when only
> 1 node is available at a point in time?"
> The policies cover 4 options, with {{Default}} being the default:
> {{Disable}} -> Disables the replacement feature entirely by throwing an
> error (at the server), or acts as {{Never}} at the client.
> {{Never}} -> Never replaces a DN upon pipeline failure (not desirable in
> many cases).
> {{Default}} -> Replaces based on a few conditions, but the minimum never
> goes down to 1. The write always fails if only one DN remains and no others
> can be added.
> {{Always}} -> Always replaces; the write fails if no replacement can be found.
> Would it not make sense to have an option similar to Always/Default where,
> despite _trying_, the write does not fail if it isn't possible to keep > 1 DN
> in the pipeline? I believe that was the former write behavior, and it fit with
> the minimum allowed replication factor (a configuration sketch follows after
> this description).
> Why is it grossly wrong to accept a write from a client for a block with just
> 1 remaining replica in the pipeline (the minimum of 1 grows with the
> replication factor demanded by the write), when re-replication is taken care
> of immediately afterwards? How often have we seen missing blocks arise from
> allowing this combined with a large rack failure or the like?
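For context, here is a minimal sketch (not from the attached patch) of how a client selects among these policies today through the standard HDFS client configuration keys. The "best-effort" key below is an assumption used to illustrate the forgiving behavior proposed in this issue, not something this comment confirms:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplacePolicyExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // "Disable" corresponds to turning the replacement feature off entirely.
    conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", true);

    // Pick one of NEVER, DEFAULT or ALWAYS.
    conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "DEFAULT");

    // Hypothetical switch for the forgiving option discussed above: keep
    // writing even when no replacement datanode can be added (assumed key).
    conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.best-effort", true);

    // Any write through this FileSystem now uses the chosen policy for
    // pipeline recovery on datanode failure.
    try (FileSystem fs = FileSystem.get(conf)) {
      fs.create(new Path("/tmp/replace-policy-example")).close();
    }
  }
}
{code}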
--
This message was sent by Atlassian JIRA
(v6.2#6252)