[ https://issues.apache.org/jira/browse/HDFS-4257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14116586#comment-14116586 ]

Hadoop QA commented on HDFS-4257:
---------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12665583/h4257_20140831.patch
  against trunk revision 258c7d0.

    {color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

    {color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test file.

    {color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

    {color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

    {color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

    {color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

                  org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
                  org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

    {color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7863//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7863//console

This message is automatically generated.

> The ReplaceDatanodeOnFailure policies could have a forgiving option
> -------------------------------------------------------------------
>
>                 Key: HDFS-4257
>                 URL: https://issues.apache.org/jira/browse/HDFS-4257
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: hdfs-client
>    Affects Versions: 2.0.2-alpha
>            Reporter: Harsh J
>            Assignee: Tsz Wo Nicholas Sze
>            Priority: Minor
>         Attachments: h4257_20140325.patch, h4257_20140325b.patch, 
> h4257_20140326.patch, h4257_20140819.patch, h4257_20140831.patch
>
>
> A similar question has previously come up in HDFS-3091 and friends, but the 
> essential problem is: "Why can't I write to my cluster of 3 nodes when only 
> 1 node is available at a point in time?"
> The policies cover 4 options, with {{Default}} being the default (a hedged 
> configuration sketch follows the list):
> {{Disable}} -> Disables the whole replacement mechanism by throwing an error 
> (at the server), or acts as {{Never}} at the client.
> {{Never}} -> Never replaces a DN upon pipeline failures (not desirable in 
> many cases).
> {{Default}} -> Replaces based on a few conditions, but its minimum never 
> touches 1. We always fail if only one DN remains and no others can be added.
> {{Always}} -> Always replaces; fails if a replacement can't be found.
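>
> For context, the policy selection is driven by client-side configuration. 
> Here is a minimal sketch of how a client might pick one of the existing 
> policies, using the standard 
> {{dfs.client.block.write.replace-datanode-on-failure.*}} keys (the NameNode 
> URI is a placeholder for illustration):
> {code:java}
> import java.net.URI;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
>
> public class ReplacePolicyExample {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     // false turns the replacement mechanism off entirely (the Disable policy).
>     conf.setBoolean(
>         "dfs.client.block.write.replace-datanode-on-failure.enable", true);
>     // When enabled, pick one of NEVER, DEFAULT or ALWAYS.
>     conf.set(
>         "dfs.client.block.write.replace-datanode-on-failure.policy", "DEFAULT");
>     // Placeholder URI; point this at a real NameNode.
>     FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
>     System.out.println("policy-aware client for " + fs.getUri());
>   }
> }
> {code}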
> Would it not make sense to have an option similar to Always/Default where, 
> despite _trying_, we do not fail the write if it isn't possible to keep > 1 
> DN in the pipeline? I think that is what the former write behavior was, and 
> it fits with the minimum allowed replication factor.
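>
> A minimal sketch of what this forgiving variant could look like: a 
> best-effort switch layered on top of the existing policy, so a failed 
> replacement is logged and the write continues instead of aborting. The 
> {{best-effort}} property name below is an assumption for illustration; the 
> attached patch may spell it differently:
> {code:java}
> import org.apache.hadoop.conf.Configuration;
>
> Configuration conf = new Configuration();
> // Keep trying to replace failed DNs per the usual policy...
> conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "DEFAULT");
> // ...but (hypothetically) do not fail the write when no replacement exists.
> conf.setBoolean(
>     "dfs.client.block.write.replace-datanode-on-failure.best-effort", true);
> {code}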
> Why is it grossly wrong to accept a write from a client for a block with 
> just 1 remaining replica in the pipeline (the minimum of 1 grows with the 
> replication factor demanded by the write), when re-replication is taken care 
> of immediately afterwards? How often have we seen missing blocks arise from 
> allowing this combined with a large rack failure or similar?



--
This message was sent by Atlassian JIRA
(v6.2#6252)
