[ 
https://issues.apache.org/jira/browse/HDFS-12716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664807#comment-16664807
 ] 

Adam Antal commented on HDFS-12716:
-----------------------------------

Hi,

Great patch, thanks for the work guys.

Looking over the new feature I came across a few things that weren't clear to 
me. I got an error like "Value configured is either greater than -1 or >= ..." 
when I set the config dfs.datanode.failed.volumes.tolerated to -2, which wasn't 
the case (the value was configured *lower* than -1, not greater).

When I then checked hdfs-default.xml to find out what I did wrong with the 
config, I couldn't understand the phrase "The range of the value is -1 now, -1 
represents the minimum of volume valids is 1." From that I assumed -1 was the 
minimum of the allowed range.

This is what I see in the code in DataNode.java#startDataNode:
{code:java}
  int volFailuresTolerated = dnConf.getVolFailuresTolerated();
  int volsConfigured = dnConf.getVolsConfigured();
  if (volFailuresTolerated < MAX_VOLUME_FAILURE_TOLERATED_LIMIT
      || volFailuresTolerated >= volsConfigured) {
    throw new DiskErrorException("Invalid value configured for "
        + "dfs.datanode.failed.volumes.tolerated - " + volFailuresTolerated
        + ". Value configured is either greater than -1 or >= "
        + "to the number of configured volumes (" + volsConfigured + ").");
  }
{code}
Here the error message seems a bit misleading. The exception is thrown when the 
value configured for dfs.datanode.failed.volumes.tolerated is *less* than -1 
(or >= the number of configured volumes), so the message should rather say 
something like "Value configured is either _less_ than -1 or >= ...".
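To illustrate, here is a tiny standalone sketch (not the actual HDFS class; it only mirrors the condition quoted above, assuming MAX_VOLUME_FAILURE_TOLERATED_LIMIT is -1) showing which values trip the check:

{code:java}
public class VolumeToleranceCheck {
  // Mirrors DataNode.MAX_VOLUME_FAILURE_TOLERATED_LIMIT (assumed to be -1).
  static final int MAX_VOLUME_FAILURE_TOLERATED_LIMIT = -1;

  // True when the configured value would be rejected by the check above.
  static boolean isInvalid(int volFailuresTolerated, int volsConfigured) {
    return volFailuresTolerated < MAX_VOLUME_FAILURE_TOLERATED_LIMIT
        || volFailuresTolerated >= volsConfigured;
  }

  public static void main(String[] args) {
    System.out.println(isInvalid(-2, 4)); // true: -2 is *less* than -1
    System.out.println(isInvalid(-1, 4)); // false: -1 is the special value
    System.out.println(isInvalid(4, 4));  // true: >= number of configured volumes
  }
}
{code}

So a value of -2 is rejected because it is less than -1, which is the opposite of what the current message says.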

Also the general error message in DataNode.java
{code:java}
public static final String MAX_VOLUME_FAILURES_TOLERATED_MSG =
    "should be greater than -1";
{code}
Shouldn't it then be "should be greater than _or equal to_ -1" to be precise, 
as -1 is a valid choice?
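In other words, the constant could read something like this (just a suggested wording, not a patch):

{code:java}
public class Msgs {
  // Suggested wording: -1 itself is accepted, so "or equal to" is more precise.
  public static final String MAX_VOLUME_FAILURES_TOLERATED_MSG =
      "should be greater than or equal to -1";
}
{code}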

As you worked on the issue, [~RANith], could you please help me out and tell me 
whether I misunderstood something?

>  'dfs.datanode.failed.volumes.tolerated' to support minimum number of volumes 
> to be available
> ---------------------------------------------------------------------------------------------
>
>                 Key: HDFS-12716
>                 URL: https://issues.apache.org/jira/browse/HDFS-12716
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>            Reporter: usharani
>            Assignee: Ranith Sardar
>            Priority: Major
>             Fix For: 2.10.0, 3.2.0, 3.0.4, 3.1.2
>
>         Attachments: HDFS-12716-branch-2.patch, HDFS-12716.002.patch, 
> HDFS-12716.003.patch, HDFS-12716.004.patch, HDFS-12716.005.patch, 
> HDFS-12716.006.patch, HDFS-12716.patch, HDFS-12716_branch-2.patch
>
>
>   Currently 'dfs.datanode.failed.volumes.tolerated' supports number of 
> tolerated failed volumes to be mentioned. This configuration change requires 
> restart of datanode. Since datanode volumes can be changed dynamically, 
> keeping this configuration same for all may not be good idea.
>     Support 'dfs.datanode.failed.volumes.tolerated' to accept special 
> 'negative value 'x' to tolerate failures of upto "n-x"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
