[ 
https://issues.apache.org/jira/browse/HDFS-11031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15605066#comment-15605066
 ] 

Brahma Reddy Battula commented on HDFS-11031:
---------------------------------------------

Thanks for updating the patch. Overall LGTM.

Minor nits:

1) Do we need the following static import?
{{import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_DATA_DIR_KEY;}} 

Can't it simply be referenced as {{DFSConfigKeys.DFS_DATANODE_DATA_DIR_KEY}}?
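A minimal sketch of the qualified-reference style, using a hypothetical stand-in class (the real constant lives in {{DFSConfigKeys}} and resolves to {{"dfs.datanode.data.dir"}}):

```java
// Stand-in for org.apache.hadoop.hdfs.DFSConfigKeys; the real class
// defines DFS_DATANODE_DATA_DIR_KEY = "dfs.datanode.data.dir".
class ConfigKeys {
    static final String DFS_DATANODE_DATA_DIR_KEY = "dfs.datanode.data.dir";
}

public class ImportStyles {
    public static void main(String[] args) {
        // Qualified reference: no static import needed, and the owning
        // class is visible right at the use site.
        String key = ConfigKeys.DFS_DATANODE_DATA_DIR_KEY;
        System.out.println(key);
    }
}
```

The qualified form keeps the constant's origin obvious to readers of the test, at the cost of a slightly longer line.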

2) The following variable could be plural, since we are configuring two dirs:

{code}final String newDir = badDataDir.toString() + "," + 
data5.toString();{code}
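A sketch of the plural rename, with the join pulled into a small helper; the directory paths here are hypothetical stand-ins for {{badDataDir}} and {{data5}}:

```java
// Hypothetical sketch: "newDirs" (plural) reflects that two data
// directories go into the comma-separated dfs.datanode.data.dir value.
public class DataDirNaming {
    public static String joinDataDirs(String... dirs) {
        return String.join(",", dirs);
    }

    public static void main(String[] args) {
        String newDirs = joinDataDirs("/data/bad", "/data/5");
        System.out.println(newDirs);
    }
}
```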

3) We are starting only one DataNode here, right?
{code}// bring up one more DataNode
  cluster.startDataNodes(newConf, 1, false, null, null);{code}

4) Can we improve the following javadoc? For example: "tolerated - true if one volume failure is allowed":
{code}* @param tolerated allowed one volume failures if true else false{code}

> Add additional unit test for DataNode startup behavior when volumes fail
> ------------------------------------------------------------------------
>
>                 Key: HDFS-11031
>                 URL: https://issues.apache.org/jira/browse/HDFS-11031
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode, test
>            Reporter: Mingliang Liu
>            Assignee: Mingliang Liu
>         Attachments: HDFS-11031-branch-2.001.patch, 
> HDFS-11031-branch-2.002.patch, HDFS-11031.000.patch, HDFS-11031.001.patch, 
> HDFS-11031.002.patch
>
>
> There are several cases to add in {{TestDataNodeVolumeFailure}}:
> - DataNode should not start in case of volumes failure
> - DataNode should not start in case of lacking data dir read/write permission
> - ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
