[
https://issues.apache.org/jira/browse/HDFS-12716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16555130#comment-16555130
]
Brahma Reddy Battula commented on HDFS-12716:
---------------------------------------------
Thanks for updating the patch. Apart from the following minor nits, the patch LGTM.
Sorry for the delay in reviewing.
i) Can you change "MAX_VOLUME_FAILURE_LIMIT" to
"MAX_VOLUME_FAILURE_TOLERATED_LIMIT"? [~linyiqun], do you think the same..?
ii) Can you change the following message in *DataNode.java#startDataNode?*
"Value configured is either less than 0 " to "Value configured is either
greater than -1 "
iii) Remove the following in *FsDatasetImpl.java#hasEnoughResource()*
623 // OS behavior
> 'dfs.datanode.failed.volumes.tolerated' to support minimum number of volumes
> to be available
> ---------------------------------------------------------------------------------------------
>
> Key: HDFS-12716
> URL: https://issues.apache.org/jira/browse/HDFS-12716
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Reporter: usharani
> Assignee: Ranith Sardar
> Priority: Major
> Attachments: HDFS-12716.002.patch, HDFS-12716.003.patch,
> HDFS-12716.004.patch, HDFS-12716.005.patch, HDFS-12716.patch
>
>
> Currently 'dfs.datanode.failed.volumes.tolerated' supports specifying the
> number of tolerated failed volumes. Changing this configuration requires a
> restart of the datanode. Since datanode volumes can be changed dynamically,
> keeping this configuration the same for all datanodes may not be a good idea.
> Support 'dfs.datanode.failed.volumes.tolerated' accepting a special
> negative value 'x' to tolerate failures of up to "n-x" volumes.
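The proposed semantics could be sketched roughly as below. This is an illustrative example, not the actual patch code: the class and method names (`VolumeFailureCheck`, `hasEnoughVolumes`) are hypothetical, and it only shows how a negative configured value might translate into a "minimum healthy volumes" check alongside the existing non-negative behavior.

```java
public class VolumeFailureCheck {

  /**
   * Illustrative check of whether a DataNode still has enough healthy
   * volumes, given the total configured volumes, the number of failed
   * volumes, and the configured tolerated value.
   */
  static boolean hasEnoughVolumes(int configuredVolumes, int failedVolumes,
      int tolerated) {
    if (tolerated >= 0) {
      // Existing semantics: tolerate at most 'tolerated' failed volumes.
      return failedVolumes <= tolerated;
    }
    // Proposed semantics: a negative value -x means "require at least x
    // healthy volumes", i.e. tolerate up to n - x failures out of n.
    int minimumRequired = -tolerated;
    int healthy = configuredVolumes - failedVolumes;
    return healthy >= minimumRequired;
  }

  public static void main(String[] args) {
    // 8 volumes, 2 failed, tolerated = -4: 6 healthy >= 4 required.
    System.out.println(hasEnoughVolumes(8, 2, -4)); // true
    // 8 volumes, 5 failed, tolerated = -4: only 3 healthy, below 4.
    System.out.println(hasEnoughVolumes(8, 5, -4)); // false
  }
}
```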
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)