[
https://issues.apache.org/jira/browse/HDFS-1592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13034428#comment-13034428
]
Eli Collins commented on HDFS-1592:
-----------------------------------
Did you test the patch on trunk?
Currently, if a storage directory has failed, the BPOfferService daemon will fail
to start. This patch only throws an exception if there is an insufficient
number of valid volumes; it doesn't do anything to ensure that the BP actually
comes up even when there is a failed storage directory. I.e. it doesn't
implement the expected behavior.
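For reference, the setting this issue is about is dfs.datanode.failed.volumes.tolerated. A minimal hdfs-site.xml sketch (the description is my paraphrase of the intended semantics, not the shipped text):

    <property>
      <name>dfs.datanode.failed.volumes.tolerated</name>
      <value>1</value>
      <!-- Allow the datanode to start, and the BP to come up, as long as
           no more than this many volumes have failed. -->
    </property>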
You should be able to write a test that fails a volume using Mockito (see
examples in other tests), the fault injection framework, or by having the test
manage the data dirs itself (e.g. pass false for the 3rd argument to
startDataNodes) and fail them individually yourself.
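Something along these lines, as a rough sketch only (the class and method names are made up, and the exact MiniDFSCluster signatures vary by branch; the 3rd argument to startDataNodes below is manageDfsDirs):

    import java.io.File;
    import static org.junit.Assert.assertEquals;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileUtil;
    import org.apache.hadoop.hdfs.MiniDFSCluster;
    import org.junit.Test;

    public class TestVolumesToleratedSketch {
      @Test
      public void testStartupWithOneFailedVolume() throws Exception {
        Configuration conf = new Configuration();
        // Tolerate a single failed volume at datanode startup.
        conf.setInt("dfs.datanode.failed.volumes.tolerated", 1);

        // Bring up the NN only; the datanode is added below with
        // test-managed data dirs.
        MiniDFSCluster cluster = new MiniDFSCluster(conf, 0, true, null);
        File dir1 = new File(cluster.getDataDirectory(), "data1");
        File dir2 = new File(cluster.getDataDirectory(), "data2");
        try {
          dir1.mkdirs();
          dir2.mkdirs();
          // Simulate a failed volume by revoking all permissions on it.
          FileUtil.chmod(dir1.getPath(), "000");
          conf.set("dfs.data.dir", dir1.getPath() + "," + dir2.getPath());
          // manageDfsDirs == false: the test owns the data dirs.
          cluster.startDataNodes(conf, 1, false, null, null);
          cluster.waitActive();
          // With one failure tolerated the datanode should still be up.
          assertEquals(1, cluster.getDataNodes().size());
        } finally {
          FileUtil.chmod(dir1.getPath(), "755"); // restore so cleanup works
          cluster.shutdown();
        }
      }
    }

Stubbing the volume check with Mockito would also work, but flipping permissions on a test-managed dir exercises the real startup path.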
> Datanode startup doesn't honor volumes.tolerated
> -------------------------------------------------
>
> Key: HDFS-1592
> URL: https://issues.apache.org/jira/browse/HDFS-1592
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 0.20.204.0
> Reporter: Bharath Mundlapudi
> Assignee: Bharath Mundlapudi
> Fix For: 0.20.204.0, 0.23.0
>
> Attachments: HDFS-1592-1.patch, HDFS-1592-2.patch,
> HDFS-1592-rel20.patch
>
>
> Datanode startup doesn't honor volumes.tolerated for hadoop 20 version.