[ https://issues.apache.org/jira/browse/HDFS-1592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13033885#comment-13033885 ]

Bharath Mundlapudi commented on HDFS-1592:
------------------------------------------

Yes, what you mentioned w.r.t. the use cases is right.

    * A DN will start successfully with a failed volume as long as it is 
configured to tolerate a failed volume
    * A DN will fail to start if more volumes have failed than it is 
configured to tolerate

This is the expected behavior with this patch. 
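
For reference, a minimal sketch of wiring up the tolerated-failures setting 
programmatically. It assumes the dfs.datanode.failed.volumes.tolerated key 
(check hdfs-default.xml for the exact name in your release); the default of 0 
means any volume failure is fatal at startup.

import org.apache.hadoop.conf.Configuration;

public class ToleratedVolumesConfig {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Let the DN start with up to one failed volume (default is 0).
        conf.setInt("dfs.datanode.failed.volumes.tolerated", 1);
        System.out.println("tolerated volume failures: "
            + conf.getInt("dfs.datanode.failed.volumes.tolerated", 0));
    }
}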

I had some difficulty failing the disks in the unit tests. If we make a 
storage directory non-writable, the datanode resets the directory permissions 
when it starts, so the test always succeeds. 
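
To illustrate the failure mode, a hypothetical helper (not part of the patch) 
that tries to simulate a failed volume through permissions:

import java.io.File;

public class FailedVolumeSimulator {

    static void failVolume(File dataDir) {
        // Revoke write access so the DN should treat this directory as failed.
        if (!dataDir.setWritable(false, false)) {
            throw new IllegalStateException("could not chmod " + dataDir);
        }
    }

    static void restoreVolume(File dataDir) {
        // The problem described above: the DN effectively does this itself by
        // resetting the permissions of its storage directories on startup, so
        // the simulated failure disappears and the unit test always passes.
        dataDir.setWritable(true, false);
    }
}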

These tests were done outside of the unit tests, using umount -l etc. All of 
the cases mentioned above were tested manually. 




> Datanode startup doesn't honor volumes.tolerated 
> -------------------------------------------------
>
>                 Key: HDFS-1592
>                 URL: https://issues.apache.org/jira/browse/HDFS-1592
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 0.20.204.0
>            Reporter: Bharath Mundlapudi
>            Assignee: Bharath Mundlapudi
>             Fix For: 0.20.204.0, 0.23.0
>
>         Attachments: HDFS-1592-1.patch, HDFS-1592-2.patch, 
> HDFS-1592-rel20.patch
>
>
> Datanode startup doesn't honor volumes.tolerated in the Hadoop 0.20 release.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
