[ https://issues.apache.org/jira/browse/HDFS-3368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13280426#comment-13280426 ]

Aaron T. Myers commented on HDFS-3368:
--------------------------------------

I think the problem is not that Hadoop has _too many_ configuration options 
per se, but rather that it has _too many that one needs to change_. The key 
difference is that we just need to give this option a good default value. 
Leaving it undocumented is fine and perhaps even desirable, since you're 
right: most people will never need or want to change it. But if someone 
(maybe me or you, one day) does find a need to change it, they'll be very 
happy they can do so without either recompiling or waiting for the next 
Hadoop release. I see no benefit to hard-coding it as opposed to making it 
an undocumented config parameter, and some potential benefit to having it 
configurable.
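The pattern described here — a sensible hard default that an operator can still override without recompiling — can be sketched in plain Java. This is only an illustration: the key name `example.tunable.threshold` is hypothetical, and `java.util.Properties` stands in for Hadoop's `Configuration` class.

```java
import java.util.Properties;

public class TunableDefaults {
    // Hypothetical key; in Hadoop this would be read from a Configuration
    // object and deliberately left out of the *-default.xml documentation.
    static final String THRESHOLD_KEY = "example.tunable.threshold";
    static final int THRESHOLD_DEFAULT = 100; // the "good default value"

    // Falls back to the compiled-in default when the key is unset, so the
    // option stays invisible unless someone actually needs to change it.
    static int getThreshold(Properties conf) {
        String v = conf.getProperty(THRESHOLD_KEY);
        return (v == null) ? THRESHOLD_DEFAULT : Integer.parseInt(v);
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println(getThreshold(conf)); // default applies

        // Operator override: no recompile, no new release needed.
        conf.setProperty(THRESHOLD_KEY, "250");
        System.out.println(getThreshold(conf));
    }
}
```

The point is that the cost of keeping the knob configurable is near zero, while hard-coding the constant forces a recompile on whoever eventually needs a different value.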
                
> Missing blocks due to bad DataNodes coming up and down.
> -------------------------------------------------------
>
>                 Key: HDFS-3368
>                 URL: https://issues.apache.org/jira/browse/HDFS-3368
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>    Affects Versions: 0.22.0, 1.0.0, 2.0.0, 3.0.0
>            Reporter: Konstantin Shvachko
>            Assignee: Konstantin Shvachko
>         Attachments: blockDeletePolicy-0.22.patch, 
> blockDeletePolicy-trunk.patch, blockDeletePolicy.patch
>
>
> All replicas of a block can be removed if bad DataNodes come up and down 
> during a cluster restart, resulting in data loss.

--
This message is automatically generated by JIRA.
