[ https://issues.apache.org/jira/browse/HDFS-1094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12891837#action_12891837 ]
Konstantin Shvachko commented on HDFS-1094:
-------------------------------------------

> We can ignore rack failure, which is predominantly an availability problem,
> not a data loss problem.

We should NOT ignore rack failures. If this simplifies the probabilistic models, that is fine, but in practice rack failures must be accounted for. If data becomes unavailable for hours as a result, which is typical, clients will start complaining. It would also be a degradation from the current policy.

> Intelligent block placement policy to decrease probability of block loss
> ------------------------------------------------------------------------
>
>                 Key: HDFS-1094
>                 URL: https://issues.apache.org/jira/browse/HDFS-1094
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>            Reporter: dhruba borthakur
>            Assignee: Rodrigo Schmidt
>         Attachments: calculate_probs.py, failure_rate.py, prob.pdf, prob.pdf
>
> The current HDFS implementation specifies that the first replica is local and
> the other two replicas are on two random nodes of a random remote rack. This
> means that if any three datanodes die together, there is a non-trivial
> probability of losing at least one block in the cluster. This JIRA is to
> discuss whether there is a better algorithm that can lower the probability of
> losing a block.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
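The "non-trivial probability" in the issue description can be made concrete with a back-of-the-envelope model. This is a rough sketch, not the analysis from the attached scripts: it assumes each block's three replicas land on an independent, uniformly random 3-node subset of the cluster (ignoring the rack-aware placement HDFS actually uses), and the cluster size and block count below are made-up illustrative parameters.

```python
from math import comb

def prob_block_loss(num_nodes, num_blocks, replication=3):
    """Probability that a simultaneous failure of `replication` random
    datanodes destroys all replicas of at least one block.

    Simplifying assumption: every block's replica set is an independent,
    uniformly random subset of nodes (not the real rack-aware policy).
    """
    # Chance that one particular block's replica set coincides exactly
    # with the failed node set.
    p_single = 1.0 / comb(num_nodes, replication)
    # Complement over all blocks, treated as independent.
    return 1.0 - (1.0 - p_single) ** num_blocks

# Hypothetical cluster: 1,000 datanodes, 10 million blocks.
print(prob_block_loss(1000, 10_000_000))  # ≈ 0.058
```

Even under fully random placement, a single simultaneous 3-node failure in this hypothetical cluster loses some block with probability around 6%, which is why the placement policy matters.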