[ https://issues.apache.org/jira/browse/HDFS-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13420775#comment-13420775 ]
stack commented on HDFS-3702:
-----------------------------

One option might be to put in place a block placement policy that writes the first replica locally for all files except those with a WAL-looking file path; i.e. look at the file path and make the determination based on it (Dhruba suggests this over in HDFS-1451, which asks that we be able to set policy per file).

> Add an option for NOT writing the blocks locally if there is a datanode on
> the same box as the client
> -----------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-3702
>                 URL: https://issues.apache.org/jira/browse/HDFS-3702
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs client
>    Affects Versions: 1.0.3, 2.0.0-alpha
>            Reporter: nkeywal
>            Priority: Minor
>
> This is useful for Write-Ahead-Logs: these files are written for recovery
> only, and are not read when there are no failures.
> Taking HBase as an example, these files will be read only if the process that
> wrote them (the 'HBase regionserver') dies. This will likely come from a
> hardware failure, hence the corresponding datanode will be dead as well. So
> we're writing 3 replicas, but in reality only 2 of them are really useful.
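The path-based determination stack describes could be sketched as a small matcher that a placement policy might consult before choosing a local first replica. This is a minimal illustration, not any real HDFS or HBase API: the class name, method name, and the specific path patterns (HBase has conventionally kept WALs under a ".logs" or "WALs" directory, depending on release) are all assumptions.

```java
// Hypothetical sketch of the path-based check stack's comment suggests:
// decide from the file path alone whether a file "looks like" a WAL.
// Class/method names and path patterns are illustrative assumptions,
// not part of the HDFS BlockPlacementPolicy API.
public class WalPathMatcher {
    // Returns true if the path appears to be a write-ahead log; a custom
    // placement policy could then skip the write-local-first behavior
    // and pick a remote datanode for the first replica instead.
    public static boolean isWalLikePath(String path) {
        return path.contains("/WALs/") || path.contains("/.logs/");
    }

    public static void main(String[] args) {
        // A file under an HBase-style WAL directory would skip local placement...
        System.out.println(isWalLikePath("/hbase/WALs/rs1,60020,1343/log.1343"));
        // ...while an ordinary data file keeps the default local first replica.
        System.out.println(isWalLikePath("/user/hive/warehouse/t/part-00000"));
    }
}
```

A real implementation would live in a subclass of HDFS's pluggable block placement policy rather than a standalone class; the point here is only that the decision can be driven entirely by inspecting the file path, as the comment proposes.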