[ https://issues.apache.org/jira/browse/HDFS-1539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12973068#action_12973068 ]
Todd Lipcon commented on HDFS-1539:
-----------------------------------

dhruba: do you plan to run this on your warehouse cluster, or just on the scribe tiers? If so, it would be very interesting to find out whether it affects throughput. If there is no noticeable hit, I would argue for making it the default.

> prevent data loss when a cluster suffers a power loss
> -----------------------------------------------------
>
>                 Key: HDFS-1539
>                 URL: https://issues.apache.org/jira/browse/HDFS-1539
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: data-node, hdfs client, name-node
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: syncOnClose1.txt
>
>
> We have seen an instance where an external outage caused many datanodes to reboot at around the same time. This resulted in many corrupted blocks. These were recently written blocks; the current implementation of the HDFS datanode does not sync the data of a block file when the block is closed. Proposed changes:
> 1. Have a cluster-wide config setting that causes the datanode to sync a block file when the block is finalized.
> 2. Introduce a new parameter to FileSystem.create() to trigger the new behaviour, i.e. cause the datanode to sync a block file when it is finalized.
> 3. Implement FSDataOutputStream.hsync() to cause all data written to the specified file to be written to stable storage.
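
For illustration, here is a minimal client-side sketch of how proposals 1 and 3 might be used together. The hsync() method is the API proposed above and the configuration key dfs.datanode.synconclose is a hypothetical name for the cluster-wide setting in proposal 1; the path and payload are made up for the example. This is a sketch under those assumptions, not the final API.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SyncOnCloseExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Proposal 1 (hypothetical key name): ask datanodes to fsync a block
    // file when the block is finalized, so recently written blocks survive
    // a power loss even after the block has been closed.
    conf.setBoolean("dfs.datanode.synconclose", true);

    FileSystem fs = FileSystem.get(conf);
    FSDataOutputStream out = fs.create(new Path("/tmp/power-loss-test"));
    try {
      out.write("data that must survive a datanode reboot\n".getBytes("UTF-8"));

      // Proposal 3 (proposed API): force everything written so far onto
      // stable storage on the datanodes, not just into the OS buffer cache.
      out.hsync();
    } finally {
      out.close();
    }
  }
}
{code}

Proposal 2 would achieve a similar per-file effect by passing a flag to FileSystem.create() rather than relying on the cluster-wide setting.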