Hi Hairong,

What is the risk for this change? How much more testing do you think will be needed?
-Eric

From: Hairong Kuang <hair...@fb.com>
Date: Thu, 1 Dec 2011 20:22:10 -0800
To: Internal Use <ehw...@fb.com>
Cc: Zheng Shao <zs...@fb.com>, hdfs-dev@hadoop.apache.org
Subject: another HDFS configuration for scribeH

Hi Eric,

I was debugging a bizarre data corruption case in the silver cluster today and realized that there is a very important configuration that the scribeH cluster should set. Could you please set dfs.datanode.synconclose to true in ScribeH for next week's push? This will guarantee that block data gets persisted to disk on close, preventing data loss when datanodes are rebooted.

Thanks,
Hairong
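
For reference, a minimal sketch of how the setting would look in hdfs-site.xml on the datanodes (standard Hadoop configuration format; the description text below is my paraphrase, not the official one, and I am assuming datanodes need a restart to pick it up):

    <!-- hdfs-site.xml: fsync block files on close -->
    <property>
      <name>dfs.datanode.synconclose</name>
      <value>true</value>
      <description>If true, the datanode syncs block data and metadata
        to disk when a block file is closed, at some write-throughput
        cost, so a machine reboot cannot lose acknowledged data.
      </description>
    </property>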