[
https://issues.apache.org/jira/browse/HDFS-3703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453631#comment-13453631
]
Hadoop QA commented on HDFS-3703:
---------------------------------
-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12544728/HDFS-3703-trunk-read-only.patch
against trunk revision .
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 1 new or modified test
files.
+1 javac. The applied patch does not increase the total number of javac
compiler warnings.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 eclipse:eclipse. The patch built with eclipse:eclipse.
+1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9)
warnings.
+1 release audit. The applied patch does not increase the total number of
release audit warnings.
-1 core tests. The patch failed these unit tests in
hadoop-hdfs-project/hadoop-hdfs:
org.apache.hadoop.hdfs.TestDatanodeBlockScanner
org.apache.hadoop.hdfs.TestPersistBlocks
+1 contrib tests. The patch passed contrib unit tests.
Test results:
https://builds.apache.org/job/PreCommit-HDFS-Build/3174//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3174//console
This message is automatically generated.
> Decrease the datanode failure detection time
> --------------------------------------------
>
> Key: HDFS-3703
> URL: https://issues.apache.org/jira/browse/HDFS-3703
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: data-node, name-node
> Affects Versions: 1.0.3, 2.0.0-alpha
> Reporter: nkeywal
> Assignee: Jing Zhao
> Attachments: HDFS-3703-branch2.patch, HDFS-3703.patch,
> HDFS-3703-trunk-read-only.patch, HDFS-3703-trunk-read-only.patch,
> HDFS-3703-trunk-read-only.patch, HDFS-3703-trunk-read-only.patch,
> HDFS-3703-trunk-read-only.patch, HDFS-3703-trunk-with-write.patch
>
>
> By default, if a box dies, the datanode will be marked as dead by the
> namenode after 10:30 minutes. In the meantime, this datanode will still be
> proposed by the namenode to write blocks or to read replicas. It happens as
> well if the datanode crashes: there are no shutdown hooks to tell the
> namenode we're not there anymore.
> It is especially an issue with HBase. The HBase regionserver timeout for
> production is often 30s. So with these configs, when a box dies HBase starts
> to recover after 30s but, for 10 minutes, the namenode will still consider
> the blocks on the dead box as available. Beyond the write errors, this will
> trigger a lot of missed reads:
> - during the recovery, HBase needs to read the blocks used on the dead box
> (the ones in the 'HBase Write-Ahead-Log')
> - after the recovery, reading these data blocks (the 'HBase region') will
> fail 33% of the time with the default number of replicas, slowing down the
> data access, especially when the errors are socket timeouts (i.e. around 60s
> most of the time).
> Globally, it would be ideal if the HDFS failure detection time could be
> configured to stay below the HBase timeouts.
> As a side note, HBase relies on ZooKeeper to detect regionservers issues.
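For context on the 10:30 figure in the description: a minimal sketch of the timeout arithmetic, assuming the stock heartbeat defaults shipped in hdfs-default.xml (dfs.heartbeat.interval = 3 s, dfs.namenode.heartbeat.recheck-interval = 5 min):

```python
# Sketch of how the namenode's default dead-datanode timeout is derived.
# The parameter names are the standard HDFS configuration keys; the
# default values below are assumptions based on stock Hadoop configs.

heartbeat_interval_s = 3              # dfs.heartbeat.interval (seconds)
recheck_interval_ms = 5 * 60 * 1000   # dfs.namenode.heartbeat.recheck-interval (ms)

# The namenode marks a datanode dead after:
#   2 * recheck-interval + 10 * heartbeat-interval
timeout_s = 2 * (recheck_interval_ms // 1000) + 10 * heartbeat_interval_s
print(timeout_s)  # 630 seconds, i.e. the 10:30 quoted above
```

Lowering either parameter shrinks the detection window, which is the knob the patches in this issue are targeting.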
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira