[ https://issues.apache.org/jira/browse/HDFS-9305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14975194#comment-14975194 ]
Hadoop QA commented on HDFS-9305:
---------------------------------
\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch | 20m 35s | Pre-patch trunk has 1 extant Findbugs (version 3.0.0) warning. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 1 new or modified test file. |
| {color:green}+1{color} | javac | 8m 52s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc | 11m 45s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit | 0m 35s | The applied patch does not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle | 1m 39s | There were no new checkstyle issues. |
| {color:red}-1{color} | whitespace | 0m 0s | The patch has 3 line(s) that end in whitespace. Use {{git apply --whitespace=fix}}. |
| {color:green}+1{color} | install | 2m 9s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 41s | The patch built with
eclipse:eclipse. |
| {color:green}+1{color} | findbugs | 2m 50s | The patch does not introduce
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native | 3m 57s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 63m 50s | Tests failed in hadoop-hdfs. |
| | | 116m 57s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
| | hadoop.hdfs.server.blockmanagement.TestNodeCount |
| | hadoop.hdfs.TestWriteReadStripedFile |
| | hadoop.hdfs.server.namenode.TestFileTruncate |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12768795/HDFS-9305.02.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 3cc7377 |
| Pre-patch Findbugs warnings | https://builds.apache.org/job/PreCommit-HDFS-Build/13197/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html |
| whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/13197/artifact/patchprocess/whitespace.txt |
| hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/13197/artifact/patchprocess/testrun_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/13197/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/13197/console |
This message was automatically generated.
> Delayed heartbeat processing causes storm of subsequent heartbeats
> ------------------------------------------------------------------
>
> Key: HDFS-9305
> URL: https://issues.apache.org/jira/browse/HDFS-9305
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Affects Versions: 2.7.1
> Reporter: Chris Nauroth
> Assignee: Arpit Agarwal
> Attachments: HDFS-9305.01.patch, HDFS-9305.02.patch
>
>
> A DataNode typically sends a heartbeat to the NameNode every 3 seconds, and
> we expect heartbeat handling to complete relatively quickly. However, if
> something unexpected blocks heartbeat processing, such as a long GC pause or
> heavy lock contention within the NameNode, heartbeats are delayed. After
> recovering from the delay, the DataNode starts sending a storm of heartbeat
> messages in a tight loop. In a large cluster with many DataNodes, this storm
> could put harmful load on the NameNode and make overall cluster recovery
> more difficult.
> The bug appears to be caused by incorrect timekeeping inside
> {{BPServiceActor}}. The next heartbeat time is always calculated as a fixed
> delta from the previous heartbeat time, with no compensation for long
> latency on an individual heartbeat RPC (see the sketch below). The only
> mitigations would be restarting all DataNodes to force a reset of the
> heartbeat schedule, or simply waiting out the storm until the schedule
> catches up and corrects itself.
> This problem would not manifest after a NameNode restart. In that case, the
> NameNode would respond to the first heartbeat by telling the DataNode to
> re-register, and {{BPServiceActor#reRegister}} would reset the heartbeat
> schedule to the current time. I believe the problem would only manifest if
> the NameNode process stayed alive but processed heartbeats unexpectedly slowly.
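> Below is a minimal, self-contained sketch of the suspected scheduling flaw
> and one possible compensation. This is not the actual {{BPServiceActor}}
> code, and not necessarily what the attached patch does; the class, field,
> and method names ({{HeartbeatSchedulerSketch}}, {{nextHeartbeatTime}},
> {{sendHeartbeat}}) are illustrative assumptions.
> {code:java}
> /**
>  * Illustrative sketch only -- NOT the real BPServiceActor. Models a
>  * heartbeat loop that fires every HEARTBEAT_INTERVAL_MS milliseconds.
>  */
> public class HeartbeatSchedulerSketch {
>   static final long HEARTBEAT_INTERVAL_MS = 3000;
>
>   long nextHeartbeatTime = System.currentTimeMillis();
>
>   // Buggy variant: the next heartbeat is always a fixed delta from the
>   // previously *scheduled* time. If one RPC blocks for, say, 5 minutes,
>   // "now" ends up ~100 intervals ahead of nextHeartbeatTime, so the loop
>   // fires heartbeats back to back until the schedule catches up -- the
>   // heartbeat storm described above.
>   void buggyLoop() throws InterruptedException {
>     while (true) {
>       long now = System.currentTimeMillis();
>       if (now >= nextHeartbeatTime) {
>         sendHeartbeat();                             // may block for minutes
>         nextHeartbeatTime += HEARTBEAT_INTERVAL_MS;  // no latency compensation
>       } else {
>         Thread.sleep(nextHeartbeatTime - now);
>       }
>     }
>   }
>
>   // Compensated variant: schedule relative to when the heartbeat actually
>   // completed, so one slow RPC cannot leave a backlog of "overdue"
>   // heartbeats. This mirrors the schedule reset to the current time that
>   // BPServiceActor#reRegister performs after a NameNode restart.
>   void compensatedLoop() throws InterruptedException {
>     while (true) {
>       long now = System.currentTimeMillis();
>       if (now >= nextHeartbeatTime) {
>         sendHeartbeat();
>         nextHeartbeatTime = System.currentTimeMillis() + HEARTBEAT_INTERVAL_MS;
>       } else {
>         Thread.sleep(nextHeartbeatTime - now);
>       }
>     }
>   }
>
>   void sendHeartbeat() {
>     // Placeholder for the heartbeat RPC to the NameNode.
>   }
> }
> {code}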
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)