[ https://issues.apache.org/jira/browse/HDFS-12044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16071095#comment-16071095 ]
Hadoop QA commented on HDFS-12044:
----------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 34s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 30s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 42s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 39s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 184 unchanged - 0 fixed = 185 total (was 184) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 16s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 37s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}115m 51s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
| | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
| | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
| | hadoop.hdfs.web.TestWebHdfsTimeouts |
| | hadoop.hdfs.TestFileChecksum |
| Timed out junit tests | org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12044 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12875339/HDFS-12044.03.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 6c831fdeb141 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 147df30 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/20123/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html |
| findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/20123/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/20123/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/20123/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/20123/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/20123/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
> Mismatch between BlockManager#maxReplicationStreams and
> ErasureCodingWorker.stripedReconstructionPool pool size causes slow and
> bursty recovery
> -----------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: HDFS-12044
> URL: https://issues.apache.org/jira/browse/HDFS-12044
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: erasure-coding
> Affects Versions: 3.0.0-alpha3
> Reporter: Lei (Eddy) Xu
> Assignee: Lei (Eddy) Xu
> Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12044.00.patch, HDFS-12044.01.patch,
> HDFS-12044.02.patch, HDFS-12044.03.patch
>
>
> {{ErasureCodingWorker#stripedReconstructionPool}} defaults to {{corePoolSize=2}}
> and {{maxPoolSize=8}}, and it rejects additional tasks once its queue is full.
> The problem arises when {{BlockManager#maxReplicationStreams}} is larger than
> the pool's {{corePoolSize}}/{{maxPoolSize}}, for example
> {{maxReplicationStreams=20}} with {{corePoolSize=2, maxPoolSize=8}}. On each
> heartbeat, the NN sends the DN up to {{maxTransfer}} reconstruction tasks,
> calculated in {{FSNamesystem}}:
> {code}
> final int maxTransfer = blockManager.getMaxReplicationStreams() -
>     xmitsInProgress;
> {code}
> However, at any given time {{ErasureCodingWorker#stripedReconstructionPool}}
> accounts for only 2 {{xmitsInProgress}}. So on each 3-second heartbeat the NN
> sends about {{20 - 2 = 18}} reconstruction tasks to the DN, and the DN throws
> most of them away whenever 8 tasks are already queued. The NN then must wait
> to re-detect those blocks as under-replicated before it can schedule new
> tasks, which makes recovery both slow and bursty.
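The pool-versus-scheduler mismatch is easy to reproduce outside HDFS. Below is a minimal, hypothetical Java sketch (not the actual {{ErasureCodingWorker}} code; the bounded queue of 8 and the discard rejection policy are assumptions for illustration) showing how a {{corePoolSize=2}}/{{maxPoolSize=8}} {{ThreadPoolExecutor}} drops part of one heartbeat's 18-task burst:
{code}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class StripedPoolBurstDemo {
  public static void main(String[] args) {
    // Assumed-for-illustration pool: corePoolSize=2, maxPoolSize=8,
    // a bounded queue of 8, and rejected tasks silently discarded.
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        2, 8, 60, TimeUnit.SECONDS,
        new LinkedBlockingQueue<>(8),
        new ThreadPoolExecutor.DiscardPolicy());

    // One heartbeat's burst: maxReplicationStreams(20) - xmitsInProgress(2).
    int burst = 18;
    for (int i = 0; i < burst; i++) {
      pool.execute(() -> {
        try {
          Thread.sleep(60_000L);  // simulate a long-running reconstruction
        } catch (InterruptedException ignored) {
          Thread.currentThread().interrupt();
        }
      });
    }

    // 2 core threads + 8 queued + 6 extra threads = 16 accepted, 2 dropped.
    // getActiveCount() is approximate, so the printed split may vary slightly.
    int accepted = pool.getActiveCount() + pool.getQueue().size();
    System.out.println("accepted=" + accepted
        + " dropped=" + (burst - accepted));
    pool.shutdownNow();
  }
}
{code}
Run standalone, this should print roughly {{accepted=16 dropped=2}}; with a zero-capacity queue the pool would cap at 8 tasks and drop 10 of the 18, matching the bursty rejection behavior described above.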