[ https://issues.apache.org/jira/browse/HDFS-12412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16163970#comment-16163970 ]
Hadoop QA commented on HDFS-12412:
----------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 415 unchanged - 2 fixed = 415 total (was 417) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 38s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}112m 5s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestPread |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
| | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
| | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
| | hadoop.hdfs.tools.TestDFSAdminWithHA |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 |
| | hadoop.hdfs.qjournal.TestNNWithQJM |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 |
| | hadoop.hdfs.TestQuota |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 |
| | hadoop.hdfs.TestLeaseRecoveryStriped |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
| | hadoop.hdfs.TestReconstructStripedFile |
| Timed out junit tests | org.apache.hadoop.hdfs.TestReadStripedFileWithDecodingCorruptData |
| | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
| | org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyInProgressTail |
| | org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
| | org.apache.hadoop.hdfs.server.namenode.ha.TestXAttrsWithHA |
| | org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12412 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886518/HDFS-12412.00.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 5872c864f6a5 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 86f4d1c |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21106/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21106/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21106/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
> Remove ErasureCodingWorker.stripedReadPool
> ------------------------------------------
>
> Key: HDFS-12412
> URL: https://issues.apache.org/jira/browse/HDFS-12412
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: erasure-coding
> Affects Versions: 3.0.0-alpha3
> Reporter: Lei (Eddy) Xu
> Assignee: Lei (Eddy) Xu
> Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12412.00.patch, HDFS-12412.01.patch
>
>
> In {{ErasureCodingWorker}}, {{stripedReconstructionPool}} is used to schedule
> the EC recovery tasks, while {{stripedReadPool}} is used for the reader threads
> in each recovery task. We only need one of them to throttle the recovery
> process, because each EC recovery task has a fixed number of source readers
> (e.g., 3 for RS(3,2)). And given the findings in HDFS-12044, the speed of EC
> recovery can be throttled by {{stripedReconstructionPool}} together with
> {{xmitsInProgress}}.
> Moreover, keeping {{stripedReadPool}} makes it difficult for users to
> understand and calculate the right balance between
> {{dfs.datanode.ec.reconstruction.stripedread.threads}},
> {{dfs.datanode.ec.reconstruction.stripedblock.threads.size}} and
> {{maxReplicationStreams}}. For example, a {{stripedread.threads}} value that is
> small relative to what {{reconstruction.threads.size}} implies will
> unnecessarily limit the speed of recovery, leading to a larger MTTR.
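> The core argument above can be sketched in plain Java: if each recovery task
> holds a fixed number of reader threads, then sizing the single task pool alone
> already bounds the total read concurrency. This is a minimal illustration, not
> the {{ErasureCodingWorker}} code; the class name, the constants, and the
> {{maxConcurrentReads}} helper are all hypothetical stand-ins for the
> configuration keys quoted above.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ReconstructionThrottleDemo {

    /** Upper bound on concurrent source reads when only the task pool throttles. */
    static int maxConcurrentReads(int reconstructionThreads, int readersPerTask) {
        return reconstructionThreads * readersPerTask;
    }

    public static void main(String[] args) throws InterruptedException {
        // Hypothetical values standing in for the DataNode configuration keys
        // quoted in the issue description; not the real defaults.
        final int reconstructionThreads = 2; // size of the single task pool
        final int readersPerTask = 3;        // fixed per task, e.g. RS(3,2)

        ExecutorService reconstructionPool =
            Executors.newFixedThreadPool(reconstructionThreads);

        final int tasks = 4;
        CountDownLatch done = new CountDownLatch(tasks);
        for (int t = 0; t < tasks; t++) {
            reconstructionPool.submit(() -> {
                // Each recovery task owns its own small reader pool, so total
                // concurrent reads never exceed
                // reconstructionThreads * readersPerTask, no matter how many
                // tasks are queued.
                ExecutorService readers =
                    Executors.newFixedThreadPool(readersPerTask);
                for (int r = 0; r < readersPerTask; r++) {
                    readers.submit(() -> { /* read one source block */ });
                }
                readers.shutdown();
                done.countDown();
            });
        }
        done.await();
        reconstructionPool.shutdown();
        reconstructionPool.awaitTermination(10, TimeUnit.SECONDS);

        System.out.println("max concurrent reads <= "
            + maxConcurrentReads(reconstructionThreads, readersPerTask));
    }
}
```

> A separate read pool only adds a second knob that, if set lower than this
> product, becomes the accidental bottleneck.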
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]