[
https://issues.apache.org/jira/browse/HDFS-15759?focusedWorklogId=579857&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-579857
]
ASF GitHub Bot logged work on HDFS-15759:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 09/Apr/21 09:13
Start Date: 09/Apr/21 09:13
Worklog Time Spent: 10m
Work Description: hadoop-yetus commented on pull request #2868:
URL: https://github.com/apache/hadoop/pull/2868#issuecomment-816543817
:broken_heart: **-1 overall**
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 1m 8s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 5 new or modified test files. |
|||| _ branch-3.2 Compile Tests _ |
| +0 :ok: | mvndep | 4m 27s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 24m 20s | | branch-3.2 passed |
| +1 :green_heart: | compile | 15m 30s | | branch-3.2 passed |
| +1 :green_heart: | checkstyle | 2m 34s | | branch-3.2 passed |
| +1 :green_heart: | mvnsite | 2m 41s | | branch-3.2 passed |
| +1 :green_heart: | javadoc | 2m 21s | | branch-3.2 passed |
| +1 :green_heart: | spotbugs | 5m 8s | | branch-3.2 passed |
| +1 :green_heart: | shadedclient | 15m 21s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 24s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 53s | | the patch passed |
| +1 :green_heart: | compile | 14m 59s | | the patch passed |
| +1 :green_heart: | javac | 14m 59s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 2m 34s | | the patch passed |
| +1 :green_heart: | mvnsite | 2m 44s | | the patch passed |
| +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | javadoc | 2m 16s | | the patch passed |
| +1 :green_heart: | spotbugs | 5m 24s | | the patch passed |
| +1 :green_heart: | shadedclient | 15m 39s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 15m 32s | [/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2868/2/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch failed. |
| -1 :x: | unit | 216m 46s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2868/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. |
| +1 :green_heart: | asflicense | 1m 6s | | The patch does not generate ASF License warnings. |
| | | 351m 46s | | |
| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.io.compress.snappy.TestSnappyCompressorDecompressor |
| | hadoop.io.compress.TestCompressorDecompressor |
| | hadoop.hdfs.server.namenode.TestFsck |
| | hadoop.hdfs.TestRollingUpgrade |
| | hadoop.hdfs.server.datanode.TestBlockRecovery |
| | hadoop.hdfs.server.namenode.TestRedudantBlocks |
| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2868/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2868 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint xml |
| uname | Linux 4cbf0596110f 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | branch-3.2 / 78e228f24d8356db22d8ff3013a64724e18a2372 |
| Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~18.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2868/2/testReport/ |
| Max. process+thread count | 3088 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2868/2/console |
| versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
This message was automatically generated.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 579857)
Time Spent: 9h 20m (was: 9h 10m)
> EC: Verify EC reconstruction correctness on DataNode
> ----------------------------------------------------
>
> Key: HDFS-15759
> URL: https://issues.apache.org/jira/browse/HDFS-15759
> Project: Hadoop HDFS
> Issue Type: New Feature
> Components: datanode, ec, erasure-coding
> Affects Versions: 3.4.0
> Reporter: Toshihiko Uchida
> Assignee: Toshihiko Uchida
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Time Spent: 9h 20m
> Remaining Estimate: 0h
>
> EC reconstruction on DataNode has caused data corruption: HDFS-14768,
> HDFS-15186 and HDFS-15240. Those issues occur under specific conditions, and
> the corruption is neither detected nor auto-healed by HDFS. It is obviously
> hard for users to monitor data integrity by themselves, and even if they find
> corrupted data, it is difficult or sometimes impossible to recover it.
> To prevent further data corruption issues, this feature adds a simple and
> effective way to verify EC reconstruction correctness on DataNode at each
> reconstruction process.
> It verifies the correctness of the outputs decoded from the inputs as follows:
> 1. Decode one of the inputs using the outputs;
> 2. Compare the decoded input with the original input.
> For instance, in RS-6-3, assume that outputs [d1, p1] are decoded from inputs
> [d0, d2, d3, d4, d5, p0]. Then the verification is done by decoding d0 from
> [d1, d2, d3, d4, d5, p1] and comparing the decoded d0 with the original d0.
> When an EC reconstruction task goes wrong, the comparison will fail with high
> probability.
> The task will then also fail and be retried by the NameNode.
> The next reconstruction will succeed if the condition that triggered the
> failure is gone.
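The verification scheme quoted above can be sketched in miniature. This is an illustrative toy, not Hadoop's actual ErasureCoder API: it uses a hypothetical (4, 2) Reed-Solomon-style code over a prime field instead of Hadoop's RS-6-3 over GF(2^8), and all names (`encode`, `reconstruct`, `verify`) are invented for the example. The shape of the check matches the description: after decoding the lost blocks, re-decode one surviving input from the outputs and compare it with the original.

```python
# Toy (4, 2) erasure code over GF(P): parities p0 = sum(d_i) and
# p1 = sum(COEFF[i] * d_i), computed per symbol column mod a prime P.
# Hypothetical sketch only; not Hadoop's erasure-coding implementation.

P = 2**31 - 1          # prime modulus; all symbol arithmetic is mod P
COEFF = [1, 2, 4, 8]   # distinct coefficients for the second parity row

def encode(data):
    """4 equal-length data blocks -> 2 parity blocks (p0, p1)."""
    p0 = [sum(col) % P for col in zip(*data)]
    p1 = [sum(c * s for c, s in zip(COEFF, col)) % P for col in zip(*data)]
    return p0, p1

def reconstruct(d0, d2, d3, p0):
    """Reconstruct the lost blocks [d1, p1] from inputs [d0, d2, d3, p0]."""
    d1 = [(a - x - y - z) % P for a, x, y, z in zip(p0, d0, d2, d3)]
    _, p1 = encode([d0, d1, d2, d3])   # re-encode p1 from the full data row
    return d1, p1

def verify(d0, d1, d2, d3, p1):
    """The check from the issue: decode d0 *again*, this time using the
    outputs [d1, p1] plus the survivors, and compare with the original d0.
    (COEFF[0] is 1, so no modular inverse is needed to isolate d0.)"""
    redecoded = [(q - COEFF[1]*x - COEFF[2]*y - COEFF[3]*z) % P
                 for q, x, y, z in zip(p1, d1, d2, d3)]
    return redecoded == d0

# Demo: lose [d1, p1], reconstruct them from [d0, d2, d3, p0], then verify.
data = [[(131 * i + 17 * j) % P for j in range(8)] for i in range(4)]
d0, d1, d2, d3 = data
p0, p1 = encode(data)
r_d1, r_p1 = reconstruct(d0, d2, d3, p0)
ok = verify(d0, r_d1, d2, d3, r_p1)           # sound reconstruction passes
bad = list(r_d1); bad[0] = (bad[0] + 1) % P   # simulate a decoder bug
caught = not verify(d0, bad, d2, d3, r_p1)    # corrupted output is detected
```

As in the issue, a corrupted output usually breaks the algebraic relation the re-decode relies on, so the comparison fails with high probability; a consistent corruption of both outputs could still slip through, which is why the check is probabilistic rather than absolute.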
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]