[https://issues.apache.org/jira/browse/HDFS-15759?focusedWorklogId=563687&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-563687]
ASF GitHub Bot logged work on HDFS-15759:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 10/Mar/21 11:45
Start Date: 10/Mar/21 11:45
Worklog Time Spent: 10m
Work Description: touchida commented on pull request #2585:
URL: https://github.com/apache/hadoop/pull/2585#issuecomment-795299874
Both the failed and the crashed tests are unrelated to this PR.
The flakiness of `TestNamenodeCapacityReport#testXceiverCount` is already
reported in [HDFS-14115](https://issues.apache.org/jira/browse/HDFS-14115).
I left a comment there and converted it to a subtask of
[HDFS-15646](https://issues.apache.org/jira/browse/HDFS-15646).
- Failed unit test
```
[ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 34.435 s <<< FAILURE! - in org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
[ERROR] testXceiverCount(org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport)  Time elapsed: 16.273 s <<< FAILURE!
java.lang.AssertionError: expected:<0.0> but was:<2.0>
    at org.junit.Assert.fail(Assert.java:88)
    at org.junit.Assert.failNotEquals(Assert.java:834)
    at org.junit.Assert.assertEquals(Assert.java:553)
    at org.junit.Assert.assertEquals(Assert.java:683)
    at org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport.checkClusterHealth(TestNamenodeCapacityReport.java:425)
    at org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport.testXceiverCountInternal(TestNamenodeCapacityReport.java:371)
    at org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport.testXceiverCount(TestNamenodeCapacityReport.java:252)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
    at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
    at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
```
- Crashed test
```
org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor
org.apache.maven.surefire.booter.SurefireBooterForkException: ExecutionException The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
Command was /bin/sh -c cd /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2585/src/hadoop-hdfs-project/hadoop-hdfs && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xmx2048m -XX:+HeapDumpOnOutOfMemoryError -DminiClusterDedicatedDirs=true -jar /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2585/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter1378299567730299031.jar /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2585/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire 2021-03-09T15-50-11_441-jvmRun2 surefire4178654986093628610tmp surefire_476456145897712690006tmp
Error occurred in starting fork, check output in log
Process Exit Code: 1
Crashed tests:
org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor
    at org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:511)
    at org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:458)
    at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:299)
    at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:247)
    at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1149)
    at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:991)
    at org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:837)
    at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:137)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:210)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:156)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:148)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:81)
    at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:56)
    at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
    at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:305)
    at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:192)
    at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:105)
    at org.apache.maven.cli.MavenCli.execute(MavenCli.java:957)
    at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:289)
    at org.apache.maven.cli.MavenCli.main(MavenCli.java:193)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:282)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:225)
    at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:406)
    at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:347)
Caused by: org.apache.maven.surefire.booter.SurefireBooterForkException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
Command was /bin/sh -c cd /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2585/src/hadoop-hdfs-project/hadoop-hdfs && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xmx2048m -XX:+HeapDumpOnOutOfMemoryError -DminiClusterDedicatedDirs=true -jar /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2585/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter1378299567730299031.jar /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2585/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire 2021-03-09T15-50-11_441-jvmRun2 surefire4178654986093628610tmp surefire_476456145897712690006tmp
Error occurred in starting fork, check output in log
Process Exit Code: 1
Crashed tests:
org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor
    at org.apache.maven.plugin.surefire.booterclient.ForkStarter.fork(ForkStarter.java:670)
    at org.apache.maven.plugin.surefire.booterclient.ForkStarter.access$600(ForkStarter.java:116)
    at org.apache.maven.plugin.surefire.booterclient.ForkStarter$2.call(ForkStarter.java:445)
    at org.apache.maven.plugin.surefire.booterclient.ForkStarter$2.call(ForkStarter.java:421)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
```
Issue Time Tracking
-------------------
Worklog Id: (was: 563687)
Time Spent: 1h (was: 50m)
> EC: Verify EC reconstruction correctness on DataNode
> ----------------------------------------------------
>
> Key: HDFS-15759
> URL: https://issues.apache.org/jira/browse/HDFS-15759
> Project: Hadoop HDFS
> Issue Type: New Feature
> Components: datanode, ec, erasure-coding
> Affects Versions: 3.4.0
> Reporter: Toshihiko Uchida
> Assignee: Toshihiko Uchida
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1h
> Remaining Estimate: 0h
>
> EC reconstruction on DataNode has caused data corruption: HDFS-14768,
> HDFS-15186 and HDFS-15240. Those issues occur under specific conditions, and
> the corruption is neither detected nor auto-healed by HDFS. It is obviously
> hard for users to monitor data integrity by themselves, and even if they
> find corrupted data, it is difficult or sometimes impossible to recover it.
> To prevent further data corruption issues, this feature proposes a simple
> and effective way to verify the correctness of EC reconstruction on DataNode
> in each reconstruction process.
> The correctness of the outputs decoded from the inputs is verified as
> follows:
> 1. Decode one of the inputs from the outputs (together with the remaining
> inputs);
> 2. Compare the decoded input with the original input.
> For instance, in RS-6-3, assume that outputs [d1, p1] are decoded from
> inputs [d0, d2, d3, d4, d5, p0]. The verification then decodes d0 from
> [d1, d2, d3, d4, d5, p1] and compares the original and decoded data of d0.
> When an EC reconstruction task goes wrong, the comparison will fail with
> high probability, so the task will also fail and be retried by the NameNode.
> The next reconstruction will succeed if the condition that triggered the
> failure is gone.