[
https://issues.apache.org/jira/browse/HDFS-15759?focusedWorklogId=564598&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-564598
]
ASF GitHub Bot logged work on HDFS-15759:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 11/Mar/21 13:28
Start Date: 11/Mar/21 13:28
Worklog Time Spent: 10m
Work Description: touchida commented on pull request #2585:
URL: https://github.com/apache/hadoop/pull/2585#issuecomment-796735039
All failures are unrelated to this PR.
The compilation failure was caused by
[HADOOP-17573](https://issues.apache.org/jira/browse/HADOOP-17573), and I filed
[HDFS-15888](https://issues.apache.org/jira/browse/HDFS-15888) for the failed test.
- Compilation failure
```
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project hadoop-huaweicloud: Compilation failure
[ERROR] /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2585/src/hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSFileSystem.java:[396,57] error: incompatible types: BlockingThreadPoolExecutorService cannot be converted to ListeningExecutorService
[ERROR] -> [Help 1]
```
- Failed test
```
[ERROR] testCheckpointBeforeNameNodeInitializationIsComplete(org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints)  Time elapsed: 108.763 s <<< FAILURE!
java.lang.AssertionError: Expected non-empty /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2585/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1/current/fsimage_0000000000000000012
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertNNHasCheckpoints(FSImageTestUtil.java:515)
	at org.apache.hadoop.hdfs.server.namenode.ha.HATestUtil.waitForCheckpoint(HATestUtil.java:347)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints.testCheckpointBeforeNameNodeInitializationIsComplete(TestStandbyCheckpoints.java:318)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:748)
```
- Crashed test
```
org.apache.hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier
org.apache.maven.surefire.booter.SurefireBooterForkException: ExecutionException The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
Command was /bin/sh -c cd /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2585/src/hadoop-hdfs-project/hadoop-hdfs && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xmx2048m -XX:+HeapDumpOnOutOfMemoryError -DminiClusterDedicatedDirs=true -jar /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2585/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter8242204434917227305.jar /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2585/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire 2021-03-10T15-51-02_674-jvmRun1 surefire1001858419377671696tmp surefire_4743977225930616291091tmp
Error occurred in starting fork, check output in log
Process Exit Code: 1
Crashed tests:
org.apache.hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier
	at org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:511)
	at org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:458)
	at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:299)
	at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:247)
	at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1149)
	at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:991)
	at org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:837)
	at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:137)
	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:210)
	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:156)
	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:148)
	at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117)
	at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:81)
	at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:56)
	at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
	at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:305)
	at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:192)
	at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:105)
	at org.apache.maven.cli.MavenCli.execute(MavenCli.java:957)
	at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:289)
	at org.apache.maven.cli.MavenCli.main(MavenCli.java:193)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:282)
	at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:225)
	at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:406)
	at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:347)
Caused by: org.apache.maven.surefire.booter.SurefireBooterForkException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
Command was /bin/sh -c cd /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2585/src/hadoop-hdfs-project/hadoop-hdfs && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xmx2048m -XX:+HeapDumpOnOutOfMemoryError -DminiClusterDedicatedDirs=true -jar /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2585/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter8242204434917227305.jar /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2585/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire 2021-03-10T15-51-02_674-jvmRun1 surefire1001858419377671696tmp surefire_4743977225930616291091tmp
Error occurred in starting fork, check output in log
Process Exit Code: 1
Crashed tests:
org.apache.hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier
	at org.apache.maven.plugin.surefire.booterclient.ForkStarter.fork(ForkStarter.java:670)
	at org.apache.maven.plugin.surefire.booterclient.ForkStarter.access$600(ForkStarter.java:116)
	at org.apache.maven.plugin.surefire.booterclient.ForkStarter$2.call(ForkStarter.java:445)
	at org.apache.maven.plugin.surefire.booterclient.ForkStarter$2.call(ForkStarter.java:421)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
```
Issue Time Tracking
-------------------
Worklog Id: (was: 564598)
Time Spent: 1h 10m (was: 1h)
> EC: Verify EC reconstruction correctness on DataNode
> ----------------------------------------------------
>
> Key: HDFS-15759
> URL: https://issues.apache.org/jira/browse/HDFS-15759
> Project: Hadoop HDFS
> Issue Type: New Feature
> Components: datanode, ec, erasure-coding
> Affects Versions: 3.4.0
> Reporter: Toshihiko Uchida
> Assignee: Toshihiko Uchida
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1h 10m
> Remaining Estimate: 0h
>
> EC reconstruction on DataNode has caused data corruption: HDFS-14768,
> HDFS-15186 and HDFS-15240. Those issues occur only under specific conditions,
> and the corruption is neither detected nor auto-healed by HDFS. It is
> obviously hard for users to monitor data integrity by themselves, and even if
> they find corrupted data, it is difficult or sometimes impossible to recover
> it.
> To prevent further data corruption issues, this feature proposes a simple and
> effective way to verify EC reconstruction correctness on DataNode at each
> reconstruction process.
> It verifies the correctness of the outputs decoded from the inputs as follows:
> 1. Decode one of the inputs from the outputs;
> 2. Compare the decoded input with the original input.
> For instance, in RS-6-3, assume that outputs [d1, p1] are decoded from inputs
> [d0, d2, d3, d4, d5, p0]. Then the verification is done by decoding d0 from
> [d1, d2, d3, d4, d5, p1] and comparing the original and decoded data of d0.
> When an EC reconstruction task goes wrong, the comparison will fail with high
> probability, so the task will also fail and be retried by the NameNode. The
> next reconstruction will succeed if the condition that triggered the failure
> is gone.
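The decode-back verification described in the quoted issue can be sketched with a toy single-parity XOR code rather than the Reed-Solomon coder HDFS actually uses (with XOR parity, any one missing block is the XOR of the surviving ones). The class name, block values, and helper below are purely illustrative, not HDFS code:

```java
import java.util.Arrays;

public class XorReconstructVerify {

  // With single XOR parity, the missing block equals the XOR of all
  // surviving blocks (data blocks plus parity).
  static byte[] xorDecode(byte[][] surviving) {
    byte[] out = new byte[surviving[0].length];
    for (byte[] block : surviving) {
      for (int i = 0; i < out.length; i++) {
        out[i] ^= block[i];
      }
    }
    return out;
  }

  public static void main(String[] args) {
    byte[] d0 = {1, 2, 3};
    byte[] d1 = {4, 5, 6};
    byte[] d2 = {7, 8, 9};
    byte[] p0 = xorDecode(new byte[][] {d0, d1, d2}); // parity = d0^d1^d2

    // Reconstruction: recover the "lost" block d1 from the survivors.
    byte[] d1Reconstructed = xorDecode(new byte[][] {d0, d2, p0});

    // Verification: decode one of the *inputs* (d0) back using the freshly
    // reconstructed output, and compare it with the original input. If the
    // reconstruction was corrupt, this comparison fails and the task can be
    // failed and retried instead of persisting bad data.
    byte[] d0Decoded = xorDecode(new byte[][] {d1Reconstructed, d2, p0});
    if (!Arrays.equals(d0, d0Decoded)) {
      throw new AssertionError("EC reconstruction produced corrupt data");
    }
    System.out.println("verification passed");
  }
}
```

The same idea carries over to RS-6-3: after decoding outputs [d1, p1], decode d0 from [d1, d2, d3, d4, d5, p1] and compare it against the original d0.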
--
This message was sent by Atlassian Jira
(v8.3.4#803005)