[
https://issues.apache.org/jira/browse/HADOOP-14539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16089271#comment-16089271
]
Wenxin He commented on HADOOP-14539:
------------------------------------
Thanks for running all tests, [~ajisakaa].
I tried to rerun the failed tests, but could not reproduce the NoSuchFieldError.
I also checked the code and found that {{LOG}} in LightWeightGSet.computeCapacity (LightWeightGSet.java:395) is fine: it is declared in org.apache.hadoop.util.GSet.
Did I miss something?
Here is how I ran the tests:
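For reference, the field resolution in question can be sketched with toy stand-ins (the GSetFieldDemo, GSet, and LightWeightGSet classes below are illustrative only, not Hadoop's real types): a field declared on a superclass is still resolved through the subclass, so a NoSuchFieldError at that call site would usually suggest stale compiled classes on the test classpath rather than a problem in the source.

```java
// Minimal sketch: a field declared only on the superclass is still
// reachable through the subclass, both in source and via reflection.
public class GSetFieldDemo {
    public static class GSet {
        // Stand-in for the real LOG field declared in org.apache.hadoop.util.GSet
        public static final String LOG = "logger";
    }

    public static class LightWeightGSet extends GSet {
        // Declares no LOG of its own; inherits it from GSet
    }

    // Returns the simple name of the class that actually declares LOG.
    public static String logDeclarer() throws NoSuchFieldException {
        return LightWeightGSet.class.getField("LOG")
                .getDeclaringClass().getSimpleName();
    }

    public static void main(String[] args) throws Exception {
        // The lookup through the subclass succeeds and resolves to GSet
        System.out.println(logDeclarer()); // prints "GSet"
    }
}
```

A NoSuchFieldError (as opposed to the checked NoSuchFieldException above) is a linkage error: it is thrown when previously compiled bytecode references a field that no longer exists in the classes on the classpath, which is why a clean rebuild normally makes it go away.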
# check out the latest trunk
{noformat}
commit 02b141ac6059323ec43e472ca36dc570fdca386f
Author: Anu Engineer <[email protected]>
Date: Sun Jul 16 10:59:34 2017 -0700
HDFS-11786. Add support to make copyFromLocal multi threaded. Contributed
by Mukul Kumar Singh.
{noformat}
# compile and run the specified tests
{noformat}
mvn clean install -DskipTests -Dmaven.javadoc.skip=true
mvn -f hadoop-hdfs-project/hadoop-hdfs/pom.xml test -Dtest=TestLazyPersistReplicaRecovery,TestHAAppend,TestFSImage -Dmaven.test.failure.ignore=true
{noformat}
Three tests failed, which may have been caused by my environment:
{noformat}
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.112 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery
Running org.apache.hadoop.hdfs.server.namenode.TestFSImage
Tests run: 14, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 65.195 sec <<< FAILURE! - in org.apache.hadoop.hdfs.server.namenode.TestFSImage
testHasNonEcBlockUsingStripedIDForLoadUCFile(org.apache.hadoop.hdfs.server.namenode.TestFSImage)  Time elapsed: 3.34 sec  <<< ERROR!
java.lang.IllegalStateException: failed to create a child event loop
    at sun.nio.ch.EPollArrayWrapper.epollCreate(Native Method)
    at sun.nio.ch.EPollArrayWrapper.<init>(EPollArrayWrapper.java:130)
    at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:69)
    at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36)
    at io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:126)
    at io.netty.channel.nio.NioEventLoop.<init>(NioEventLoop.java:120)
    at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:87)
    at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:64)
    at io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:49)
    at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:61)
    at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:52)
    at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:44)
    at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:36)
    at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:131)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:954)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1402)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:497)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2752)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2655)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1621)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:868)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:491)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
    at org.apache.hadoop.hdfs.server.namenode.TestFSImage.testHasNonEcBlockUsingStripedIDForLoadUCFile(TestFSImage.java:625)
testSupportBlockGroup(org.apache.hadoop.hdfs.server.namenode.TestFSImage)  Time elapsed: 3.677 sec  <<< ERROR!
java.lang.IllegalStateException: failed to create a child event loop
    at sun.nio.ch.IOUtil.makePipe(Native Method)
    at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:65)
    at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36)
    at io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:126)
    at io.netty.channel.nio.NioEventLoop.<init>(NioEventLoop.java:120)
    at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:87)
    at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:64)
    at io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:49)
    at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:61)
    at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:52)
    at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:44)
    at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:36)
    at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:131)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:954)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1402)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:497)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2752)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2655)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1621)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:868)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:491)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
    at org.apache.hadoop.hdfs.server.namenode.TestFSImage.testSupportBlockGroup(TestFSImage.java:469)
testCompression(org.apache.hadoop.hdfs.server.namenode.TestFSImage)  Time elapsed: 21.451 sec  <<< ERROR!
java.io.IOException: Failed to save in any storage directories while saving namespace.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1174)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1131)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:169)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1153)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:401)
    at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:243)
    at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1029)
    at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:915)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:847)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:491)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
    at org.apache.hadoop.hdfs.server.namenode.TestFSImage.testPersistHelper(TestFSImage.java:109)
    at org.apache.hadoop.hdfs.server.namenode.TestFSImage.setCompressCodec(TestFSImage.java:103)
    at org.apache.hadoop.hdfs.server.namenode.TestFSImage.testCompression(TestFSImage.java:97)
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.793 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend
Results :
Tests in error:
  TestFSImage.testHasNonEcBlockUsingStripedIDForLoadUCFile:625 » IllegalState fa...
  TestFSImage.testSupportBlockGroup:469 » IllegalState failed to create a child ...
  TestFSImage.testCompression:97->setCompressCodec:103->testPersistHelper:109 » IO
Tests run: 17, Failures: 0, Errors: 3, Skipped: 0
{noformat}
# then apply the patch, recompile, and rerun the specified tests; the result is the same as before the patch was applied (the same three test failures):
{noformat}
git am HADOOP-14539.003.patch
mvn clean install -DskipTests -Dmaven.javadoc.skip=true
mvn -f hadoop-hdfs-project/hadoop-hdfs/pom.xml test -Dtest=TestLazyPersistReplicaRecovery,TestHAAppend,TestFSImage -Dmaven.test.failure.ignore=true
{noformat}
{noformat}
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.407 sec - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery
Running org.apache.hadoop.hdfs.server.namenode.TestFSImage
Tests run: 14, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 58.457 sec <<< FAILURE! - in org.apache.hadoop.hdfs.server.namenode.TestFSImage
testHasNonEcBlockUsingStripedIDForLoadUCFile(org.apache.hadoop.hdfs.server.namenode.TestFSImage)  Time elapsed: 1.476 sec  <<< ERROR!
org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 1, volumes configured: 2, volumes failed: 1, volume failures tolerated: 0
    at org.apache.hadoop.hdfs.server.datanode.checker.StorageLocationChecker.check(StorageLocationChecker.java:220)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2745)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2655)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1621)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:868)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:491)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
    at org.apache.hadoop.hdfs.server.namenode.TestFSImage.testHasNonEcBlockUsingStripedIDForLoadUCFile(TestFSImage.java:625)
testSupportBlockGroup(org.apache.hadoop.hdfs.server.namenode.TestFSImage)  Time elapsed: 4.068 sec  <<< ERROR!
java.lang.IllegalStateException: failed to create a child event loop
    at sun.nio.ch.IOUtil.makePipe(Native Method)
    at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:65)
    at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36)
    at io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:126)
    at io.netty.channel.nio.NioEventLoop.<init>(NioEventLoop.java:120)
    at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:87)
    at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:64)
    at io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:49)
    at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:61)
    at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:52)
    at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:44)
    at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:36)
    at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:131)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:954)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1402)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:497)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2752)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2655)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1621)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:868)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:491)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
    at org.apache.hadoop.hdfs.server.namenode.TestFSImage.testSupportBlockGroup(TestFSImage.java:469)
testCompression(org.apache.hadoop.hdfs.server.namenode.TestFSImage)  Time elapsed: 21.234 sec  <<< ERROR!
java.io.IOException: Failed to save in any storage directories while saving namespace.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1174)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1131)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:169)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1153)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:401)
    at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:243)
    at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1029)
    at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:915)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:847)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:491)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
    at org.apache.hadoop.hdfs.server.namenode.TestFSImage.testPersistHelper(TestFSImage.java:109)
    at org.apache.hadoop.hdfs.server.namenode.TestFSImage.setCompressCodec(TestFSImage.java:103)
    at org.apache.hadoop.hdfs.server.namenode.TestFSImage.testCompression(TestFSImage.java:97)
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.828 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend
Results :
Tests in error:
  TestFSImage.testHasNonEcBlockUsingStripedIDForLoadUCFile:625 » DiskError Too m...
  TestFSImage.testSupportBlockGroup:469 » IllegalState failed to create a child ...
  TestFSImage.testCompression:97->setCompressCodec:103->testPersistHelper:109 » IO
Tests run: 17, Failures: 0, Errors: 3, Skipped: 0
[ERROR] There are test failures.
Please refer to /home/hewenxin/Codebase/wenxinhe/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[INFO]
[INFO] --- maven-antrun-plugin:1.7:run (hdfs-test-bats-driver) @ hadoop-hdfs ---
[INFO] Executing tasks
main:
[exec]
[exec]
[exec] ERROR: bats not installed. Skipping bash tests.
[exec] ERROR: Please install bats as soon as possible.
[exec]
[exec]
[INFO] Executed tasks
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:44 min
[INFO] Finished at: 2017-07-17T11:35:02+08:00
[INFO] Final Memory: 41M/1477M
[INFO] ------------------------------------------------------------------------
{noformat}
> Move commons logging APIs over to slf4j in hadoop-common
> --------------------------------------------------------
>
> Key: HADOOP-14539
> URL: https://issues.apache.org/jira/browse/HADOOP-14539
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Akira Ajisaka
> Assignee: Wenxin He
> Attachments: HADOOP-14539.001.patch, HADOOP-14539.002.patch,
> HADOOP-14539.003.patch
>
>
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)