[ https://issues.apache.org/jira/browse/HDFS-10935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15568361#comment-15568361 ]
Wei-Chiu Chuang edited comment on HDFS-10935 at 10/12/16 10:46 AM:
-------------------------------------------------------------------
I ran the tests on multiple JDK versions, from JDK 1.8.0_05 to JDK 1.8.0_102, on CentOS 6.5 (both with and without the native libisal library), and every test in TestFileChecksum failed with the following error:

{noformat}
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 7.46 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestFileChecksum
testStripedFileChecksumWithMissedDataBlocksRangeQuery5(org.apache.hadoop.hdfs.TestFileChecksum)  Time elapsed: 7.269 sec  <<< ERROR!
java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:714)
	at io.netty.util.concurrent.ThreadPerTaskExecutor.execute(ThreadPerTaskExecutor.java:33)
	at io.netty.util.concurrent.SingleThreadEventExecutor.doStartThread(SingleThreadEventExecutor.java:692)
	at io.netty.util.concurrent.SingleThreadEventExecutor.shutdownGracefully(SingleThreadEventExecutor.java:499)
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.shutdownGracefully(MultithreadEventExecutorGroup.java:160)
	at io.netty.util.concurrent.AbstractEventExecutorGroup.shutdownGracefully(AbstractEventExecutorGroup.java:70)
	at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.close(DatanodeHttpServer.java:259)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:1932)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:1985)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1962)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1936)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1929)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:870)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:491)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
	at org.apache.hadoop.hdfs.TestFileChecksum.setup(TestFileChecksum.java:78)
{noformat}

Maybe it's just my env issue. Can't tell.

was (Author: jojochuang):
I ran the tests on multiple JDK versions, from JDK 1.8.0_05 to JDK 1.8.0_102, on CentOS 6.5 (both with and without the native libisal library), and every test in TestFileChecksum failed with the following error:

{noformat}
testStripedFileChecksumWithMissedDataBlocksRangeQuery5(org.apache.hadoop.hdfs.TestFileChecksum)  Time elapsed: 4.436 sec  <<< ERROR!
java.lang.IllegalStateException: failed to create a child event loop
	at sun.nio.ch.IOUtil.makePipe(Native Method)
	at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:65)
	at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36)
	at io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:125)
	at io.netty.channel.nio.NioEventLoop.<init>(NioEventLoop.java:119)
	at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:97)
	at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:31)
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:77)
	at io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:50)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:72)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:58)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:46)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:38)
	at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:132)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:913)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1353)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:489)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2671)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2559)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1613)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:860)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:491)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
	at org.apache.hadoop.hdfs.TestFileChecksum.setup(TestFileChecksum.java:78)
{noformat}

Maybe it's just my env issue. Can't tell.

> TestFileChecksum tests are failing after HDFS-10460 (Mac only?)
> ---------------------------------------------------------------
>
>                 Key: HDFS-10935
>                 URL: https://issues.apache.org/jira/browse/HDFS-10935
>             Project: Hadoop HDFS
>          Issue Type: Bug
>        Environment: JDK 1.8.0_91 on Mac OS X Yosemite 10.10.5
>            Reporter: Wei-Chiu Chuang
>            Assignee: SammiChen
>
> On my Mac, TestFileChecksum has been failing since HDFS-10460. However,
> the jenkins jobs have not reported the failures. Maybe it's an issue with my
> Mac or JDK.
> 9 out of 21 tests failed.
> {noformat}
> java.lang.AssertionError: Checksum mismatches!
> 	at org.junit.Assert.fail(Assert.java:88)
> 	at org.junit.Assert.assertTrue(Assert.java:41)
> 	at org.apache.hadoop.hdfs.TestFileChecksum.testStripedFileChecksumWithMissedDataBlocksRangeQuery(TestFileChecksum.java:227)
> 	at org.apache.hadoop.hdfs.TestFileChecksum.testStripedFileChecksumWithMissedDataBlocksRangeQuery10(TestFileChecksum.java:336)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> 	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
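Both failure modes in the CentOS traces above ({{unable to create new native thread}} from {{Thread.start0}}, and {{failed to create a child event loop}} from {{sun.nio.ch.IOUtil.makePipe}}) are classic symptoms of hitting per-user OS limits on processes/threads or open file descriptors, rather than JVM heap exhaustion. A minimal sketch of how to check on a CentOS test box (the limit values and the {{pgrep}} pattern are illustrative assumptions, not project recommendations):

```shell
# MiniDFSCluster tests spawn many threads, and each Netty NIO event loop
# opens extra file descriptors (an epoll instance plus a wakeup pipe).
# Inspect the soft limits in the shell that runs the tests:
ulimit -u   # max user processes -- threads count against this on Linux
ulimit -n   # max open file descriptors (soft limit)
ulimit -Hn  # hard fd limit -- the ceiling the soft limit can be raised to

# While the tests run, check actual usage of the test JVM
# (the pgrep pattern below is a guess at the surefire JVM's command line):
# pid=$(pgrep -f surefire | head -n 1)
# ps -o nlwp= -p "$pid"          # live thread count of that JVM
# ls /proc/"$pid"/fd | wc -l     # open file descriptors of that JVM

# If the limits are low, raising them for this shell before re-running may
# help (values are illustrative; persistent per-user changes belong in
# /etc/security/limits.conf):
# ulimit -u 4096
# ulimit -n 8192
```

If the counts approach the limits while TestFileChecksum runs, that would point at the environment rather than HDFS-10460 itself.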