[ https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16221069#comment-16221069 ]
Allen Wittenauer commented on HDFS-12711:
-----------------------------------------

https://builds.apache.org/job/PreCommit-HDFS-Build2/2

This is better or worse, depending upon your point of view. Many of these:

{code}
testDeleteEZWithMultipleUsers(org.apache.hadoop.hdfs.TestTrashWithEncryptionZones)  Time elapsed: 5.134 sec  <<< ERROR!
java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:717)
	at io.netty.util.concurrent.SingleThreadEventExecutor.shutdownGracefully(SingleThreadEventExecutor.java:557)
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.shutdownGracefully(MultithreadEventExecutorGroup.java:146)
	at io.netty.util.concurrent.AbstractEventExecutorGroup.shutdownGracefully(AbstractEventExecutorGroup.java:69)
	at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.close(DatanodeHttpServer.java:272)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:1986)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNode(MiniDFSCluster.java:1868)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:1858)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1837)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1811)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1804)
	at org.apache.hadoop.hdfs.TestTrashWithEncryptionZones.teardown(TestTrashWithEncryptionZones.java:118)
{code}

which eventually turn into these:

{code}
/bin/sh: 1: Cannot fork
/bin/sh: 1: Cannot fork
/bin/sh: 1: Cannot fork
/bin/sh: 1: Cannot fork
/bin/sh: 1: Cannot fork
/bin/sh: 1: Cannot fork
{code}

but that's not its final form! Oh no, it then turns into reams and reams of this:

{code}
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Cannot create GC thread. Out of system resources.
# An error report file with more information is saved as:
# /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/hs_err_pid20306.log
#
{code}

Now we just have to wait and see if the walls put up were enough to stop H4 from crashing. (The 1k default process limit is *probably* a smidge too low, but not by too much.)

> deadly hdfs test
> ----------------
>
>                 Key: HDFS-12711
>                 URL: https://issues.apache.org/jira/browse/HDFS-12711
>             Project: Hadoop HDFS
>          Issue Type: Test
>  Affects Versions: 2.9.0, 2.8.2
>          Reporter: Allen Wittenauer
>          Priority: Critical
>       Attachments: HDFS-12711.branch-2.00.patch
>

-- 
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
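
The process-limit point above can be sketched as a quick pre-flight check on a build host. This is an illustrative sh snippet, not anything from the Hadoop test harness; the 1024 threshold simply mirrors the "1k default" mentioned in the comment, and the messages are made up for the example. (On Linux, each Java native thread counts against the same per-user process limit as a fork, which is why "unable to create new native thread" and "Cannot fork" show up together.)

```shell
#!/bin/sh
# Hypothetical check: compare the per-user process limit (ulimit -u)
# against the ~1k default called out in the comment.
limit=$(ulimit -u)
threshold=1024   # assumption: roughly the "1k default" from the comment

if [ "$limit" = "unlimited" ]; then
  echo "process limit: unlimited"
elif [ "$limit" -le "$threshold" ]; then
  # Thread-heavy MiniDFSCluster tests may exhaust this budget.
  echo "process limit $limit is at or below $threshold; tests may hit OOME"
else
  echo "process limit $limit is above $threshold"
fi
```

Raising the soft limit for a single build shell (if the hard limit allows) would be `ulimit -u 4096` before launching Maven.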