[ https://issues.apache.org/jira/browse/HADOOP-4721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12650525#action_12650525 ]
Steve Loughran commented on HADOOP-4721:
----------------------------------------
Stack Trace (resent)
[junit] Exception in thread "Main Thread" java.lang.OutOfMemoryError: allocLargeObjectOrArray - Object size: 164626424, Num elements: 164626404
[junit]     at java.util.Arrays.copyOf(Arrays.java:2786)
[junit]     at java.io.ByteArrayOutputStream.toByteArray(ByteArrayOutputStream.java:133)
[junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:448)
[junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911)
[junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768)
[junit] Running org.apache.hadoop.mapred.TestSetupAndCleanupFailure
[junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0 sec
[junit] Test org.apache.hadoop.mapred.TestSetupAndCleanupFailure FAILED (timeout)
This is in Ant's JUnitTestRunner:

    if (startTestSuiteSuccess) {
        sendOutAndErr(new String(outStrm.toByteArray()),
                      new String(errStrm.toByteArray()));
    }
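For a sense of the memory cost, here is a minimal sketch (hypothetical class name, not Ant code) of why this capture pattern needs several times the captured output in heap: toByteArray() copies the whole internal buffer (the Arrays.copyOf frame in the trace above), and new String(byte[]) then decodes it into a char[] on top of that.

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;

    public class CaptureCost {
        public static void main(String[] args) throws IOException {
            ByteArrayOutputStream outStrm = new ByteArrayOutputStream();
            byte[] chunk = new byte[1 << 20];       // 1 MB of simulated test output
            for (int i = 0; i < 164; i++) {
                outStrm.write(chunk);               // internal buffer grows by doubling to >= 164 MB
            }
            byte[] copy = outStrm.toByteArray();    // Arrays.copyOf: a second ~164 MB array
            String text = new String(copy);         // decoded char[]: up to 2x that again
            System.out.println(text.length());      // run with a small -Xmx to see the OOM
        }
    }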
The OOM is triggered by either of the captured streams being too big for the JUnit runner to handle. This is something to file against Ant, but there is also a root cause here to deal with: what is creating 164MB worth of stdout or stderr?
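As a stopgap on the Ant side, one could imagine capping the capture buffer so a runaway test gets its output truncated instead of killing the runner. A hypothetical sketch (no such class ships with Ant; the name and limit are illustrative only):

    import java.io.ByteArrayOutputStream;

    // Keeps at most 'limit' bytes; anything written past that is silently dropped.
    public class CappedByteArrayOutputStream extends ByteArrayOutputStream {
        private final int limit;

        public CappedByteArrayOutputStream(int limit) {
            this.limit = limit;
        }

        @Override
        public synchronized void write(byte[] b, int off, int len) {
            int room = limit - count;               // 'count' is the bytes buffered so far
            if (room > 0) {
                super.write(b, off, Math.min(len, room));
            }
        }

        @Override
        public synchronized void write(int b) {
            if (count < limit) {
                super.write(b);
            }
        }
    }

Constructing outStrm and errStrm as, say, new CappedByteArrayOutputStream(1 << 20) would keep a 1MB prefix of the output and avoid the allocation spike, at the cost of losing the tail of the logs.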
> OOM in .TestSetupAndCleanupFailure
> ----------------------------------
>
> Key: HADOOP-4721
> URL: https://issues.apache.org/jira/browse/HADOOP-4721
> Project: Hadoop Core
> Issue Type: Bug
> Components: test
> Affects Versions: 0.20.0
> Environment: 64 bit linux with JRockit 64-bit JVM
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Minor
>
> The root cause may be my lifecycle changes, but I'm seeing an OOM in
> TestSetupAndCleanupFailure