[
https://issues.apache.org/jira/browse/HADOOP-4052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12627459#action_12627459
]
Steve Loughran commented on HADOOP-4052:
----------------------------------------
A good solution here would be to let anyone override the memory limit in their
build.properties file: expose a test.ulimit value that can be overridden.
That avoids the test having to be clever about JVM size; it leaves the choice
up to the user whose tests fail.
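Purely as a sketch of what I mean (the property name test.ulimit comes from the
suggestion above; the default value and class/method names below are made up
for illustration, not the actual test code):

    // Hypothetical sketch: read the ulimit for the test from a system
    // property (wired up to test.ulimit in build.properties) instead of
    // hardcoding it. The default below is invented for illustration.
    public class TestUlimitConfig {
      private static final long DEFAULT_ULIMIT_KB = 786432L;  // assumed default, in KB

      public static long getTestUlimit() {
        String override = System.getProperty("test.ulimit");
        if (override == null || override.trim().length() == 0) {
          return DEFAULT_ULIMIT_KB;
        }
        return Long.parseLong(override.trim());
      }
    }

On the Ant side, build.properties would then carry something like
test.ulimit=1048576, and the <junit> task would forward it to the test JVM with
a <sysproperty> element.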
> org.apache.hadoop.streaming.TestUlimit fails on JRockit 64-bit; not enough
> memory
> ---------------------------------------------------------------------------------
>
> Key: HADOOP-4052
> URL: https://issues.apache.org/jira/browse/HADOOP-4052
> Project: Hadoop Core
> Issue Type: Bug
> Components: contrib/streaming
> Affects Versions: 0.19.0
> Environment: Linux morzine 2.6.22-15-generic #1 SMP Fri Jul 11
> 18:56:36 UTC 2008 x86_64 GNU/Linux
> java version "1.6.0_02"
> Java(TM) SE Runtime Environment (build 1.6.0_02-b05)
> BEA JRockit(R) (build R27.4.0-90-89592-1.6.0_02-20070928-1715-linux-x86_64,
> compiled mode)
> Reporter: Steve Loughran
>
> the testUlimit test sets a memory limit that is too small for the JVM to
> start, so the command fails with a -1 response instead, which breaks the test.