[ https://issues.apache.org/jira/browse/HADOOP-3644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Douglas updated HADOOP-3644:
----------------------------------

    Resolution: Won't Fix
        Status: Resolved  (was: Patch Available)

HADOOP-3655 seems to address the need, and changing the default just to suit a 
particular platform/JVM doesn't seem like the right approach.

That said, it's very tempting to just raise the limit. Are there any reasons 
why we shouldn't? The unit tests aren't so long-lived or memory-intensive that 
I'd fear masking errors with too high a limit. I'm going to close this as 
"Won't fix" for now, since it's been in patch-queue limbo for a couple of weeks 
and there is a solution already committed, but I wouldn't object if it were 
reopened should someone want to argue that the limit is too low for 
platform-agnostic reasons.
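
For reference, the extra footprint on 64-bit comes mostly from wider object 
headers and references (this JVM predates compressed oops), so the same test 
data simply needs more heap. A rough, hypothetical sketch (class name and 
counts are invented for illustration, not taken from the patch) that can be 
run with the same -Xmx under 32-bit and 64-bit JVMs to compare usage:

  import java.util.ArrayList;
  import java.util.List;

  // Hypothetical illustration only: allocate many small objects and print
  // the approximate heap used, which is noticeably larger on a 64-bit JVM
  // because object headers and references are wider.
  public class PointerSizeDemo {
    public static void main(String[] args) {
      Runtime rt = Runtime.getRuntime();
      System.gc();
      long before = rt.totalMemory() - rt.freeMemory();

      List<int[]> items = new ArrayList<int[]>();
      for (int i = 0; i < 1000000; i++) {
        items.add(new int[] { i });   // one small heap object per element
      }

      System.gc();
      long after = rt.totalMemory() - rt.freeMemory();
      System.out.println("elements:           " + items.size());
      System.out.println("approx. bytes used: " + (after - before));
      System.out.println("max heap (-Xmx):    " + rt.maxMemory());
    }
  }

Increasing the memory available to JUnit, as the attached patch suggests, 
would typically mean raising the heap given to the forked test JVM, e.g. the 
maxmemory attribute on Ant's <junit> task.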

> TestLocalJobControl test gets OutOfMemoryError on 64-bit Java
> -------------------------------------------------------------
>
>                 Key: HADOOP-3644
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3644
>             Project: Hadoop Core
>          Issue Type: Bug
>            Reporter: Matei Zaharia
>            Priority: Trivial
>             Fix For: 0.17.2, 0.18.0
>
>         Attachments: testmem.patch
>
>
> The TestLocalJobControl unit test fails on 64-bit Java on Linux with an 
> OutOfMemoryError. Here is the exact Java environment:
> $ java -version
> java version "1.5.0_07"
> Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_07-b03)
> Java HotSpot(TM) 64-Bit Server VM (build 1.5.0_07-b03, mixed mode)
> The test runs fine with 32-bit Java. The problem is likely that some of the 
> data structures become bigger when using 64-bit pointers. As a fix, I've 
> suggested simply increasing the memory available to JUnit.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
