[ https://issues.apache.org/jira/browse/HADOOP-10759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088851#comment-14088851 ]
Allen Wittenauer commented on HADOOP-10759:
-------------------------------------------
BTW, just so folks don't have to look it up, this is what hadoop-env.sh says:
{code}
# The maximum amount of heap to use, in MB. Default is 1000.
#export HADOOP_HEAPSIZE=
#export HADOOP_NAMENODE_INIT_HEAPSIZE=""
{code}
Removing JAVA_HEAP_MAX *breaks* that default. There is also *no* mention
of JAVA_HEAP_MAX in anything user-facing in the source. HADOOP_HEAPSIZE, of
course, is defined right up there.
> Remove hardcoded JAVA_HEAP_MAX in hadoop-config.sh
> --------------------------------------------------
>
> Key: HADOOP-10759
> URL: https://issues.apache.org/jira/browse/HADOOP-10759
> Project: Hadoop Common
> Issue Type: Bug
> Components: bin
> Affects Versions: 2.4.0
> Environment: Linux64
> Reporter: sam liu
> Priority: Minor
> Fix For: 2.6.0
>
> Attachments: HADOOP-10759.patch, HADOOP-10759.patch
>
>
> In hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh, there
> is a hardcoded Java parameter: 'JAVA_HEAP_MAX=-Xmx1000m'. It should be
> removed.