[
https://issues.apache.org/jira/browse/HADOOP-10759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088843#comment-14088843
]
Allen Wittenauer commented on HADOOP-10759:
-------------------------------------------
Defaults are *always* hard-coded. That's what makes them defaults...
bq. export JAVA_HEAP_MAX=4096m then run the CLI
Except this is incorrect usage. Users should be doing either:
HADOOP_CLIENT_OPTS=-Xmx (which, granted, results in multiple Xmx settings...)
or using the preferred HADOOP_HEAPSIZE=xxx setting.
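For example, a rough sketch of both approaches (the heap value and the hadoop fs invocation are just illustrative):
{code:bash}
# Preferred: set the client JVM heap in MB before running the CLI
export HADOOP_HEAPSIZE=4096
hadoop fs -ls /

# Alternative: pass the JVM flag directly via HADOOP_CLIENT_OPTS
# (granted, this can leave multiple -Xmx flags on the java command line;
# with HotSpot the last one typically wins)
export HADOOP_CLIENT_OPTS="-Xmx4096m"
hadoop fs -ls /
{code}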
You'll note that the lines below the removed one override JAVA_HEAP_MAX with the
setting of HADOOP_HEAPSIZE...
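Roughly, the relevant lines in hadoop-config.sh look like this (paraphrased from the 2.x script; exact comments may differ):
{code:bash}
# some Java parameters
JAVA_HEAP_MAX=-Xmx1000m

# check envvars which might override default args
if [ "$HADOOP_HEAPSIZE" != "" ]; then
  JAVA_HEAP_MAX="-Xmx""$HADOOP_HEAPSIZE""m"
fi
{code}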
For all intents and purposes, JAVA_HEAP_MAX isn't really meant to be set by
users in Hadoop 2.x and earlier releases. It's an *internal* environment
variable. The big tip-off is that it doesn't begin with HADOOP_.
> Remove hardcoded JAVA_HEAP_MAX in hadoop-config.sh
> --------------------------------------------------
>
> Key: HADOOP-10759
> URL: https://issues.apache.org/jira/browse/HADOOP-10759
> Project: Hadoop Common
> Issue Type: Bug
> Components: bin
> Affects Versions: 2.4.0
> Environment: Linux64
> Reporter: sam liu
> Priority: Minor
> Fix For: 2.6.0
>
> Attachments: HADOOP-10759.patch, HADOOP-10759.patch
>
>
> In hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh, there
> is a hard-coded Java parameter: 'JAVA_HEAP_MAX=-Xmx1000m'. It should be
> removed.
--
This message was sent by Atlassian JIRA
(v6.2#6252)