[ https://issues.apache.org/jira/browse/HADOOP-10759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14087813#comment-14087813 ]

Allen Wittenauer commented on HADOOP-10759:
-------------------------------------------

Also, before I forget to mention it, while I can appreciate this feature:

bq. However, for smaller machine such as a virtual machine, it would be nicer 
if it can scale dynamically.

The rest of it isn't true.

bq.  it may use up to 1GB for heap on machine that has greater than 4GB 
physical memory. 

directly contradicts

bq. Another benefit of removing this hard coded value is to make sure that the 
Hadoop command line is not capped to 1GB for trivial operation 

If the algorithm as described in the Oracle documentation is accurate, removing the 
default JAVA_HEAP_MAX will *still* limit the dynamically chosen value to a 1GB max, 
thus accomplishing nothing. Additionally, HADOOP_CLIENT_OPTS and HADOOP_HEAPSIZE can 
both already be used to override the 1GB max from the command line.
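To illustrate (a rough sketch only, assuming a stock 2.x bin/hadoop and hadoop-config.sh; exact behavior may vary by release and JVM version):

{code}
# What the JVM would pick on its own with no -Xmx: print the ergonomic default.
# Per the Oracle ergonomics doc discussed here, on the JVMs of that era this
# tops out around 1GB no matter how much physical memory is present.
java -XX:+PrintFlagsFinal -version | grep -i MaxHeapSize

# Both of these already override the 1000m default for a single invocation:
HADOOP_HEAPSIZE=2048 hadoop fs -ls /          # hadoop-config.sh turns this into -Xmx2048m
HADOOP_CLIENT_OPTS="-Xmx4g" hadoop fs -ls /   # appended after JAVA_HEAP_MAX, so the later -Xmx wins
{code}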

One other thing:  JAVA_HEAP_MAX is set in a few more places in the current 
shell code base (e.g., bin/yarn).  Just removing it from hadoop-config.sh is 
incomplete.
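For what it's worth, a quick way to see every assignment in the tree (illustrative only; run from the top of the source checkout):

{code}
# Lists the shell scripts that set JAVA_HEAP_MAX, e.g. bin/yarn in addition to hadoop-config.sh.
grep -rn --include='*.sh' --include='hadoop' --include='yarn' --include='mapred' --include='hdfs' 'JAVA_HEAP_MAX=' .
{code}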

> Remove hardcoded JAVA_HEAP_MAX in hadoop-config.sh
> --------------------------------------------------
>
>                 Key: HADOOP-10759
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10759
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: bin
>    Affects Versions: 2.4.0
>         Environment: Linux64
>            Reporter: sam liu
>            Priority: Minor
>             Fix For: 2.6.0
>
>         Attachments: HADOOP-10759.patch, HADOOP-10759.patch
>
>
> In hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh, there 
> is a hard code for Java parameter: 'JAVA_HEAP_MAX=-Xmx1000m'. It should be 
> removed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
