[
https://issues.apache.org/jira/browse/HADOOP-10093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13821326#comment-13821326
]
Hudson commented on HADOOP-10093:
---------------------------------
FAILURE: Integrated in Hadoop-Mapreduce-trunk #1607 (See
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1607/])
HADOOP-10093. hadoop-env.cmd sets HADOOP_CLIENT_OPTS with a max heap size that
is too small. Contributed by Shanyu Zhao. (cnauroth:
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1541343)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.cmd
> hadoop-env.cmd sets HADOOP_CLIENT_OPTS with a max heap size that is too small.
> ------------------------------------------------------------------------------
>
> Key: HADOOP-10093
> URL: https://issues.apache.org/jira/browse/HADOOP-10093
> Project: Hadoop Common
> Issue Type: Bug
> Components: conf
> Affects Versions: 2.2.0
> Reporter: shanyu zhao
> Assignee: shanyu zhao
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HADOOP-10093.patch
>
>
> HADOOP-9211 increased the default max heap size set by hadoop-env.sh to 512m.
> The same change needs to be applied to hadoop-env.cmd for Windows.
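A minimal sketch of what the corrected line in hadoop-env.cmd could look like, assuming the same 512m value that HADOOP-9211 applied to hadoop-env.sh; the exact comment wording and surrounding context are assumptions, not quoted from the patch:

```
@rem The maximum amount of heap to use for hadoop client commands, in MB.
@rem Assumed to mirror the 512m default that HADOOP-9211 set in hadoop-env.sh.
set HADOOP_CLIENT_OPTS=-Xmx512m %HADOOP_CLIENT_OPTS%
```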
--
This message was sent by Atlassian JIRA
(v6.1#6144)