Please don't cross-post.

A HADOOP_HEAPSIZE of 1024 (the value is in MB) is too low for the JobTracker. You 
might want to bump it up to 16G or more, depending on:
* the number of jobs
* the scheduler you use

See the sketch below for what that might look like.
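For example, since HADOOP_HEAPSIZE is specified in MB, something like the
following in conf/hadoop-env.sh on the JobTracker node would give it a 16G
heap (16384 is only an illustrative value; size it to your cluster):

  # conf/hadoop-env.sh on the node running the JobTracker
  # HADOOP_HEAPSIZE is in MB: 16384 MB = 16G (illustrative value only)
  export HADOOP_HEAPSIZE="16384"

Keep in mind HADOOP_HEAPSIZE applies to every Hadoop daemon launched on that
node, so if you want to grow only the JobTracker, appending an -Xmx to
HADOOP_JOBTRACKER_OPTS in the same file should do it (the later -Xmx on the
JVM command line wins). Also note that mapred.child.java.opts only sizes the
task JVMs spawned by the tasktrackers; it has no effect on the JobTracker heap.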

Arun

On Oct 11, 2013, at 9:58 AM, Viswanathan J <jayamviswanat...@gmail.com> wrote:

> Hi,
> 
> I'm running a 14 nodes Hadoop cluster with tasktrackers running in all nodes.
> 
> Have set the jobtracker default memory size in hadoop-env.sh
> 
> HADOOP_HEAPSIZE="1024"
> 
> Have set the mapred.child.java.opts value in mapred-site.xml as,
> 
> <property>
>   <name>mapred.child.java.opts</name>
>   <value>-Xmx2048m</value>
> </property>
> 
> 
> -- 
> Regards,
> Viswa.J

--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/


