Hi Arun,

I won't cross-post hereafter.

I had the same heap size value, the same number of jobs, and the same
scheduler, and it worked fine on Hadoop 1.0.4 for 8 to 9 months. I'm facing
this JobTracker OOME issue only on Hadoop 1.2.1.

Even after raising the heap size to a maximum of 16G, it still eats the
whole memory.
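For reference, this is the kind of change I'm testing in hadoop-env.sh (a
sketch only; the conf path depends on the install, and 16384 is just the
16G mentioned above expressed in MB):

```shell
# conf/hadoop-env.sh (sketch; path and value are assumptions)
# HADOOP_HEAPSIZE sets the maximum heap, in MB, for each Hadoop daemon,
# including the JobTracker, unless a daemon-specific *_OPTS variable
# overrides it.
export HADOOP_HEAPSIZE="16384"   # 16G
```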

Thanks,
Viswa
On Oct 15, 2013 7:30 AM, "Arun C Murthy" <a...@hortonworks.com> wrote:

> Please don't cross-post.
>
> HADOOP_HEAPSIZE of 1024 is too low. You might want to bump it up to 16G or
> more, depending on:
> * #jobs
> * Scheduler you use.
>
> Arun
>
> On Oct 11, 2013, at 9:58 AM, Viswanathan J <jayamviswanat...@gmail.com>
> wrote:
>
> Hi,
>
> I'm running a 14-node Hadoop cluster with TaskTrackers running on all
> nodes.
>
> I've set the JobTracker's default heap size in hadoop-env.sh:
>
> HADOOP_HEAPSIZE="1024"
>
> I've set mapred.child.java.opts in mapred-site.xml as:
>
> <property>
>   <name>mapred.child.java.opts</name>
>   <value>-Xmx2048m</value>
> </property>
>
>
> --
> Regards,
> Viswa.J
>
>
> --
> Arun C. Murthy
> Hortonworks Inc.
> http://hortonworks.com/
>
>
>
