Hi Harsh,

Thanks for your reply!

I have implemented the Tool interface to incorporate your suggestion.

So when I run my job, can I just pass -Dmapred.child.java.opts=-Xmx2000m?
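
For context, this is roughly what my driver looks like now (a minimal sketch only; the class name, jar name, and paths below are placeholders, not my actual job):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    public class MyDriver extends Configured implements Tool {

      @Override
      public int run(String[] args) throws Exception {
        // getConf() already contains anything passed as -Dname=value,
        // because ToolRunner parses the generic options before calling run().
        Configuration conf = getConf();
        Job job = Job.getInstance(conf, "my job");
        job.setJarByClass(MyDriver.class);
        // ... set mapper/reducer classes and input/output paths from args ...
        return job.waitForCompletion(true) ? 0 : 1;
      }

      public static void main(String[] args) throws Exception {
        // ToolRunner strips the -D options and hands the remaining args to run().
        System.exit(ToolRunner.run(new Configuration(), new MyDriver(), args));
      }
    }

So the invocation would be something like this (jar name and paths are just examples):

    hadoop jar myjob.jar MyDriver -Dmapred.child.java.opts=-Xmx2000m /input /output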

Thanks & regards
Arko


On Sat, Sep 7, 2013 at 4:54 AM, Harsh J <ha...@cloudera.com> wrote:

> You can pass that config setting as part of your job (jobConf.set(…) or
> job.getConfiguration().set(…)). Alternatively, if you implement Tool
> and use the Configuration it picks up, you can also pass it via a
> -Dname=value argument when running the job (the option has to precede
> any custom options).
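>
> For example, a quick sketch of the programmatic route (the property name
> and value are the ones from your mail below; "my job" is a placeholder
> job name):
>
>   Configuration conf = new Configuration();
>   conf.set("mapred.child.java.opts", "-Xmx2000m");
>   Job job = Job.getInstance(conf, "my job");
>   // or, once the Job object already exists:
>   // job.getConfiguration().set("mapred.child.java.opts", "-Xmx2000m");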
>
> On Sat, Sep 7, 2013 at 2:06 AM, Arko Provo Mukherjee
> <arkoprovomukher...@gmail.com> wrote:
> > Hello All,
> >
> > I am running my job on a Hadoop cluster and it fails due to insufficient
> > Java heap memory.
> >
> > I searched on Google and found that I need to add the following to the
> > conf files:
> >   <property>
> >     <name>mapred.child.java.opts</name>
> >     <value>-Xmx2000m</value>
> >   </property>
> >
> > However, I don't want to ask the administrator to change the settings, as
> > that is a long process.
> >
> > Is there a way I can ask Hadoop to use more heap space on the slave nodes
> > without changing the conf files, via some command line parameter?
> >
> > Thanks & regards
> > Arko
>
>
>
> --
> Harsh J
>
