That's right. You can verify it when you run your job by looking at the "job file" link at the top of the job's page in the web UI; that shows you all the parameters used to start the job.
Just be careful not to put your cluster into an unstable state when you do
that, e.g. look at how many mappers / reducers can run concurrently and
check that you're not offering each more memory than the machine has spare.

Hope this helps,
Tim

On Sat, Sep 7, 2013 at 8:20 PM, Arko Provo Mukherjee <
arkoprovomukher...@gmail.com> wrote:
> Hi Harsh,
>
> Thanks for your reply!
>
> I have implemented the Tool interface to incorporate your suggestion.
>
> So when I run my job, can I just pass -Dmapred.child.java.opts=-Xmx2000m?
>
> Thanks & regards
> Arko
>
>
> On Sat, Sep 7, 2013 at 4:54 AM, Harsh J <ha...@cloudera.com> wrote:
>
>> You can pass that config set as part of your job (jobConf.set(…) or
>> job.getConfiguration().set(…)). Alternatively, if you implement Tool
>> and use its grabbed Configuration, you can also pass it via a
>> -Dname=value argument when running the job (the option has to precede
>> any custom options).
>>
>> On Sat, Sep 7, 2013 at 2:06 AM, Arko Provo Mukherjee
>> <arkoprovomukher...@gmail.com> wrote:
>> > Hello All,
>> >
>> > I am running my job on a Hadoop cluster and it fails due to
>> > insufficient Java heap memory.
>> >
>> > I searched on Google and found that I need to add the following to
>> > the conf files:
>> > <property>
>> >   <name>mapred.child.java.opts</name>
>> >   <value>-Xmx2000m</value>
>> > </property>
>> >
>> > However, I don't want to request the administrator to change settings,
>> > as it is a long process.
>> >
>> > Is there a way I can ask Hadoop to use more heap space in the slave
>> > nodes without changing the conf files, via some command line parameter?
>> >
>> > Thanks & regards
>> > Arko
>>
>>
>>
>> --
>> Harsh J
>
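
For reference, a minimal sketch of the Tool/ToolRunner pattern Harsh describes is below. The driver class name, job name, and input/output paths are placeholders, not anything from this thread; the point is that ToolRunner runs GenericOptionsParser over the arguments, so a -D override passed before your own arguments ends up in the Configuration returned by getConf(), and you can equally set the property in code.

    // Sketch only: a driver that implements Tool so -D options on the
    // command line are absorbed into the job Configuration by ToolRunner.
    // Class names, job name, and paths are placeholders.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    public class MyJobDriver extends Configured implements Tool {

      @Override
      public int run(String[] args) throws Exception {
        // Already contains any -D overrides parsed by ToolRunner.
        Configuration conf = getConf();

        // Programmatic alternative to the -D flag:
        // conf.set("mapred.child.java.opts", "-Xmx2000m");

        Job job = new Job(conf, "my job");
        job.setJarByClass(MyJobDriver.class);
        // job.setMapperClass(MyMapper.class);
        // job.setReducerClass(MyReducer.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
      }

      public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new MyJobDriver(), args));
      }
    }

With a driver like that (jar name and paths again placeholders), the override goes before the custom arguments, as Harsh notes:

    hadoop jar myjob.jar MyJobDriver -Dmapred.child.java.opts=-Xmx2000m /input /output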