Thanks Harsh!

That did the trick after some more searching. This link was helpful:
http://hadoop.apache.org/common/docs/r1.0.3/api/org/apache/hadoop/util/GenericOptionsParser.html

The right way to do it was the following:
hadoop jar some.jar some.class.to.execute \
  -Dmapred.map.child.java.opts="-Xmx2048M" \
  -Dmapred.reduce.child.java.opts="-Xmx2048M" param1 param2
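
For the archive: the -D options above are only picked up if the driver class
runs them through GenericOptionsParser, which is what ToolRunner does for a
class implementing Tool. A rough sketch of such a driver (class and job names
are just placeholders, not the actual classes from this thread):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyDriver extends Configured implements Tool {

  @Override
  public int run(String[] args) throws Exception {
    // getConf() already contains whatever was passed via -D on the command
    // line, because ToolRunner ran GenericOptionsParser before calling run().
    Configuration conf = getConf();
    Job job = new Job(conf, "my-job");  // Job.getInstance(conf) in newer APIs
    job.setJarByClass(MyDriver.class);
    // set mapper/reducer and input/output paths from args here ...
    return job.waitForCompletion(true) ? 0 : 1;
  }

  public static void main(String[] args) throws Exception {
    // ToolRunner strips the generic options (-D, -conf, -files, ...) and
    // passes only the remaining arguments (param1, param2, ...) to run().
    System.exit(ToolRunner.run(new Configuration(), new MyDriver(), args));
  }
}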

Just in case someone else has the same problem.
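
Harsh also mentioned setting the property from inside the driver code via
conf.set(name, value); for completeness, a rough sketch of that approach
(class and job names are placeholders, and most of the job setup is left out):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class HeapConfigExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Same effect as the -D options above, but hard-coded; must be set
    // before the Job object is created.
    conf.set("mapred.map.child.java.opts", "-Xmx2048M");
    conf.set("mapred.reduce.child.java.opts", "-Xmx2048M");
    Job job = new Job(conf, "heap-config-example");
    job.setJarByClass(HeapConfigExample.class);
    // ... set mapper/reducer and input/output paths here, then:
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}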

Greetings,
Mat

2012/6/13 Harsh J <ha...@cloudera.com>

> That is a per-job property and you can raise it when submitting the job
> itself.
>
> You can pass it via -D args (see
> http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/util/Tool.html)
> or via a conf.set(name, value) call from inside the code. Generally, any
> property that is not of the form mapred(uce).{jobtracker,tasktracker}.*
> does not require a restart of services.
>
> On Wed, Jun 13, 2012 at 6:24 PM, Matthias Zengler
> <matthias.zeng...@googlemail.com> wrote:
> > I want to change the mapreduce.(map|reduce).java.opts properties to get a
> > bigger heap for my map and reduce tasks.
> >
> > 2012/6/13 Harsh J <ha...@cloudera.com>
> >
> >> Matthias,
> >>
> >> It depends on which config you're trying to change. There are several
> >> per-job configs that do not require changes to the JT/TT mapred-site.xml
> >> and can be passed via -D parameters to a job's configuration directly
> >> from the CLI (if you use the Tool and ToolRunner methods of writing a
> >> driver), or set manually via the JobConf/Job.getConfiguration() objects
> >> inside the code.
> >>
> >> So a question for you: which configs do you wish to change on every job
> >> run? Because only config changes that apply to the JT or TTs need those
> >> services to be restarted; the rest can be applied on a per-job basis,
> >> without requiring a restart of anything.
> >>
> >> On Wed, Jun 13, 2012 at 5:50 PM, Matthias Zengler
> >> <matthias.zeng...@googlemail.com> wrote:
> >> > Hi,
> >> >
> >> > I've got a question regarding Hadoop configuration. Is it possible to
> >> > pass configuration parameters at job start-up?
> >> > Something like this:
> >> >
> >> > hadoop -HADOOP_HEAPSIZE=4G jar some.jar some.class.to.execute param1 param2
> >> >
> >> > Or do I have to restart the Hadoop cluster every time I want to change
> >> > something, even if it is just for a specific job or workflow?
> >> > We have some jobs running which need a lot of time, and we want to
> >> > start another one with a slightly different configuration because it
> >> > needs more memory to finish.
> >> > We are using CDH3.
> >> >
> >> > Greetings,
> >> > Mat
> >>
> >>
> >>
> >> --
> >> Harsh J
> >>
>
>
>
> --
> Harsh J
>
