Matthias,

It depends on which config you're trying to change. There are several per-job configs that do not require changes to the JT/TT mapred-site.xml files; they can be passed as -D parameters into a job's configuration directly from the CLI (if your driver uses the Tool and ToolRunner way of writing a driver), or set manually via the JobConf/Job.getConfiguration() objects inside your code.
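For example, with a ToolRunner-based driver the generic -D options go between the class name and your program's own arguments (the jar and class names below are just placeholders from your command; the property values are illustrative, not recommendations):

```shell
# -D flags are parsed by GenericOptionsParser/ToolRunner and applied to
# this job only -- no mapred-site.xml edit, no restart. Note the space
# after -D (unlike the JVM's -Dname=value form).
hadoop jar some.jar some.class.to.execute \
  -D mapred.child.java.opts=-Xmx4g \
  -D mapred.reduce.tasks=10 \
  param1 param2
```

Anything your driver reads out of its Configuration after ToolRunner.run() will see these values.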
So a question for you: which set of configs do you wish to change on every job run? Only config changes that apply to the JT or TTs need those services to be restarted; the rest can be applied on a per-job basis, without requiring a restart of anything.

On Wed, Jun 13, 2012 at 5:50 PM, Matthias Zengler <matthias.zeng...@googlemail.com> wrote:
> Hi,
>
> I've got a question regarding hadoop configuration. Is it possible to pass
> configuration parameters on job start up?
> Something like that:
>
> hadoop -HADOOP_HEAPSIZE=4G jar some.jar some.class.to.execute param1 param2
>
> Or do I have to restart the hadoop cluster every time I want to change
> something even if it is just for a specific job or workflow?
> We have some jobs running which needs a lot of time and we want to start
> another one with a slightly different configuration because it needs more
> memory to finish.
> We are using CDH3.
>
> Greetings,
> Mat

--
Harsh J
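To address the quoted example directly: HADOOP_HEAPSIZE controls the heap of the client-side JVM that bin/hadoop launches (its value is in MB), and it can be set per invocation as an environment variable rather than in hadoop-env.sh. Task JVM memory is a separate, per-job property. A sketch (jar/class names are placeholders from the original mail):

```shell
# Client JVM heap for just this invocation of the "hadoop jar" process;
# bin/hadoop reads HADOOP_HEAPSIZE (in MB) to build its -Xmx flag.
HADOOP_HEAPSIZE=4096 hadoop jar some.jar some.class.to.execute param1 param2

# Map/reduce task JVM heap is set per job instead, e.g. via the
# mapred.child.java.opts property (requires a ToolRunner-style driver
# for the -D form to work):
hadoop jar some.jar some.class.to.execute \
  -D mapred.child.java.opts=-Xmx2g param1 param2
```

Neither form touches the cluster-wide config, so long-running jobs already in flight are unaffected.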