Hi,

Every Map/Reduce task reports progress through its Reporter. You can set the
configuration parameter mapred.task.timeout (in milliseconds) to your desired
value; any task that goes longer than that without reporting progress is
killed by the framework. Note this is a per-task progress timeout, not a
wall-clock limit on the whole job.
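
For example, something along these lines (a rough sketch; my-job.jar and
MyDriver are placeholders, and it assumes your driver goes through
ToolRunner/GenericOptionsParser so that -D is understood):

  # kill any task that reports no progress for more than 5 minutes
  hadoop jar my-job.jar MyDriver -D mapred.task.timeout=300000 input output

The same property can also be set cluster-wide in mapred-site.xml.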

Good Luck.

On 01/30/2012 04:14 PM, praveenesh kumar wrote:
Yeah, I am aware of that, but it requires you to explicitly monitor the job,
look up its jobid and then run the hadoop job -kill command.
What I want to know is: "Is there any way to do all this automatically by
providing some timer or something, so that if my job takes more than
some predefined time, it gets killed automatically?"
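
For example, a wrapper script along these lines (just a sketch of the idea;
it assumes the hadoop binary is on the PATH, my-job.jar and MyDriver are
placeholders, and that this is the only job running, since the output of
hadoop job -list is parsed naively):

  #!/bin/sh
  TIMEOUT=3600                      # predefined limit, in seconds

  # submit the job; the client blocks until the job finishes
  hadoop jar my-job.jar MyDriver input output &
  CLIENT=$!

  # watchdog: killing the local client alone would not stop the job on
  # the cluster, so after the timeout look up the running jobid(s) and
  # kill them server-side as well
  (
    sleep "$TIMEOUT"
    for JOBID in $(hadoop job -list | awk 'NR > 2 { print $1 }'); do
      hadoop job -kill "$JOBID"
    done
    kill "$CLIENT" 2>/dev/null
  ) &
  WATCHER=$!

  # if the job finishes in time, cancel the watchdog
  wait "$CLIENT"
  kill "$WATCHER" 2>/dev/null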

Thanks,
Praveenesh

On Mon, Jan 30, 2012 at 12:38 PM, Prashant Kommireddi
<prash1...@gmail.com> wrote:

You might want to take a look at the kill command: "hadoop job -kill
<jobid>".
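
You can get the jobid from "hadoop job -list", e.g. (the jobid below is
made up for illustration):

  hadoop job -list
  # prints a count and a header row, then one line per running job;
  # the first column is the JobId
  hadoop job -kill job_201201291106_0001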

Prashant

On Sun, Jan 29, 2012 at 11:06 PM, praveenesh kumar <praveen...@gmail.com>
wrote:
Is there any way through which we can kill hadoop jobs that are taking
too long to execute?

What I want to achieve is - if some job runs for more than
"_some_predefined_timeout_limit", it should be killed automatically.

Is it possible to achieve this through shell scripts or any other way?

Thanks,
Praveenesh

