Is there any way to kill Hadoop jobs that are taking too long to execute?

What I want to achieve is: if a job has been running for longer than
"_some_predefined_timeout_limit_", it should be killed automatically.

Is it possible to achieve this through shell scripts, or in some other way?
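Something like the following is what I have in mind — a minimal sketch, assuming the Hadoop 1.x CLI, where `hadoop job -list` prints running jobs with the JobId in column 1 and the start time (epoch milliseconds) in column 4, and `hadoop job -kill <id>` kills a job. The timeout value and column positions are assumptions and would need checking against the actual output on the cluster:

```shell
#!/bin/sh
# Kill any running Hadoop job whose elapsed time exceeds TIMEOUT_SECS.
# Assumption: "hadoop job -list" output has JobId in $1, start time (ms) in $4,
# with two header lines before the job rows (true for Hadoop 0.20/1.x).
TIMEOUT_SECS=3600          # the predefined timeout limit (assumed: 1 hour)

now_ms=$(date +%s000)      # current time in epoch milliseconds (approx.)

# should_kill START_MS NOW_MS TIMEOUT_SECS
# Succeeds (exit 0) when the job's elapsed time exceeds the timeout.
should_kill() {
    elapsed_ms=$(( $2 - $1 ))
    [ "$elapsed_ms" -gt $(( $3 * 1000 )) ]
}

# Skip the two header lines, pull out JobId and start time, kill stale jobs.
hadoop job -list 2>/dev/null | awk 'NR > 2 {print $1, $4}' | \
while read -r job_id start_ms; do
    if should_kill "$start_ms" "$now_ms" "$TIMEOUT_SECS"; then
        echo "Killing $job_id (running longer than ${TIMEOUT_SECS}s)"
        hadoop job -kill "$job_id"
    fi
done
```

Run periodically from cron, this would sweep the cluster and kill anything past the limit. The per-job decision is isolated in `should_kill` so the timeout logic can be tested without a cluster.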

Thanks,
Praveenesh
