We sometimes have hundreds of map or reduce tasks for a single job, so it
is hard to find all of the corresponding JVM processes and kill them by
hand. If we do not want to restart Hadoop, is there an automatic way to do
this?
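
In case it helps, here is a rough sketch of the sort of thing that could
be scripted. It assumes Hadoop 0.20.x, where task JVMs run with the main
class org.apache.hadoop.mapred.Child, and that the JDK's jps tool is on
the PATH; treat it as a starting point, not a tested solution. Run it on
each TaskTracker node:

    # Find the PIDs of all map/reduce task child JVMs on this node
    # and force-kill them. NOTE: this kills the task JVMs of EVERY
    # job running on the node, not just the one stuck job's tasks.
    jps -l | grep org.apache.hadoop.mapred.Child \
          | awk '{print $1}' | xargs -r kill -9

If passwordless ssh to the slaves is set up, bin/slaves.sh can fan the
same command out across the whole cluster.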

2011/7/5 <jeff.schm...@shell.com>

> Um, kill -9 "pid"?
>
> -----Original Message-----
> From: Juwei Shi [mailto:shiju...@gmail.com]
> Sent: Friday, July 01, 2011 10:53 AM
> To: common-u...@hadoop.apache.org; mapreduce-user@hadoop.apache.org
> Subject: Jobs are still in running state after executing "hadoop job
> -kill jobId"
>
> Hi,
>
> I am facing a problem where jobs are still in the running state after
> executing "hadoop job -kill jobId". I rebooted the cluster, but the job
> still cannot be killed.
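>
> To be concrete, the sequence I run looks like this (the job id below
> is only an example, not the real one):
>
>     $ hadoop job -list
>     $ hadoop job -kill job_201107011053_0001
>     $ hadoop job -list    # the killed job still shows up as running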
>
> The Hadoop version is 0.20.2.
>
> Any idea?
>
> Thanks in advance!
>
> --
> - Juwei
>
>
