We sometimes have hundreds of map or reduce tasks for a single job, so it is hard to find all of them and kill the corresponding JVM processes by hand. If we do not want to restart Hadoop, is there an automatic way to do this?
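One possible approach is a minimal shell sketch along these lines. It assumes Hadoop 0.20 launches each map/reduce task in a separate child JVM whose main class (`org.apache.hadoop.mapred.Child`) is visible in the `ps` command line; the class name and `ps` flags are assumptions, not taken from this thread.

```shell
# Sketch: find the per-task JVM PIDs by matching the task JVM's main
# class in the process table. The "[o]" bracket trick stops grep from
# matching its own process line.
pids=$(ps axww | grep '[o]rg.apache.hadoop.mapred.Child' | awk '{print $1}')

# Print the candidates first so they can be reviewed before killing.
echo "$pids"

# After reviewing the list, the same PIDs can be fed to kill:
# echo "$pids" | xargs -r kill -9
```

This only cleans up orphaned task JVMs on one node; it would have to be run (e.g. via ssh) on every TaskTracker machine, and it does not address why `hadoop job -kill` left the job in the running state.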
2011/7/5 <[email protected]>

> Um kill -9 "pid" ?
>
> -----Original Message-----
> From: Juwei Shi [mailto:[email protected]]
> Sent: Friday, July 01, 2011 10:53 AM
> To: [email protected]; [email protected]
> Subject: Jobs are still in running state after executing "hadoop job
> -kill jobId"
>
> Hi,
>
> I faced a problem that the jobs are still running after executing
> "hadoop job -kill jobId". I rebooted the cluster but the job still
> can not be killed.
>
> The hadoop version is 0.20.2.
>
> Any idea?
>
> Thanks in advance!
>
> --
> - Juwei
