Harsh,

It works. Thanks a lot!!!

2011/7/2 Harsh J <ha...@cloudera.com>

> Juwei,
>
> It's odd that a killed job should get "recovered" back into the running
> state. Can you not simply disable the JT recovery feature? (I believe
> it's turned off by default.)
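>
> For reference, a minimal sketch of what disabling recovery would look
> like in mapred-site.xml (assuming a 0.20/1.x-era JobTracker; the exact
> property name may differ across versions):
>
>   <!-- do not recover jobs that were running when the JT last went down -->
>   <property>
>     <name>mapred.jobtracker.restart.recover</name>
>     <value>false</value>
>   </property>
>
> The JobTracker needs a restart to pick up the change.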
>
> On Fri, Jul 1, 2011 at 10:47 PM, Juwei Shi <shiju...@gmail.com> wrote:
> > Thanks Harsh.
> >
> > The killed jobs come back as "recovered" jobs after I reboot MapReduce/HDFS.
> >
> > Is there any other way to delete the status records of the running jobs,
> > so that they will not be recovered after restarting the JT?
> >
> > 2011/7/2 Harsh J <ha...@cloudera.com>
> >>
> >> Juwei,
> >>
> >> Please do not cross-post to multiple lists. I believe this question
> >> suits the mapreduce-user@ list, so I am replying only there.
> >>
> >> On Fri, Jul 1, 2011 at 9:22 PM, Juwei Shi <shiju...@gmail.com> wrote:
> >> > Hi,
> >> >
> >> > I am facing a problem where jobs are still running after executing
> >> > "hadoop job -kill jobId". I rebooted the cluster, but the job still
> >> > cannot be killed.
> >>
> >> What do the JT logs say after you attempt to kill a job ID? Does the
> >> same job ID keep running even afterwards, or are you seeing other jobs
> >> continue to launch?
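> >>
> >> (To narrow it down, something along these lines would help; the log
> >> file path below is only a guess and depends on your install:
> >>
> >>   hadoop job -list                # job IDs the JT currently thinks are running
> >>   hadoop job -status <jobId>      # state of the specific job
> >>   grep <jobId> $HADOOP_HOME/logs/*jobtracker*.log   # kill attempts in the JT log
> >> )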
> >>
> >> --
> >> Harsh J
> >
> > --
> > - Juwei
> >
>
> --
> Harsh J
>

-- 
- Juwei
