This is the result of the 'speculative execution' feature of Hadoop
Map/Reduce. Nothing really to worry about, but you may want to have
control over it (JobConf has methods that turn it on/off per phase or
overall).
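For reference, a minimal sketch of those switches. The JobConf methods are setSpeculativeExecution(boolean) for both phases at once, and setMapSpeculativeExecution / setReduceSpeculativeExecution per phase; the equivalent configuration properties below are the 0.20-era names, so double-check them against your Hadoop version:

```xml
<!-- mapred-site.xml (or per-job configuration): disable speculative
     execution separately for the map and reduce phases -->
<property>
  <name>mapred.map.tasks.speculative.execution</name>
  <value>false</value>
</property>
<property>
  <name>mapred.reduce.tasks.speculative.execution</name>
  <value>false</value>
</property>
```

With speculative execution left on, the framework launches backup attempts for slow tasks and kills whichever copy loses the race, which is what shows up in the "Killed" column.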

2011/4/21 ajing.wang <[email protected]>:
> Hi guys,
>   I'm confused by my MapReduce job's running state. See the table of job
> running states below.
>
>
> Kind     Total Tasks (successful+failed+killed)  Successful  Failed  Killed  Start Time            Finish Time
> Setup    1                                       1           0       0       21-Apr-2011 17:56:52  21-Apr-2011 17:56:53 (1sec)
> Map      14                                      12          0       2       21-Apr-2011 17:56:55  21-Apr-2011 17:58:59 (2mins, 4sec)
> Reduce   8                                       8           0       0       21-Apr-2011 17:57:13  21-Apr-2011 17:59:04 (1mins, 51sec)
> Cleanup  1                                       1           0       0       21-Apr-2011 17:58:55  21-Apr-2011 17:58:56 (1sec)
>
>
>
> Why does the map phase have 2 killed tasks, and what makes Hadoop kill a
> task?
> Thanks.
> jbm.



-- 
Harsh J
