This simple map-reduce application takes nearly an hour to finish running
on the two-node cluster, due to lots of Failed/Killed task attempts, while
on the single-node cluster the same application takes only 1 minute ... I am
quite confused about why there are so many Failed/Killed attempts ..
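For reference, the configuration follows the two-host layout from that tutorial. A rough sketch (the hostnames master and slave and the IPs are the tutorial's examples, not necessarily my exact files):

```
# conf/masters (on the master node)
master

# conf/slaves (on the master node) - where DataNodes/TaskTrackers run
master
slave

# /etc/hosts on BOTH nodes - each node must resolve the other's
# hostname consistently, or reducers cannot fetch map output
192.168.0.1    master
192.168.0.2    slave
```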

On Tue, Apr 7, 2009 at 10:40 PM, xiaolin guo <[email protected]> wrote:

> I am trying to setup a small hadoop cluster , everything was ok before I
> moved from single node cluster to two-node cluster. I followed the article
> http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Multi-Node_Cluster)
> to configure the master and slaves. However, when I tried to run the example
> wordcount map-reduce application, the reduce task got stuck at 19% for a
> long time. Then I got a notice: "INFO mapred.JobClient: TaskId :
> attempt_200904072219_0001_m_000002_0, Status : FAILED too many fetch
> errors" and an error message: "Error reading task outputslave".
>
> All map tasks on both task nodes had finished, which could be verified
> on the task tracker pages.
>
> Both nodes work well in single-node mode, and the Hadoop file system seems
> to be healthy in multi-node mode.
>
> Can anyone help me with this issue? I have already been entangled in this
> issue for a long time ...
>
> Thanks very much!
>
