Hi Tao, I think the first step is to make sure your Hadoop cluster is actually running and has adequate HDFS space. You can check the number of active nodes and the available HDFS space in the web UI or via the command line.
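For example, something like the following should work on a Hadoop 0.20-era deployment (commands are run on the master node; the default web UI ports are assumptions based on the stock configuration):

```shell
# Print the HDFS summary: configured/used/remaining capacity,
# plus a per-datanode listing so you can count live nodes.
hadoop dfsadmin -report

# Alternatively, check the web UIs (default ports, assuming you
# haven't changed them in your config):
#   NameNode:   http://<master-hostname>:50070/
#   JobTracker: http://<master-hostname>:50030/
```

If `dfsadmin -report` shows fewer live datanodes than you expect, that points at a connectivity or configuration problem rather than a job-level one.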
Xin

On Mon, Sep 13, 2010 at 11:36 AM, Tao You <[email protected]> wrote:
> Hi,
> We deployed Hadoop on several datacenters in EC2. We configured
> mapred.job.tracker, fs.default.name and slaves with external
> hostnames (DNS). We can start the Hadoop service, but the
> inter-communication among mappers and reducers did not work. When we
> ran wordcount, the following error occurred:
> "INFO mapred.JobClient: Task Id :
> attempt_201009121539_0002_m_000003_1, Status : FAILED
> Too many fetch-failures
> WARN mapred.JobClient: Error reading task output
> ip-10-48-98-34.eu-west-1.compute.internal"
> Is there a way to fix this problem?
> Thanks,
> Tao You
