On Wed 15 Feb 2012 00:10:52 +0200, Steve Lewis wrote:
> All of my current jobs, including the wordcount example I used when
> learning, are failing with the same error:
>
> Error reading task output: http://glados9.systemsbiology.net:50060/tasklog?plaintext=true&taskid=attempt_201202141331_0008_m_000001_2&filter=stdout
>
> glados9 is a slave node; other failing attempts list other slave nodes as well.
>
> I have tried deleting the userlogs directory on all slave nodes, without
> success. I see no useful logs and am at a real loss as to what to do. I
> suspect some directory somewhere has too many entries, but other than
> userlogs I am not sure where else to look.

I've also seen that output, and it was related to the job failing to
run (I think it was due to a different version of slf4j on the
classpath; make sure you use the one Hadoop is compiled with, or a
compatible one).
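
One quick way to check for that is to ask the classloader where the slf4j
classes are actually being loaded from, and compare that against the jars
in Hadoop's lib directory. A rough sketch (the class name Slf4jProbe is my
own; run it on the same classpath as your job so the answer reflects what
the task sees):

    // Prints which jars the slf4j API and binding were loaded from,
    // so you can spot a version that conflicts with Hadoop's own copy.
    public class Slf4jProbe {
        public static void main(String[] args) {
            ClassLoader cl = Slf4jProbe.class.getClassLoader();

            // Where the slf4j API class came from (null if not on the classpath)
            java.net.URL api = cl.getResource("org/slf4j/LoggerFactory.class");
            System.out.println("slf4j-api loaded from: " + api);

            // Where the binding (implementation) was picked up from
            java.net.URL binding =
                    cl.getResource("org/slf4j/impl/StaticLoggerBinder.class");
            System.out.println("slf4j binding loaded from: " + binding);
        }
    }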

Check the TaskTracker and JobTracker logs on the data node where the
task ran to see why it failed.
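
If it helps, a quick way to pull out the relevant lines is to scan the
TaskTracker log on that node for the failing attempt id; that usually
surfaces the real exception. A rough sketch (the log path is only a guess,
adjust it to wherever your installation writes its TaskTracker logs):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    // Prints every TaskTracker log line that mentions the failed attempt.
    public class AttemptLogGrep {
        public static void main(String[] args) throws IOException {
            // Assumed log location; attempt id taken from the error above
            String log = "/var/log/hadoop/hadoop-tasktracker-glados9.log";
            String attempt = "attempt_201202141331_0008_m_000001_2";

            BufferedReader in = new BufferedReader(new FileReader(log));
            String line;
            while ((line = in.readLine()) != null) {
                if (line.contains(attempt)) {
                    System.out.println(line);
                }
            }
            in.close();
        }
    }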
