Hello Junyoung,

Are you multithreading your reduce tasks?
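
If those reduce tasks spawn their own writer threads, concurrent
writers can leave stale files under the task's attempt directory,
files that FileOutputCommitter can then fail to delete when it tries
to commit the task; that would match the IOException below. For
reference, a typical single-threaded MultipleTextOutputFormat setup
with the old mapred API looks like this minimal sketch (the class
name and the Text key/value types are my assumptions, not your
actual code):

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat;

// Minimal sketch, assuming Text keys and values; the class name is
// hypothetical. Each record is routed to a file named after its key,
// which is how a job ends up with thousands of output files.
public class KeyBasedOutputFormat
        extends MultipleTextOutputFormat<Text, Text> {
    @Override
    protected String generateFileNameForKeyValue(Text key, Text value,
                                                 String name) {
        // "name" is the default leaf file name (e.g. part-00001);
        // prefixing it with the key splits the output per key.
        return key.toString() + "/" + name;
    }
}

Registered via conf.setOutputFormat(KeyBasedOutputFormat.class), and
with all writes kept on the single reducer thread, commitTask should
be able to move the attempt directory into place cleanly.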

On Mon, May 2, 2011 at 12:52 PM, Jun Young Kim <[email protected]> wrote:
> Hi, all.
>
> I'm getting many failures in the reduce step.
>
> See this error:
>
> java.io.IOException: Failed to delete earlier output of task:
> attempt_201105021341_0021_r_000001_0
>        at org.apache.hadoop.mapred.FileOutputCommitter.moveTaskOutputs(FileOutputCommitter.java:157)
>        at org.apache.hadoop.mapred.FileOutputCommitter.moveTaskOutputs(FileOutputCommitter.java:173)
>        at org.apache.hadoop.mapred.FileOutputCommitter.moveTaskOutputs(FileOutputCommitter.java:173)
>        at org.apache.hadoop.mapred.FileOutputCommitter.commitTask(FileOutputCommitter.java:133)
>        at org.apache.hadoop.mapred.OutputCommitter.commitTask(OutputCommitter.java:233)
>        at org.apache.hadoop.mapred.Task.commit(Task.java:962)
>        at org.apache.hadoop.mapred.Task.done(Task.java:824)
>        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:391)
>        at org.apache.hadoop.mapred.Child$4.run(Child.java:217)
>        at java.security.AccessController.doPrivileged(Native Method)
>        at javax.security.auth.Subject.doAs(Subject.java:396)
>        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:742)
>        at org.apache.hadoop.mapred.C
>
>
> This error started happening after I adopted the MultipleTextOutputFormat
> class in my job.
> The job produces thousands of different output files on HDFS.
>
> Can anybody guess the reason?
>
> thanks.
>
> --
> Junyoung Kim ([email protected])
>
>



-- 
Harsh J
