I modified the algorithm that decides whether a job should end successfully or
not. I'm not sure whether I'm removing the temporary directories before the
last reduce task saves its result to HDFS.

But I think this error means that the reduce task couldn't save its data to
HDFS.
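For context, here is a minimal sketch of the retry pattern that can produce a
"Could not complete file ... retrying" message when a client closes an HDFS
file: the client asks the namenode to finalize the file and keeps retrying
while the answer is "not yet". The interface and names below are my own, for
illustration only, not Hadoop's actual code:

[code]
// Hypothetical sketch of the close/complete retry loop (not real Hadoop code).
public class CompleteRetrySketch {

    /** Stand-in for the namenode RPC that finalizes a file (hypothetical). */
    interface NameNodeStub {
        // Returns true once the file's blocks are accounted for and the file
        // can be closed; false means "not yet, try again later".
        boolean complete(String path, String clientName);
    }

    static void closeFile(NameNodeStub namenode, String path, String clientName)
            throws InterruptedException {
        boolean completed = namenode.complete(path, clientName);
        while (!completed) {
            // Roughly the point where a message like
            // "Could not complete file <path> retrying..." would be logged.
            System.out.println("Could not complete file " + path + " retrying...");
            Thread.sleep(400); // back off before asking the namenode again
            completed = namenode.complete(path, clientName);
        }
    }
}
[/code]

If the namenode keeps answering "not yet" (for example because the datanodes
were not in a good state after only the JT and TT were restarted, or because
the output path was cleaned up underneath the writer), a loop like this never
finishes, which would match the job hanging instead of completing.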

On 10 April 2012 18:56, Harsh J <[email protected]> wrote:

> Pedro,
>
> Could we also know what was modified, since you claim it happens only
> in the modified build?
>
> On Tue, Apr 10, 2012 at 9:15 PM, Pedro Costa <[email protected]> wrote:
> > When I'm executing a MapReduce example on my modified Hadoop MapReduce,
> > sometimes the reduce task gives me this error, and the example doesn't
> > finish:
> >
> > [code]
> > 2012-04-10 11:32:38,110 INFO hdfs.DFSClient.closeInternal:3231 Could not
> > complete file /user/output//part-r-00000 retrying....
> > [/code]
> >
> > This normally happens when I'm executing an example after restarting the JT
> > and TT without restarting the NameNode and the DataNode.
> >
> > What does this error mean?
> >
> > Please note that this happens only with my modified version and not with
> > the official one. I modified version 0.20.1.
> >
> > --
> > Best regards,
>
>
>
> --
> Harsh J
>



-- 
Best regards,
