I've also experienced this behavior on 0.19 and on older versions. I think
the same bug also causes counters to come and go completely on some tasks
that have a small number of map jobs and progress slowly.

I haven't had the chance to go hunting this down anywhere, but I can confirm
that it happens without any tasks dying.
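The drop-on-re-run mechanism Doug mentions below can be sketched as a toy simulation (illustrative names only, not actual Hadoop code; the assumption that the job-level counter is the sum of each task's latest-attempt value is mine):

```python
def aggregate_counter(per_task_values):
    # Illustrative assumption: the job-level counter shown on the web UI
    # is the sum of each task's latest-attempt value.
    return sum(per_task_values.values())

# Two map tasks, each having reported 700K input records so far.
progress = {"task_0": 700_000, "task_1": 700_000}
before = aggregate_counter(progress)  # 1,400,000 displayed

# task_1's attempt is lost and re-run; the fresh attempt starts from
# zero, so its contribution to the aggregate resets.
progress["task_1"] = 0
after = aggregate_counter(progress)   # the displayed total dips

assert after < before
```

Under that assumption the total would dip on every re-run and climb back as the new attempt catches up, which matches the oscillation described, though not the cases where no tasks died.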

-Todd

On Mon, Jan 5, 2009 at 4:15 PM, Saptarshi Guha <[email protected]>wrote:

> True, I suspected that; however, none died. (Nothing in the Tasks
> Killed/Failed field)
>
> On Mon, Jan 5, 2009 at 4:04 PM, Doug Cutting <[email protected]> wrote:
> > Values can drop if tasks die and must be re-run.
> >
> > Doug
> >
> > Aaron Kimball wrote:
> >>
> >> The actual number of input records is most likely steadily increasing.
> The
> >> counters on the web site are inaccurate until the job is complete; their
> >> values will fluctuate wildly. I'm not sure why this is.
> >>
> >> - Aaron
> >>
> >> On Mon, Jan 5, 2009 at 8:34 AM, Saptarshi Guha
> >> <[email protected]>wrote:
> >>
> >>> Hello,
> >>> When I check the JobTracker web page and look at the Map input
> >>> records counter, the value climbs to, say, 1.4 million, then drops
> >>> to 410K, and then climbs again.
> >>> The same happens with input/output bytes and output records.
> >>>
> >>> Why is this? Is there something wrong with my mapper code? In my map
> >>> function, I assume I have received one line of input.
> >>> The oscillatory behavior does not occur for tiny datasets, but with
> >>> 1GB of data (tiny by most standards) I see it happening.
> >>>
> >>> Thanks
> >>> Saptarshi
> >>> --
> >>> Saptarshi Guha - [email protected]
> >>>
> >>
> >
>
>
>
> --
> Saptarshi Guha - [email protected]
>