True, I suspected that, however none died. (Nothing in the task's Killed/Failed fields.)
On Mon, Jan 5, 2009 at 4:04 PM, Doug Cutting <[email protected]> wrote:
> Values can drop if tasks die and must be re-run.
>
> Doug
>
> Aaron Kimball wrote:
>>
>> The actual number of input records is most likely steadily increasing. The
>> counters on the web site are inaccurate until the job is complete; their
>> values will fluctuate wildly. I'm not sure why this is.
>>
>> - Aaron
>>
>> On Mon, Jan 5, 2009 at 8:34 AM, Saptarshi Guha
>> <[email protected]> wrote:
>>
>>> Hello,
>>> When I check the job tracker web page and look at the Map Input
>>> Records read, the map input record count goes up to, say, 1.4M, then drops
>>> to 410K, and then goes up again.
>>> The same happens with input/output bytes and output records.
>>>
>>> Why is this? Is there something wrong with the mapper code? In my map
>>> function, I assume I have received one line of input.
>>> The oscillatory behavior does not occur for tiny datasets, but for 1GB
>>> of data (tiny for others) I see this happening.
>>>
>>> Thanks
>>> Saptarshi
>>> --
>>> Saptarshi Guha - [email protected]
>>
>

--
Saptarshi Guha - [email protected]
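The effect Doug describes can be sketched in a few lines. This is illustrative Python only, not Hadoop's actual implementation; the `aggregate` function and the attempt tuples are hypothetical stand-ins for the job tracker recomputing its totals from currently-counted task attempts. When an attempt is discarded (failed, killed, or superseded by a speculative re-run), its partial counts vanish from the displayed total until the replacement attempt catches up, so the counter appears to drop and climb again:

```python
# Hypothetical sketch (not Hadoop code): why an aggregated job counter can drop.
# The displayed total is recomputed by summing over task attempts that still
# count; a discarded attempt's partial progress disappears from the sum.

def aggregate(attempts):
    """Sum a counter (e.g. map input records) over attempts still counted."""
    return sum(records for records, counted in attempts if counted)

# Three map task attempts report partial progress.
attempts = [(500_000, True), (500_000, True), (400_000, True)]
total_before = aggregate(attempts)   # 1,400,000

# One attempt is discarded; a fresh re-run starts nearly from zero.
attempts[1] = (500_000, False)
attempts.append((10_000, True))
total_after = aggregate(attempts)    # 910,000 -- the counter "dropped"

print(total_before, total_after)
```

Once the re-run attempt finishes, the total converges to the true figure, which is why the final counters on a completed job are reliable even when the live ones oscillate.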
