No, the assertion exception is the only exception; everything else runs
smoothly. I will upload the patch to TestMapRed() in a few minutes (it will
apply to the 0.11.1 release).

On 2/13/07, Devaraj Das (JIRA) <[EMAIL PROTECTED]> wrote:


    [ https://issues.apache.org/jira/browse/HADOOP-1014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12472998 ]

Devaraj Das commented on HADOOP-1014:
-------------------------------------

Do you see any other error/exception for the failing job (other than the
assertion exception)?

> map/reduce is corrupting data between map and reduce
> ----------------------------------------------------
>
>                 Key: HADOOP-1014
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1014
>             Project: Hadoop
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.11.1
>            Reporter: Owen O'Malley
>         Assigned To: Devaraj Das
>            Priority: Blocker
>             Fix For: 0.11.2
>
>
> It appears that random data corruption is happening between the map and
> the reduce. This looks to be a blocker until it is resolved. There were
> two relevant messages on hadoop-dev:
> from Mike Smith:
> The map/reduce jobs are not consistent in the Hadoop 0.11 release and in
> trunk when you rerun the same job. I have observed this inconsistency of
> the map output in different jobs. A simple test to double-check is to use
> Hadoop 0.11 with Nutch trunk.
> from Albert Chern:
> I am having the same problem with my own map/reduce jobs.  I have a job
> which requires two pieces of data per key, and just as a sanity check I
> make sure that it gets both in the reducer, but sometimes it doesn't.
> What's even stranger is that the same tasks that complain about missing
> key/value pairs will maybe fail two or three times, but then succeed on a
> subsequent try, which leads me to believe that the bug has to do with
> randomization (I'm not sure, but I think the map outputs are shuffled?).
> All of my code works perfectly with 0.9, so I went back and just compared
> the sizes of the outputs.  For some jobs, the outputs from 0.11 were
> consistently 4 bytes larger, probably due to changes in SequenceFile.  But
> for others, the output sizes were all over the place.  Some partitions
> were empty, some were correct, and some were missing data.  There seems to
> be something seriously wrong with 0.11, so I suggest you use 0.9.  I've
> been trying to pinpoint the bug, but its random nature is really annoying.
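
For context, here is a minimal sketch of the kind of reducer-side sanity
check Albert describes, written against the old (pre-generics) mapred API of
that era. The class name PairCheckReducer and the expected count of two
values per key are illustrative assumptions, not his actual code:

    import java.io.IOException;
    import java.util.Iterator;

    import org.apache.hadoop.io.Writable;
    import org.apache.hadoop.io.WritableComparable;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reducer;
    import org.apache.hadoop.mapred.Reporter;

    // Illustrative reducer: counts the values delivered for each key and
    // fails loudly when the expected pair is incomplete.
    public class PairCheckReducer extends MapReduceBase implements Reducer {
        public void reduce(WritableComparable key, Iterator values,
                           OutputCollector output, Reporter reporter)
                throws IOException {
            int count = 0;
            while (values.hasNext()) {
                Writable value = (Writable) values.next();
                output.collect(key, value);  // pass the data through unchanged
                count++;
            }
            if (count != 2) {
                // Under the shuffle corruption described above, this fires
                // nondeterministically: the same task can fail on one
                // attempt and pass on a retry.
                throw new IOException("key " + key + " arrived with " + count
                        + " values, expected 2");
            }
        }
    }

Throwing from reduce() fails the task attempt, which the framework then
retries; the "fails two or three times, then succeeds" pattern Albert reports
is exactly what such a check would produce if the shuffle delivered an
inconsistent set of values on different attempts.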

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

