Chris Douglas updated HADOOP-5657:
----------------------------------
Attachment: 5657-0.patch
The following are now validated in the reduce:
* Each map produces exactly one record for each of 4096 small keys
* The map output includes unique large records, each straddled by a pair of small records from
another map, so corruption introduced by the merge is detected (a sketch of this kind of
check follows below)
* The patch also changes some parameters for {{testReduceFromDisk}} so that intermediate
merges involving in-memory data occur occasionally
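A minimal sketch of the kind of per-key check this implies, assuming the old {{org.apache.hadoop.mapred}} API and an illustrative class name ({{ValidatingReducer}} is not part of 5657-0.patch): every small key should arrive exactly once from every map, so the reduce can simply count the values seen for each key.

{code:java}
// Illustrative sketch only; not the implementation in 5657-0.patch.
import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class ValidatingReducer extends MapReduceBase
    implements Reducer<Text, Text, NullWritable, NullWritable> {

  private int expectedMaps;

  public void configure(JobConf job) {
    // Each small key should be emitted exactly once by every map.
    expectedMaps = job.getNumMapTasks();
  }

  public void reduce(Text key, Iterator<Text> values,
      OutputCollector<NullWritable, NullWritable> output, Reporter reporter)
      throws IOException {
    int count = 0;
    while (values.hasNext()) {
      values.next();
      ++count;
    }
    // A mismatch means the merge dropped or duplicated a record.
    if (count != expectedMaps) {
      throw new IOException("Expected " + expectedMaps + " records for key "
          + key + ", found " + count);
    }
  }
}
{code}

As for nudging {{testReduceFromDisk}} toward intermediate merges that involve in-memory data, shuffle settings along the following lines would do it; the property names are the standard shuffle keys of this era, but the values shown are illustrative rather than the ones in the patch.

{code:java}
JobConf job = new JobConf();
// Reserve a large share of the reduce heap for shuffled map output so that
// several segments accumulate in memory before anything spills to disk.
job.set("mapred.job.shuffle.input.buffer.percent", "0.8");
// Start an intermediate merge once the in-memory buffer is mostly full...
job.set("mapred.job.shuffle.merge.percent", "0.9");
// ...or after only a few in-memory segments have been fetched.
job.setInt("mapred.inmem.merge.threshold", 4);
{code}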
> Validate data passed through TestReduceFetch
> --------------------------------------------
>
> Key: HADOOP-5657
> URL: https://issues.apache.org/jira/browse/HADOOP-5657
> Project: Hadoop Core
> Issue Type: Improvement
> Components: mapred, test
> Reporter: Chris Douglas
> Attachments: 5657-0.patch
>
>
> While TestReduceFetch verifies the reduce semantics for reducing from
> in-memory segments, it does not validate the data it reads. Data corrupted
> during the merge will not be detected.