MAPREDUCE-6583. Clarify confusing sentence in MapReduce tutorial document. Contributed by Kai Sasaki.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7995a6ea
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7995a6ea
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7995a6ea

Branch: refs/heads/yarn-2877
Commit: 7995a6ea4dc524e5b17606359d09df72d771224a
Parents: 0f82b5d
Author: Akira Ajisaka <[email protected]>
Authored: Mon Dec 21 00:16:14 2015 +0900
Committer: Akira Ajisaka <[email protected]>
Committed: Mon Dec 21 00:16:14 2015 +0900

----------------------------------------------------------------------
 hadoop-mapreduce-project/CHANGES.txt       | 3 +++
 .../src/site/markdown/MapReduceTutorial.md | 6 +++---
 2 files changed, 6 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7995a6ea/hadoop-mapreduce-project/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/CHANGES.txt b/hadoop-mapreduce-project/CHANGES.txt
index 55d2442..bdbdc22 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -692,6 +692,9 @@ Release 2.7.3 - UNRELEASED
     MAPREDUCE-6549. multibyte delimiters with LineRecordReader cause duplicate
     records (wilfreds via rkanter)
 
+    MAPREDUCE-6583. Clarify confusing sentence in MapReduce tutorial document.
+    (Kai Sasaki via aajisaka)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7995a6ea/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/MapReduceTutorial.md
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/MapReduceTutorial.md b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/MapReduceTutorial.md
index e2aaaf6..74c6c66 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/MapReduceTutorial.md
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/MapReduceTutorial.md
@@ -311,7 +311,7 @@ public void reduce(Text key, Iterable<IntWritable> values,
 }
 ```
 
-The `Reducer` implementation, via the `reduce` method just sums up the values, which are the occurence counts for each key (i.e. words in this example).
+The `Reducer` implementation, via the `reduce` method just sums up the values, which are the occurrence counts for each key (i.e. words in this example).
 
 Thus the output of the job is:
 
@@ -348,7 +348,7 @@ Maps are the individual tasks that transform input records into intermediate rec
 
 The Hadoop MapReduce framework spawns one map task for each `InputSplit` generated by the `InputFormat` for the job.
 
-Overall, `Mapper` implementations are passed the `Job` for the job via the [Job.setMapperClass(Class)](../../api/org/apache/hadoop/mapreduce/Job.html) method. The framework then calls [map(WritableComparable, Writable, Context)](../../api/org/apache/hadoop/mapreduce/Mapper.html) for each key/value pair in the `InputSplit` for that task. Applications can then override the `cleanup(Context)` method to perform any required cleanup.
+Overall, mapper implementations are passed to the job via [Job.setMapperClass(Class)](../../api/org/apache/hadoop/mapreduce/Job.html) method. The framework then calls [map(WritableComparable, Writable, Context)](../../api/org/apache/hadoop/mapreduce/Mapper.html) for each key/value pair in the `InputSplit` for that task. Applications can then override the `cleanup(Context)` method to perform any required cleanup.
 
 Output pairs do not need to be of the same types as input pairs. A given input pair may map to zero or many output pairs. Output pairs are collected with calls to context.write(WritableComparable, Writable).
 
@@ -848,7 +848,7 @@ In the following sections we discuss how to submit a debug script with a job. Th
 
 ##### How to distribute the script file:
 
-The user needs to use [DistributedCache](#DistributedCache) to *distribute* and *symlink* thescript file.
+The user needs to use [DistributedCache](#DistributedCache) to *distribute* and *symlink* to the script file.
 
 ##### How to submit the script:
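For context, the tutorial sentences this diff touches describe the `Reducer`'s `reduce` method summing the per-key occurrence counts emitted by the mappers. A rough plain-Java sketch of that summation step (with Hadoop's `Text` / `Iterable<IntWritable>` / `Context` types replaced by standard-library stand-ins, so this is an illustration rather than the tutorial's actual WordCount code):

```java
import java.util.Arrays;
import java.util.List;

public class ReduceSketch {
    // Stand-in for WordCount's reduce(): sum the occurrence counts
    // collected for a single key (a word). In the real tutorial code the
    // result would be emitted via context.write(key, new IntWritable(sum))
    // rather than returned.
    static int reduce(String key, List<Integer> values) {
        int sum = 0;
        for (int v : values) {
            sum += v; // each value is one occurrence count for `key`
        }
        return sum;
    }

    public static void main(String[] args) {
        // "hello" was counted once in two input splits and twice in a third.
        System.out.println(reduce("hello", Arrays.asList(1, 1, 2))); // prints 4
    }
}
```

The framework groups all intermediate values for a key before invoking `reduce`, which is why a simple sum over the grouped values yields the total count for that word.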
