Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 803fdc9f5 -> e94a8aea5


MAPREDUCE-6583. Clarify confusing sentence in MapReduce tutorial document. 
Contributed by Kai Sasaki.

(cherry picked from commit 7995a6ea4dc524e5b17606359d09df72d771224a)
(cherry picked from commit 8607cb6074b40733d8990618a44c490f9f303ae3)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e94a8aea
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e94a8aea
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e94a8aea

Branch: refs/heads/branch-2.7
Commit: e94a8aea57fdbe20e496d6f1a17ac756a95d4e90
Parents: 803fdc9
Author: Akira Ajisaka <[email protected]>
Authored: Mon Dec 21 00:16:14 2015 +0900
Committer: Akira Ajisaka <[email protected]>
Committed: Mon Dec 21 00:19:11 2015 +0900

----------------------------------------------------------------------
 hadoop-mapreduce-project/CHANGES.txt                           | 3 +++
 .../src/site/markdown/MapReduceTutorial.md                     | 6 +++---
 2 files changed, 6 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e94a8aea/hadoop-mapreduce-project/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/CHANGES.txt b/hadoop-mapreduce-project/CHANGES.txt
index 748117c..1a8cc48 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -28,6 +28,9 @@ Release 2.7.3 - UNRELEASED
     MAPREDUCE-6549. multibyte delimiters with LineRecordReader cause
     duplicate records (wilfreds via rkanter)
 
+    MAPREDUCE-6583. Clarify confusing sentence in MapReduce tutorial document.
+    (Kai Sasaki via aajisaka)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e94a8aea/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/MapReduceTutorial.md
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/MapReduceTutorial.md b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/MapReduceTutorial.md
index 0f24549..2b931ef 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/MapReduceTutorial.md
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/MapReduceTutorial.md
@@ -309,7 +309,7 @@ public void reduce(Text key, Iterable<IntWritable> values,
 }
 ```
 
-The `Reducer` implementation, via the `reduce` method just sums up the values, which are the occurence counts for each key (i.e. words in this example).
+The `Reducer` implementation, via the `reduce` method just sums up the values, which are the occurrence counts for each key (i.e. words in this example).
 
 Thus the output of the job is:
 
@@ -346,7 +346,7 @@ Maps are the individual tasks that transform input records into intermediate rec
 
 The Hadoop MapReduce framework spawns one map task for each `InputSplit` generated by the `InputFormat` for the job.
 
-Overall, `Mapper` implementations are passed the `Job` for the job via the [Job.setMapperClass(Class)](../../api/org/apache/hadoop/mapreduce/Job.html) method. The framework then calls [map(WritableComparable, Writable, Context)](../../api/org/apache/hadoop/mapreduce/Mapper.html) for each key/value pair in the `InputSplit` for that task. Applications can then override the `cleanup(Context)` method to perform any required cleanup.
+Overall, mapper implementations are passed to the job via [Job.setMapperClass(Class)](../../api/org/apache/hadoop/mapreduce/Job.html) method. The framework then calls [map(WritableComparable, Writable, Context)](../../api/org/apache/hadoop/mapreduce/Mapper.html) for each key/value pair in the `InputSplit` for that task. Applications can then override the `cleanup(Context)` method to perform any required cleanup.
 
 Output pairs do not need to be of the same types as input pairs. A given input pair may map to zero or many output pairs. Output pairs are collected with calls to context.write(WritableComparable, Writable).
 
@@ -846,7 +846,7 @@ In the following sections we discuss how to submit a debug script with a job. Th
 
 ##### How to distribute the script file:
 
-The user needs to use [DistributedCache](#DistributedCache) to *distribute* and *symlink* thescript file.
+The user needs to use [DistributedCache](#DistributedCache) to *distribute* and *symlink* to the script file.
 
 ##### How to submit the script:
 

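Likewise, the last hunk concerns distributing and symlinking a debug script. A hedged sketch of that setup, assuming the standard cache-file API and the debug-script properties named in the same tutorial; the HDFS path and the "debugscript" symlink name are illustrative, not taken from the patch:

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class DebugScriptSetup {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "job with debug script");

    // Ship the script through the distributed cache; the "#debugscript"
    // fragment asks the framework to symlink it into the task's working
    // directory under that name.
    job.addCacheFile(new URI("hdfs:///user/hadoop/scripts/debug.sh#debugscript"));

    // Point map and reduce tasks at the symlinked script (property names as
    // listed in the tutorial; verify against your release).
    job.getConfiguration().set("mapreduce.map.debug.script", "./debugscript");
    job.getConfiguration().set("mapreduce.reduce.debug.script", "./debugscript");

    // ... remaining job setup (mapper, reducer, input/output paths) ...
  }
}
```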