Repository: hadoop
Updated Branches:
  refs/heads/trunk 2a0082c51 -> 23c3ff85a


http://git-wip-us.apache.org/repos/asf/hadoop/blob/23c3ff85/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/MapReduceTutorial.md
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/MapReduceTutorial.md b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/MapReduceTutorial.md
index 16f3afb..1d5b7f2 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/MapReduceTutorial.md
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/MapReduceTutorial.md
@@ -85,11 +85,11 @@ A MapReduce *job* usually splits the input data-set into independent chunks whic
 
 Typically the compute nodes and the storage nodes are the same, that is, the MapReduce framework and the Hadoop Distributed File System (see [HDFS Architecture Guide](../../hadoop-project-dist/hadoop-hdfs/HdfsDesign.html)) are running on the same set of nodes. This configuration allows the framework to effectively schedule tasks on the nodes where data is already present, resulting in very high aggregate bandwidth across the cluster.
 
-The MapReduce framework consists of a single master `ResourceManager`, one slave `NodeManager` per cluster-node, and `MRAppMaster` per application (see [YARN Architecture Guide](../../hadoop-yarn/hadoop-yarn-site/YARN.html)).
+The MapReduce framework consists of a single master `ResourceManager`, one worker `NodeManager` per cluster-node, and `MRAppMaster` per application (see [YARN Architecture Guide](../../hadoop-yarn/hadoop-yarn-site/YARN.html)).
 
 Minimally, applications specify the input/output locations and supply *map* and *reduce* functions via implementations of appropriate interfaces and/or abstract-classes. These, and other job parameters, comprise the *job configuration*.
 
-The Hadoop *job client* then submits the job (jar/executable etc.) and configuration to the `ResourceManager` which then assumes the responsibility of distributing the software/configuration to the slaves, scheduling tasks and monitoring them, providing status and diagnostic information to the job-client.
+The Hadoop *job client* then submits the job (jar/executable etc.) and configuration to the `ResourceManager` which then assumes the responsibility of distributing the software/configuration to the workers, scheduling tasks and monitoring them, providing status and diagnostic information to the job-client.
 
 Although the Hadoop framework is implemented in Java™, MapReduce applications need not be written in Java.
 
@@ -213,10 +213,10 @@ Sample text-files as input:
     $ bin/hadoop fs -ls /user/joe/wordcount/input/
     /user/joe/wordcount/input/file01
     /user/joe/wordcount/input/file02
-    
+
     $ bin/hadoop fs -cat /user/joe/wordcount/input/file01
     Hello World Bye World
-    
+
     $ bin/hadoop fs -cat /user/joe/wordcount/input/file02
     Hello Hadoop Goodbye Hadoop
 
@@ -787,11 +787,11 @@ or Counters.incrCounter(String, String, long) in the `map` and/or `reduce` metho
 
 Applications specify the files to be cached via urls (hdfs://) in the `Job`. The `DistributedCache` assumes that the files specified via hdfs:// urls are already present on the `FileSystem`.
 
-The framework will copy the necessary files to the slave node before any tasks for the job are executed on that node. Its efficiency stems from the fact that the files are only copied once per job and the ability to cache archives which are un-archived on the slaves.
+The framework will copy the necessary files to the worker node before any tasks for the job are executed on that node. Its efficiency stems from the fact that the files are only copied once per job and the ability to cache archives which are un-archived on the workers.
 
 `DistributedCache` tracks the modification timestamps of the cached files. Clearly the cache files should not be modified by the application or externally while the job is executing.
 
-`DistributedCache` can be used to distribute simple, read-only data/text files and more complex types such as archives and jars. Archives (zip, tar, tgz and tar.gz files) are *un-archived* at the slave nodes. Files have *execution permissions* set.
+`DistributedCache` can be used to distribute simple, read-only data/text files and more complex types such as archives and jars. Archives (zip, tar, tgz and tar.gz files) are *un-archived* at the worker nodes. Files have *execution permissions* set.
 
 The files/archives can be distributed by setting the property `mapreduce.job.cache.{files |archives}`. If more than one file/archive has to be distributed, they can be added as comma separated paths. The properties can also be set by APIs
 [Job.addCacheFile(URI)](../../api/org/apache/hadoop/mapreduce/Job.html)/
@@ -808,12 +808,12 @@ api can be used to cache files/jars and also add them to the *classpath* of chil
 
 ##### Private and Public DistributedCache Files
 
-DistributedCache files can be private or public, that determines how they can be shared on the slave nodes.
+DistributedCache files can be private or public, that determines how they can be shared on the worker nodes.
 
 * "Private" DistributedCache files are cached in a localdirectory private to
   the user whose jobs need these files. These files are shared by all tasks
   and jobs of the specific user only and cannot be accessed by jobs of
-  other users on the slaves. A DistributedCache file becomes private by
+  other users on the workers. A DistributedCache file becomes private by
   virtue of its permissions on the file system where the files are
   uploaded, typically HDFS. If the file has no world readable access, or if
   the directory path leading to the file has no world executable access for
@@ -821,7 +821,7 @@ DistributedCache files can be private or public, that determines how they can be
 
 * "Public" DistributedCache files are cached in a global directory and the
   file access is setup such that they are publicly visible to all users.
-  These files can be shared by tasks and jobs of all users on the slaves. A
+  These files can be shared by tasks and jobs of all users on the workers. A
   DistributedCache file becomes public by virtue of its permissions on the
   file system where the files are uploaded, typically HDFS. If the file has
   world readable access, AND if the directory path leading to the file has
@@ -1076,10 +1076,10 @@ Sample text-files as input:
     $ bin/hadoop fs -ls /user/joe/wordcount/input/
     /user/joe/wordcount/input/file01
     /user/joe/wordcount/input/file02
-    
+
     $ bin/hadoop fs -cat /user/joe/wordcount/input/file01
     Hello World, Bye World!
-    
+
     $ bin/hadoop fs -cat /user/joe/wordcount/input/file02
     Hello Hadoop, Goodbye to hadoop.
 

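The updated tutorial text above explains that the framework localizes `DistributedCache` files on each worker node before any of a job's tasks run there. As a hedged illustration only (the lookup file, jar name and paths below are hypothetical, not part of this patch), a WordCount-style driver that parses generic options could attach a cache file through the `mapreduce.job.cache.files` property mentioned above:

    $ bin/hadoop fs -put lookup.txt /user/joe/wordcount/cache/lookup.txt
    $ bin/hadoop jar wc.jar WordCount \
        -Dmapreduce.job.cache.files=hdfs:///user/joe/wordcount/cache/lookup.txt \
        /user/joe/wordcount/input /user/joe/wordcount/output

The file is copied to each worker node only once per job, as the paragraph on `DistributedCache` efficiency notes.
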
http://git-wip-us.apache.org/repos/asf/hadoop/blob/23c3ff85/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/ReliabilityTest.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/ReliabilityTest.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/ReliabilityTest.java
index ecac83a..983a4a7 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/ReliabilityTest.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/ReliabilityTest.java
@@ -43,7 +43,7 @@ import org.apache.hadoop.util.Tool;
 import org.apache.hadoop.util.ToolRunner;
 
 /**
- * This class tests reliability of the framework in the face of failures of 
+ * This class tests reliability of the framework in the face of failures of
  * both tasks and tasktrackers. Steps:
  * 1) Get the cluster status
  * 2) Get the number of slots in the cluster
@@ -59,12 +59,12 @@ import org.apache.hadoop.util.ToolRunner;
  * ./bin/hadoop --config <config> jar
  *   build/hadoop-<version>-test.jar MRReliabilityTest -libjars
  *   build/hadoop-<version>-examples.jar [-scratchdir <dir>]"
- *   
- *   The scratchdir is optional and by default the current directory on the client
- *   will be used as the scratch space. Note that password-less SSH must be set up
- *   between the client machine from where the test is submitted, and the cluster
- *   nodes where the test runs.
- *   
+ *
+ *   The scratchdir is optional and by default the current directory on
+ *   the client will be used as the scratch space. Note that password-less
+ *   SSH must be set up between the client machine from where the test is
+ *   submitted, and the cluster nodes where the test runs.
+ *
  *   The test should be run on a <b>free</b> cluster where there is no other parallel
  *   job submission going on. Submission of other jobs while the test runs can cause
  *   the tests/jobs submitted to fail.
@@ -73,7 +73,7 @@ import org.apache.hadoop.util.ToolRunner;
 public class ReliabilityTest extends Configured implements Tool {
 
   private String dir;
-  private static final Log LOG = LogFactory.getLog(ReliabilityTest.class); 
+  private static final Log LOG = LogFactory.getLog(ReliabilityTest.class);
 
   private void displayUsage() {
     LOG.info("This must be run in only the distributed mode " +
@@ -88,13 +88,13 @@ public class ReliabilityTest extends Configured implements Tool {
         " any job submission while the tests are running can cause jobs/tests 
to fail");
     System.exit(-1);
   }
-  
+
   public int run(String[] args) throws Exception {
     Configuration conf = getConf();
     if ("local".equals(conf.get(JTConfig.JT_IPC_ADDRESS, "local"))) {
       displayUsage();
     }
-    String[] otherArgs = 
+    String[] otherArgs =
       new GenericOptionsParser(conf, args).getRemainingArgs();
     if (otherArgs.length == 2) {
       if (otherArgs[0].equals("-scratchdir")) {
@@ -108,7 +108,7 @@ public class ReliabilityTest extends Configured implements Tool {
     } else {
       displayUsage();
     }
-    
+
     //to protect against the case of jobs failing even when multiple attempts
     //fail, set some high values for the max attempts
     conf.setInt(JobContext.MAP_MAX_ATTEMPTS, 10);
@@ -117,26 +117,26 @@ public class ReliabilityTest extends Configured implements Tool {
     runSortJobTests(new JobClient(new JobConf(conf)), conf);
     return 0;
   }
-  
-  private void runSleepJobTest(final JobClient jc, final Configuration conf) 
+
+  private void runSleepJobTest(final JobClient jc, final Configuration conf)
   throws Exception {
     ClusterStatus c = jc.getClusterStatus();
     int maxMaps = c.getMaxMapTasks() * 2;
     int maxReduces = maxMaps;
     int mapSleepTime = (int)c.getTTExpiryInterval();
     int reduceSleepTime = mapSleepTime;
-    String[] sleepJobArgs = new String[] {     
-        "-m", Integer.toString(maxMaps), 
+    String[] sleepJobArgs = new String[] {
+        "-m", Integer.toString(maxMaps),
         "-r", Integer.toString(maxReduces),
         "-mt", Integer.toString(mapSleepTime),
         "-rt", Integer.toString(reduceSleepTime)};
-    runTest(jc, conf, "org.apache.hadoop.mapreduce.SleepJob", sleepJobArgs, 
+    runTest(jc, conf, "org.apache.hadoop.mapreduce.SleepJob", sleepJobArgs,
         new KillTaskThread(jc, 2, 0.2f, false, 2),
         new KillTrackerThread(jc, 2, 0.4f, false, 1));
     LOG.info("SleepJob done");
   }
-  
-  private void runSortJobTests(final JobClient jc, final Configuration conf) 
+
+  private void runSortJobTests(final JobClient jc, final Configuration conf)
   throws Exception {
     String inputPath = "my_reliability_test_input";
     String outputPath = "my_reliability_test_output";
@@ -147,36 +147,36 @@ public class ReliabilityTest extends Configured implements Tool {
     runSortTest(jc, conf, inputPath, outputPath);
     runSortValidatorTest(jc, conf, inputPath, outputPath);
   }
-  
-  private void runRandomWriterTest(final JobClient jc, 
-      final Configuration conf, final String inputPath) 
+
+  private void runRandomWriterTest(final JobClient jc,
+      final Configuration conf, final String inputPath)
   throws Exception {
-    runTest(jc, conf, "org.apache.hadoop.examples.RandomWriter", 
-        new String[]{inputPath}, 
+    runTest(jc, conf, "org.apache.hadoop.examples.RandomWriter",
+        new String[]{inputPath},
         null, new KillTrackerThread(jc, 0, 0.4f, false, 1));
     LOG.info("RandomWriter job done");
   }
-  
+
   private void runSortTest(final JobClient jc, final Configuration conf,
-      final String inputPath, final String outputPath) 
+      final String inputPath, final String outputPath)
   throws Exception {
-    runTest(jc, conf, "org.apache.hadoop.examples.Sort", 
+    runTest(jc, conf, "org.apache.hadoop.examples.Sort",
         new String[]{inputPath, outputPath},
         new KillTaskThread(jc, 2, 0.2f, false, 2),
         new KillTrackerThread(jc, 2, 0.8f, false, 1));
     LOG.info("Sort job done");
   }
-  
-  private void runSortValidatorTest(final JobClient jc, 
+
+  private void runSortValidatorTest(final JobClient jc,
       final Configuration conf, final String inputPath, final String outputPath)
   throws Exception {
     runTest(jc, conf, "org.apache.hadoop.mapred.SortValidator", new String[] {
         "-sortInput", inputPath, "-sortOutput", outputPath},
         new KillTaskThread(jc, 2, 0.2f, false, 1),
-        new KillTrackerThread(jc, 2, 0.8f, false, 1));  
-    LOG.info("SortValidator job done");    
+        new KillTrackerThread(jc, 2, 0.8f, false, 1));
+    LOG.info("SortValidator job done");
   }
-  
+
   private String normalizeCommandPath(String command) {
     final String hadoopHome;
     if ((hadoopHome = System.getenv("HADOOP_HOME")) != null) {
@@ -184,7 +184,7 @@ public class ReliabilityTest extends Configured implements Tool {
     }
     return command;
   }
-  
+
   private void checkJobExitStatus(int status, String jobName) {
     if (status != 0) {
       LOG.info(jobName + " job failed with status: " + status);
@@ -203,7 +203,7 @@ public class ReliabilityTest extends Configured implements Tool {
       public void run() {
         try {
           Class<?> jobClassObj = conf.getClassByName(jobClass);
-          int status = ToolRunner.run(conf, (Tool)(jobClassObj.newInstance()), 
+          int status = ToolRunner.run(conf, (Tool)(jobClassObj.newInstance()),
               args);
           checkJobExitStatus(status, jobClass);
         } catch (Exception e) {
@@ -223,7 +223,8 @@ public class ReliabilityTest extends Configured implements Tool {
     JobID jobId = jobs[jobs.length - 1].getJobID();
     RunningJob rJob = jc.getJob(jobId);
     if(rJob.isComplete()) {
-      LOG.error("The last job returned by the querying JobTracker is complete 
:" + 
+      LOG.error("The last job returned by the querying "
+          +"JobTracker is complete :" +
           rJob.getJobID() + " .Exiting the test");
       System.exit(-1);
     }
@@ -246,7 +247,7 @@ public class ReliabilityTest extends Configured implements Tool {
     }
     t.join();
   }
-  
+
   private class KillTrackerThread extends Thread {
     private volatile boolean killed = false;
     private JobClient jc;
@@ -255,14 +256,14 @@ public class ReliabilityTest extends Configured implements Tool {
     private float threshold = 0.2f;
     private boolean onlyMapsProgress;
     private int numIterations;
-    final private String slavesFile = dir + "/_reliability_test_slaves_file_";
-    final String shellCommand = normalizeCommandPath("bin/slaves.sh");
-    final private String STOP_COMMAND = "ps uwwx | grep java | grep " + 
-    "org.apache.hadoop.mapred.TaskTracker"+ " |" + 
-    " grep -v grep | tr -s ' ' | cut -d ' ' -f2 | xargs kill -s STOP";
-    final private String RESUME_COMMAND = "ps uwwx | grep java | grep " + 
-    "org.apache.hadoop.mapred.TaskTracker"+ " |" + 
-    " grep -v grep | tr -s ' ' | cut -d ' ' -f2 | xargs kill -s CONT";
+    final private String workersFile = dir + "/_reliability_test_workers_file_";
+    final private String shellCommand = normalizeCommandPath("bin/workers.sh");
+    final private String stopCommand = "ps uwwx | grep java | grep " +
+        "org.apache.hadoop.mapred.TaskTracker"+ " |" +
+        " grep -v grep | tr -s ' ' | cut -d ' ' -f2 | xargs kill -s STOP";
+    final private String resumeCommand = "ps uwwx | grep java | grep " +
+        "org.apache.hadoop.mapred.TaskTracker"+ " |" +
+        " grep -v grep | tr -s ' ' | cut -d ' ' -f2 | xargs kill -s CONT";
     //Only one instance must be active at any point
     public KillTrackerThread(JobClient jc, int threshaldMultiplier,
         float threshold, boolean onlyMapsProgress, int numIterations) {
@@ -293,8 +294,8 @@ public class ReliabilityTest extends Configured implements Tool {
         LOG.info("Will STOP/RESUME tasktrackers based on " +
                 "Reduces' progress");
       }
-      LOG.info("Initial progress threshold: " + threshold + 
-          ". Threshold Multiplier: " + thresholdMultiplier + 
+      LOG.info("Initial progress threshold: " + threshold +
+          ". Threshold Multiplier: " + thresholdMultiplier +
           ". Number of iterations: " + numIterations);
       float thresholdVal = threshold;
       int numIterationsDone = 0;
@@ -336,7 +337,7 @@ public class ReliabilityTest extends Configured implements Tool {
 
       int count = 0;
 
-      FileOutputStream fos = new FileOutputStream(new File(slavesFile));
+      FileOutputStream fos = new FileOutputStream(new File(workersFile));
       LOG.info(new Date() + " Stopping a few trackers");
 
       for (String tracker : trackerNamesList) {
@@ -355,17 +356,17 @@ public class ReliabilityTest extends Configured implements Tool {
     private void startTaskTrackers() throws Exception {
       LOG.info(new Date() + " Resuming the stopped trackers");
       runOperationOnTT("resume");
-      new File(slavesFile).delete();
+      new File(workersFile).delete();
     }
-    
+
     private void runOperationOnTT(String operation) throws IOException {
       Map<String,String> hMap = new HashMap<String,String>();
-      hMap.put("HADOOP_SLAVES", slavesFile);
+      hMap.put("HADOOP_WORKERS", workersFile);
       StringTokenizer strToken;
       if (operation.equals("suspend")) {
-        strToken = new StringTokenizer(STOP_COMMAND, " ");
+        strToken = new StringTokenizer(stopCommand, " ");
       } else {
-        strToken = new StringTokenizer(RESUME_COMMAND, " ");
+        strToken = new StringTokenizer(resumeCommand, " ");
       }
       String commandArgs[] = new String[strToken.countTokens() + 1];
       int i = 0;
@@ -382,14 +383,14 @@ public class ReliabilityTest extends Configured implements Tool {
     private String convertTrackerNameToHostName(String trackerName) {
       // Convert the trackerName to it's host name
       int indexOfColon = trackerName.indexOf(":");
-      String trackerHostName = (indexOfColon == -1) ? 
-          trackerName : 
+      String trackerHostName = (indexOfColon == -1) ?
+          trackerName :
             trackerName.substring(0, indexOfColon);
       return trackerHostName.substring("tracker_".length());
     }
 
   }
-  
+
   private class KillTaskThread extends Thread {
 
     private volatile boolean killed = false;
@@ -399,7 +400,7 @@ public class ReliabilityTest extends Configured implements Tool {
     private float threshold = 0.2f;
     private boolean onlyMapsProgress;
     private int numIterations;
-    public KillTaskThread(JobClient jc, int thresholdMultiplier, 
+    public KillTaskThread(JobClient jc, int thresholdMultiplier,
         float threshold, boolean onlyMapsProgress, int numIterations) {
       this.jc = jc;
       this.thresholdMultiplier = thresholdMultiplier;
@@ -427,15 +428,15 @@ public class ReliabilityTest extends Configured implements Tool {
       } else {
         LOG.info("Will kill tasks based on Reduces' progress");
       }
-      LOG.info("Initial progress threshold: " + threshold + 
-          ". Threshold Multiplier: " + thresholdMultiplier + 
+      LOG.info("Initial progress threshold: " + threshold +
+          ". Threshold Multiplier: " + thresholdMultiplier +
           ". Number of iterations: " + numIterations);
       float thresholdVal = threshold;
       int numIterationsDone = 0;
       while (!killed) {
         try {
           float progress;
-          if (jc.getJob(rJob.getID()).isComplete() || 
+          if (jc.getJob(rJob.getID()).isComplete() ||
               numIterationsDone == numIterations) {
             break;
           }
@@ -499,7 +500,7 @@ public class ReliabilityTest extends Configured implements Tool {
       }
     }
   }
-  
+
   public static void main(String args[]) throws Exception {
     int res = ToolRunner.run(new Configuration(), new ReliabilityTest(), args);
     System.exit(res);

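The `KillTrackerThread` changes above replace `HADOOP_SLAVES`/`bin/slaves.sh` with `HADOOP_WORKERS`/`bin/workers.sh`: the test writes the selected tracker hostnames to a workers file and fans the suspend/resume command out to those hosts. A rough manual sketch of the suspend path, assuming password-less SSH as described in the class javadoc and a placeholder path for the workers file, would be:

    $ export HADOOP_WORKERS=/path/to/_reliability_test_workers_file_
    $ bin/workers.sh "ps uwwx | grep java | grep org.apache.hadoop.mapred.TaskTracker | \
        grep -v grep | tr -s ' ' | cut -d ' ' -f2 | xargs kill -s STOP"

Substituting CONT for STOP mirrors the resume path used by startTaskTrackers().
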
http://git-wip-us.apache.org/repos/asf/hadoop/blob/23c3ff85/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestLazyOutput.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestLazyOutput.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestLazyOutput.java
index dde9310..04a5127 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestLazyOutput.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestLazyOutput.java
@@ -44,11 +44,11 @@ import static org.junit.Assert.assertTrue;
  * 0 byte files
  */
 public class TestLazyOutput {
-  private static final int NUM_HADOOP_SLAVES = 3;
+  private static final int NUM_HADOOP_WORKERS = 3;
   private static final int NUM_MAPS_PER_NODE = 2;
-  private static final Path INPUT = new Path("/testlazy/input");
+  private static final Path INPUTPATH = new Path("/testlazy/input");
 
-  private static final List<String> input = 
+  private static final List<String> INPUTLIST =
     Arrays.asList("All","Roads","Lead","To","Hadoop");
 
 
@@ -70,7 +70,7 @@ public class TestLazyOutput {
     }
   }
 
-  static class TestReducer  extends MapReduceBase 
+  static class TestReducer  extends MapReduceBase
   implements Reducer<LongWritable, Text, LongWritable, Text> {
     private String id;
 
@@ -93,12 +93,12 @@ public class TestLazyOutput {
   }
 
   private static void runTestLazyOutput(JobConf job, Path output,
-      int numReducers, boolean createLazily) 
+      int numReducers, boolean createLazily)
   throws Exception {
 
     job.setJobName("test-lazy-output");
 
-    FileInputFormat.setInputPaths(job, INPUT);
+    FileInputFormat.setInputPaths(job, INPUTPATH);
     FileOutputFormat.setOutputPath(job, output);
     job.setInputFormat(TextInputFormat.class);
     job.setMapOutputKeyClass(LongWritable.class);
@@ -106,7 +106,7 @@ public class TestLazyOutput {
     job.setOutputKeyClass(LongWritable.class);
     job.setOutputValueClass(Text.class);
 
-    job.setMapperClass(TestMapper.class);        
+    job.setMapperClass(TestMapper.class);
     job.setReducerClass(TestReducer.class);
 
     JobClient client = new JobClient(job);
@@ -123,10 +123,10 @@ public class TestLazyOutput {
 
   public void createInput(FileSystem fs, int numMappers) throws Exception {
     for (int i =0; i < numMappers; i++) {
-      OutputStream os = fs.create(new Path(INPUT, 
+      OutputStream os = fs.create(new Path(INPUTPATH,
         "text" + i + ".txt"));
       Writer wr = new OutputStreamWriter(os);
-      for(String inp : input) {
+      for(String inp : INPUTLIST) {
         wr.write(inp+"\n");
       }
       wr.close();
@@ -142,22 +142,23 @@ public class TestLazyOutput {
       Configuration conf = new Configuration();
 
       // Start the mini-MR and mini-DFS clusters
-      dfs = new MiniDFSCluster.Builder(conf).numDataNodes(NUM_HADOOP_SLAVES)
+      dfs = new MiniDFSCluster.Builder(conf).numDataNodes(NUM_HADOOP_WORKERS)
           .build();
       fileSys = dfs.getFileSystem();
-      mr = new MiniMRCluster(NUM_HADOOP_SLAVES, fileSys.getUri().toString(), 1);
+      mr = new MiniMRCluster(NUM_HADOOP_WORKERS,
+            fileSys.getUri().toString(), 1);
 
       int numReducers = 2;
-      int numMappers = NUM_HADOOP_SLAVES * NUM_MAPS_PER_NODE;
+      int numMappers = NUM_HADOOP_WORKERS * NUM_MAPS_PER_NODE;
 
       createInput(fileSys, numMappers);
       Path output1 = new Path("/testlazy/output1");
 
-      // Test 1. 
-      runTestLazyOutput(mr.createJobConf(), output1, 
+      // Test 1.
+      runTestLazyOutput(mr.createJobConf(), output1,
           numReducers, true);
 
-      Path[] fileList = 
+      Path[] fileList =
         FileUtil.stat2Paths(fileSys.listStatus(output1,
             new Utils.OutputFileUtils.OutputFilesFilter()));
       for(int i=0; i < fileList.length; ++i) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/23c3ff85/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/pipes/TestPipes.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/pipes/TestPipes.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/pipes/TestPipes.java
index 34b1d75..84b491a 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/pipes/TestPipes.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/pipes/TestPipes.java
@@ -53,18 +53,18 @@ import static org.junit.Assert.assertFalse;
 public class TestPipes {
   private static final Log LOG =
     LogFactory.getLog(TestPipes.class.getName());
-  
-  private static Path cppExamples = 
+
+  private static Path cppExamples =
     new Path(System.getProperty("install.c++.examples"));
-  static Path wordCountSimple = 
+  private static Path wordCountSimple =
     new Path(cppExamples, "bin/wordcount-simple");
-  static Path wordCountPart = 
+  private static Path wordCountPart =
     new Path(cppExamples, "bin/wordcount-part");
-  static Path wordCountNoPipes = 
+  private static Path wordCountNoPipes =
     new Path(cppExamples,"bin/wordcount-nopipe");
-  
+
   static Path nonPipedOutDir;
-  
+
   static void cleanup(FileSystem fs, Path p) throws IOException {
     fs.delete(p, true);
     assertFalse("output not cleaned up", fs.exists(p));
@@ -80,15 +80,16 @@ public class TestPipes {
     Path inputPath = new Path("testing/in");
     Path outputPath = new Path("testing/out");
     try {
-      final int numSlaves = 2;
+      final int numWorkers = 2;
       Configuration conf = new Configuration();
-      dfs = new MiniDFSCluster.Builder(conf).numDataNodes(numSlaves).build();
-      mr = new MiniMRCluster(numSlaves, 
dfs.getFileSystem().getUri().toString(), 1);
+      dfs = new MiniDFSCluster.Builder(conf).numDataNodes(numWorkers).build();
+      mr = new MiniMRCluster(numWorkers,
+                 dfs.getFileSystem().getUri().toString(), 1);
       writeInputFile(dfs.getFileSystem(), inputPath);
-      runProgram(mr, dfs, wordCountSimple, 
+      runProgram(mr, dfs, wordCountSimple,
                  inputPath, outputPath, 3, 2, twoSplitOutput, null);
       cleanup(dfs.getFileSystem(), outputPath);
-      runProgram(mr, dfs, wordCountSimple, 
+      runProgram(mr, dfs, wordCountSimple,
                  inputPath, outputPath, 3, 0, noSortOutput, null);
       cleanup(dfs.getFileSystem(), outputPath);
       runProgram(mr, dfs, wordCountPart,
@@ -104,41 +105,41 @@ public class TestPipes {
 
   final static String[] twoSplitOutput = new String[] {
     "`and\t1\na\t1\nand\t1\nbeginning\t1\nbook\t1\nbut\t1\nby\t1\n" +
-    "conversation?'\t1\ndo:\t1\nhad\t2\nhaving\t1\nher\t2\nin\t1\nit\t1\n"+
-    "it,\t1\nno\t1\nnothing\t1\nof\t3\non\t1\nonce\t1\nor\t3\npeeped\t1\n"+
-    "pictures\t2\nthe\t3\nthought\t1\nto\t2\nuse\t1\nwas\t2\n",
+        "conversation?'\t1\ndo:\t1\nhad\t2\nhaving\t1\nher\t2\nin\t1\nit\t1\n"+
+        "it,\t1\nno\t1\nnothing\t1\nof\t3\non\t1\nonce\t1\nor\t3\npeeped\t1\n"+
+        "pictures\t2\nthe\t3\nthought\t1\nto\t2\nuse\t1\nwas\t2\n",
 
-    "Alice\t2\n`without\t1\nbank,\t1\nbook,'\t1\nconversations\t1\nget\t1\n" +
-    "into\t1\nis\t1\nreading,\t1\nshe\t1\nsister\t2\nsitting\t1\ntired\t1\n" +
-    "twice\t1\nvery\t1\nwhat\t1\n"
+      "Alice\t2\n`without\t1\nbank,\t1\nbook,'\t1\nconversations\t1\nget\t1\n" 
+
+        
"into\t1\nis\t1\nreading,\t1\nshe\t1\nsister\t2\nsitting\t1\ntired\t1\n" +
+        "twice\t1\nvery\t1\nwhat\t1\n"
   };
 
   final static String[] noSortOutput = new String[] {
     "it,\t1\n`and\t1\nwhat\t1\nis\t1\nthe\t1\nuse\t1\nof\t1\na\t1\n" +
-    "book,'\t1\nthought\t1\nAlice\t1\n`without\t1\npictures\t1\nor\t1\n"+
-    "conversation?'\t1\n",
+        "book,'\t1\nthought\t1\nAlice\t1\n`without\t1\npictures\t1\nor\t1\n"+
+        "conversation?'\t1\n",
 
-    "Alice\t1\nwas\t1\nbeginning\t1\nto\t1\nget\t1\nvery\t1\ntired\t1\n"+
-    "of\t1\nsitting\t1\nby\t1\nher\t1\nsister\t1\non\t1\nthe\t1\nbank,\t1\n"+
-    "and\t1\nof\t1\nhaving\t1\nnothing\t1\nto\t1\ndo:\t1\nonce\t1\n", 
+      "Alice\t1\nwas\t1\nbeginning\t1\nto\t1\nget\t1\nvery\t1\ntired\t1\n"+
+        "of\t1\nsitting\t1\nby\t1\nher\t1\nsister\t1\non\t1\nthe\t1\nbank,\t1\n"+
+        "and\t1\nof\t1\nhaving\t1\nnothing\t1\nto\t1\ndo:\t1\nonce\t1\n",
 
-    "or\t1\ntwice\t1\nshe\t1\nhad\t1\npeeped\t1\ninto\t1\nthe\t1\nbook\t1\n"+
-    "her\t1\nsister\t1\nwas\t1\nreading,\t1\nbut\t1\nit\t1\nhad\t1\nno\t1\n"+
-    "pictures\t1\nor\t1\nconversations\t1\nin\t1\n"
+      "or\t1\ntwice\t1\nshe\t1\nhad\t1\npeeped\t1\ninto\t1\nthe\t1\nbook\t1\n"+
+        "her\t1\nsister\t1\nwas\t1\nreading,\t1\nbut\t1\nit\t1\nhad\t1\nno\t1\n"+
+        "pictures\t1\nor\t1\nconversations\t1\nin\t1\n"
   };
-  
+
   final static String[] fixedPartitionOutput = new String[] {
     "Alice\t2\n`and\t1\n`without\t1\na\t1\nand\t1\nbank,\t1\nbeginning\t1\n" +
-    "book\t1\nbook,'\t1\nbut\t1\nby\t1\nconversation?'\t1\nconversations\t1\n"+
-    "do:\t1\nget\t1\nhad\t2\nhaving\t1\nher\t2\nin\t1\ninto\t1\nis\t1\n" +
-    "it\t1\nit,\t1\nno\t1\nnothing\t1\nof\t3\non\t1\nonce\t1\nor\t3\n" +
-    "peeped\t1\npictures\t2\nreading,\t1\nshe\t1\nsister\t2\nsitting\t1\n" +
-    "the\t3\nthought\t1\ntired\t1\nto\t2\ntwice\t1\nuse\t1\n" +
-    "very\t1\nwas\t2\nwhat\t1\n",
-    
-    ""                                                   
+        "book\t1\nbook,'\t1\nbut\t1\nby\t1\nconversation?'\t1\nconversations\t1\n"+
+        "do:\t1\nget\t1\nhad\t2\nhaving\t1\nher\t2\nin\t1\ninto\t1\nis\t1\n" +
+        "it\t1\nit,\t1\nno\t1\nnothing\t1\nof\t3\non\t1\nonce\t1\nor\t3\n" +
+        "peeped\t1\npictures\t2\nreading,\t1\nshe\t1\nsister\t2\nsitting\t1\n" 
+
+        "the\t3\nthought\t1\ntired\t1\nto\t2\ntwice\t1\nuse\t1\n" +
+        "very\t1\nwas\t2\nwhat\t1\n",
+
+      ""
   };
-  
+
   static void writeInputFile(FileSystem fs, Path dir) throws IOException {
     DataOutputStream out = fs.create(new Path(dir, "part0"));
     out.writeBytes("Alice was beginning to get very tired of sitting by 
her\n");
@@ -150,7 +151,7 @@ public class TestPipes {
     out.close();
   }
 
-  static void runProgram(MiniMRCluster mr, MiniDFSCluster dfs, 
+  static void runProgram(MiniMRCluster mr, MiniDFSCluster dfs,
                           Path program, Path inputPath, Path outputPath,
                          int numMaps, int numReduces, String[] expectedResults,
                           JobConf conf
@@ -161,13 +162,13 @@ public class TestPipes {
       job = mr.createJobConf();
     }else {
       job = new JobConf(conf);
-    } 
+    }
     job.setNumMapTasks(numMaps);
     job.setNumReduceTasks(numReduces);
     {
       FileSystem fs = dfs.getFileSystem();
       fs.delete(wordExec.getParent(), true);
-      fs.copyFromLocalFile(program, wordExec);                                        
+      fs.copyFromLocalFile(program, wordExec);
       Submitter.setExecutable(job, fs.makeQualified(wordExec).toString());
       Submitter.setIsJavaRecordReader(job, true);
       Submitter.setIsJavaRecordWriter(job, true);
@@ -176,7 +177,7 @@ public class TestPipes {
       RunningJob rJob = null;
       if (numReduces == 0) {
         rJob = Submitter.jobSubmit(job);
-        
+
         while (!rJob.isComplete()) {
           try {
             Thread.sleep(1000);
@@ -188,7 +189,7 @@ public class TestPipes {
         rJob = Submitter.runJob(job);
       }
       assertTrue("pipes job failed", rJob.isSuccessful());
-      
+
       Counters counters = rJob.getCounters();
       Counters.Group wordCountCounters = counters.getGroup("WORDCOUNT");
       int numCounters = 0;
@@ -205,14 +206,14 @@ public class TestPipes {
                                                 .OutputFilesFilter()))) {
       results.add(MapReduceTestUtil.readOutput(p, job));
     }
-    assertEquals("number of reduces is wrong", 
+    assertEquals("number of reduces is wrong",
                  expectedResults.length, results.size());
     for(int i=0; i < results.size(); i++) {
       assertEquals("pipes program " + program + " output " + i + " wrong",
                    expectedResults[i], results.get(i));
     }
   }
-  
+
   /**
    * Run a map/reduce word count that does all of the map input and reduce
    * output directly rather than sending it back up to Java.
@@ -229,10 +230,10 @@ public class TestPipes {
     }else {
       job = new JobConf(conf);
     }
-    
+
     job.setInputFormat(WordCountInputFormat.class);
     FileSystem local = FileSystem.getLocal(job);
-    Path testDir = new Path("file:" + System.getProperty("test.build.data"), 
+    Path testDir = new Path("file:" + System.getProperty("test.build.data"),
                             "pipes");
     Path inDir = new Path(testDir, "input");
     nonPipedOutDir = new Path(testDir, "output");
@@ -263,18 +264,18 @@ public class TestPipes {
     out = local.create(jobXml);
     job.writeXml(out);
     out.close();
-    System.err.println("About to run: Submitter -conf " + jobXml + 
-                       " -input " + inDir + " -output " + nonPipedOutDir + 
-                       " -program " + 
+    System.err.println("About to run: Submitter -conf " + jobXml +
+                       " -input " + inDir + " -output " + nonPipedOutDir +
+                       " -program " +
                        dfs.getFileSystem().makeQualified(wordExec));
     try {
       int ret = ToolRunner.run(new Submitter(),
                                new String[]{"-conf", jobXml.toString(),
-                                  "-input", inDir.toString(),
-                                  "-output", nonPipedOutDir.toString(),
-                                  "-program", 
+                                   "-input", inDir.toString(),
+                                   "-output", nonPipedOutDir.toString(),
+                                   "-program",
                         dfs.getFileSystem().makeQualified(wordExec).toString(),
-                                  "-reduces", "2"});
+                                   "-reduces", "2"});
       assertEquals(0, ret);
     } catch (Exception e) {
       assertTrue("got exception: " + StringUtils.stringifyException(e), false);

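runNonPipedProgram above drives the pipes `Submitter` tool programmatically with `-conf`, `-input`, `-output`, `-program` and `-reduces`. For reference, a hedged command-line equivalent (paths are illustrative only, and the C++ binary is assumed to have been copied into HDFS just as the test does with `wordExec`) would be:

    $ mapred pipes -conf job.xml \
        -input testing/in -output testing/out \
        -program hdfs:///path/to/wordcount-nopipe -reduces 2
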
http://git-wip-us.apache.org/repos/asf/hadoop/blob/23c3ff85/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/TestMapReduceLazyOutput.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/TestMapReduceLazyOutput.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/TestMapReduceLazyOutput.java
index a69e06e..7c01038 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/TestMapReduceLazyOutput.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/TestMapReduceLazyOutput.java
@@ -50,14 +50,17 @@ import static org.junit.Assert.assertTrue;
  * 0 byte files
  */
 public class TestMapReduceLazyOutput {
-  private static final int NUM_HADOOP_SLAVES = 3;
+  private static final int NUM_HADOOP_WORKERS = 3;
   private static final int NUM_MAPS_PER_NODE = 2;
-  private static final Path INPUT = new Path("/testlazy/input");
+  private static final Path INPUTPATH = new Path("/testlazy/input");
 
-  private static final List<String> input = 
+  private static final List<String> INPUTLIST =
     Arrays.asList("All","Roads","Lead","To","Hadoop");
 
-  public static class TestMapper 
+  /**
+   * Test mapper.
+   */
+  public static class TestMapper
   extends Mapper<LongWritable, Text, LongWritable, Text>{
 
     public void map(LongWritable key, Text value, Context context
@@ -70,11 +73,13 @@ public class TestMapReduceLazyOutput {
     }
   }
 
-
-  public static class TestReducer 
+  /**
+   * Test Reducer.
+   */
+  public static class TestReducer
   extends Reducer<LongWritable,Text,LongWritable,Text> {
-    
-    public void reduce(LongWritable key, Iterable<Text> values, 
+
+    public void reduce(LongWritable key, Iterable<Text> values,
         Context context) throws IOException, InterruptedException {
       String id = context.getTaskAttemptID().toString();
       // Reducer 0 does not output anything
@@ -85,13 +90,13 @@ public class TestMapReduceLazyOutput {
       }
     }
   }
-  
+
   private static void runTestLazyOutput(Configuration conf, Path output,
-      int numReducers, boolean createLazily) 
+      int numReducers, boolean createLazily)
   throws Exception {
     Job job = Job.getInstance(conf, "Test-Lazy-Output");
 
-    FileInputFormat.setInputPaths(job, INPUT);
+    FileInputFormat.setInputPaths(job, INPUTPATH);
     FileOutputFormat.setOutputPath(job, output);
 
     job.setJarByClass(TestMapReduceLazyOutput.class);
@@ -113,10 +118,10 @@ public class TestMapReduceLazyOutput {
 
   public void createInput(FileSystem fs, int numMappers) throws Exception {
     for (int i =0; i < numMappers; i++) {
-      OutputStream os = fs.create(new Path(INPUT, 
+      OutputStream os = fs.create(new Path(INPUTPATH,
         "text" + i + ".txt"));
       Writer wr = new OutputStreamWriter(os);
-      for(String inp : input) {
+      for(String inp : INPUTLIST) {
         wr.write(inp+"\n");
       }
       wr.close();
@@ -132,22 +137,23 @@ public class TestMapReduceLazyOutput {
       Configuration conf = new Configuration();
 
       // Start the mini-MR and mini-DFS clusters
-      dfs = new MiniDFSCluster.Builder(conf).numDataNodes(NUM_HADOOP_SLAVES)
+      dfs = new MiniDFSCluster.Builder(conf).numDataNodes(NUM_HADOOP_WORKERS)
           .build();
       fileSys = dfs.getFileSystem();
-      mr = new MiniMRCluster(NUM_HADOOP_SLAVES, fileSys.getUri().toString(), 1);
+      mr = new MiniMRCluster(NUM_HADOOP_WORKERS,
+                             fileSys.getUri().toString(), 1);
 
       int numReducers = 2;
-      int numMappers = NUM_HADOOP_SLAVES * NUM_MAPS_PER_NODE;
+      int numMappers = NUM_HADOOP_WORKERS * NUM_MAPS_PER_NODE;
 
       createInput(fileSys, numMappers);
       Path output1 = new Path("/testlazy/output1");
 
-      // Test 1. 
-      runTestLazyOutput(mr.createJobConf(), output1, 
+      // Test 1.
+      runTestLazyOutput(mr.createJobConf(), output1,
           numReducers, true);
 
-      Path[] fileList = 
+      Path[] fileList =
         FileUtil.stat2Paths(fileSys.listStatus(output1,
             new Utils.OutputFileUtils.OutputFilesFilter()));
       for(int i=0; i < fileList.length; ++i) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/23c3ff85/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/security/TestBinaryTokenFile.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/security/TestBinaryTokenFile.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/security/TestBinaryTokenFile.java
index 7a2c03b..f504f0c 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/security/TestBinaryTokenFile.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/security/TestBinaryTokenFile.java
@@ -56,7 +56,7 @@ public class TestBinaryTokenFile {
 
   private static final String KEY_SECURITY_TOKEN_FILE_NAME = "key-security-token-file";
   private static final String DELEGATION_TOKEN_KEY = "Hdfs";
-  
+
   // my sleep class
   static class MySleepMapper extends SleepJob.SleepMapper {
     /**
@@ -67,7 +67,7 @@ public class TestBinaryTokenFile {
     throws IOException, InterruptedException {
       // get context token storage:
       final Credentials contextCredentials = context.getCredentials();
-      
+
       final Collection<Token<? extends TokenIdentifier>> contextTokenCollection = contextCredentials.getAllTokens();
       for (Token<? extends TokenIdentifier> t : contextTokenCollection) {
         System.out.println("Context token: [" + t + "]");
@@ -77,17 +77,17 @@ public class TestBinaryTokenFile {
         throw new RuntimeException("Exactly 2 tokens are expected in the 
contextTokenCollection: " +
                        "one job token and one delegation token, but was found 
" + contextTokenCollection.size() + " tokens.");
       }
-      
+
      final Token<? extends TokenIdentifier> dt = contextCredentials.getToken(new Text(DELEGATION_TOKEN_KEY));
       if (dt == null) {
         throw new RuntimeException("Token for key ["+DELEGATION_TOKEN_KEY+"] 
not found in the job context.");
       }
-      
+
      String tokenFile0 = context.getConfiguration().get(MRJobConfig.MAPREDUCE_JOB_CREDENTIALS_BINARY);
       if (tokenFile0 != null) {
         throw new RuntimeException("Token file key 
["+MRJobConfig.MAPREDUCE_JOB_CREDENTIALS_BINARY+"] found in the configuration. 
It should have been removed from the configuration.");
       }
-      
+
      final String tokenFile = context.getConfiguration().get(KEY_SECURITY_TOKEN_FILE_NAME);
       if (tokenFile == null) {
         throw new RuntimeException("Token file key 
["+KEY_SECURITY_TOKEN_FILE_NAME+"] not found in the job configuration.");
@@ -99,7 +99,8 @@ public class TestBinaryTokenFile {
       if (binaryTokenCollection.size() != 1) {
         throw new RuntimeException("The token collection read from file 
["+tokenFile+"] must have size = 1.");
       }
-      final Token<? extends TokenIdentifier> binTok = binaryTokenCollection.iterator().next(); 
+      final Token<? extends TokenIdentifier> binTok = binaryTokenCollection
+          .iterator().next();
       System.out.println("The token read from binary file: t = [" + binTok + 
"]");
       // Verify that dt is same as the token in the file:
       if (!dt.equals(binTok)) {
@@ -107,7 +108,7 @@ public class TestBinaryTokenFile {
               "Delegation token in job is not same as the token passed in 
file:"
                   + " tokenInFile=[" + binTok + "], dt=[" + dt + "].");
       }
-      
+
       // Now test the user tokens.
       final UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
       // Print all the UGI tokens for diagnostic purposes:
@@ -115,7 +116,7 @@ public class TestBinaryTokenFile {
       for (Token<? extends TokenIdentifier> t: ugiTokenCollection) {
         System.out.println("UGI token: [" + t + "]");
       }
-      final Token<? extends TokenIdentifier> ugiToken 
+      final Token<? extends TokenIdentifier> ugiToken
         = ugi.getCredentials().getToken(new Text(DELEGATION_TOKEN_KEY));
       if (ugiToken == null) {
         throw new RuntimeException("Token for key ["+DELEGATION_TOKEN_KEY+"] 
not found among the UGI tokens.");
@@ -125,27 +126,27 @@ public class TestBinaryTokenFile {
               "UGI token is not same as the token passed in binary file:"
                   + " tokenInBinFile=[" + binTok + "], ugiTok=[" + ugiToken + 
"].");
       }
-      
+
       super.map(key, value, context);
     }
   }
-  
+
   class MySleepJob extends SleepJob {
     @Override
-    public Job createJob(int numMapper, int numReducer, 
-        long mapSleepTime, int mapSleepCount, 
-        long reduceSleepTime, int reduceSleepCount) 
+    public Job createJob(int numMapper, int numReducer,
+        long mapSleepTime, int mapSleepCount,
+        long reduceSleepTime, int reduceSleepCount)
     throws IOException {
       Job job =  super.createJob(numMapper, numReducer,
-           mapSleepTime, mapSleepCount, 
+           mapSleepTime, mapSleepCount,
           reduceSleepTime, reduceSleepCount);
-      
+
       job.setMapperClass(MySleepMapper.class);
       //Populate tokens here because security is disabled.
       setupBinaryTokenFile(job);
       return job;
     }
-    
+
     private void setupBinaryTokenFile(Job job) {
     // Credentials in the job will not have delegation tokens
     // because security is disabled. Fetch delegation tokens
@@ -161,40 +162,41 @@ public class TestBinaryTokenFile {
           binaryTokenFileName.toString());
     }
   }
-  
+
   private static MiniMRYarnCluster mrCluster;
   private static MiniDFSCluster dfsCluster;
-  
-  private static final Path TEST_DIR = 
+
+  private static final Path TEST_DIR =
     new Path(System.getProperty("test.build.data","/tmp"));
   private static final Path binaryTokenFileName = new Path(TEST_DIR, "tokenFile.binary");
-  
-  private static final int numSlaves = 1; // num of data nodes
+
+  private static final int NUMWORKERS = 1; // num of data nodes
   private static final int noOfNMs = 1;
-  
+
   private static Path p1;
-  
+
   @BeforeClass
   public static void setUp() throws Exception {
     final Configuration conf = new Configuration();
-    
+
     conf.set(MRConfig.FRAMEWORK_NAME, MRConfig.YARN_FRAMEWORK_NAME);
     conf.set(YarnConfiguration.RM_PRINCIPAL, "jt_id/" + SecurityUtil.HOSTNAME_PATTERN + "@APACHE.ORG");
-    
+
     final MiniDFSCluster.Builder builder = new MiniDFSCluster.Builder(conf);
     builder.checkExitOnShutdown(true);
-    builder.numDataNodes(numSlaves);
+    builder.numDataNodes(NUMWORKERS);
     builder.format(true);
     builder.racks(null);
     dfsCluster = builder.build();
-    
+
     mrCluster = new MiniMRYarnCluster(TestBinaryTokenFile.class.getName(), noOfNMs);
     mrCluster.init(conf);
     mrCluster.start();
 
-    NameNodeAdapter.getDtSecretManager(dfsCluster.getNamesystem()).startThreads(); 
-    
-    FileSystem fs = dfsCluster.getFileSystem(); 
+    NameNodeAdapter.getDtSecretManager(dfsCluster.getNamesystem())
+        .startThreads();
+
+    FileSystem fs = dfsCluster.getFileSystem();
     p1 = new Path("file1");
     p1 = fs.makeQualified(p1);
   }
@@ -240,13 +242,13 @@ public class TestBinaryTokenFile {
   @Test
   public void testBinaryTokenFile() throws IOException {
     Configuration conf = mrCluster.getConfig();
-    
+
     // provide namenodes names for the job to get the delegation tokens for
     final String nnUri = dfsCluster.getURI(0).toString();
     conf.set(MRJobConfig.JOB_NAMENODES, nnUri + "," + nnUri);
-    
+
     // using argument to pass the file name
-    final String[] args = { 
+    final String[] args = {
         "-m", "1", "-r", "1", "-mt", "1", "-rt", "1"
         };
     int res = -1;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/23c3ff85/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/security/TestMRCredentials.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/security/TestMRCredentials.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/security/TestMRCredentials.java
index 85d60f0..0a9c32f 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/security/TestMRCredentials.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/security/TestMRCredentials.java
@@ -51,7 +51,7 @@ public class TestMRCredentials {
   static final int NUM_OF_KEYS = 10;
   private static MiniMRClientCluster mrCluster;
   private static MiniDFSCluster dfsCluster;
-  private static int numSlaves = 1;
+  private static int numWorkers = 1;
   private static JobConf jConf;
 
   @SuppressWarnings("deprecation")
@@ -59,7 +59,7 @@ public class TestMRCredentials {
   public static void setUp() throws Exception {
     System.setProperty("hadoop.log.dir", "logs");
     Configuration conf = new Configuration();
-    dfsCluster = new MiniDFSCluster.Builder(conf).numDataNodes(numSlaves)
+    dfsCluster = new MiniDFSCluster.Builder(conf).numDataNodes(numWorkers)
         .build();
     jConf = new JobConf(conf);
     FileSystem.setDefaultUri(conf, dfsCluster.getFileSystem().getUri().toString());
@@ -80,7 +80,7 @@ public class TestMRCredentials {
 
   }
 
-  public static void createKeysAsJson (String fileName) 
+  public static void createKeysAsJson(String fileName)
   throws FileNotFoundException, IOException{
     StringBuilder jsonString = new StringBuilder();
     jsonString.append("{");

http://git-wip-us.apache.org/repos/asf/hadoop/blob/23c3ff85/hadoop-yarn-project/hadoop-yarn/bin/start-yarn.sh
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/bin/start-yarn.sh b/hadoop-yarn-project/hadoop-yarn/bin/start-yarn.sh
index 3b41299..ecc0140 100755
--- a/hadoop-yarn-project/hadoop-yarn/bin/start-yarn.sh
+++ b/hadoop-yarn-project/hadoop-yarn/bin/start-yarn.sh
@@ -62,7 +62,7 @@ else
   "${HADOOP_YARN_HOME}/bin/yarn" \
       --config "${HADOOP_CONF_DIR}" \
       --daemon start \
-      --slaves \
+      --workers \
       --hostnames "${RMHOSTS}" \
       resourcemanager
 fi
@@ -71,7 +71,7 @@ fi
 echo "Starting nodemanagers"
 "${HADOOP_YARN_HOME}/bin/yarn" \
     --config "${HADOOP_CONF_DIR}" \
-    --slaves \
+    --workers \
     --daemon start \
     nodemanager
 
@@ -80,7 +80,7 @@ PROXYSERVER=$("${HADOOP_HDFS_HOME}/bin/hdfs" getconf -confKey yarn.web-proxy.ad
 if [[ -n ${PROXYSERVER} ]]; then
   "${HADOOP_YARN_HOME}/bin/yarn" \
       --config "${HADOOP_CONF_DIR}" \
-      --slaves \
+      --workers \
       --hostnames "${PROXYSERVER}" \
       --daemon start \
       proxyserver

http://git-wip-us.apache.org/repos/asf/hadoop/blob/23c3ff85/hadoop-yarn-project/hadoop-yarn/bin/stop-yarn.sh
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/bin/stop-yarn.sh b/hadoop-yarn-project/hadoop-yarn/bin/stop-yarn.sh
index 358f0c9..1ed52dd 100755
--- a/hadoop-yarn-project/hadoop-yarn/bin/stop-yarn.sh
+++ b/hadoop-yarn-project/hadoop-yarn/bin/stop-yarn.sh
@@ -62,7 +62,7 @@ else
   "${HADOOP_YARN_HOME}/bin/yarn" \
       --config "${HADOOP_CONF_DIR}" \
       --daemon stop \
-      --slaves \
+      --workers \
       --hostnames "${RMHOSTS}" \
       resourcemanager
 fi
@@ -71,7 +71,7 @@ fi
 echo "Stopping nodemanagers"
 "${HADOOP_YARN_HOME}/bin/yarn" \
     --config "${HADOOP_CONF_DIR}" \
-    --slaves \
+    --workers \
     --daemon stop \
     nodemanager
 
@@ -81,7 +81,7 @@ if [[ -n ${PROXYSERVER} ]]; then
   echo "Stopping proxy server [${PROXYSERVER}]"
   "${HADOOP_YARN_HOME}/bin/yarn" \
       --config "${HADOOP_CONF_DIR}" \
-      --slaves \
+      --workers \
       --hostnames "${PROXYSERVER}" \
       --daemon stop \
       proxyserver

http://git-wip-us.apache.org/repos/asf/hadoop/blob/23c3ff85/hadoop-yarn-project/hadoop-yarn/bin/yarn
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/bin/yarn b/hadoop-yarn-project/hadoop-yarn/bin/yarn
index 2c19cd2..9a5086e 100755
--- a/hadoop-yarn-project/hadoop-yarn/bin/yarn
+++ b/hadoop-yarn-project/hadoop-yarn/bin/yarn
@@ -26,10 +26,10 @@ function hadoop_usage
 {
   hadoop_add_option "--buildpaths" "attempt to add class files from build tree"
   hadoop_add_option "--daemon (start|status|stop)" "operate on a daemon"
-  hadoop_add_option "--hostnames list[,of,host,names]" "hosts to use in slave 
mode"
+  hadoop_add_option "--hostnames list[,of,host,names]" "hosts to use in worker 
mode"
   hadoop_add_option "--loglevel level" "set the log4j level for this command"
-  hadoop_add_option "--hosts filename" "list of hosts to use in slave mode"
-  hadoop_add_option "--slaves" "turn on slave mode"
+  hadoop_add_option "--hosts filename" "list of hosts to use in worker mode"
+  hadoop_add_option "--workers" "turn on worker mode"
 
   hadoop_add_subcommand "application" "prints application(s) report/kill 
application"
   hadoop_add_subcommand "applicationattempt" "prints applicationattempt(s) 
report"
@@ -41,7 +41,7 @@ function hadoop_usage
   hadoop_add_subcommand "jar <jar>" "run a jar file"
   hadoop_add_subcommand "logs" "dump container logs"
   hadoop_add_subcommand "node" "prints node report(s)"
-  hadoop_add_subcommand "nodemanager" "run a nodemanager on each slave"
+  hadoop_add_subcommand "nodemanager" "run a nodemanager on each worker"
   hadoop_add_subcommand "proxyserver" "run the web app proxy server"
   hadoop_add_subcommand "queue" "prints queue information"
   hadoop_add_subcommand "resourcemanager" "run the ResourceManager"
@@ -266,8 +266,8 @@ fi
 
 hadoop_verify_user "${HADOOP_SUBCMD}"
 
-if [[ ${HADOOP_SLAVE_MODE} = true ]]; then
-  hadoop_common_slave_mode_execute "${HADOOP_YARN_HOME}/bin/yarn" "${HADOOP_USER_PARAMS[@]}"
+if [[ ${HADOOP_WORKER_MODE} = true ]]; then
+  hadoop_common_worker_mode_execute "${HADOOP_YARN_HOME}/bin/yarn" "${HADOOP_USER_PARAMS[@]}"
   exit $?
 fi
 

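With the renamed options above, worker-mode fan-out is requested with `--workers`, optionally narrowed with `--hostnames` or `--hosts`, which is how `start-yarn.sh` and `stop-yarn.sh` now address the ResourceManager and proxy server hosts. A hedged example, assuming a `workers` file in the configuration directory and a placeholder ResourceManager hostname:

    $ bin/yarn --workers --hosts "${HADOOP_CONF_DIR}/workers" --daemon start nodemanager
    $ bin/yarn --workers --hostnames rm1.example.com --daemon start resourcemanager
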
http://git-wip-us.apache.org/repos/asf/hadoop/blob/23c3ff85/hadoop-yarn-project/hadoop-yarn/bin/yarn-config.cmd
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/bin/yarn-config.cmd b/hadoop-yarn-project/hadoop-yarn/bin/yarn-config.cmd
index 41c1434..f2ccc8f 100644
--- a/hadoop-yarn-project/hadoop-yarn/bin/yarn-config.cmd
+++ b/hadoop-yarn-project/hadoop-yarn/bin/yarn-config.cmd
@@ -64,7 +64,7 @@ if not defined YARN_CONF_DIR (
 @rem
 
 if "%1" == "--hosts" (
-  set YARN_SLAVES=%YARN_CONF_DIR%\%2
+  set YARN_WORKERS=%YARN_CONF_DIR%\%2
   shift
   shift
 )

http://git-wip-us.apache.org/repos/asf/hadoop/blob/23c3ff85/hadoop-yarn-project/hadoop-yarn/bin/yarn-config.sh
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/bin/yarn-config.sh b/hadoop-yarn-project/hadoop-yarn/bin/yarn-config.sh
index d7fa406..719a6ae 100644
--- a/hadoop-yarn-project/hadoop-yarn/bin/yarn-config.sh
+++ b/hadoop-yarn-project/hadoop-yarn/bin/yarn-config.sh
@@ -15,7 +15,7 @@
 
 function hadoop_subproject_init
 {
-  
+
   # at some point in time, someone thought it would be a good idea to
   # create separate vars for every subproject.  *sigh*
   # let's perform some overrides and setup some defaults for bw compat
@@ -23,7 +23,7 @@ function hadoop_subproject_init
   # used interchangeable from here on out
   # ...
   # this should get deprecated at some point.
-  
+
   if [[ -z "${HADOOP_YARN_ENV_PROCESSED}" ]]; then
     if [[ -e "${YARN_CONF_DIR}/yarn-env.sh" ]]; then
       . "${YARN_CONF_DIR}/yarn-env.sh"
@@ -32,29 +32,29 @@ function hadoop_subproject_init
     fi
     export HADOOP_YARN_ENV_PROCESSED=true
   fi
-  
+
   hadoop_deprecate_envvar YARN_CONF_DIR HADOOP_CONF_DIR
 
   hadoop_deprecate_envvar YARN_LOG_DIR HADOOP_LOG_DIR
 
   hadoop_deprecate_envvar YARN_LOGFILE HADOOP_LOGFILE
-  
+
   hadoop_deprecate_envvar YARN_NICENESS HADOOP_NICENESS
-  
+
   hadoop_deprecate_envvar YARN_STOP_TIMEOUT HADOOP_STOP_TIMEOUT
-  
+
   hadoop_deprecate_envvar YARN_PID_DIR HADOOP_PID_DIR
-  
+
   hadoop_deprecate_envvar YARN_ROOT_LOGGER HADOOP_ROOT_LOGGER
 
   hadoop_deprecate_envvar YARN_IDENT_STRING HADOOP_IDENT_STRING
 
   hadoop_deprecate_envvar YARN_OPTS HADOOP_OPTS
 
-  hadoop_deprecate_envvar YARN_SLAVES HADOOP_SLAVES
-  
+  hadoop_deprecate_envvar YARN_SLAVES HADOOP_WORKERS
+
   HADOOP_YARN_HOME="${HADOOP_YARN_HOME:-$HADOOP_HOME}"
-  
+
   # YARN-1429 added the completely superfluous YARN_USER_CLASSPATH
   # env var.  We're going to override HADOOP_USER_CLASSPATH to keep
   # consistency with the rest of the duplicate/useless env vars

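Since YARN_SLAVES is now deprecated in favour of HADOOP_WORKERS (see the hadoop_deprecate_envvar call above), environments that exported the old variable can switch to something like the following; the file path is an example only:

    # old, still honoured but flagged as deprecated:
    # export YARN_SLAVES="${HADOOP_CONF_DIR}/slaves"
    # new:
    export HADOOP_WORKERS="${HADOOP_CONF_DIR}/workers"
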
http://git-wip-us.apache.org/repos/asf/hadoop/blob/23c3ff85/hadoop-yarn-project/hadoop-yarn/bin/yarn-daemons.sh
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/bin/yarn-daemons.sh b/hadoop-yarn-project/hadoop-yarn/bin/yarn-daemons.sh
index 958c8bd..2226422 100644
--- a/hadoop-yarn-project/hadoop-yarn/bin/yarn-daemons.sh
+++ b/hadoop-yarn-project/hadoop-yarn/bin/yarn-daemons.sh
@@ -47,13 +47,13 @@ daemonmode=$1
 shift
 
 hadoop_error "WARNING: Use of this script to ${daemonmode} YARN daemons is 
deprecated."
-hadoop_error "WARNING: Attempting to execute replacement \"yarn --slaves 
--daemon ${daemonmode}\" instead."
+hadoop_error "WARNING: Attempting to execute replacement \"yarn --workers 
--daemon ${daemonmode}\" instead."
 
 #
 # Original input was usually:
 #  yarn-daemons.sh (shell options) (start|stop) nodemanager (daemon options)
 # we're going to turn this into
-#  yarn --slaves --daemon (start|stop) (rest of options)
+#  yarn --workers --daemon (start|stop) (rest of options)
 #
 for (( i = 0; i < ${#HADOOP_USER_PARAMS[@]}; i++ ))
 do
@@ -64,5 +64,5 @@ do
   fi
 done
 
-${yarnscript} --slaves --daemon "${daemonmode}" "${HADOOP_USER_PARAMS[@]}"
+${yarnscript} --workers --daemon "${daemonmode}" "${HADOOP_USER_PARAMS[@]}"
 

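As the warning and comment above spell out, yarn-daemons.sh is now just a deprecated wrapper that rewrites its arguments into a `--workers` invocation of the `yarn` script. The translation it performs amounts to the following (daemon name and mode are placeholders):

    # deprecated form:
    $ yarn-daemons.sh start nodemanager
    # replacement that is actually executed:
    $ yarn --workers --daemon start nodemanager
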
http://git-wip-us.apache.org/repos/asf/hadoop/blob/23c3ff85/hadoop-yarn-project/hadoop-yarn/conf/slaves
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/conf/slaves b/hadoop-yarn-project/hadoop-yarn/conf/slaves
deleted file mode 100644
index 2fbb50c..0000000
--- a/hadoop-yarn-project/hadoop-yarn/conf/slaves
+++ /dev/null
@@ -1 +0,0 @@
-localhost

http://git-wip-us.apache.org/repos/asf/hadoop/blob/23c3ff85/hadoop-yarn-project/hadoop-yarn/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/pom.xml b/hadoop-yarn-project/hadoop-yarn/pom.xml
index 3e31ec0..eb63f80 100644
--- a/hadoop-yarn-project/hadoop-yarn/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/pom.xml
@@ -54,7 +54,7 @@
         <artifactId>apache-rat-plugin</artifactId>
         <configuration>
           <excludes>
-            <exclude>conf/slaves</exclude>
+            <exclude>conf/workers</exclude>
             <exclude>conf/container-executor.cfg</exclude>
             <exclude>dev-support/jdiff/**</exclude>
           </excludes>

