[01/50] [abbrv] hadoop git commit: HADOOP-10908. Common needs updates for shell rewrite (aw)

2015-01-13 Thread cnauroth
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-6994 2e42564ad -> a607429b5


HADOOP-10908. Common needs updates for shell rewrite (aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/94d342e6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/94d342e6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/94d342e6

Branch: refs/heads/HDFS-6994
Commit: 94d342e607e1db317bae7af86a34ae7cd3860348
Parents: 41d72cb
Author: Allen Wittenauer a...@apache.org
Authored: Mon Jan 5 14:26:41 2015 -0800
Committer: Allen Wittenauer a...@apache.org
Committed: Mon Jan 5 14:26:41 2015 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt |   2 +
 .../src/site/apt/ClusterSetup.apt.vm| 348 ---
 .../src/site/apt/CommandsManual.apt.vm  | 316 +
 .../src/site/apt/FileSystemShell.apt.vm | 313 ++---
 .../src/site/apt/SingleCluster.apt.vm   |  20 +-
 5 files changed, 534 insertions(+), 465 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/94d342e6/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt b/hadoop-common-project/hadoop-common/CHANGES.txt
index 0c76894..40e8d29 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -344,6 +344,8 @@ Trunk (Unreleased)
 
 HADOOP-11397. Can't override HADOOP_IDENT_STRING (Kengo Seki via aw)
 
+HADOOP-10908. Common needs updates for shell rewrite (aw)
+
   OPTIMIZATIONS
 
 HADOOP-7761. Improve the performance of raw comparisons. (todd)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/94d342e6/hadoop-common-project/hadoop-common/src/site/apt/ClusterSetup.apt.vm
--
diff --git a/hadoop-common-project/hadoop-common/src/site/apt/ClusterSetup.apt.vm b/hadoop-common-project/hadoop-common/src/site/apt/ClusterSetup.apt.vm
index f5f1deb..52b0552 100644
--- a/hadoop-common-project/hadoop-common/src/site/apt/ClusterSetup.apt.vm
+++ b/hadoop-common-project/hadoop-common/src/site/apt/ClusterSetup.apt.vm
@@ -11,83 +11,81 @@
 ~~ limitations under the License. See accompanying LICENSE file.
 
   ---
-  Hadoop Map Reduce Next Generation-${project.version} - Cluster Setup
+  Hadoop ${project.version} - Cluster Setup
   ---
   ---
   ${maven.build.timestamp}
 
 %{toc|section=1|fromDepth=0}
 
-Hadoop MapReduce Next Generation - Cluster Setup
+Hadoop Cluster Setup
 
 * {Purpose}
 
-  This document describes how to install, configure and manage non-trivial
+  This document describes how to install and configure
   Hadoop clusters ranging from a few nodes to extremely large clusters
-  with thousands of nodes.
+  with thousands of nodes.  To play with Hadoop, you may first want to
+  install it on a single machine (see {{{./SingleCluster.html}Single Node Setup}}).
 
-  To play with Hadoop, you may first want to install it on a single
-  machine (see {{{./SingleCluster.html}Single Node Setup}}).
+  This document does not cover advanced topics such as {{{./SecureMode.html}Security}} or
+  High Availability.
 
 * {Prerequisites}
 
-  Download a stable version of Hadoop from Apache mirrors.
+  * Install Java. See the {{{http://wiki.apache.org/hadoop/HadoopJavaVersions}Hadoop Wiki}} for known good versions.
+  * Download a stable version of Hadoop from Apache mirrors.
 
 * {Installation}
 
   Installing a Hadoop cluster typically involves unpacking the software on all
-  the machines in the cluster or installing RPMs.
+  the machines in the cluster or installing it via a packaging system as
+  appropriate for your operating system.  It is important to divide up the hardware
+  into functions.
 
   Typically one machine in the cluster is designated as the NameNode and
-  another machine the as ResourceManager, exclusively. These are the masters.
+  another machine as the ResourceManager, exclusively. These are the masters. Other
+  services (such as Web App Proxy Server and MapReduce Job History server) are usually
+  run either on dedicated hardware or on shared infrastructure, depending upon the load.
 
   The rest of the machines in the cluster act as both DataNode and NodeManager.
   These are the slaves.
 
-* {Running Hadoop in Non-Secure Mode}
+* {Configuring Hadoop in Non-Secure Mode}
 
-  The following sections describe how to configure a Hadoop cluster.
-
-  {Configuration Files}
-
-Hadoop configuration is driven by two types of important configuration files:
+Hadoop's Java configuration is driven by two types of important configuration files:
 
  * Read-only default configuration - core-default.xml,
hdfs-default.xml, yarn-default.xml and
mapred-default.xml.

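The "read-only defaults, then site-specific overrides" order described in the new text can be illustrated against Hadoop's Configuration API. A minimal, hypothetical Java sketch follows; the extra resource name and the fs.defaultFS lookup are illustrative only, and it assumes hadoop-common is on the classpath:

    import org.apache.hadoop.conf.Configuration;

    public class ConfOverrideSketch {
      public static void main(String[] args) {
        // new Configuration() first loads the read-only defaults
        // (core-default.xml) from the classpath, then core-site.xml if present.
        Configuration conf = new Configuration();

        // Site-specific resources layered on afterwards override matching keys
        // from the defaults; the last resource added wins for a given key.
        conf.addResource("hdfs-site.xml");   // hypothetical extra resource

        // Prints whatever value won the override chain (null if never set).
        System.out.println("fs.defaultFS = " + conf.get("fs.defaultFS"));
      }
    }
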
hadoop git commit: HADOOP-10908. Common needs updates for shell rewrite (aw)

2015-01-05 Thread aw
Repository: hadoop
Updated Branches:
  refs/heads/trunk 41d72cbd4 -> 94d342e60


HADOOP-10908. Common needs updates for shell rewrite (aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/94d342e6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/94d342e6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/94d342e6

Branch: refs/heads/trunk
Commit: 94d342e607e1db317bae7af86a34ae7cd3860348
Parents: 41d72cb
Author: Allen Wittenauer a...@apache.org
Authored: Mon Jan 5 14:26:41 2015 -0800
Committer: Allen Wittenauer a...@apache.org
Committed: Mon Jan 5 14:26:41 2015 -0800

[15/18] hadoop git commit: HADOOP-10908. Common needs updates for shell rewrite (aw)

2015-01-05 Thread zhz
HADOOP-10908. Common needs updates for shell rewrite (aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e38cd055
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e38cd055
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e38cd055

Branch: refs/heads/HDFS-EC
Commit: e38cd055b3441d5c6c1bfbbc1a312c08fdf5f25b
Parents: cf02311
Author: Allen Wittenauer a...@apache.org
Authored: Mon Jan 5 14:26:41 2015 -0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Jan 5 14:48:38 2015 -0800
