[2/2] hadoop git commit: HADOOP-13724. Fix a few typos in site markdown documents. Contributed by Ding Fei.
HADOOP-13724. Fix a few typos in site markdown documents. Contributed by Ding Fei.
(cherry picked from commit 987ee51141a15d3f4d1df4dc792a192b92b87b5f)

Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4ed7cf3b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4ed7cf3b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4ed7cf3b
Branch: refs/heads/branch-2
Commit: 4ed7cf3b362f94367c57d012608213f46f0e16e8
Parents: fbdb23d
Author: Andrew Wang
Authored: Mon Oct 17 13:25:58 2016 -0700
Committer: Andrew Wang
Committed: Mon Oct 17 13:32:39 2016 -0700
--
 .../src/site/markdown/ClusterSetup.md           |  2 +-
 .../src/site/markdown/Compatibility.md          | 16 +--
 .../site/markdown/InterfaceClassification.md    | 28 ++--
 .../src/site/markdown/filesystem/filesystem.md  | 17 ++--
 .../markdown/filesystem/fsdatainputstream.md    | 16 +--
 .../site/markdown/filesystem/introduction.md    | 12 -
 .../src/site/markdown/filesystem/model.md       |  7 ++---
 .../src/site/markdown/filesystem/notation.md    |  2 +-
 .../src/site/markdown/filesystem/testing.md     |  4 +--
 .../src/site/markdown/HadoopArchives.md.vm      |  2 +-
 10 files changed, 53 insertions(+), 53 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4ed7cf3b/hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
--
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md b/hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
index 7d2d38f..66c25e5 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
@@ -35,7 +35,7 @@ Installation

 Installing a Hadoop cluster typically involves unpacking the software on all the machines in the cluster or installing it via a packaging system as appropriate for your operating system. It is important to divide up the hardware into functions.
-Typically one machine in the cluster is designated as the NameNode and another machine the as ResourceManager, exclusively. These are the masters. Other services (such as Web App Proxy Server and MapReduce Job History server) are usually run either on dedicated hardware or on shared infrastrucutre, depending upon the load.
+Typically one machine in the cluster is designated as the NameNode and another machine as the ResourceManager, exclusively. These are the masters. Other services (such as Web App Proxy Server and MapReduce Job History server) are usually run either on dedicated hardware or on shared infrastructure, depending upon the load.

 The rest of the machines in the cluster act as both DataNode and NodeManager. These are the slaves.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4ed7cf3b/hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md
--
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md b/hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md
index c275518..a7ded24 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md
@@ -68,7 +68,7 @@ Wire compatibility concerns data being transmitted over the wire between Hadoop

 Use Cases

 * Client-Server compatibility is required to allow users to continue using the old clients even after upgrading the server (cluster) to a later version (or vice versa). For example, a Hadoop 2.1.0 client talking to a Hadoop 2.3.0 cluster.
-* Client-Server compatibility is also required to allow users to upgrade the client before upgrading the server (cluster). For example, a Hadoop 2.4.0 client talking to a Hadoop 2.3.0 cluster. This allows deployment of client-side bug fixes ahead of full cluster upgrades. Note that new cluster features invoked by new client APIs or shell commands will not be usable. YARN applications that attempt to use new APIs (including new fields in data structures) that have not yet deployed to the cluster can expect link exceptions.
+* Client-Server compatibility is also required to allow users to upgrade the client before upgrading the server (cluster). For example, a Hadoop 2.4.0 client talking to a Hadoop 2.3.0 cluster. This allows deployment of client-side bug fixes ahead of full cluster upgrades. Note that new cluster features invoked by new client APIs or shell commands will not be usable. YARN applications that attempt to use new APIs (including new fields in data structures) that have not yet been deployed to the cluster can expect link exceptions.
[2/2] hadoop git commit: HADOOP-13724. Fix a few typos in site markdown documents. Contributed by Ding Fei.

HADOOP-13724. Fix a few typos in site markdown documents. Contributed by Ding Fei.
(cherry picked from commit 987ee51141a15d3f4d1df4dc792a192b92b87b5f)
(cherry picked from commit 4ed7cf3b362f94367c57d012608213f46f0e16e8)

Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/15ff590c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/15ff590c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/15ff590c
Branch: refs/heads/branch-2.8
Commit: 15ff590c375c2e2abc8d5e68938a373caeaaea7f
Parents: 9d473b8
Author: Andrew Wang
Authored: Mon Oct 17 13:25:58 2016 -0700
Committer: Andrew Wang
Committed: Mon Oct 17 13:32:52 2016 -0700