HDFS-10855. Fix typos in HDFS documents. Contributed by Yiqun Lin.

Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cc01ed70
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cc01ed70
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cc01ed70

Branch: refs/heads/YARN-2915
Commit: cc01ed702629cb923a8938ec03dd0decbcd6f495
Parents: a99bf26
Author: Xiao Chen <[email protected]>
Authored: Sat Sep 10 23:23:34 2016 -0700
Committer: Xiao Chen <[email protected]>
Committed: Sat Sep 10 23:24:49 2016 -0700

----------------------------------------------------------------------
 .../hadoop-hdfs/src/main/resources/hdfs-default.xml              | 4 ++--
 .../hadoop-hdfs/src/site/markdown/ArchivalStorage.md             | 2 +-
 .../hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithNFS.md | 4 ++--
 .../hadoop-hdfs/src/site/markdown/HdfsMultihoming.md             | 4 ++--
 .../hadoop-hdfs/src/site/markdown/HdfsNfsGateway.md              | 2 +-
 hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md      | 2 +-
 6 files changed, 9 insertions(+), 9 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cc01ed70/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
index 3a5de3e..29c9ef2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
@@ -163,7 +163,7 @@
   <name>dfs.namenode.http-bind-host</name>
   <value></value>
   <description>
-    The actual adress the HTTP server will bind to. If this optional address
+    The actual address the HTTP server will bind to. If this optional address
     is set, it overrides only the hostname portion of dfs.namenode.http-address.
     It can also be specified per name node or name service for HA/Federation.
     This is useful for making the name node HTTP server listen on all
@@ -243,7 +243,7 @@
   <name>dfs.namenode.https-bind-host</name>
   <value></value>
   <description>
-    The actual adress the HTTPS server will bind to. If this optional address
+    The actual address the HTTPS server will bind to. If this optional address
     is set, it overrides only the hostname portion of dfs.namenode.https-address.
     It can also be specified per name node or name service for HA/Federation.
     This is useful for making the name node HTTPS server listen on all

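A minimal sketch of how this bind-host override combines with the advertised address (hostname and port below are placeholders, not part of this change): the server binds to all interfaces, while clients still resolve the configured hostname, and the port continues to come from dfs.namenode.http-address.

    <property>
      <name>dfs.namenode.http-address</name>
      <value>nn1.example.com:50070</value>
    </property>
    <property>
      <name>dfs.namenode.http-bind-host</name>
      <value>0.0.0.0</value>
    </property>
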
http://git-wip-us.apache.org/repos/asf/hadoop/blob/cc01ed70/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ArchivalStorage.md
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ArchivalStorage.md b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ArchivalStorage.md
index 31bea7c..06b7390 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ArchivalStorage.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ArchivalStorage.md
@@ -89,7 +89,7 @@ Note 2: For the erasure coded files with striping layout, the suitable storage p
 
 When a file or directory is created, its storage policy is *unspecified*. The storage policy can be specified using the "[`storagepolicies -setStoragePolicy`](#Set_Storage_Policy)" command. The effective storage policy of a file or directory is resolved by the following rules.
 
-1.  If the file or directory is specificed with a storage policy, return it.
+1.  If the file or directory is specified with a storage policy, return it.
 
 2.  For an unspecified file or directory, if it is the root directory, return the *default storage policy*. Otherwise, return its parent's effective storage policy.
 
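To make rules 1 and 2 concrete, a sketch using the storagepolicies command named above (the /archive path is illustrative): a policy set on a directory covers its children until one of them is given its own.

    hdfs storagepolicies -setStoragePolicy -path /archive -policy COLD
    hdfs storagepolicies -getStoragePolicy -path /archive

A file created under /archive with no policy of its own is unspecified, so by rule 2 its effective policy resolves to its parent's, COLD.
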

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cc01ed70/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithNFS.md
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithNFS.md b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithNFS.md
index d9f895a..b743233 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithNFS.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithNFS.md
@@ -87,7 +87,7 @@ In order to deploy an HA cluster, you should prepare the following:
 
 * **NameNode machines** - the machines on which you run the Active and Standby NameNodes should have equivalent hardware to each other, and equivalent hardware to what would be used in a non-HA cluster.
 
-* **Shared storage** - you will need to have a shared directory which the NameNode machines have read/write access to. Typically this is a remote filer which supports NFS and is mounted on each of the NameNode machines. Currently only a single shared edits directory is supported. Thus, the availability of the system is limited by the availability of this shared edits directory, and therefore in order to remove all single points of failure there needs to be redundancy for the shared edits directory. Specifically, multiple network paths to the storage, and redundancy in the storage itself (disk, network, and power). Beacuse of this, it is recommended that the shared storage server be a high-quality dedicated NAS appliance rather than a simple Linux server.
+* **Shared storage** - you will need to have a shared directory which the NameNode machines have read/write access to. Typically this is a remote filer which supports NFS and is mounted on each of the NameNode machines. Currently only a single shared edits directory is supported. Thus, the availability of the system is limited by the availability of this shared edits directory, and therefore in order to remove all single points of failure there needs to be redundancy for the shared edits directory. Specifically, multiple network paths to the storage, and redundancy in the storage itself (disk, network, and power). Because of this, it is recommended that the shared storage server be a high-quality dedicated NAS appliance rather than a simple Linux server.
 
 Note that, in an HA cluster, the Standby NameNodes also perform checkpoints of the namespace state, and thus it is not necessary to run a Secondary NameNode, CheckpointNode, or BackupNode in an HA cluster. In fact, to do so would be an error. This also allows one who is reconfiguring a non-HA-enabled HDFS cluster to be HA-enabled to reuse the hardware which they had previously dedicated to the Secondary NameNode.
 
@@ -137,7 +137,7 @@ The order in which you set these configurations is unimportant, but the values y
 *   **dfs.namenode.rpc-address.[nameservice ID].[name node ID]** - the fully-qualified RPC address for each NameNode to listen on
 
     For both of the previously-configured NameNode IDs, set the full address and
-    IPC port of the NameNode processs. Note that this results in two separate
+    IPC port of the NameNode process. Note that this results in two separate
     configuration options. For example:
 
         <property>

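The example that the hunk above truncates continues with one rpc-address entry per NameNode ID; a hedged sketch, assuming a nameservice ID of mycluster and NameNode IDs nn1 and nn2 (hosts are placeholders):

    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn1</name>
      <value>machine1.example.com:8020</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn2</name>
      <value>machine2.example.com:8020</value>
    </property>
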
http://git-wip-us.apache.org/repos/asf/hadoop/blob/cc01ed70/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsMultihoming.md
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsMultihoming.md b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsMultihoming.md
index 4be5511..4e1d480 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsMultihoming.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsMultihoming.md
@@ -86,7 +86,7 @@ The solution is to have separate setting for server endpoints to force binding t
       <name>dfs.namenode.http-bind-host</name>
       <value>0.0.0.0</value>
       <description>
-        The actual adress the HTTP server will bind to. If this optional address
+        The actual address the HTTP server will bind to. If this optional address
         is set, it overrides only the hostname portion of dfs.namenode.http-address.
         It can also be specified per name node or name service for HA/Federation.
         This is useful for making the name node HTTP server listen on all
@@ -98,7 +98,7 @@ The solution is to have separate setting for server endpoints to force binding t
       <name>dfs.namenode.https-bind-host</name>
       <value>0.0.0.0</value>
       <description>
-        The actual adress the HTTPS server will bind to. If this optional address
+        The actual address the HTTPS server will bind to. If this optional address
         is set, it overrides only the hostname portion of dfs.namenode.https-address.
         It can also be specified per name node or name service for HA/Federation.
         This is useful for making the name node HTTPS server listen on all

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cc01ed70/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsNfsGateway.md
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsNfsGateway.md b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsNfsGateway.md
index 6731189..ddb4f01 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsNfsGateway.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsNfsGateway.md
@@ -148,7 +148,7 @@ It's strongly recommended for the users to update a few configuration properties
     characters. The machine name format can be a single host, a "*", a Java regular expression, or an IPv4 address. The access
     privilege uses rw or ro to specify read/write or read-only access of the machines to exports. If the access privilege is not provided, the default is read-only. Entries are separated by ";".
     For example: "192.168.0.0/22 rw ; \\\\w\*\\\\.example\\\\.com ; host1.test.org ro;". Only the NFS gateway needs to restart after
-    this property is updated. Note that, here Java regular expression is differnt with the regrulation expression used in 
+    this property is updated. Note that, here Java regular expression is different with the regulation expression used in
     Linux NFS export table, such as, using "\\\\w\*\\\\.example\\\\.com" instead of "\*.example.com", "192\\\\.168\\\\.0\\\\.(11|22)" instead of "192.168.0.[11|22]" and so on.
 
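Assembled as a configuration entry, the export list from the passage above would look like the following sketch, assuming the property being described here is nfs.exports.allowed.hosts (the export-control setting in the NFS gateway guide); the value is the rendered, unescaped form of the example string:

    <property>
      <name>nfs.exports.allowed.hosts</name>
      <value>192.168.0.0/22 rw ; \w*\.example\.com ; host1.test.org ro;</value>
    </property>
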

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cc01ed70/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md
index 94662f5..5f88def 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md
@@ -143,7 +143,7 @@ Hence on Cluster X, where the `core-site.xml` is set to make the default fs to u
 
 ### Pathname Usage Best Practices
 
-When one is within a cluster, it is recommended to use the pathname of type (1) above instead of a fully qualified URI like (2). Futher, applications should not use the knowledge of the mount points and use a path like `hdfs://namenodeContainingUserDirs:port/joe/foo/bar` to refer to a file in a particular namenode. One should use `/user/joe/foo/bar` instead.
+When one is within a cluster, it is recommended to use the pathname of type (1) above instead of a fully qualified URI like (2). Further, applications should not use the knowledge of the mount points and use a path like `hdfs://namenodeContainingUserDirs:port/joe/foo/bar` to refer to a file in a particular namenode. One should use `/user/joe/foo/bar` instead.
 
 ### Renaming Pathnames Across Namespaces
 

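As a sketch of why the relative form is preferred: with a client-side mount table entry like the one below (the mount table name ClusterX and the namenode host are placeholders), /user/joe/foo/bar keeps working even if the user directories are later moved to a different namenode, whereas a fully qualified hdfs:// URI would break.

    <property>
      <name>fs.viewfs.mounttable.ClusterX.link./user</name>
      <value>hdfs://nn1-clusterx.example.com:8020/user</value>
    </property>
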
