HDFS-9928. Make HDFS commands guide up to date (Wei-Chiu Chuang via iwasakims)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5de848cd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5de848cd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5de848cd

Branch: refs/heads/HDFS-7240
Commit: 5de848cd5d46527a8fba481c76089da21f533050
Parents: eba66a6
Author: Masatake Iwasaki <[email protected]>
Authored: Tue Mar 15 20:42:05 2016 +0900
Committer: Masatake Iwasaki <[email protected]>
Committed: Tue Mar 15 20:42:05 2016 +0900

----------------------------------------------------------------------
 .../src/site/markdown/HDFSCommands.md           | 159 ++++++++++++-------
 1 file changed, 103 insertions(+), 56 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5de848cd/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
index ae126a2..035abd6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
@@ -19,6 +19,7 @@ HDFS Commands Guide
 * [User Commands](#User_Commands)
     * [classpath](#classpath)
     * [dfs](#dfs)
+    * [envvars](#envvars)
     * [fetchdt](#fetchdt)
     * [fsck](#fsck)
     * [getconf](#getconf)
@@ -88,6 +89,12 @@ Usage: `hdfs dfs [COMMAND [COMMAND_OPTIONS]]`
 
 Run a filesystem command on the file system supported in Hadoop. The various COMMAND\_OPTIONS can be found at [File System Shell Guide](../hadoop-common/FileSystemShell.html).
 
+### `envvars`
+
+Usage: `hdfs envvars`
+
+Display computed Hadoop environment variables.
+
 ### `fetchdt`
 
Usage: `hdfs fetchdt <opts> <token_file_path>`
@@ -182,7 +189,8 @@ Usage: `hdfs jmxget [-localVM ConnectorURL | -port port | -server mbeanserver |
 | `-help` | print help |
 | `-localVM` ConnectorURL | connect to the VM on the same machine |
 | `-port` *mbean server port* | specify mbean server port, if missing it will try to connect to MBean Server in the same VM |
-| `-service` | specify jmx service, either DataNode or NameNode, the default |
+| `-server` | specify mbean server (localhost by default) |
+| `-service` NameNode\|DataNode | specify jmx service. NameNode by default. |
 
 Dump JMX information from a service.
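For illustration only, a typical query against a NameNode's MBean server might look like the following; the port is hypothetical and depends on how JMX is configured for the daemon:

    hdfs jmxget -service NameNode -port 8004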
 
@@ -203,11 +211,11 @@ Usage: `hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE`
 |:---- |:---- |
 | `-f`,`--fix-txids` | Renumber the transaction IDs in the input, so that there are no gaps or invalid transaction IDs. |
 | `-h`,`--help` | Display usage information and exit |
-| `-r`,`--ecover` | When reading binary edit logs, use recovery mode. This will give you the chance to skip corrupt parts of the edit log. |
+| `-r`,`--recover` | When reading binary edit logs, use recovery mode. This will give you the chance to skip corrupt parts of the edit log. |
 | `-p`,`--processor` *arg* | Select which type of processor to apply against image file, currently supported processors are: binary (native binary format that Hadoop uses), xml (default, XML format), stats (prints statistics about edits file) |
 | `-v`,`--verbose` | More verbose output, prints the input and output filenames, for processors that write to a file, also output to screen. On large image files this will dramatically increase processing time (default is false). |

-Hadoop offline edits viewer.
+Hadoop offline edits viewer. See [Offline Edits Viewer Guide](./HdfsEditsViewer.html) for more info.
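As a sketch of typical usage (the edit-log file names are hypothetical), a binary edit log can be converted to XML and, via the binary processor, converted back:

    hdfs oev -p xml -i edits_0000000000000000001-0000000000000000100 -o edits.xml
    hdfs oev -p binary -i edits.xml -o edits_recovered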
 
 ### `oiv`
 
@@ -217,17 +225,24 @@ Usage: `hdfs oiv [OPTIONS] -i INPUT_FILE`
 
 | COMMAND\_OPTION | Description |
 |:---- |:---- |
-| `-i`,`--inputFile` *arg* | edits file to process, xml (case insensitive) extension means XML format, any other filename means binary format |
+| `-i`\|`--inputFile` *input file* | Specify the input fsimage file (or XML file, if ReverseXML processor is used) to process. |
+
 
 #### Optional command line arguments:
 
 | COMMAND\_OPTION | Description |
 |:---- |:---- |
-| `-h`,`--help` | Display usage information and exit |
-| `-o`,`--outputFile` *arg* | Name of output file. If the specified file exists, it will be overwritten, format of the file is determined by -p option |
-| `-p`,`--processor` *arg* | Select which type of processor to apply against image file, currently supported processors are: binary (native binary format that Hadoop uses), xml (default, XML format), stats (prints statistics about edits file) |
+| `-o`,`--outputFile` *output file* | Specify the output filename, if the specified output processor generates one. If the specified file already exists, it is silently overwritten. (output to stdout by default) If the input file is an XML file, it also creates an &lt;outputFile&gt;.md5. |
+| `-p`,`--processor` *processor* | Specify the image processor to apply against the image file. Currently valid options are `Web` (default), `XML`, `Delimited`, `FileDistribution` and `ReverseXML`. |
+| `-addr` *address* | Specify the address(host:port) to listen. (localhost:5978 by default). This option is used with Web processor. |
+| `-maxSize` *size* | Specify the range [0, maxSize] of file sizes to be analyzed in bytes (128GB by default). This option is used with FileDistribution processor. |
+| `-step` *size* | Specify the granularity of the distribution in bytes (2MB by default). This option is used with FileDistribution processor. |
+| `-delimiter` *arg* | Delimiting string to use with Delimited processor. |
+| `-t`,`--temp` *temporary dir* | Use temporary dir to cache intermediate result to generate Delimited outputs. If not set, Delimited processor constructs the namespace in memory before outputting text. |
+| `-h`,`--help` | Display the tool usage and help information and exit. |
 
-Hadoop Offline Image Viewer for newer image files.
+
+Hadoop Offline Image Viewer for image files in Hadoop 2.4 or up. See [Offline Image Viewer Guide](./HdfsImageViewer.html) for more info.
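For example, assuming a hypothetical fsimage file name, the XML processor dumps the image to a file, while the default Web processor serves it at the default -addr of localhost:5978:

    hdfs oiv -p XML -i fsimage_0000000000000000024 -o fsimage.xml
    hdfs oiv -i fsimage_0000000000000000024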
 
 ### `oiv_legacy`
 
@@ -235,11 +250,21 @@ Usage: `hdfs oiv_legacy [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE`
 
 | COMMAND\_OPTION | Description |
 |:---- |:---- |
-| `-h`,`--help` | Display usage information and exit |
-| `-i`,`--inputFile` *arg* | edits file to process, xml (case insensitive) extension means XML format, any other filename means binary format |
-| `-o`,`--outputFile` *arg* | Name of output file. If the specified file exists, it will be overwritten, format of the file is determined by -p option |
+| `-i`,`--inputFile` *input file* | Specify the input fsimage file to process. |
+| `-o`,`--outputFile` *output file* | Specify the output filename, if the specified output processor generates one. If the specified file already exists, it is silently overwritten. |
 
-Hadoop offline image viewer for older versions of Hadoop.
+#### Optional command line arguments:
+
+| COMMAND\_OPTION | Description |
+|:---- |:---- |
+| `-p`\|`--processor` *processor* | Specify the image processor to apply against the image file. Valid options are Ls (default), XML, Delimited, Indented, and FileDistribution. |
+| `-skipBlocks` | Do not enumerate individual blocks within files. This may save processing time and output file space on namespaces with very large files. The Ls processor reads the blocks to correctly determine file sizes and ignores this option. |
+| `-printToScreen` | Pipe output of processor to console as well as specified file. On extremely large namespaces, this may increase processing time by an order of magnitude. |
+| `-delimiter` *arg* | When used in conjunction with the Delimited processor, replaces the default tab delimiter with the string specified by *arg*. |
+| `-h`\|`--help` | Display the tool usage and help information and exit. |
+
+
+Hadoop offline image viewer for older versions of Hadoop. See [oiv\_legacy Command](./HdfsImageViewer.html#oiv_legacy_Command) for more info.
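A minimal sketch with hypothetical file names, using the default Ls processor:

    hdfs oiv_legacy -i fsimage_0000000000000000024 -o fsimage.ls.txt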
 
 ### `snapshotDiff`
 
@@ -263,13 +288,14 @@ Commands useful for administrators of a hadoop cluster.
 Usage:
 
         hdfs balancer
-              [-threshold <threshold>]
               [-policy <policy>]
+              [-threshold <threshold>]
               [-exclude [-f <hosts-file> | <comma-separated list of hosts>]]
               [-include [-f <hosts-file> | <comma-separated list of hosts>]]
               [-source [-f <hosts-file> | <comma-separated list of hosts>]]
               [-blockpools <comma-separated list of blockpool ids>]
               [-idleiterations <idleiterations>]
+              [-runDuringUpgrade]
 
 | COMMAND\_OPTION | Description |
 |:---- |:---- |
@@ -280,6 +306,8 @@ Usage:
 | `-source -f` \<hosts-file\> \| \<comma-separated list of hosts\> | Pick only the specified datanodes as source nodes. |
 | `-blockpools` \<comma-separated list of blockpool ids\> | The balancer will only run on blockpools included in this list. |
 | `-idleiterations` \<iterations\> | Maximum number of idle iterations before exit. This overwrites the default idleiterations(5). |
+| `-runDuringUpgrade` | Whether to run the balancer during an ongoing HDFS upgrade. This is usually not desired since it will not affect used space on over-utilized machines. |
+| `-h`\|`--help` | Display the tool usage and help information and exit. |
 
 Runs a cluster balancing utility. An administrator can simply press Ctrl-C to stop the rebalancing process. See [Balancer](./HdfsUserGuide.html#Balancer) for more details.
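For example (the hosts-file path is hypothetical), to balance until every DataNode is within 5% of cluster average utilization while skipping some nodes:

    hdfs balancer -threshold 5 -exclude -f /tmp/excluded-hosts.txt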
 
@@ -289,7 +317,18 @@ Besides the above command options, a pinning feature is introduced starting from
 
 ### `cacheadmin`
 
-Usage: `hdfs cacheadmin -addDirective -path <path> -pool <pool-name> [-force] [-replication <replication>] [-ttl <time-to-live>]`
+Usage:
+
+    hdfs cacheadmin [-addDirective -path <path> -pool <pool-name> [-force] [-replication <replication>] [-ttl <time-to-live>]]
+    hdfs cacheadmin [-modifyDirective -id <id> [-path <path>] [-force] [-replication <replication>] [-pool <pool-name>] [-ttl <time-to-live>]]
+    hdfs cacheadmin [-listDirectives [-stats] [-path <path>] [-pool <pool>] [-id <id>]]
+    hdfs cacheadmin [-removeDirective <id>]
+    hdfs cacheadmin [-removeDirectives -path <path>]
+    hdfs cacheadmin [-addPool <name> [-owner <owner>] [-group <group>] [-mode <mode>] [-limit <limit>] [-maxTtl <maxTtl>]]
+    hdfs cacheadmin [-modifyPool <name> [-owner <owner>] [-group <group>] [-mode <mode>] [-limit <limit>] [-maxTtl <maxTtl>]]
+    hdfs cacheadmin [-removePool <name>]
+    hdfs cacheadmin [-listPools [-stats] [<name>]]
+    hdfs cacheadmin [-help <command-name>]
 
 See the [HDFS Cache Administration Documentation](./CentralizedCacheManagement.html#cacheadmin_command-line_interface) for more information.
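A minimal sketch of a typical workflow (pool and path names are hypothetical): create a pool, add a directive for a directory, then list what is cached:

    hdfs cacheadmin -addPool sales -mode 0755
    hdfs cacheadmin -addDirective -path /user/sales/reports -pool sales -replication 2
    hdfs cacheadmin -listDirectives -pool sales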
 
@@ -298,8 +337,8 @@ See the [HDFS Cache Administration Documentation](./CentralizedCacheManagement.h
 Usage:
 
       hdfs crypto -createZone -keyName <keyName> -path <path>
-      hdfs crypto -help <command-name>
       hdfs crypto -listZones
+      hdfs crypto -help <command-name>
 
 See the [HDFS Transparent Encryption Documentation](./TransparentEncryption.html#crypto_command-line_interface) for more information.
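A minimal sketch, assuming a key named mykey has already been created in the configured KMS and /secure is an empty directory:

    hdfs crypto -createZone -keyName mykey -path /secure
    hdfs crypto -listZones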
 
@@ -319,44 +358,43 @@ Runs a HDFS datanode.
 
 Usage:
 
-        hdfs dfsadmin [GENERIC_OPTIONS]
-              [-report [-live] [-dead] [-decommissioning]]
-              [-safemode enter | leave | get | wait | forceExit]
-              [-saveNamespace]
-              [-rollEdits]
-              [-restoreFailedStorage true |false |check]
-              [-refreshNodes]
-              [-setQuota <quota> <dirname>...<dirname>]
-              [-clrQuota <dirname>...<dirname>]
-              [-setSpaceQuota <quota> [-storageType <storagetype>] <dirname>...<dirname>]
-              [-clrSpaceQuota [-storageType <storagetype>] <dirname>...<dirname>]
-              [-finalizeUpgrade]
-              [-rollingUpgrade [<query> |<prepare> |<finalize>]]
-              [-metasave filename]
-              [-refreshServiceAcl]
-              [-refreshUserToGroupsMappings]
-              [-refreshSuperUserGroupsConfiguration]
-              [-refreshCallQueue]
-              [-refresh <host:ipc_port> <key> [arg1..argn]]
-              [-reconfig <datanode |...> <host:ipc_port> <start |status>]
-              [-printTopology]
-              [-refreshNamenodes datanodehost:port]
-              [-deleteBlockPool datanode-host:port blockpoolId [force]]
-              [-setBalancerBandwidth <bandwidth in bytes per second>]
-              [-getBalancerBandwidth <datanode_host:ipc_port>]
-              [-allowSnapshot <snapshotDir>]
-              [-disallowSnapshot <snapshotDir>]
-              [-fetchImage <local directory>]
-              [-shutdownDatanode <datanode_host:ipc_port> [upgrade]]
-              [-getDatanodeInfo <datanode_host:ipc_port>]
-              [-triggerBlockReport [-incremental] <datanode_host:ipc_port>]
-              [-help [cmd]]
+        hdfs dfsadmin [-report [-live] [-dead] [-decommissioning]]
+        hdfs dfsadmin [-safemode enter | leave | get | wait | forceExit]
+        hdfs dfsadmin [-saveNamespace [-beforeShutdown]]
+        hdfs dfsadmin [-rollEdits]
+        hdfs dfsadmin [-restoreFailedStorage true |false |check]
+        hdfs dfsadmin [-refreshNodes]
+        hdfs dfsadmin [-setQuota <quota> <dirname>...<dirname>]
+        hdfs dfsadmin [-clrQuota <dirname>...<dirname>]
+        hdfs dfsadmin [-setSpaceQuota <quota> [-storageType <storagetype>] <dirname>...<dirname>]
+        hdfs dfsadmin [-clrSpaceQuota [-storageType <storagetype>] <dirname>...<dirname>]
+        hdfs dfsadmin [-finalizeUpgrade]
+        hdfs dfsadmin [-rollingUpgrade [<query> |<prepare> |<finalize>]]
+        hdfs dfsadmin [-refreshServiceAcl]
+        hdfs dfsadmin [-refreshUserToGroupsMappings]
+        hdfs dfsadmin [-refreshSuperUserGroupsConfiguration]
+        hdfs dfsadmin [-refreshCallQueue]
+        hdfs dfsadmin [-refresh <host:ipc_port> <key> [arg1..argn]]
+        hdfs dfsadmin [-reconfig <namenode|datanode> <host:ipc_port> <start |status |properties>]
+        hdfs dfsadmin [-printTopology]
+        hdfs dfsadmin [-refreshNamenodes datanodehost:port]
+        hdfs dfsadmin [-deleteBlockPool datanode-host:port blockpoolId [force]]
+        hdfs dfsadmin [-setBalancerBandwidth <bandwidth in bytes per second>]
+        hdfs dfsadmin [-getBalancerBandwidth <datanode_host:ipc_port>]
+        hdfs dfsadmin [-fetchImage <local directory>]
+        hdfs dfsadmin [-allowSnapshot <snapshotDir>]
+        hdfs dfsadmin [-disallowSnapshot <snapshotDir>]
+        hdfs dfsadmin [-shutdownDatanode <datanode_host:ipc_port> [upgrade]]
+        hdfs dfsadmin [-getDatanodeInfo <datanode_host:ipc_port>]
+        hdfs dfsadmin [-metasave filename]
+        hdfs dfsadmin [-triggerBlockReport [-incremental] <datanode_host:ipc_port>]
+        hdfs dfsadmin [-help [cmd]]
 
 | COMMAND\_OPTION | Description |
 |:---- |:---- |
 | `-report` `[-live]` `[-dead]` `[-decommissioning]` | Reports basic filesystem information and statistics. The dfs usage can be different from "du" usage, because it measures raw space used by replication, checksums, snapshots, etc. on all the DNs. Optional flags may be used to filter the list of displayed DataNodes. |
 | `-safemode` enter\|leave\|get\|wait\|forceExit | Safe mode maintenance command. Safe mode is a Namenode state in which it <br/>1. does not accept changes to the name space (read-only) <br/>2. does not replicate or delete blocks. <br/>Safe mode is entered automatically at Namenode startup, and the Namenode leaves safe mode automatically when the configured minimum percentage of blocks satisfies the minimum replication condition. If the Namenode detects any anomaly then it will linger in safe mode till that issue is resolved. If that anomaly is the consequence of a deliberate action, then the administrator can use -safemode forceExit to exit safe mode. The cases where forceExit may be required are<br/> 1. Namenode metadata is not consistent. If the Namenode detects that metadata has been modified out of band and can cause data loss, then the Namenode will enter forceExit state. At that point the user can either restart the Namenode with correct metadata files or forceExit (if data loss is acceptable).<br/>2. Rollback causes metadata to be replaced and rarely it can trigger safe mode forceExit state in the Namenode. In that case you may proceed by issuing -safemode forceExit.<br/> Safe mode can also be entered manually, but then it can only be turned off manually as well. |
-| `-saveNamespace` | Save current namespace into storage directories and reset edits log. Requires safe mode. |
+| `-saveNamespace` `[-beforeShutdown]` | Save current namespace into storage directories and reset edits log. Requires safe mode. If the "beforeShutdown" option is given, the NameNode does a checkpoint if and only if no checkpoint has been done during a time window (a configurable number of checkpoint periods). This is usually used before shutting down the NameNode to prevent potential fsimage/editlog corruption. |
 | `-rollEdits` | Rolls the edit log on the active NameNode. |
 | `-restoreFailedStorage` true\|false\|check | This option will turn on/off automatic attempt to restore failed storage replicas. If a failed storage becomes available again the system will attempt to restore edits and/or fsimage during checkpoint. 'check' option will return current setting. |
 | `-refreshNodes` | Re-read the hosts and exclude files to update the set of Datanodes that are allowed to connect to the Namenode and those that should be decommissioned or recommissioned. |
@@ -366,23 +404,23 @@ Usage:
 | `-clrSpaceQuota` `[-storageType <storagetype>]` \<dirname\>...\<dirname\> | See [HDFS Quotas Guide](../hadoop-hdfs/HdfsQuotaAdminGuide.html#Administrative_Commands) for the detail. |
 | `-finalizeUpgrade` | Finalize upgrade of HDFS. Datanodes delete their previous version working directories, followed by Namenode doing the same. This completes the upgrade process. |
 | `-rollingUpgrade` [\<query\>\|\<prepare\>\|\<finalize\>] | See [Rolling Upgrade document](../hadoop-hdfs/HdfsRollingUpgrade.html#dfsadmin_-rollingUpgrade) for the detail. |
-| `-metasave` filename | Save Namenode's primary data structures to *filename* in the directory specified by hadoop.log.dir property. *filename* is overwritten if it exists. *filename* will contain one line for each of the following<br/>1. Datanodes heart beating with Namenode<br/>2. Blocks waiting to be replicated<br/>3. Blocks currently being replicated<br/>4. Blocks waiting to be deleted |
 | `-refreshServiceAcl` | Reload the service-level authorization policy file. |
 | `-refreshUserToGroupsMappings` | Refresh user-to-groups mappings. |
 | `-refreshSuperUserGroupsConfiguration` | Refresh superuser proxy groups mappings |
 | `-refreshCallQueue` | Reload the call queue from config. |
 | `-refresh` \<host:ipc\_port\> \<key\> [arg1..argn] | Triggers a runtime-refresh of the resource specified by \<key\> on \<host:ipc\_port\>. All other args after are sent to the host. |
-| `-reconfig` \<datanode \|...\> \<host:ipc\_port\> \<start\|status\> | Start reconfiguration or get the status of an ongoing reconfiguration. The second parameter specifies the node type. Currently, only reloading DataNode's configuration is supported. |
+| `-reconfig` \<datanode \|namenode\> \<host:ipc\_port\> \<start\|status\|properties\> | Starts reconfiguration or gets the status of an ongoing reconfiguration, or gets a list of reconfigurable properties. The second parameter specifies the node type. |
 | `-printTopology` | Print a tree of the racks and their nodes as reported by the Namenode |
 | `-refreshNamenodes` datanodehost:port | For the given datanode, reloads the configuration files, stops serving the removed block-pools and starts serving new block-pools. |
 | `-deleteBlockPool` datanode-host:port blockpoolId [force] | If force is passed, block pool directory for the given blockpool id on the given datanode is deleted along with its contents, otherwise the directory is deleted only if it is empty. The command will fail if datanode is still serving the block pool. Refer to refreshNamenodes to shutdown a block pool service on a datanode. |
 | `-setBalancerBandwidth` \<bandwidth in bytes per second\> | Changes the network bandwidth used by each datanode during HDFS block balancing. \<bandwidth\> is the maximum number of bytes per second that will be used by each datanode. This value overrides the dfs.balance.bandwidthPerSec parameter. NOTE: The new value is not persistent on the DataNode. |
 | `-getBalancerBandwidth` \<datanode\_host:ipc\_port\> | Get the network bandwidth(in bytes per second) for the given datanode. This is the maximum network bandwidth used by the datanode during HDFS block balancing.|
+| `-fetchImage` \<local directory\> | Downloads the most recent fsimage from the NameNode and saves it in the specified local directory. |
 | `-allowSnapshot` \<snapshotDir\> | Allowing snapshots of a directory to be created. If the operation completes successfully, the directory becomes snapshottable. See the [HDFS Snapshot Documentation](./HdfsSnapshots.html) for more information. |
 | `-disallowSnapshot` \<snapshotDir\> | Disallowing snapshots of a directory to be created. All snapshots of the directory must be deleted before disallowing snapshots. See the [HDFS Snapshot Documentation](./HdfsSnapshots.html) for more information. |
-| `-fetchImage` \<local directory\> | Downloads the most recent fsimage from the NameNode and saves it in the specified local directory. |
 | `-shutdownDatanode` \<datanode\_host:ipc\_port\> [upgrade] | Submit a shutdown request for the given datanode. See [Rolling Upgrade document](./HdfsRollingUpgrade.html#dfsadmin_-shutdownDatanode) for the detail. |
 | `-getDatanodeInfo` \<datanode\_host:ipc\_port\> | Get the information about the given datanode. See [Rolling Upgrade document](./HdfsRollingUpgrade.html#dfsadmin_-getDatanodeInfo) for the detail. |
+| `-metasave` filename | Save Namenode's primary data structures to *filename* in the directory specified by hadoop.log.dir property. *filename* is overwritten if it exists. *filename* will contain one line for each of the following<br/>1. Datanodes heart beating with Namenode<br/>2. Blocks waiting to be replicated<br/>3. Blocks currently being replicated<br/>4. Blocks waiting to be deleted |
 | `-triggerBlockReport` `[-incremental]` \<datanode\_host:ipc\_port\> | Trigger a block report for the given datanode. If 'incremental' is specified, it will be an incremental block report; otherwise, it will be a full block report. |
 | `-help` [cmd] | Displays help for the given command or all commands if none is specified. |
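A few representative invocations drawn from the options above:

    hdfs dfsadmin -report -live
    hdfs dfsadmin -safemode get
    hdfs dfsadmin -printTopology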
 
@@ -411,12 +449,13 @@ Runs the ErasureCoding CLI. See [HDFS ErasureCoding](./HDFSErasureCoding.html#Ad
 
 Usage:
 
-        hdfs haadmin -checkHealth <serviceId>
+        hdfs haadmin -transitionToActive <serviceId> [--forceactive]
+        hdfs haadmin -transitionToStandby <serviceId>
        hdfs haadmin -failover [--forcefence] [--forceactive] <serviceId> <serviceId>
         hdfs haadmin -getServiceState <serviceId>
+        hdfs haadmin -checkHealth <serviceId>
         hdfs haadmin -help <command>
-        hdfs haadmin -transitionToActive <serviceId> [--forceactive]
-        hdfs haadmin -transitionToStandby <serviceId>
+
 
 | COMMAND\_OPTION | Description |
 |:---- |:---- |
@@ -425,6 +464,7 @@ Usage:
 | `-getServiceState` | determine whether the given NameNode is Active or Standby |
 | `-transitionToActive` | transition the state of the given NameNode to Active (Warning: No fencing is done) |
 | `-transitionToStandby` | transition the state of the given NameNode to Standby (Warning: No fencing is done) |
+| `-help` [cmd] | Displays help for the given command or all commands if none is specified. |
 
 See [HDFS HA with NFS](./HDFSHighAvailabilityWithNFS.html#Administrative_commands) or [HDFS HA with QJM](./HDFSHighAvailabilityWithQJM.html#Administrative_commands) for more information on this command.
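For example, with hypothetical NameNode service IDs nn1 and nn2:

    hdfs haadmin -getServiceState nn1
    hdfs haadmin -failover nn1 nn2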
 
@@ -509,9 +549,16 @@ Runs the HDFS secondary namenode. See [Secondary Namenode](./HdfsUserGuide.html#
 
 ### `storagepolicies`
 
-Usage: `hdfs storagepolicies`
+Usage:
+
+      hdfs storagepolicies
+          [-listPolicies]
+          [-setStoragePolicy -path <path> -policy <policy>]
+          [-getStoragePolicy -path <path>]
+          [-unsetStoragePolicy -path <path>]
+          [-help <command-name>]
 
-Lists out all storage policies. See the [HDFS Storage Policy Documentation](./ArchivalStorage.html) for more information.
+Lists all storage policies, or gets, sets, or unsets the storage policy on a given path. See the [HDFS Storage Policy Documentation](./ArchivalStorage.html) for more information.
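For instance (the path is hypothetical; COLD is one of the built-in policies), a directory can be assigned a policy and then verified:

    hdfs storagepolicies -setStoragePolicy -path /archive -policy COLD
    hdfs storagepolicies -getStoragePolicy -path /archive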
 
 ### `zkfc`
 
