http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqfilespace.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqfilespace.html.md.erb 
b/markdown/reference/cli/admin_utilities/hawqfilespace.html.md.erb
new file mode 100644
index 0000000..eeb7b39
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqfilespace.html.md.erb
@@ -0,0 +1,147 @@
+---
+title: hawq filespace
+---
+
+Creates a filespace using a configuration file that defines a file system 
location. Filespaces describe the physical file system resources to be used by 
a tablespace.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+hawq filespace [<connection_options>] 
+  -o <output_directory_name> | --output <output_directory_name>
+  [-l <logfile_directory> | --logdir <logfile_directory>] 
+
+hawq filespace [<connection_options>]  
+  -c <fs_config_file> | --config <fs_config_file> 
+  [-l <logfile_directory> | --logdir <logfile_directory>] 
+
+hawq filespace [<connection_options>]
+  --movefilespace <filespace> --location <dfslocation>
+  [-l <logfile_directory> | --logdir <logfile_directory>] 
+
+hawq filespace -v | --version 
+
+hawq filespace -? | --help
+```
+where:
+
+``` pre
+<connection_options> =
+  [-h <host> | --host <host>] 
+  [-p <port> | --port <port>] 
+  [-U <username> | --username <username>] 
+  [-W | --password] 
+```
+
+## <a id="topic1__section3"></a>Description
+
+A tablespace requires a file system location to store its database files. This 
file system location for all components in a HAWQ system is referred to as a 
*filespace*. Once a filespace is defined, it can be used by one or more 
tablespaces.
+
+The `--movefilespace` option allows you to relocate a filespace and its 
components within a dfs file system.
+
+When used with the `-o` option, the `hawq filespace` utility looks up your 
system configuration information in the system catalog tables and prompts you 
for the appropriate file system location needed to create the filespace. It 
then outputs a configuration file that can be used to create a filespace. If a 
file name is not specified, a `hawqfilespace_config_`*\#* file will be created 
in the current directory by default.
+
+Once you have a configuration file, you can run `hawq filespace` with the `-c` option to create the filespace in the HAWQ system.
+
+**Note:** If segments are down due to a power or NIC failure, you may see inconsistencies during filespace creation, and you may not be able to bring up the cluster.
+
+## <a id="topic1__section4"></a>Options
+
+<dt>-o, -\\\-output &lt;output\_directory\_name&gt;  </dt>
+<dd>The directory location and file name to output the generated filespace 
configuration file. You will be prompted to enter a name for the filespace and 
file system location. The file system locations must exist on all hosts in your 
system prior to running the `hawq filespace` command. You will specify the 
number of replicas to create. The default is 3 replicas. After the utility 
creates the configuration file, you can manually edit the file to make any 
required changes to the filespace layout before creating the filespace in 
HAWQ.</dd>
+
+<dt>-c, -\\\-config &lt;fs\_config\_file&gt;  </dt>
+<dd>A configuration file that contains:
+
+-   An initial line denoting the new filespace name. For example:
+
+    `filespace:&lt;myfs&gt;`
+
+-   One line for the master and for each segment, specifying a host name, an ordinal number, and a file system location, in the format &lt;host&gt;:&lt;number&gt;:&lt;fs\_location&gt; (as shown in the example configuration file under Example 1).
+</dd>
+
+<dt>-\\\-movefilespace &lt;filespace&gt;  </dt>
+<dd>Moves the filespace to a new location on a distributed file system by updating the dfs URL recorded in the HAWQ catalog. The data in the original location is not touched by this command; once the URL has been updated, the data in the original location can be moved or deleted separately.</dd>
+
+<dt>-\\\-location &lt;dfslocation&gt;  </dt>
+<dd>Specifies the new URL location to which a dfs file system should be 
moved.</dd>
+
+<dt>-l, -\\\-logdir &lt;logfile\_directory&gt;  </dt>
+<dd>The directory to write the log file. Defaults to `~/hawqAdminLogs`.</dd>
+
+<dt>-v, -\\\-version (show utility version)  </dt>
+<dd>Displays the version of this utility.</dd>
+
+<dt>-?, -\\\-help (help)  </dt>
+<dd>Displays the command usage and syntax.</dd>
+
+**&lt;connection_options&gt;**
+
+<dt>-h, -\\\-host &lt;hostname&gt;  </dt>
+<dd>The host name of the machine on which the HAWQ master database server is 
running. If not specified, reads from the environment variable `PGHOST` or 
defaults to localhost.</dd>
+
+<dt>-p, -\\\-port &lt;port&gt;  </dt>
+<dd>The TCP port on which the HAWQ master database server is listening for 
connections. If not specified, reads from the environment variable `PGPORT` or 
defaults to 5432.</dd>
+
+<dt>-U, -\\\-username &lt;superuser\_name&gt;  </dt>
+<dd>The database superuser role name to connect as. If not specified, reads 
from the environment variable `PGUSER` or defaults to the current system user 
name. Only database superusers are allowed to create filespaces.</dd>
+
+<dt>-W, -\\\-password  </dt>
+<dd>Force a password prompt.</dd>
+
+## <a id="topic1__section6"></a>Example 1
+
+Create a filespace configuration file. Depending on your system setup, you may 
need to specify the host and port. You will be prompted to enter a name for the 
filespace and a replica number. You will then be asked for the DFS location. 
The file system locations must exist on all hosts in your system prior to 
running the `hawq filespace` command:
+
+``` shell
+$ hawq filespace -o .
+```
+
+``` pre
+Enter a name for this filespace
+> fastdisk
+Enter replica num for filespace. If 0, default replica num is used (default=3)
+0
+Please specify the DFS location for the filespace (for example: 
localhost:9000/fs)
+location> localhost:9000/hawqfs
+
+20160203:11:35:42:272716 hawqfilespace:localhost:gpadmin-[INFO]:-[created]
+20160203:11:35:42:272716 hawqfilespace:localhost:gpadmin-[INFO]:-
+To add this filespace to the database please run the command:
+   hawqfilespace --config ./hawqfilespace_config_20160203_112711
+Checking your configuration: 
+
+Your system has 1 hosts with 2 primary segments 
+per host.
+
+Configuring hosts: [sdw1, sdw2] 
+
+Enter a file system location for the master:
+master location> /hawq_master_filespc
+```
+
+Example filespace configuration file:
+
+``` pre
+filespace:fastdisk
+mdw:1:/hawq_master_filespc/gp-1
+sdw1:2:/hawq_pri_filespc/gp0
+sdw2:3:/hawq_pri_filespc/gp1
+```
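The per-host lines in the configuration file above follow a simple `host:number:path` pattern. As an illustration only (this helper is hypothetical and not part of the `hawq filespace` utility), the format can be parsed like this:

``` python
# Sketch: parse a hawq filespace configuration file into its parts.
# Assumes the format shown above: a "filespace:<name>" header followed
# by one "host:number:path" line per instance.

def parse_filespace_config(text):
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    if not lines or not lines[0].startswith("filespace:"):
        raise ValueError("first line must be 'filespace:<name>'")
    name = lines[0].split(":", 1)[1]
    locations = []
    for ln in lines[1:]:
        # split on the first two colons only; the path may contain none
        host, number, path = ln.split(":", 2)
        locations.append((host, int(number), path))
    return name, locations

if __name__ == "__main__":
    sample = (
        "filespace:fastdisk\n"
        "mdw:1:/hawq_master_filespc/gp-1\n"
        "sdw1:2:/hawq_pri_filespc/gp0\n"
        "sdw2:3:/hawq_pri_filespc/gp1\n"
    )
    name, locs = parse_filespace_config(sample)
    print(name, len(locs))
```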
+
+Execute the configuration file to create the filespace:
+
+``` shell
+$ hawq filespace --config hawq_filespace_config_1
+```
+
+## Example 2
+
+Move the filespace `cdbfast_fs_a` to a new HDFS location:
+
+``` shell
+$ hawq filespace --movefilespace=cdbfast_fs_a
+      --location=hdfs://gphd-cluster/cdbfast_fs_a/
+```
+
+## <a id="topic1__section7"></a>See Also
+
+[CREATE TABLESPACE](../../sql/CREATE-TABLESPACE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqinit.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqinit.html.md.erb 
b/markdown/reference/cli/admin_utilities/hawqinit.html.md.erb
new file mode 100644
index 0000000..de45ef3
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqinit.html.md.erb
@@ -0,0 +1,156 @@
+---
+title: hawq init
+---
+
+The `hawq init cluster` command initializes a HAWQ system and starts it.
+
+Use the `hawq init master` and `hawq init segment` commands to individually 
initialize the master or segment nodes, respectively. Specify any format 
options at this time. The `hawq init standby` command initializes a standby 
master host for a HAWQ system.
+
+Use the `hawq init <object> --standby-host` option to define the host for a 
standby at initialization.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+hawq init <object> [--options]
+
+hawq init standby | cluster
+  [--standby-host <address_of_standby_host>] 
+  [<options>]
+
+hawq init -? | --help
+```
+where:
+
+``` pre
+<object> = cluster | master | segment | standby
+
+<options> =   
+  [-a] [-l <logfile_directory>] [-q] [-v] [-t] 
+  [-n]   
+  [--locale=<locale>] [--lc-collate=<locale>] 
+  [--lc-ctype=<locale>] [--lc-messages=<locale>] 
+  [--lc-monetary=<locale>] [--lc-numeric=<locale>] 
+  [--lc-time=<locale>] 
+  [--bucket_number <number>] 
+  [--max_connections <number>]  
+  [--shared_buffers <number>]
+```
+
+## <a id="topic1__section3"></a>Description
+
+The `hawq init <object>` utility creates a HAWQ instance using configuration 
parameters defined in `$GPHOME/etc/hawq-site.xml`. Before running this utility, 
verify that you have installed the HAWQ software on all the hosts in the array.
+
+In a HAWQ DBMS, each database instance (the master and all segments) must be initialized across all of the hosts in the system in a way that allows them to work together as a unified DBMS. The `hawq init cluster` utility initializes the HAWQ master and each segment instance, and configures the system as a whole. When `hawq init cluster` is run, the cluster comes online automatically without needing to be started explicitly. You can start a single-node cluster without any user-defined changes to the default `hawq-site.xml` file. For larger clusters, use the `template-hawq-site.xml` file to specify the configuration.
+
+To use the template for initializing a new cluster configuration, replace the items contained within the `%` markers. For example, in the element `<value>%master.host%</value>`, replace `%master.host%` with the master host name. After modification, rename the file to the name of the default configuration file: `hawq-site.xml`.
+
+
+-   Before initializing HAWQ, set the `$GPHOME` environment variable to point 
to the location of your HAWQ installation on the master host and exchange SSH 
keys between all host addresses in the array, using `hawq ssh-exkeys`.
+-   To initialize and start a HAWQ cluster, enter the following command on the 
master host:
+
+    ```shell
+    $ hawq init cluster
+    ```
+
+This utility performs the following tasks:
+
+-   Verifies that the parameters in the configuration file are correct.
+-   Ensures that a connection can be established to each host address. If a 
host address cannot be reached, the utility will exit.
+-   Verifies the locale settings.
+-   Initializes the master instance.
+-   Initializes the standby master instance (if specified).
+-   Initializes the segment instances.
+-   Configures the HAWQ system and checks for errors.
+-   Starts the HAWQ system.
+
+The `hawq init standby` utility can be run on either the currently active *primary* master host or on the standby node.
+
+`hawq init standby` performs the following steps:
+
+-   Updates the HAWQ system catalog to add the new standby master host information.
+-   Edits the `pg_hba.conf` file of the HAWQ master to allow access from the newly added standby master.
+-   Sets up the standby master instance on the alternate master host.
+-   Starts the synchronization process.
+
+A backup standby master host serves as a 'warm standby' in the event that the primary master host becomes non-operational. The standby master is kept up to date by transaction log replication processes (the `walsender` and `walreceiver`), which run on the primary master and standby master hosts and keep the data between the primary and standby master hosts synchronized. To add a standby master to an existing system, use the command `hawq init standby`. To configure the standby host name at initialization time, without needing to run `hawq config` afterward, use the `--standby-host` option. For example, to create a standby on `host09`, run either `hawq init standby --standby-host=host09` or `hawq init cluster --standby-host=host09`.
+
+If the primary master fails, the log replication process is shut down. Run the `hawq activate standby` utility to activate the standby master in its place. Upon activation of the standby master, the replicated logs are used to reconstruct the state of the master host at the time of the last successfully committed transaction.
+
+## Objects
+
+<dt>cluster  </dt>
+<dd>Initialize and start a HAWQ cluster.</dd>
+
+<dt>master  </dt>
+<dd>Initialize the HAWQ master.</dd>
+
+<dt>segment  </dt>
+<dd>Initialize a local segment node.</dd>
+
+<dt>standby  </dt>
+<dd>Initialize a HAWQ standby master.</dd>
+
+## <a id="topic1__section4"></a>Options
+
+<dt>-a (do not prompt)  </dt>
+<dd>Do not prompt the user for confirmation.</dd>
+
+
+<dt>-l, -\\\-logdir \<logfile\_directory\>  </dt>
+<dd>The directory to write the log file. Defaults to `~/hawqAdminLogs`.</dd>
+
+<dt>-q, -\\\-quiet (no screen output)  </dt>
+<dd>Run in quiet mode. Command output is not displayed on the screen, but is 
still written to the log file.</dd>
+
+<dt>-v, -\\\-verbose  </dt>
+<dd>Displays detailed status, progress and error messages and writes them to 
the log files.</dd>
+
+<dt>-t, -\\\-timeout  </dt>
+<dd>Sets timeout value in seconds. The default is 60 seconds.</dd>
+
+<dt>-n, -\\\-no-update  </dt>
+<dd>Resync the standby with the master, but do not update system catalog 
tables.</dd>
+
+<dt>-\\\-locale=\<locale\>   </dt>
+<dd>Sets the default locale used by HAWQ. If not specified, the `LC_ALL`, 
`LC_COLLATE`, or `LANG` environment variable of the master host determines the 
locale. If these are not set, the default locale is `C` (`POSIX`). A locale 
identifier consists of a language identifier and a region identifier, and 
optionally a character set encoding. For example, `sv_SE` is Swedish as spoken 
in Sweden, `en_US` is U.S. English, and `fr_CA` is French Canadian. If more 
than one character set can be useful for a locale, then the specifications look 
like this: `en_US.UTF-8` (locale specification and character set encoding). On 
most systems, the command `locale` will show the locale environment settings 
and `locale -a` will show a list of all available locales.</dd>
+
+<dt>-\\\-lc-collate=\<locale\>  </dt>
+<dd>Similar to `--locale`, but sets the locale used for collation (sorting 
data). The sort order cannot be changed after HAWQ is initialized, so it is 
important to choose a collation locale that is compatible with the character 
set encodings that you plan to use for your data. There is a special collation 
name of `C` or `POSIX` (byte-order sorting as opposed to dictionary-order 
sorting). The `C` collation can be used with any character encoding.</dd>
+
+<dt>-\\\-lc-ctype=\<locale\>  </dt>
+<dd>Similar to `--locale`, but sets the locale used for character 
classification (what character sequences are valid and how they are 
interpreted). This cannot be changed after HAWQ is initialized, so it is 
important to choose a character classification locale that is compatible with 
the data you plan to store in HAWQ.</dd>
+
+<dt>-\\\-lc-messages=\<locale\>  </dt>
+<dd>Similar to `--locale`, but sets the locale used for messages output by 
HAWQ. The current version of HAWQ does not support multiple locales for output 
messages (all messages are in English), so changing this setting will not have 
any effect.</dd>
+
+<dt>-\\\-lc-monetary=\<locale\>  </dt>
+<dd>Similar to `--locale`, but sets the locale used for formatting currency 
amounts.</dd>
+
+<dt>-\\\-lc-numeric=\<locale\>  </dt>
+<dd>Similar to `--locale`, but sets the locale used for formatting 
numbers.</dd>
+
+<dt>-\\\-lc-time=\<locale\>  </dt>
+<dd>Similar to `--locale`, but sets the locale used for formatting dates and 
times.</dd>
+
+<dt>-\\\-bucket\_number=\<number\>   </dt>
+<dd>Sets value of `default_hash_table_bucket_number`, which sets the default 
number of hash buckets for creating virtual segments. This parameter overrides 
the default value of `default_hash_table_bucket_number` set in `hawq-site.xml` 
by an Ambari install. If not specified, `hawq init` will use the value in 
`hawq-site.xml`.</dd>
+
+<dt>-\\\-max\_connections=\<number\>   </dt>
+<dd>Sets the number of client connections allowed to the master. The default 
is 250.</dd>
+
+<dt>-\\\-shared\_buffers \<number\>  </dt>
+<dd>Sets the number of shared\_buffers to be used when initializing HAWQ.</dd>
+
+<dt>-s, -\\\-standby-host \<name\_of\_standby\_host\>  </dt>
+<dd>Adds a standby host name to `hawq-site.xml` and syncs it to all the nodes. If a standby host name was already defined in `hawq-site.xml`, using this option will overwrite the existing value.</dd>
+
+<dt>-?, -\\\-help  </dt>
+<dd>Displays the online help.</dd>
+
+## <a id="topic1__section6"></a>Examples
+
+Initialize a standby master host for an existing HAWQ system:
+
+``` shell
+$ hawq init standby 
+```

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqload.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqload.html.md.erb 
b/markdown/reference/cli/admin_utilities/hawqload.html.md.erb
new file mode 100644
index 0000000..b9fe441
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqload.html.md.erb
@@ -0,0 +1,420 @@
+---
+title: hawq load
+---
+
+Acts as an interface to the external table parallel loading feature. Executes 
a load specification defined in a YAML-formatted control file to invoke the 
HAWQ parallel file server (`gpfdist`).
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+hawq load -f <control_file> [-l <log_file>]   
+  [--gpfdist_timeout <seconds>] 
+  [[-v | -V] 
+  [-q]]
+  [-D]
+  [<connection_options>]
+
+hawq load -? 
+
+hawq load --version
+```
+where:
+
+``` pre
+<connection_options> =
+  [-h <host>] 
+  [-p <port>] 
+  [-U <username>] 
+  [-d <database>]
+  [-W]
+```
+
+## <a id="topic1__section3"></a>Prerequisites
+
+The client machine where `hawq load` is executed must have the following:
+
+-   Python 2.6.2 or later, `pygresql` (the Python interface to PostgreSQL), 
and `pyyaml`. Note that Python and the required Python libraries are included 
with the HAWQ server installation, so if you have HAWQ installed on the machine 
where `hawq load` is running, you do not need a separate Python installation.
+    **Note:** The HAWQ Loaders package for Windows supports only Python 2.5 (available from [www.python.org](http://python.org)).
+
+-   The [gpfdist](gpfdist.html#topic1) parallel file distribution program 
installed and in your `$PATH`. This program is located in `$GPHOME/bin` of your 
HAWQ server installation.
+-   Network access to and from all hosts in your HAWQ array (master and 
segments).
+-   Network access to and from the hosts where the data to be loaded resides 
(ETL servers).
+
+## <a id="topic1__section4"></a>Description
+
+`hawq load` is a data loading utility that acts as an interface to HAWQ's external table parallel loading feature. Using a load specification defined in a YAML-formatted control file, `hawq load` executes a load by invoking the HAWQ parallel file server ([gpfdist](gpfdist.html#topic1)), creating an external table definition based on the source data defined, and executing an `INSERT` operation to load the source data into the target table in the database.
+
+The operation, including any SQL commands specified in the `SQL` collection of the YAML control file (see [Control File Format](#topic1__section7)), is performed as a single transaction to prevent inconsistent data when performing multiple, simultaneous load operations on a target table.
+
+## <a id="args"></a>Arguments
+
+<dt>-f &lt;control\_file&gt;  </dt>
+<dd>A YAML file that contains the load specification details. See [Control 
File Format](#topic1__section7).</dd>
+
+## <a id="topic1__section5"></a>Options
+
+<dt>-\\\-gpfdist\_timeout &lt;seconds&gt;  </dt>
+<dd>Sets the timeout for the `gpfdist` parallel file distribution program to send a response. Enter a value from `0` to `30` seconds (a value of `0` disables the timeout). Note that you might need to increase this value when operating on high-traffic networks.</dd>
+
+<dt>-l &lt;log\_file&gt;  </dt>
+<dd>Specifies where to write the log file. Defaults to `~/hawqAdminLogs/hawq_load_YYYYMMDD`. For more information about the log file, see [Log File Format](#topic1__section9).</dd>
+
+<dt>-q (no screen output)  </dt>
+<dd>Run in quiet mode. Command output is not displayed on the screen, but is 
still written to the log file.</dd>
+
+<dt>-D (debug mode)  </dt>
+<dd>Check for error conditions, but do not execute the load.</dd>
+
+<dt>-v (verbose mode)  </dt>
+<dd>Show verbose output of the load steps as they are executed.</dd>
+
+<dt>-V (very verbose mode)  </dt>
+<dd>Shows very verbose output.</dd>
+
+<dt>-? (show help)  </dt>
+<dd>Show help, then exit.</dd>
+
+<dt>-\\\-version  </dt>
+<dd>Show the version of this utility, then exit.</dd>
+
+**Connection Options**
+
+<dt>-d &lt;database&gt;  </dt>
+<dd>The database to load into. If not specified, reads from the load control 
file, the environment variable `$PGDATABASE` or defaults to the current system 
user name.</dd>
+
+<dt>-h &lt;hostname&gt;  </dt>
+<dd>Specifies the host name of the machine on which the HAWQ master database 
server is running. If not specified, reads from the load control file, the 
environment variable `$PGHOST` or defaults to `localhost`.</dd>
+
+<dt>-p &lt;port&gt;  </dt>
+<dd>Specifies the TCP port on which the HAWQ master database server is 
listening for connections. If not specified, reads from the load control file, 
the environment variable `$PGPORT` or defaults to 5432.</dd>
+
+<dt>-U &lt;username&gt;  </dt>
+<dd>The database role name to connect as. If not specified, reads from the 
load control file, the environment variable `$PGUSER` or defaults to the 
current system user name.</dd>
+
+<dt>-W (force password prompt)  </dt>
+<dd>Force a password prompt. If not specified, reads the password from the environment variable `$PGPASSWORD` or from a password file specified by `$PGPASSFILE` or in `~/.pgpass`. If these are not set, then `hawq load` will prompt for a password even if `-W` is not supplied.</dd>
+
+## <a id="topic1__section7"></a>Control File Format
+
+The `hawq load` control file uses the [YAML 1.1](http://yaml.org/spec/1.1/) 
document format and then implements its own schema for defining the various 
steps of a HAWQ load operation. The control file must be a valid YAML document.
+
+The `hawq load` program processes the control file document in order and uses 
indentation (spaces) to determine the document hierarchy and the relationships 
of the sections to one another. The use of white space is significant. White 
space should not be used simply for formatting purposes, and tabs should not be 
used at all.
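Since tab characters are invalid anywhere in the control file, a quick pre-flight scan can catch them before a load is attempted. A minimal sketch (a hypothetical helper, not part of `hawq load` itself):

``` python
# Sketch: pre-flight check for a hawq load YAML control file.
# Flags tab characters, which the control file must not contain.

def find_tab_lines(text):
    """Return the 1-based line numbers that contain a tab character."""
    return [i for i, line in enumerate(text.splitlines(), start=1)
            if "\t" in line]

if __name__ == "__main__":
    sample = "GPLOAD:\n   INPUT:\n\t- SOURCE:\n"
    print(find_tab_lines(sample))  # the third line uses a tab
```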
+
+The basic structure of a load control file is:
+
+``` pre
+---
+VERSION: 1.0.0.1
+DATABASE: db_name
+USER: db_username
+HOST: master_hostname
+PORT: master_port
+GPLOAD:
+   INPUT:
+    - SOURCE:
+         LOCAL_HOSTNAME:
+           - hostname_or_ip
+         PORT: http_port
+       | PORT_RANGE: [start_port_range, end_port_range]
+         FILE: 
+           - /path/to/input_file
+         SSL: true | false
+         CERTIFICATES_PATH: /path/to/certificates
+    - COLUMNS:
+           - field_name: data_type
+    - TRANSFORM: 'transformation'
+    - TRANSFORM_CONFIG: 'configuration-file-path' 
+    - MAX_LINE_LENGTH: integer 
+    - FORMAT: text | csv
+    - DELIMITER: 'delimiter_character'
+    - ESCAPE: 'escape_character' | 'OFF'
+    - NULL_AS: 'null_string'
+    - FORCE_NOT_NULL: true | false
+    - QUOTE: 'csv_quote_character'
+    - HEADER: true | false
+    - ENCODING: database_encoding
+    - ERROR_LIMIT: integer
+    - ERROR_TABLE: schema.table_name
+   OUTPUT:
+    - TABLE: schema.table_name
+    - MODE: insert | update | merge
+    - MATCH_COLUMNS:
+           - target_column_name
+    - UPDATE_COLUMNS:
+           - target_column_name
+    - UPDATE_CONDITION: 'boolean_condition'
+    - MAPPING:
+              target_column_name: source_column_name | 'expression'
+   PRELOAD:
+    - TRUNCATE: true | false
+    - REUSE_TABLES: true | false
+   SQL:
+    - BEFORE: "sql_command"
+    - AFTER: "sql_command"
+```
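To make the schema above concrete, here is a minimal, hypothetical control file. All names in it (the `etl1` host, the file path, the `payables` schema and tables) are illustrative, not defaults:

``` pre
---
VERSION: 1.0.0.1
DATABASE: ops
USER: gpadmin
HOST: mdw
PORT: 5432
GPLOAD:
   INPUT:
    - SOURCE:
         LOCAL_HOSTNAME:
           - etl1
         PORT: 8081
         FILE:
           - /var/load/expenses/*.txt
    - FORMAT: text
    - DELIMITER: '|'
    - ERROR_LIMIT: 25
    - ERROR_TABLE: payables.err_expenses
   OUTPUT:
    - TABLE: payables.expenses
    - MODE: insert
```

Because `COLUMNS` is omitted here, the source files must match the column order, count, and format of the target `payables.expenses` table.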
+
+**Control File Schema Elements**  
+
+The control file contains the schema elements for:
+
+-   Version
+-   Database
+-   User
+-   Host
+-   Port
+-   GPLOAD file
+
+<dt>VERSION  </dt>
+<dd>Optional. The version of the `hawq load` control file schema, for example: 
1.0.0.1.</dd>
+
+<dt>DATABASE  </dt>
+<dd>Optional. Specifies which database in HAWQ to connect to. If not 
specified, defaults to `$PGDATABASE` if set or the current system user name. 
You can also specify the database on the command line using the `-d` 
option.</dd>
+
+<dt>USER  </dt>
+<dd>Optional. Specifies which database role to use to connect. If not 
specified, defaults to the current user or `$PGUSER` if set. You can also 
specify the database role on the command line using the `-U` option.
+
+If the user running `hawq load` is not a HAWQ superuser, then the server 
configuration parameter `gp_external_grant_privileges` must be set to `on` for 
the load to be processed.</dd>
+
+<dt>HOST  </dt>
+<dd>Optional. Specifies HAWQ master host name. If not specified, defaults to 
localhost or `$PGHOST` if set. You can also specify the master host name on the 
command line using the `-h` option.</dd>
+
+<dt>PORT  </dt>
+<dd>Optional. Specifies HAWQ master port. If not specified, defaults to 5432 
or `$PGPORT` if set. You can also specify the master port on the command line 
using the `-p` option.</dd>
+
+<dt>GPLOAD  </dt>
+<dd>Required. Begins the load specification section. A `GPLOAD` specification 
must have an `INPUT` and an `OUTPUT` section defined.</dd>
+
+<dt>INPUT  </dt>
+<dd>Required element. Defines the location and the format of the input data to 
be loaded. `hawq load` will start one or more instances of the 
[gpfdist](gpfdist.html#topic1) file distribution program on the current host 
and create the required external table definition(s) in HAWQ that point to the 
source data. Note that the host from which you run `hawq load` must be 
accessible over the network by all HAWQ hosts (master and segments).</dd>
+
+<dt>SOURCE  </dt>
+<dd>Required. The `SOURCE` block of an `INPUT` specification defines the 
location of a source file. An `INPUT` section can have more than one `SOURCE` 
block defined. Each `SOURCE` block defined corresponds to one instance of the 
[gpfdist](gpfdist.html#topic1) file distribution program that will be started 
on the local machine. Each `SOURCE` block defined must have a `FILE` 
specification.</dd>
+
+<dt>LOCAL\_HOSTNAME  </dt>
+<dd>Optional. Specifies the host name or IP address of the local machine on which `hawq load` is running. If this machine is configured with multiple network interface cards (NICs), you can specify the host name or IP of each individual NIC to allow network traffic to use all NICs simultaneously. The default is to use the local machine's primary host name or IP only.</dd>
+
+<dt>PORT  </dt>
+<dd>Optional. Specifies the specific port number that the [gpfdist](gpfdist.html#topic1) file distribution program should use. You can also supply a `PORT_RANGE` to select an available port from the specified range. If both `PORT` and `PORT_RANGE` are defined, then `PORT` takes precedence. If neither `PORT` nor `PORT_RANGE` is defined, the default is to select an available port between 8000 and 9000.
+
+If multiple host names are declared in `LOCAL_HOSTNAME`, this port number is used for all hosts. This configuration is useful if you want to use all NICs to load the same file or set of files in a given directory location.</dd>
+
+<dt>PORT\_RANGE  </dt>
+<dd>Optional. Can be used instead of `PORT` to supply a range of port numbers from which `hawq load` can choose an available port for this instance of the [gpfdist](gpfdist.html#topic1) file distribution program.</dd>
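The port-selection behavior described above (scan a range, take the first free port) can be sketched with standard sockets. This illustrates the idea only; it is not `hawq load`'s actual implementation:

``` python
import socket

# Sketch: pick the first free TCP port in a range, mimicking how a
# PORT_RANGE lets hawq load choose an available port for gpfdist.
def pick_free_port(start, end, host="127.0.0.1"):
    for port in range(start, end + 1):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind((host, port))
                return port  # bind succeeded, so the port is free
            except OSError:
                continue  # port in use; try the next one
    raise RuntimeError(f"no free port in {start}-{end}")

if __name__ == "__main__":
    print(pick_free_port(8000, 9000))
```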
+
+<dt>FILE  </dt>
+<dd>Required. Specifies the location of a file, named pipe, or directory 
location on the local file system that contains data to be loaded. You can 
declare more than one file so long as the data is of the same format in all 
files specified.
+
+If the files are compressed using `gzip` or `bzip2` (have a `.gz` or `.bz2` 
file extension), the files will be uncompressed automatically (provided that 
`gunzip` or `bunzip2` is in your path).
+
+When specifying which source files to load, you can use the wildcard character 
(`*`) or other C-style pattern matching to denote multiple files. The files 
specified are assumed to be relative to the current directory from which `hawq  
                                                 load` is executed (or you can 
declare an absolute path).</dd>
+
+<dt>SSL  </dt>
+<dd>Optional. Specifies usage of SSL encryption.</dd>
+
+<dt>CERTIFICATES\_PATH  </dt>
+<dd>Required when SSL is `true`; cannot be specified when SSL is `false` or 
unspecified. The location specified in `CERTIFICATES_PATH` must contain the 
following files:
+
+-   The server certificate file, `server.crt`
+-   The server private key file, `server.key`
+-   The trusted certificate authorities, `root.crt`
+
+The root directory (`/`) cannot be specified as `CERTIFICATES_PATH`.</dd>
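The three required certificate files can be checked up front before starting an SSL load. A minimal sketch (a hypothetical helper, not provided by `hawq load`):

``` python
from pathlib import Path

# Sketch: verify that a CERTIFICATES_PATH directory contains the three
# files hawq load requires when SSL is true, and reject the root dir.
REQUIRED = ("server.crt", "server.key", "root.crt")

def missing_cert_files(cert_dir):
    d = Path(cert_dir)
    if d == Path("/"):
        raise ValueError("the root directory cannot be CERTIFICATES_PATH")
    return [name for name in REQUIRED if not (d / name).is_file()]
```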
+
+<dt>COLUMNS  </dt>
+<dd>Optional. Specifies the schema of the source data file(s) in the format of 
`field_name:data_type`. The `DELIMITER` character in the source file is what 
separates two data value fields (columns). A row is determined by a line feed 
character (`0x0a`).
+
+If the input `COLUMNS` are not specified, then the schema of the output 
`TABLE` is implied, meaning that the source data must have the same column 
order, number of columns, and data format as the target table.
+
+The default source-to-target mapping is based on a match of column names as 
defined in this section and the column names in the target `TABLE`. This 
default mapping can be overridden using the `MAPPING` section.</dd>
+
+<dt>TRANSFORM  </dt>
+<dd>Optional. Specifies the name of the input XML transformation passed to `hawq load`. <span class="ph">For more information about XML transformations, see [&quot;Loading and Unloading Data&quot;](../../../datamgmt/load/g-loading-and-unloading-data.html#topic1).</span></dd>
+
+<dt>TRANSFORM\_CONFIG  </dt>
+<dd>Optional. Specifies the location of the XML transformation configuration 
file that is specified in the `TRANSFORM` parameter, above.</dd>
+
+<dt>MAX\_LINE\_LENGTH  </dt>
+<dd>Optional. An integer that specifies the maximum length of a line in the 
XML transformation data passed to `hawq load`.</dd>
+
+<dt>FORMAT  </dt>
+<dd>Optional. Specifies the format of the source data file(s): either plain text (`TEXT`) or comma separated values (`CSV`) format. Defaults to `TEXT` if not specified.<span class="ph"> For more information about the format of the source data, see [&quot;Loading and Unloading Data&quot;](../../../datamgmt/load/g-loading-and-unloading-data.html#topic1).</span></dd>
+
+<dt>DELIMITER  </dt>
+<dd>Optional. Specifies a single ASCII character that separates columns within each row (line) of data. The default is a tab character in `TEXT` mode and a comma in `CSV` mode. You can also specify a non-printable ASCII character via an escape sequence using the decimal representation of the ASCII character. For example, `\014` represents the shift-out character.</dd>
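As a quick way to confirm which character a decimal escape such as `\014` names, note that ASCII code 14 is the shift-out (SO) control character:

``` python
# \014 in a DELIMITER setting names ASCII code 14, the "shift out"
# (SO) control character; chr() maps the code to the character.
delimiter = chr(14)
print(ord(delimiter), repr(delimiter))
```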
+
+<dt>ESCAPE  </dt>
+<dd>Optional. Specifies the single character that is used for C escape sequences (such as `\n`, `\t`, `\100`, and so on) and for escaping data characters that might otherwise be taken as row or column delimiters. Make sure to choose an escape character that is not used anywhere in your actual column data. The default escape character is a `\` (backslash) for text-formatted files and a `"` (double quote) for csv-formatted files; however, it is possible to specify another character to represent an escape. It is also possible to disable escaping in text-formatted files by specifying the value `'OFF'` as the escape value. This is very useful for data such as text-formatted web log data that has many embedded backslashes that are not intended to be escapes.</dd>
+
+<dt>NULL\_AS  </dt>
+<dd>Optional. Specifies the string that represents a null value. The default 
is `\N` (backslash-N) in `TEXT` mode, and an empty value with no quotations in 
`CSV` mode. You might prefer an empty string even in `TEXT` mode for cases 
where you do not want to distinguish nulls from empty strings. Any source data 
item that matches this string will be considered a null value.</dd>
+
+<dt>FORCE\_NOT\_NULL  </dt>
+<dd>Optional. In CSV mode, processes each specified column as though it were 
quoted and hence not a NULL value. For the default null string in CSV mode 
(nothing between two delimiters), this causes missing values to be evaluated as 
zero-length strings.</dd>
+
+<dt>QUOTE  </dt>
+<dd>Required when `FORMAT` is `CSV`. Specifies the quotation character for 
`CSV` mode. The default is double-quote (`"`).</dd>
+
+<dt>HEADER  </dt>
+<dd>Optional. Specifies that the first line in the data file(s) is a header 
row (contains the names of the columns) and should not be included as data to 
be loaded. If using multiple data source files, all files must have a header 
row. The default is to assume that the input files do not have a header 
row.</dd>
+
+<dt>ENCODING  </dt>
+<dd>Optional. Character set encoding of the source data. Specify a string 
constant (such as `'SQL_ASCII'`), an integer encoding number, or `'DEFAULT'` to 
use the default client encoding. If not specified, the default client encoding 
is used.</dd>
+
+<dt>ERROR\_LIMIT  </dt>
+<dd>Optional. Sets the error limit count for HAWQ segment instances during 
input processing. Error rows will be written to the table specified in 
`ERROR_TABLE`. The value of ERROR\_LIMIT must be 2 or greater.</dd>
+
+<dt>ERROR\_TABLE  </dt>
+<dd>Optional when `ERROR_LIMIT` is declared. Specifies an error table where 
rows with formatting errors will be logged when running in single row error 
isolation mode. You can then examine this error table to see error rows that 
were not loaded (if any). If the `ERROR_TABLE` specified already exists, it 
will be used. If it does not exist, it will be automatically generated.
+
+For more information about handling load errors, see "[Loading and Unloading 
Data](../../../datamgmt/load/g-loading-and-unloading-data.html#topic1)".</dd>
+
+<dt>OUTPUT   </dt>
+<dd>Required element. Defines the target table and final data column values 
that are to be loaded into the database.</dd>
+
+<dt>TABLE  </dt>
+<dd>Required. The name of the target table to load into.</dd>
+
+<dt>MODE  </dt>
+<dd>Optional. Defaults to `INSERT` if not specified. There are three available 
load modes:</dd>
+
+<dt>INSERT  </dt>
+<dd>Loads data into the target table using the following method:
+
+``` pre
+INSERT INTO target_table SELECT * FROM input_data;
+```
+</dd>
+
+<dt>UPDATE</dt>
+<dd>Updates the `UPDATE_COLUMNS` of the target table where the rows have 
`MATCH_COLUMNS` attribute values equal to those of the input data, and the 
optional `UPDATE_CONDITION` is true.</dd>
+
+<dt>MERGE</dt>
+<dd>Inserts new rows and updates the `UPDATE_COLUMNS` of existing rows where `MATCH_COLUMNS` attribute values are equal to those of the input data, and the optional `UPDATE_CONDITION` is true. New rows are identified when the `MATCH_COLUMNS` value in the source data does not have a corresponding value in the existing data of the target table. In those cases, the **entire row** from the source file is inserted, not only the `MATCH` and `UPDATE` columns. If multiple new rows share the same `MATCH_COLUMNS` value, only one row for that value is inserted. Use `UPDATE_CONDITION` to filter out the rows to discard.</dd>
+
+<dt>MATCH\_COLUMNS  </dt>
+<dd>Required if `MODE` is `UPDATE` or `MERGE`. Specifies the column(s) to use 
as the join condition for the update. The attribute value in the specified 
target column(s) must be equal to that of the corresponding source data 
column(s) in order for the row to be updated in the target table.</dd>
+
+<dt>UPDATE\_COLUMNS  </dt>
+<dd>Required if `MODE` is `UPDATE` or `MERGE`. Specifies the column(s) to 
update for the rows that meet the `MATCH_COLUMNS` criteria and the optional 
`UPDATE_CONDITION`.</dd>
+
+<dt>UPDATE\_CONDITION  </dt>
+<dd>Optional. Specifies a Boolean condition (similar to what you would declare 
in a `WHERE` clause) that must be met for a row in the target table to be 
updated (or inserted in the case of a `MERGE`).</dd>
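+
+To illustrate how these elements fit together, a hypothetical `OUTPUT` section for a `MERGE` load (the table and column names are assumed, not from a real system) might look like:
+
+``` pre
+OUTPUT:
+ - TABLE: payables.expenses
+ - MODE: MERGE
+ - MATCH_COLUMNS:
+    - name
+    - date
+ - UPDATE_COLUMNS:
+    - amount
+ - UPDATE_CONDITION: 'amount > 0'
+```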
+
+<dt>MAPPING  </dt>
+<dd>Optional. If a mapping is specified, it overrides the default 
source-to-target column mapping. The default source-to-target mapping is based 
on a match of column names as defined in the source `COLUMNS` section and the 
column names of the target `TABLE`. A mapping is specified as either:
+
+`target_column_name: source_column_name`
+
+or
+
+`target_column_name: 'expression'`
+
+Where &lt;expression&gt; is any expression that you would specify in the 
`SELECT` list of a query, such as a constant value, a column reference, an 
operator invocation, a function call, and so on.</dd>
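+
+As a hypothetical sketch, the fragment below maps target column `name` directly to a source column of the same name, and computes `amount` from an expression (all names here are illustrative):
+
+``` pre
+ - MAPPING:
+      name: name
+      amount: 'amount * 100'
+```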
+
+<dt>PRELOAD  </dt>
+<dd>Optional. Specifies operations to run prior to the load operation. 
Currently, the only preload operation is `TRUNCATE`.</dd>
+
+<dt>TRUNCATE  </dt>
+<dd>Optional. If set to true, `hawq load` will remove all rows in the target 
table prior to loading it.</dd>
+
+<dt>REUSE\_TABLES  </dt>
+<dd>Optional. If set to true, `hawq load` will not drop the external table 
objects and staging table objects it creates. These objects will be reused for 
future load operations that use the same load specifications. Reusing objects 
improves performance of trickle loads (ongoing small loads to the same target 
table).</dd>
+
+<dt>SQL  </dt>
+<dd>Optional. Defines SQL commands to run before and/or after the load 
operation. Commands that contain spaces or special characters must be enclosed 
in quotes. You can specify multiple `BEFORE` and/or `AFTER` commands. List 
commands in the desired order of execution.</dd>
+
+<dt>BEFORE  </dt>
+<dd>Optional. A SQL command to run before the load operation starts. Enclose 
commands in quotes.</dd>
+
+<dt>AFTER  </dt>
+<dd>Optional. A SQL command to run after the load operation completes. Enclose 
commands in quotes.</dd>
+
+## Notes
+
+If your database object names were created using a double-quoted identifier 
(delimited identifier), you must specify the delimited name within single 
quotes in the `hawq load` control file. For example, if you create a table as 
follows:
+
+``` sql
+CREATE TABLE "MyTable" ("MyColumn" text);
+```
+
+Your YAML-formatted `hawq load` control file would refer to the above table 
and column names as follows:
+
+``` pre
+- COLUMNS:
+   - '"MyColumn"': text
+OUTPUT:
+   - TABLE: public.'"MyTable"'
+```
+
+## <a id="topic1__section9"></a>Log File Format
+
+Log files output by `hawq load` have the following format:
+
+``` pre
+timestamp|level|message
+```
+
+Where &lt;timestamp&gt; takes the form `YYYY-MM-DD HH:MM:SS`, &lt;level&gt; is one of `DEBUG`, `LOG`, `INFO`, or `ERROR`, and &lt;message&gt; is a normal text message.
+
+Some `INFO` messages that may be of interest in the log files are (where *\#* 
corresponds to the actual number of seconds, units of data, or failed rows):
+
+``` pre
+INFO|running time: #.## seconds
+INFO|transferred #.# kB of #.# kB.
+INFO|hawq load succeeded
+INFO|hawq load succeeded with warnings
+INFO|hawq load failed
+INFO|1 bad row
+INFO|# bad rows
+```
+
+## <a id="topic1__section10"></a>Examples
+
+Run a load job as defined in `my_load.yml`:
+
+``` shell
+$ hawq load -f my_load.yml
+```
+
+Example load control file:
+
+``` pre
+---
+VERSION: 1.0.0.1
+DATABASE: ops
+USER: gpadmin
+HOST: mdw-1
+PORT: 5432
+GPLOAD:
+   INPUT:
+    - SOURCE:
+         LOCAL_HOSTNAME:
+           - etl1-1
+           - etl1-2
+           - etl1-3
+           - etl1-4
+         PORT: 8081
+         FILE: 
+           - /var/load/data/*
+    - COLUMNS:
+           - name: text
+           - amount: float4
+           - category: text
+           - desc: text
+           - date: date
+    - FORMAT: text
+    - DELIMITER: '|'
+    - ERROR_LIMIT: 25
+    - ERROR_TABLE: payables.err_expenses
+   OUTPUT:
+    - TABLE: payables.expenses
+    - MODE: INSERT
+   SQL:
+   - BEFORE: "INSERT INTO audit VALUES('start', current_timestamp)"
+   - AFTER: "INSERT INTO audit VALUES('end', current_timestamp)"
+```
+
+## <a id="topic1__section11"></a>See Also
+
+[gpfdist](gpfdist.html#topic1), [CREATE EXTERNAL 
TABLE](../../sql/CREATE-EXTERNAL-TABLE.html#topic1)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqregister.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqregister.html.md.erb 
b/markdown/reference/cli/admin_utilities/hawqregister.html.md.erb
new file mode 100644
index 0000000..c230d6d
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqregister.html.md.erb
@@ -0,0 +1,254 @@
+---
+title: hawq register
+---
+
+Loads and registers AO or Parquet-formatted tables in HDFS into a 
corresponding table in HAWQ.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+Usage 1:
+hawq register [<connection_options>] [-f <hdfsfilepath>] [-e <eof>] <tablename>
+
+Usage 2:
+hawq register [<connection_options>] [-c <configfilepath>] [-F] <tablename>
+
+Connection Options:
+     [-h | --host <hostname>] 
+     [-p | --port <port>] 
+     [-U | --user <username>] 
+     [-d | --database <database>]
+     
+Misc. Options:
+     [-f | --filepath <filepath>] 
+     [-e | --eof <eof>]
+     [-F | --force ] 
+     [-c | --config <yml_config>]  
+hawq register help | -? 
+hawq register --version
+```
+
+## <a id="topic1__section3"></a>Prerequisites
+
+The client machine where `hawq register` is executed must meet the following 
conditions:
+
+-   All hosts in your HAWQ cluster (master and segments) must have network 
access between them and the hosts containing the data to be loaded.
+-   The Hadoop client must be configured and the HDFS filepath specified.
+-   The files to be registered and the HAWQ table must be located in the same 
HDFS cluster.
+-   The target table DDL is configured with the correct data type mapping.
+
+## <a id="topic1__section4"></a>Description
+
+`hawq register` is a utility that loads and registers existing data files or folders in HDFS into HAWQ internal tables, allowing HAWQ to read the data directly and use internal table processing for operations such as transactions and high-performance query execution, without needing to load or copy the data. Data from the file or directory specified by \<hdfsfilepath\> is loaded into the appropriate HAWQ table directory in HDFS, and the utility updates the corresponding HAWQ metadata for the files. 
+
+You can use `hawq register` to:
+
+-  Load and register external Parquet-formatted file data generated by an 
external system such as Hive or Spark.
+-  Recover cluster data from a backup cluster.
+
+Two usage models are available.
+
+### Usage Model 1: Register file data to an existing table
+
+`hawq register [-h hostname] [-p port] [-U username] [-d databasename] [-f filepath] [-e eof] <tablename>`
+
+Metadata for the Parquet file(s) and the destination table must be consistent. HAWQ tables and Parquet files use different data types, so the data types must be mapped. Refer to the section [Data Type Mapping](hawqregister.html#topic1__section7) below. You must verify that the structure of the Parquet files and the HAWQ table are compatible before running `hawq register`. 
+
+#### Limitations
+
+Only HAWQ or Hive-generated Parquet tables are supported. Hash tables and partitioned tables are not supported in this usage model.
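+
+For example, a Parquet file generated by Hive could be registered into an existing table as follows (the database, HDFS path, and table name here are illustrative):
+
+``` pre
+hawq register -d postgres -f hdfs://mdw:8020/hive/warehouse/parq_table parq_hawq_table
+```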
+
+### Usage Model 2: Use information from a YAML configuration file to register data
+ 
+`hawq register [-h hostname] [-p port] [-U username] [-d databasename] [-c configfile] [--force] <tablename>`
+
+Files generated by the `hawq extract` command are registered through use of 
metadata in a YAML configuration file. Both AO and Parquet tables can be 
registered. Tables need not exist in HAWQ before being registered.
+
+The register process behaves differently depending on the state of the target table: 
+
+-  If the table already exists, the files are appended to it.
+-  If the table does not exist, it is created and registered into HAWQ. 
+-  If the -\\\-force option is used, the data in existing catalog tables is erased and re-registered.
+
+
+### Limitations for Registering Hive Tables to HAWQ
+The following Hive data types are currently supported when registering Hive tables into HAWQ tables: boolean, int, smallint, tinyint, bigint, float, double, string, binary, char, and varchar.  
+
+The following HIVE data types cannot be converted to HAWQ equivalents: 
timestamp, decimal, array, struct, map, and union.   
+
+Only single-level partitioned tables are supported.
+
+### <a id="topic1__section7"></a>Data Type Mapping
+
+HAWQ, Parquet, and Hive tables use different data types, so the data types must be mapped for compatibility. You are responsible for making sure your implementation is mapped to the appropriate data type before running `hawq register`. The tables below show equivalent data types, where available.
+
+<span class="tablecap">Table 1. HAWQ to Parquet Mapping</span>
+
+|HAWQ Data Type   | Parquet Data Type  |
+| :------------| :---------------|
+| bool        | boolean       |
+| int2/int4/date        | int32       |
+| int8/money       | int64      |
+| time/timestamptz/timestamp       | int64      |
+| float4        | float       |
+|float8        | double       |
+|bit/varbit/bytea/numeric       | Byte array       |
+|char/bpchar/varchar/name| Byte array |
+| text/xml/interval/timetz  | Byte array  |
+| macaddr/inet/cidr  | Byte array  |
+
+**Additional HAWQ-to-Parquet Mapping**
+
+**point**:  
+
+``` 
+group {
+    required int x;
+    required int y;
+}
+```
+
+**circle:** 
+
+```
+group {
+    required int x;
+    required int y;
+    required int r;
+}
+```
+
+**box:**  
+
+```
+group {
+    required int x1;
+    required int y1;
+    required int x2;
+    required int y2;
+}
+```
+
+**iseg:** 
+
+
+```
+group {
+    required int x1;
+    required int y1;
+    required int x2;
+    required int y2;
+}
+``` 
+
+**path**:
+  
+```
+group {
+    repeated group {
+        required int x;
+        required int y;
+    }
+}
+```
+
+
+<span class="tablecap">Table 2. HIVE to HAWQ Mapping</span>
+
+|HIVE Data Type   | HAWQ Data Type  |
+| :------------| :---------------|
+| boolean        | bool       |
+| tinyint        | int2       |
+| smallint       | int2/smallint      |
+| int            | int4 / int |
+| bigint         | int8 / bigint      |
+| float        | float4       |
+| double       | float8 |
+| string        | varchar       |
+| binary      | bytea       |
+| char | char |
+| varchar  | varchar  |
+
+
+## <a id="topic1__section5"></a>Options
+
+**General Options**
+
+<dt>-? (show help) </dt>  
+<dd>Show help, then exit.</dd>
+
+<dt>-\\\-version  </dt> 
+<dd>Show the version of this utility, then exit.</dd>
+
+
+**Connection Options**
+
+<dt>-h , -\\\-host \<hostname\> </dt>
+<dd>Specifies the host name of the machine on which the HAWQ master database 
server is running. If not specified, reads from the environment variable 
`$PGHOST` or defaults to `localhost`.</dd>
+
+<dt> -p , -\\\-port \<port\> </dt> 
+<dd>Specifies the TCP port on which the HAWQ master database server is 
listening for connections. If not specified, reads from the environment 
variable `$PGPORT` or defaults to 5432.</dd>
+
+<dt>-U , -\\\-user \<username\> </dt> 
+<dd>The database role name to connect as. If not specified, reads from the 
environment variable `$PGUSER` or defaults to the current system user name.</dd>
+
+<dt>-d , -\\\-database \<databasename\>  </dt>
+<dd>The database to register the Parquet HDFS data into. The default is `postgres`.</dd>
+
+<dt>-f , -\\\-filepath \<hdfspath\></dt>
+<dd>The path of the file or directory in HDFS containing the files to be 
registered.</dd>
+ 
+<dt>\<tablename\> </dt>
+<dd>The HAWQ table that will store the data to be registered. If the -\\\-config option is not supplied, the table cannot use hash distribution; random table distribution is strongly preferred. If hash distribution must be used, make sure that the distribution policy for the data files described in the YAML file is consistent with the table being registered into.</dd>
+
+#### Miscellaneous Options
+
+The following options are used with specific use models.
+
+<dt>-e , -\\\-eof \<eof\></dt>
+<dd>Specifies the end of the file to be registered. \<eof\> represents the valid content length of the file in bytes: a value between 0 and the actual size of the file. If this option is not included, the actual file size, or the size of the files within a folder, is used. Used with Usage Model 1.</dd>
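+
+For example, to register only the first 4096 bytes of a file (the HDFS path and table name here are illustrative):
+
+``` pre
+hawq register -f /hawq_data/part-00000.paq -e 4096 my_table
+```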
+
+<dt>-F , -\\\-force</dt>
+<dd>Used for disaster recovery of a cluster. Clears all HDFS-related catalog contents in `pg_aoseg.pg_paqseg_$relid` and re-registers files to a specified table. The HDFS files are not removed or modified. To use this option for recovery, data is assumed to be periodically imported to the cluster to be recovered. Used with Usage Model 2.</dd>
+
+<dt>-c , -\\\-config \<yml_config\> </dt> 
+<dd>Registers files specified by YAML-format configuration files into HAWQ. 
Used with Usage Model 2.</dd>
+
+
+## <a id="topic1__section6"></a>Example: Usage Model 2
+
+This example shows how to register files using a YAML configuration file. This 
file is usually generated by the `hawq extract` command. 
+
+Create a table and insert data into the table:
+
+```
+=> CREATE TABLE paq1(a int, b varchar(10)) WITH (appendonly=true, orientation=parquet);
+=> INSERT INTO paq1 VALUES(generate_series(1,1000), 'abcde');
+```
+
+Extract the table's metadata.
+
+```
+hawq extract -o paq1.yml paq1
+```
+
+Use the YAML file to register the new table paq2:
+
+```
+hawq register --config paq1.yml paq2
+```
+
+Query the new table to verify that the content has been registered:
+
+```
+=> SELECT count(*) FROM paq2;
+```
+The result should return 1000.
+
+## See Also
+
+[hawq extract](hawqextract.html#topic1)
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqrestart.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqrestart.html.md.erb 
b/markdown/reference/cli/admin_utilities/hawqrestart.html.md.erb
new file mode 100644
index 0000000..6d80e90
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqrestart.html.md.erb
@@ -0,0 +1,112 @@
+---
+title: hawq restart
+---
+
+Shuts down and then restarts a HAWQ system after shutdown is complete.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+hawq restart <object> [-l|--logdir <logfile_directory>] [-q|--quiet] 
[-v|--verbose]    
+        [-M|--mode smart | fast | immediate] [-u|--reload] [-m|--masteronly] 
[-R|--restrict]
+        [-t|--timeout <timeout_seconds>]  [-U | --special-mode maintenance]
+        [--ignore-bad-hosts cluster | allsegments]
+     
+```
+
+``` pre
+hawq restart -? | -h | --help 
+
+hawq restart --version
+```
+
+## <a id="topic1__section3"></a>Description
+
+The `hawq restart` utility is used to shut down and restart the HAWQ server processes. It is essentially equivalent to performing a `hawq stop -M smart` operation followed by `hawq start`.
+
+The \<object\> in the command specifies which entity should be restarted: for example, a cluster, a segment, the master node, the standby node, or all segments in the cluster.
+
+When the `hawq restart` command runs, the utility reloads changes made to the master `pg_hba.conf` file or to the runtime configuration parameters in the master `hawq-site.xml` file without interruption of service. Note that any active sessions will not pick up the changes until they reconnect to the database.
+
+## Objects
+
+<dt>cluster  </dt>
+<dd>Restart a HAWQ cluster.</dd>
+
+<dt>master  </dt>
+<dd>Restart HAWQ master.</dd>
+
+<dt>segment  </dt>
+<dd>Restart a local segment node.</dd>
+
+<dt>standby  </dt>
+<dd>Restart a HAWQ standby.</dd>
+
+<dt>allsegments  </dt>
+<dd>Restart all segments.</dd>
+
+## <a id="topic1__section4"></a>Options
+
+<dt>-a (do not prompt)  </dt>
+<dd>Do not prompt the user for confirmation.</dd>
+
+<dt>-l, -\\\-logdir \<logfile\_directory\>  </dt>
+<dd>Specifies the log directory for logs of the management tools. The default 
is `~/hawq/Adminlogs/`.</dd>
+
+<dt>-q, -\\\-quiet   </dt>
+<dd>Run in quiet mode. Command output is not displayed on the screen, but is 
still written to the log file.</dd>
+
+<dt>-v, -\\\-verbose  </dt>
+<dd>Displays detailed status, progress and error messages output by the 
utility.</dd>
+
+<dt>-t,  -\\\-timeout \<timeout\_seconds\>  </dt>
+<dd>Specifies a timeout in seconds to wait for a segment instance to start up. 
If a segment instance was shutdown abnormally (due to power failure or killing 
its `postgres` database listener process, for example), it may take longer to 
start up due to the database recovery and validation process. If not specified, 
the default timeout is 60 seconds.</dd>
+
+<dt>-M, -\\\-mode smart | fast | immediate  </dt>
+<dd>Smart shutdown is the default. If active connections are found, shutdown fails with a warning message.
+
+Fast shutdown interrupts and rolls back any transactions currently in progress.
+
+Immediate shutdown aborts transactions in progress and kills all `postgres` 
processes without allowing the database server to complete transaction 
processing or clean up any temporary or in-process work files. Because of this, 
immediate shutdown is not recommended. In some instances, it can cause database 
corruption that requires manual recovery.</dd>
+
+<dt>-u, -\\\-reload  </dt>
+<dd>Reloads runtime configuration changes, such as edits to the master `pg_hba.conf` file or to runtime parameters in `hawq-site.xml`, without interrupting service.</dd>
+
+<dt>-R, -\\\-restrict   </dt>
+<dd>Starts HAWQ in restricted mode (only database superusers are allowed to 
connect).</dd>
+
+<dt>-U, -\\\-special-mode maintenance   </dt>
+<dd>(Superuser only) Start HAWQ in \[maintenance | upgrade\] mode. In 
maintenance mode, the `gp_maintenance_conn` parameter is set.</dd>
+
+<dt>-\\\-ignore\-bad\-hosts cluster | allsegments  </dt>
+<dd>Skips copying configuration files to hosts on which SSH validation fails. If SSH connectivity to a skipped host is later restored, make sure the configuration files are re-synched once the host is reachable.</dd>
+
+<dt>-? , -h , -\\\-help (help)  </dt>
+<dd>Displays the online help.</dd>
+
+<dt>-\\\-version (show utility version)  </dt>
+<dd>Displays the version of this utility.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Restart a HAWQ cluster:
+
+``` shell
+$ hawq restart cluster
+```
+
+Restart a HAWQ system in restricted mode (only allow superuser connections):
+
+``` shell
+$ hawq restart cluster -R
+```
+
+Start the HAWQ master instance only and connect in utility mode:
+
+``` shell
+$ hawq start master -m PGOPTIONS='-c gp_session_role=utility' psql
+```
+
+## <a id="topic1__section6"></a>See Also
+
+[hawq stop](hawqstop.html#topic1), [hawq start](hawqstart.html#topic1)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqscp.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqscp.html.md.erb 
b/markdown/reference/cli/admin_utilities/hawqscp.html.md.erb
new file mode 100644
index 0000000..77f64a8
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqscp.html.md.erb
@@ -0,0 +1,95 @@
+---
+title: hawq scp
+---
+
+Copies files between multiple hosts at once.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+hawq scp -f <hostfile_hawqssh> | -h <hostname> [-h <hostname> ...] 
+    [--ignore-bad-hosts] [-J <character>] [-r] [-v] 
+    [[<user>@]<hostname>:]<file_to_copy> [...]
+    [[<user>@]<hostname>:]<copy_to_path>
+
+hawq scp -? 
+
+hawq scp --version
+```
+
+## <a id="topic1__section3"></a>Description
+
+The `hawq scp` utility allows you to copy one or more files from the specified 
hosts to other specified hosts in one command using SCP (secure copy). For 
example, you can copy a file from the HAWQ master host to all of the segment 
hosts at the same time.
+
+To specify the hosts involved in the SCP session, use the `-f` option to specify a file containing a list of host names, or use the `-h` option to specify individual host names on the command line. At least one host name (`-h`) or a host file (`-f`) is required. The `-J` option allows you to specify a single character to substitute for the \<hostname\> in the `<file_to_copy>` and `<copy_to_path>` destination strings. If `-J` is not specified, the default substitution character is an equal sign (`=`). For example, the following command will copy `.bashrc` from the local host to `/home/gpadmin` on all hosts named in `hostfile_hawqssh`:
+
+``` shell
+$ hawq scp -f hostfile_hawqssh .bashrc =:/home/gpadmin
+```
+
+If a user name is not specified in the host list or with *user*`@` in the file 
path, `hawq scp` will copy files as the currently logged in user. To determine 
the currently logged in user, invoke the `whoami` command. By default, `hawq 
scp` copies to `$HOME` of the session user on the remote hosts after login. To 
ensure the file is copied to the correct location on the remote hosts, use 
absolute paths.
+
+Before using `hawq scp`, you must have a trusted host setup between the hosts 
involved in the SCP session. You can use the utility `hawq ssh-exkeys` to 
update the known host files and exchange public keys between hosts if you have 
not done so already.
+
+## <a id="topic1__section9"></a>Arguments
+<dt>-f \<hostfile\_hawqssh\>  </dt>
+<dd>Specifies the name of a file that contains a list of hosts that will 
participate in this SCP session. The syntax of the host file is one host per 
line as follows:
+
+``` pre
+<hostname>
+```
+</dd>
+
+<dt>-h \<hostname\>  </dt>
+<dd>Specifies a single host name that will participate in this SCP session. 
You can use the `-h` option multiple times to specify multiple host names.</dd>
+
+<dt>\<file\_to\_copy\>  </dt>
+<dd>The name (or absolute path) of a file or directory that you want to copy 
to other hosts (or file locations). This can be either a file on the local host 
or on another named host.</dd>
+
+<dt>\<copy\_to\_path\>  </dt>
+<dd>The path where you want the file(s) to be copied on the named hosts. If an 
absolute path is not used, the file will be copied relative to `$HOME` of the 
session user. You can also use the equal sign '`=`' (or another character that 
you specify with the `-J` option) in place of a \<hostname\>. This will then 
substitute in each host name as specified in the supplied host file (`-f`) or 
with the `-h` option.</dd>
+
+## <a id="topic1__section4"></a>Options
+
+<dt>-\\\-ignore-bad-hosts  </dt>
+<dd>Skips copying files to hosts on which SSH validation fails. If SSH connectivity to a skipped host is later restored, make sure the files are re-synched once the host is reachable.</dd>
+
+<dt>-J \<character\>  </dt>
+<dd>The `-J` option allows you to specify a single character to substitute for the \<hostname\> in the `<file_to_copy>` and `<copy_to_path>` destination strings. If `-J` is not specified, the default substitution character is an equal sign (`=`).</dd>
+
+
+<dt>-v (verbose mode)  </dt>
+<dd>Reports additional messages in addition to the SCP command output.</dd>
+
+<dt>-r (recursive mode)  </dt>
+<dd>If \<file\_to\_copy\> is a directory, copies the contents of 
\<file\_to\_copy\> and all subdirectories.</dd>
+
+<dt>-? (help)  </dt>
+<dd>Displays the online help.</dd>
+
+<dt>-\\\-version  </dt>
+<dd>Displays the version of this utility.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Copy the file named `installer.tar` to `/` on all the hosts in the file 
`hostfile_hawqssh`.
+
+``` shell
+$ hawq scp -f hostfile_hawqssh installer.tar =:/
+```
+
+Copy the file named `myfuncs.so` to the specified location on the hosts named `sdw1` and `sdw2`:
+
+``` shell
+$ hawq scp -h sdw1 -h sdw2 myfuncs.so =:/usr/local/-db/lib
+```
+
+## See Also
+
+[hawq ssh](hawqssh.html#topic1), [hawq ssh-exkeys](hawqssh-exkeys.html#topic1)
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqssh-exkeys.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqssh-exkeys.html.md.erb 
b/markdown/reference/cli/admin_utilities/hawqssh-exkeys.html.md.erb
new file mode 100644
index 0000000..2567faf
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqssh-exkeys.html.md.erb
@@ -0,0 +1,105 @@
+---
+title: hawq ssh-exkeys
+---
+
+Exchanges SSH public keys between hosts.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+hawq ssh-exkeys -f <hostfile_exkeys> | -h <hostname> [-h <hostname> ...] [-p <password>]
+
+hawq ssh-exkeys -e <hostfile_exkeys> -x <hostfile_hawqexpand>  [-p <password>]
+
+hawq ssh-exkeys --version
+
+hawq ssh-exkeys [-? | --help]
+```
+
+## <a id="topic1__section3"></a>Description
+
+The `hawq ssh-exkeys` utility exchanges SSH keys between the specified host names (or host addresses). This allows SSH connections between HAWQ hosts and network interfaces without a password prompt. The utility is used to initially prepare a HAWQ system for password-free SSH access, and also to add additional SSH keys when expanding a HAWQ system.
+
+To specify the hosts involved in an initial SSH key exchange, use the `-f` option to specify a file containing a list of host names (recommended), or use the `-h` option to specify individual host names on the command line. At least one host name (`-h`) or a host file (`-f`) is required. Note that the local host is included in the key exchange by default.
+
+To specify new expansion hosts to be added to an existing HAWQ system, use the 
`-e` and `-x` options. The `-e` option specifies a file containing a list of 
existing hosts in the system that already have SSH keys. The `-x` option 
specifies a file containing a list of new hosts that need to participate in the 
SSH key exchange.
+
+Keys are exchanged as the currently logged in user. A good practice is to perform the key exchange process twice: once as `root` and once as the `gpadmin` user (the designated owner of your HAWQ installation). The HAWQ management utilities require that the same non-root user be created on all hosts in the HAWQ system, and the utilities must be able to connect as that user to all hosts without a password prompt.
+
+The `hawq ssh-exkeys` utility performs key exchange using the following steps:
+
+-   Creates an RSA identification key pair for the current user if one does 
not already exist. The public key of this pair is added to the 
`authorized_keys` file of the current user.
+-   Updates the `known_hosts` file of the current user with the host key of 
each host specified using the `-h`, `-f`, `-e`, and `-x` options.
+-   Connects to each host using `ssh` and obtains the `authorized_keys`, 
`known_hosts`, and `id_rsa.pub` files to set up password-free access.
+-   Adds keys from the `id_rsa.pub` files obtained from each host to the 
`authorized_keys` file of the current user.
+-   Updates the `authorized_keys`, `known_hosts`, and `id_rsa.pub` files on 
all hosts with new host information (if any).
+
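The key-merging steps above can be illustrated with standard shell tools; a minimal sketch using fabricated key strings in a scratch directory (not real `id_rsa.pub` files):

```shell
# Sketch of the authorized_keys merging described above. The public-key
# lines and paths are fabricated placeholders, not real keys.
mkdir -p /tmp/exkeys
printf 'ssh-rsa AAAAkey1 gpadmin@mdw\n'  > /tmp/exkeys/mdw.pub
printf 'ssh-rsa AAAAkey2 gpadmin@sdw1\n' > /tmp/exkeys/sdw1.pub
# Append each host's public key, de-duplicating, which is in effect
# what the utility does when it updates authorized_keys on every host.
cat /tmp/exkeys/*.pub | sort -u > /tmp/exkeys/authorized_keys
grep -c 'ssh-rsa' /tmp/exkeys/authorized_keys
```

In a real exchange the utility performs this for the current user's `~/.ssh/authorized_keys` on every host listed.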
+## <a id="topic1__section4"></a>Options
+
+<dt>-e \<hostfile\_exkeys\>  </dt>
+<dd>When doing a system expansion, this is the name and location of a file 
containing all configured host names and host addresses (interface names) for 
each host in your *current* HAWQ system (master, standby master and segments), 
one name per line without blank lines or extra spaces. Hosts specified in this 
file cannot be specified in the host file used with `-x`.</dd>
+
+<dt>-f \<hostfile\_exkeys\>  </dt>
+<dd>Specifies the name and location of a file containing all configured host 
names and host addresses (interface names) for each host in your HAWQ system 
(master, standby master and segments), one name per line without blank lines or 
extra spaces.</dd>
+
+<dt>-h \<hostname\>  </dt>
+<dd>Specifies a single host name (or host address) that will participate in 
the SSH key exchange. You can use the `-h` option multiple times to specify 
multiple host names and host addresses.</dd>
+
+<dt>-p \<password\>  </dt>
+<dd>Specifies the password used to log in to the hosts. The hosts should share 
the same password. This option is useful when invoking `hawq ssh-exkeys` in a 
script.</dd>
+
+<dt>-\\\-version  </dt>
+<dd>Displays the version of this utility.</dd>
+
+<dt>-x \<hostfile\_hawqexpand\>  </dt>
+<dd>When doing a system expansion, this is the name and location of a file 
containing all configured host names and host addresses (interface names) for 
each new segment host you are adding to your HAWQ system, one name per line 
without blank lines or extra spaces. Hosts specified in this file cannot be 
specified in the host file used with `-e`.</dd>
+
+<dt>-?, -\\\-help (help)  </dt>
+<dd>Displays the online help.</dd>
+
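Because the host files passed to `-f`, `-e`, and `-x` must contain one name per line with no blank lines or extra spaces, a quick pre-flight check can catch formatting problems. A sketch, using a scratch file and sample host names:

```shell
# Sketch: validate a host file before passing it to -f/-e/-x.
# The file path and host names below are sample placeholders.
printf 'mdw\nsmdw\nsdw1\n' > /tmp/hostfile_exkeys
if grep -qE '^$|[[:space:]]' /tmp/hostfile_exkeys; then
  echo "host file has blank lines or whitespace; fix before key exchange"
else
  echo "host file format OK"
fi
```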
+## <a id="topic1__section5"></a>Examples
+
+Exchange SSH keys between all host names and addresses listed in the file 
`hostfile_exkeys`:
+
+``` shell
+$ hawq ssh-exkeys -f hostfile_exkeys
+```
+
+Exchange SSH keys between the hosts `sdw1`, `sdw2`, and `sdw3`:
+
+``` shell
+$ hawq ssh-exkeys -h sdw1 -h sdw2 -h sdw3
+```
+
+Exchange SSH keys between existing hosts `sdw1`, `sdw2`, and `sdw3`, and new 
hosts `sdw4` and `sdw5` as part of a system expansion operation:
+
+``` shell
+$ cat hostfile_exkeys
+mdw
+mdw-1
+mdw-2
+smdw
+smdw-1
+smdw-2
+sdw1
+sdw1-1
+sdw1-2
+sdw2
+sdw2-1
+sdw2-2
+sdw3
+sdw3-1
+sdw3-2
+$ cat hostfile_hawqexpand
+sdw4
+sdw4-1
+sdw4-2
+sdw5
+sdw5-1
+sdw5-2
+$ hawq ssh-exkeys -e hostfile_exkeys -x hostfile_hawqexpand
+```
+
+## <a id="topic1__section6"></a>See Also
+
+[hawq ssh](hawqssh.html#topic1), [hawq scp](hawqscp.html#topic1)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqssh.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqssh.html.md.erb 
b/markdown/reference/cli/admin_utilities/hawqssh.html.md.erb
new file mode 100644
index 0000000..ee31308
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqssh.html.md.erb
@@ -0,0 +1,105 @@
+---
+title: hawq ssh
+---
+
+Provides SSH access to multiple hosts at once.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+hawq ssh (-f <hostfile_hawqssh>) | (-h <hostname> [-h <hostname> ...])
+    [-e]
+    [-u <username>]
+    [-v]
+    [<bash_command>]
+
+hawq ssh [-? | --help]
+
+hawq ssh --version
+```
+
+## <a id="topic1__section3"></a>Description
+
+The `hawq ssh` utility allows you to run bash shell commands on multiple hosts 
at once using SSH (secure shell). You can execute a single command by 
specifying it on the command-line, or omit the command to enter into an 
interactive command-line session.
+
+To specify the hosts involved in the SSH session, use the `-f` option to specify a file containing a list of host names, or use the `-h` option to specify individual host names on the command line. At least one host name (`-h`) or a host file (`-f`) is required. Note that the current host is ***not*** included in the session by default. To include the local host, you must explicitly add it to the list of hosts involved in the session.
+
+Before using `hawq ssh`, you must have a trusted host setup between the hosts 
involved in the SSH session. You can use the utility `hawq ssh-exkeys` to 
update the known host files and exchange public keys between hosts if you have 
not done so already.
+
+If you do not specify a command on the command-line, `hawq ssh` will go into 
interactive mode. At the `hawq ssh` command prompt (`=>`), you can enter a 
command as you would in a regular bash terminal command-line, and the command 
will be executed on all hosts involved in the session. To end an interactive 
session, press `CTRL`+`D` on the keyboard or type `exit` or `quit`.
+
+If a user name is not specified in the host file or via the `-u` option, `hawq ssh` executes commands as the currently logged-in user. To determine the currently logged-in user, run the `whoami` command. By default, `hawq ssh` changes to the session user's `$HOME` directory on each remote host after login. To ensure commands are executed correctly on all remote hosts, you should always use absolute paths.
+
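One practical consequence of running a command on many remote hosts is quoting: single versus double quotes decide whether a variable expands locally before the command is sent, or remotely on each host. A sketch, using a stand-in variable rather than a real HAWQ setting:

```shell
# Sketch: quoting decides where expansion happens. demo_home is a
# stand-in for something like $GPHOME on the remote hosts.
demo_home=/usr/local/hawq
deferred='echo $demo_home'    # sent literally; expands on each remote host
expanded="echo $demo_home"    # expands here, before the command is sent
echo "$deferred"
echo "$expanded"
```

This is why the non-interactive examples below single-quote commands such as `'echo $GPHOME'`.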
+## <a id="args"></a>Arguments
+<dt>-f \<hostfile\_hawqssh\>  </dt>
+<dd>Specifies the name of a file that contains a list of hosts that will 
participate in this SSH session. The host name is required, and you can 
optionally specify an alternate user name and/or SSH port number per host. The 
syntax of the host file is one host per line as follows:
+
+``` pre
+[username@]hostname[:ssh_port]
+```
+</dd>
+
+<dt>-h \<hostname\>  </dt>
+<dd>Specifies a single host name that will participate in this SSH session. 
You can use the `-h` option multiple times to specify multiple host names.</dd>
+
+
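As a sketch of how an entry in the `[username@]hostname[:ssh_port]` format breaks down into its parts (the entry shown is a fabricated example), standard shell parameter expansion is enough:

```shell
# Sketch: split one host-file line into user, host, and port,
# applying the defaults hawq ssh would use for omitted parts.
entry='gpadmin@sdw1:2222'   # fabricated sample entry
rest=$entry
case $rest in *@*) user=${rest%%@*}; rest=${rest#*@} ;; *) user=$(whoami) ;; esac
case $rest in *:*) port=${rest##*:}; host=${rest%%:*} ;; *) port=22; host=$rest ;; esac
echo "user=$user host=$host port=$port"
```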
+## <a id="topic1__section4"></a>Options
+
+<dt>\<bash\_command\>   </dt>
+<dd>A bash shell command to execute on all hosts involved in this session 
(optionally enclosed in quotes). If not specified, `hawq ssh` will start an 
interactive session.</dd>
+
+<dt>-e (echo)  </dt>
+<dd>Optional. Echoes the commands passed to each host and their resulting 
output while running in non-interactive mode.</dd>
+
+<dt>-u \<username\>  </dt>
+<dd>Specifies the userid for the SSH session.</dd>
+
+<dt>-v (verbose mode)  </dt>
+<dd>Reports additional messages in addition to the command output when running 
in non-interactive mode.</dd>
+
+<dt>-\\\-version  </dt>
+<dd>Displays the version of this utility.</dd>
+
+<dt>-?, -\\\-help </dt>
+<dd>Displays the online help.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Start an interactive group SSH session with all hosts listed in the file 
`hostfile_hawqssh`:
+
+``` shell
+$ hawq ssh -f hostfile_hawqssh
+```
+
+At the `hawq ssh` interactive command prompt, run a shell command on all the 
hosts involved in this session.
+
+``` pre
+=> ls -a /data/path-to-masterdd/*
+```
+
+Exit an interactive session:
+
+``` pre
+=> exit
+=> quit
+```
+
+Start a non-interactive group SSH session with the hosts named `sdw1` and 
`sdw2` and pass a file containing several commands named `command_file` to 
`hawq ssh`:
+
+``` shell
+$ hawq ssh -h sdw1 -h sdw2 -v -e < command_file
+```
+
+Execute single commands in non-interactive mode on hosts `sdw2` and 
`localhost`:
+
+``` shell
+$ hawq ssh -h sdw2 -h localhost -v -e 'ls -a /data/primary/*'
+$ hawq ssh -h sdw2 -h localhost -v -e 'echo $GPHOME'
+$ hawq ssh -h sdw2 -h localhost -v -e 'ls -1 | wc -l'
+```
+
+## See Also
+
+[hawq ssh-exkeys](hawqssh-exkeys.html#topic1), [hawq scp](hawqscp.html#topic1)
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqstart.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqstart.html.md.erb 
b/markdown/reference/cli/admin_utilities/hawqstart.html.md.erb
new file mode 100644
index 0000000..ff7b427
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqstart.html.md.erb
@@ -0,0 +1,119 @@
+---
+title: hawq start
+---
+
+Starts a HAWQ system.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+hawq start <object> [-l <logfile_directory> | --logdir <logfile_directory>]
+        [-q | --quiet] [-v | --verbose] [-m | --masteronly]
+        [-t <timeout_seconds> | --timeout <timeout_seconds>]
+        [-R | --restrict] [-U | --special-mode maintenance]
+        [--ignore-bad-hosts cluster | allsegments]
+```
+
+``` pre
+hawq start -? | -h | --help 
+
+hawq start --version
+```
+
+## <a id="topic1__section3"></a>Description
+
+The `hawq start` utility is used to start the HAWQ server processes. When you 
start a HAWQ system, you are actually starting several `postgres` database 
server listener processes at once (the master and all of the segment 
instances). The `hawq start` utility handles the startup of the individual 
instances. Each instance is started in parallel.
+
+The *object* in the command specifies which entity to start: the entire cluster, a local segment, the master, the standby, or all segments in the cluster.
+
+The first time an administrator runs `hawq start cluster`, the utility creates 
a static hosts cache file named `$GPHOME/etc/slaves` to store the segment host 
names. Subsequently, the utility uses this list of hosts to start the system 
more efficiently. The utility will create a new hosts cache file at each 
startup.
+
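As a sketch of what the cache contains, the file is just a newline-separated host list; the path and host names below are fabricated stand-ins for a real `$GPHOME/etc/slaves`:

```shell
# Sketch: the static hosts cache is a plain list of segment host names,
# one per line. Everything here is a fabricated placeholder.
GPHOME=/tmp/hawq_demo
mkdir -p "$GPHOME/etc"
printf 'sdw1\nsdw2\nsdw3\n' > "$GPHOME/etc/slaves"
echo "segment hosts cached: $(grep -c '' "$GPHOME/etc/slaves")"
```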
+The `hawq start master` command starts only the HAWQ master, without segment or standby nodes. These can be started later, using `hawq start segment` and/or `hawq start standby`.
+
+**Note:** Typically you should always use `hawq start cluster` or `hawq restart cluster` to start the cluster. If you do use `hawq start standby|master|segment` to start nodes individually, make sure you always start the standby before the active master. Otherwise, the standby can become unsynchronized with the active master.
+
+Before you can start a HAWQ system, you must have initialized the system or 
node by using `hawq init <object>` first.
+
+## Objects
+
+<dt>cluster  </dt>
+<dd>Start a HAWQ cluster.</dd>
+
+<dt>master  </dt>
+<dd>Start HAWQ master.</dd>
+
+<dt>segment  </dt>
+<dd>Start a local segment node.</dd>
+
+<dt>standby  </dt>
+<dd>Start a HAWQ standby.</dd>
+
+<dt>allsegments  </dt>
+<dd>Start all segments.</dd>
+
+## <a id="topic1__section4"></a>Options
+
+<dt>-l , -\\\-logdir \<logfile\_directory\>  </dt>
+<dd>Specifies the log directory for logs of the management tools. The default 
is `~/hawq/Adminlogs/`.</dd>
+
+<dt>-q , -\\\-quiet   </dt>
+<dd>Run in quiet mode. Command output is not displayed on the screen, but is 
still written to the log file.</dd>
+
+<dt>-v , -\\\-verbose  </dt>
+<dd>Displays detailed status, progress and error messages output by the 
utility.</dd>
+
+<dt>-m , -\\\-masteronly  </dt>
+<dd>Optional. Starts the HAWQ master instance only, in utility mode, which may 
be useful for maintenance tasks. This mode only allows connections to the 
master in utility mode. For example:
+
+``` shell
+$ PGOPTIONS='-c gp_role=utility' psql
+```
+</dd>
+
+<dt>-R , -\\\-restrict (restricted mode)  </dt>
+<dd>Starts HAWQ in restricted mode (only database superusers are allowed to 
connect).</dd>
+
+<dt>-t , -\\\-timeout \<timeout\_seconds\>  </dt>
+<dd>Specifies a timeout in seconds to wait for a segment instance to start up. 
If a segment instance was shutdown abnormally (due to power failure or killing 
its `postgres` database listener process, for example), it may take longer to 
start up due to the database recovery and validation process. If not specified, 
the default timeout is 60 seconds.</dd>
+
+<dt>-U , -\\\-special-mode maintenance   </dt>
+<dd>(Superuser only) Start HAWQ in \[maintenance | upgrade\] mode. In 
maintenance mode, the `gp_maintenance_conn` parameter is set.</dd>
+
+<dt>-\\\-ignore-bad-hosts cluster | allsegments  </dt>
+<dd>Overrides copying configuration files to a host on which SSH validation fails. If SSH connectivity to a skipped host is later restored, make sure the configuration files are re-synchronized.</dd>
+
+<dt>-? , -h , -\\\-help (help)  </dt>
+<dd>Displays the online help.</dd>
+
+<dt>--version (show utility version)  </dt>
+<dd>Displays the version of this utility.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Start a HAWQ system:
+
+``` shell
+$ hawq start cluster
+```
+
+Start a HAWQ master in maintenance mode:
+
+``` shell
+$ hawq start master -m
+```
+
+Start a HAWQ system in restricted mode (only allow superuser connections):
+
+``` shell
+$ hawq start cluster -R
+```
+
+Start the HAWQ master instance only and connect in utility mode:
+
+``` shell
+$ hawq start master -m
+$ PGOPTIONS='-c gp_session_role=utility' psql
+```
+
+## <a id="topic1__section6"></a>See Also
+
+[hawq stop](hawqstop.html#topic1), [hawq init](hawqinit.html#topic1)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqstate.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqstate.html.md.erb 
b/markdown/reference/cli/admin_utilities/hawqstate.html.md.erb
new file mode 100644
index 0000000..3927442
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqstate.html.md.erb
@@ -0,0 +1,65 @@
+---
+title: hawq state
+---
+
+Shows the status of a running HAWQ system.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+hawq state 
+     [-b]
+     [-l <logfile_directory> | --logdir <logfile_directory>]
+     [(-v | --verbose) | (-q | --quiet)]  
+     
+hawq state [-h | --help]
+```
+
+## <a id="topic1__section3"></a>Description
+
+The `hawq state` utility displays information about a running HAWQ instance. A HAWQ system is comprised of multiple PostgreSQL database instances (segments) spanning multiple machines, and the `hawq state` utility provides status information such as:
+
+-   Total segment count.
+-   Which segments are down.
+-   Master and segment configuration information (hosts, data directories, 
etc.).
+-   The ports used by the system.
+-   Whether a standby master is present, and if it is active.
+
+## <a id="topic1__section4"></a>Options
+
+<dt>-b (brief status)  </dt>
+<dd>Display a brief summary of the state of the HAWQ system. This is the 
default mode.</dd>
+
+<dt>-l, -\\\-logdir \<logfile\_directory\>  </dt>
+<dd>Specifies the directory to check for logfiles. The default is 
`$GPHOME/hawqAdminLogs`. 
+
+Log files within the directory are named according to the command being invoked, for example: `hawq_config_<log_id>.log`, `hawq_state_<log_id>.log`, and so on.</dd>
+
+<dt>-q, -\\\-quiet  </dt>
+<dd>Run in quiet mode. Except for warning messages, command output is not 
displayed on the screen. However, this information is still written to the log 
file.</dd>
+
+<dt>-v, -\\\-verbose  </dt>
+<dd>Displays error messages and outputs detailed status and progress 
information.</dd>
+
+<dt>-h, -\\\-help (help)  </dt>
+<dd>Displays the online help.</dd>
+
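Because management-tool logs are named `hawq_<command>_<log_id>.log`, the newest log for a given command can be picked out by name with standard tools. A sketch using a fabricated log directory and file names:

```shell
# Sketch: date-stamped log ids sort lexically, so the newest log for a
# command sorts last. Directory and files here are fabricated stand-ins.
logdir=/tmp/hawq_logs_demo
mkdir -p "$logdir"
touch "$logdir/hawq_state_20160706.log" "$logdir/hawq_state_20160707.log"
latest=$(ls "$logdir"/hawq_state_*.log | sort | tail -1)
echo "$latest"
```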
+## <a id="topic1__section6"></a>Examples
+
+Show brief status information of a HAWQ system:
+
+``` shell
+$ hawq state -b
+```
+
+Change the log directory from `hawqAdminLogs` to `TodaysLogs`:
+
+```shell
+$ hawq state -l TodaysLogs
+$ ls TodaysLogs
+hawq_config_20160707.log  hawq_init_20160707.log   master.initdb
+```
+
+## <a id="topic1__section7"></a>See Also
+
+[hawq start](hawqstart.html#topic1), [gplogfilter](gplogfilter.html#topic1)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/admin_utilities/hawqstop.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/admin_utilities/hawqstop.html.md.erb 
b/markdown/reference/cli/admin_utilities/hawqstop.html.md.erb
new file mode 100644
index 0000000..dd54156
--- /dev/null
+++ b/markdown/reference/cli/admin_utilities/hawqstop.html.md.erb
@@ -0,0 +1,104 @@
+---
+title: hawq stop
+---
+
+Stops or restarts a HAWQ system.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+hawq stop <object> [-a | --prompt]
+       [-M (smart|fast|immediate) | --mode (smart|fast|immediate)]   
+       [-t <timeout_seconds> | --timeout <timeout_seconds>]  
+       [-l <logfile_directory> | --logdir <logfile_directory>]
+       [(-v | --verbose) | (-q | --quiet)]
+
+hawq stop [-? | -h | --help]
+```
+
+## <a id="topic1__section3"></a>Description
+
+The `hawq stop` utility is used to stop the database servers that comprise a HAWQ system. When you stop a HAWQ system, you are actually stopping several `postgres` database server processes at once (the master and all of the segment instances). The `hawq stop` utility handles the shutdown of the individual instances. Each instance is shut down in parallel.
+
+By default, you are not allowed to shut down HAWQ if there are any client 
connections to the database. Use the `-M fast` option to roll back all in 
progress transactions and terminate any connections before shutting down. If 
there are any transactions in progress, the default behavior is to wait for 
them to commit before shutting down.
+
+With the `-u` option, the utility uploads changes made to the master `pg_hba.conf` file or to *runtime* configuration parameters in the master `hawq-site.xml` file without interruption of service. Note that any active sessions will not pick up the changes until they reconnect to the database. If the HAWQ cluster has active connections, use the command `hawq stop cluster -u -M fast` to ensure that changes to the parameters are reloaded.
+
+## Objects
+
+<dt>cluster  </dt>
+<dd>Stop a HAWQ cluster.</dd>
+
+<dt>master  </dt>
+<dd>Shuts down a HAWQ master instance that was started in maintenance 
mode.</dd>
+
+<dt>segment  </dt>
+<dd>Stop a local segment node.</dd>
+
+<dt>standby  </dt>
+<dd>Stop the HAWQ standby master process.</dd>
+
+<dt>allsegments  </dt>
+<dd>Stop all segments.</dd>
+
+## <a id="topic1__section4"></a>Options
+
+<dt>-a, -\\\-prompt  </dt>
+<dd>Do not prompt the user for confirmation before executing.</dd>
+
+<dt>-l, -\\\-logdir \<logfile\_directory\>  </dt>
+<dd>The directory to write the log file. The default is 
`~/hawq/Adminlogs/`.</dd>
+
+<dt>-M, -\\\-mode (smart | fast | immediate)  </dt>
+<dd>Smart shutdown is the default. Shutdown fails with a warning message if active connections are found.
+
+Fast shutdown interrupts and rolls back any transactions currently in progress.
+
+Immediate shutdown aborts transactions in progress and kills all `postgres` 
processes without allowing the database server to complete transaction 
processing or clean up any temporary or in-process work files. Because of this, 
immediate shutdown is not recommended. In some instances, it can cause database 
corruption that requires manual recovery.</dd>
+
+<dt>-q, -\\\-quiet  </dt>
+<dd>Run in quiet mode. Command output is not displayed on the screen, but is 
still written to the log file.</dd>
+
+<dt>-t, -\\\-timeout \<timeout\_seconds\>  </dt>
+<dd>Specifies a timeout threshold (in seconds) to wait for a segment instance 
to shutdown. If a segment instance does not shut down in the specified number 
of seconds, `hawq stop` displays a message indicating that one or more segments 
are still in the process of shutting down and that you cannot restart HAWQ 
until the segment instance(s) are stopped. This option is useful in situations 
where `hawq stop` is executed and there are very large transactions that need 
to rollback. These large transactions can take over a minute to rollback and 
surpass the default timeout period of 600 seconds.</dd>
+
+<dt>-u, -\\\-reload   </dt>
+<dd>This option reloads configuration parameter values without restarting the 
HAWQ cluster.</dd>
+
+<dt>-v, -\\\-verbose  </dt>
+<dd>Displays detailed status, progress and error messages output by the 
utility.</dd>
+
+<dt>-?, -h, -\\\-help (help) </dt>
+<dd>Displays the online help.</dd>
+
+
+## <a id="topic1__section5"></a>Examples
+
+Stop a HAWQ system in smart mode:
+
+``` shell
+$ hawq stop cluster -M smart
+```
+
+Stop a HAWQ system in fast mode:
+
+``` shell
+$ hawq stop cluster -M fast
+```
+
+Stop a master instance that was started in maintenance mode:
+
+``` shell
+$ hawq stop master -m
+```
+
+Reload the `hawq-site.xml` and `pg_hba.conf` files after making configuration changes, but do not shut down the HAWQ cluster:
+
+``` shell
+$ hawq stop cluster -u
+```
+
+## <a id="topic1__section6"></a>See Also
+
+[hawq start](hawqstart.html#topic1)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/cli/client_utilities/createdb.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/cli/client_utilities/createdb.html.md.erb 
b/markdown/reference/cli/client_utilities/createdb.html.md.erb
new file mode 100644
index 0000000..31b0c80
--- /dev/null
+++ b/markdown/reference/cli/client_utilities/createdb.html.md.erb
@@ -0,0 +1,105 @@
+---
+title: createdb
+---
+
+Creates a new database.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+createdb [<connection_options>] [<database_options>] [-e | --echo] [<dbname> ['<description>']]
+
+createdb --help 
+
+createdb --version
+```
+where:
+
+``` pre
+<connection_options> =
+       [-h <host> | --host <host>] 
+       [-p <port> | --port <port>] 
+       [-U <username> | --username <username>] 
+       [-w | --no-password] 
+       [-W | --password] 
+         
+<database_options> =
+    [-D <tablespace> | --tablespace <tablespace>]
+    [-E <encoding> | --encoding <encoding>]
+    [-O <username> | --owner <username>] 
+    [-T <template>| --template <template>] 
+```
+
+## <a id="topic1__section3"></a>Description
+
+`createdb` creates a new database in a HAWQ system.
+
+Normally, the database user who executes this command becomes the owner of the new database. However, a different owner can be specified via the `-O` option if the executing user has appropriate privileges.
+
+`createdb` is a wrapper around the SQL command `CREATE DATABASE`.
+
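As a sketch of the mapping between `createdb` flags and the generated SQL (the database and owner names are placeholders; use `createdb -e` to see the actual statement sent), the statement can be built as a string:

```shell
# Sketch: the SQL that a call like "createdb -E LATIN1 -O gpadmin demo"
# amounts to. Built as a string only, to show the flag-to-SQL mapping;
# demo and gpadmin are placeholder names, and no server is contacted.
dbname=demo owner=gpadmin encoding=LATIN1
sql="CREATE DATABASE $dbname OWNER $owner ENCODING '$encoding';"
echo "$sql"
```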
+## <a id="topic1__section4"></a>Options
+
+<dt>**\<dbname\>**</dt>
+<dd>The name of the database to be created. The name must be unique among all 
other databases in the HAWQ system. If not specified, reads from the 
environment variable `PGDATABASE`, then `PGUSER` or defaults to the current 
system user.</dd>
+
+<dt>\<description\></dt>
+<dd>Optional comment to be associated with the newly created database. 
Descriptions containing white space must be enclosed in quotes.</dd>
+
+<dt>-e, -\\\-echo  </dt>
+<dd>Echo the commands that `createdb` generates and sends to the server.</dd>
+
+**\<database_options\>**
+
+<dt>-D, -\\\-tablespace \<tablespace\>  </dt>
+<dd>The default tablespace for the database.</dd>
+
+<dt>-E, -\\\-encoding \<encoding\> </dt>
+<dd>Character set encoding to use in the new database. Specify a string 
constant (such as `'UTF8'`), an integer encoding number, or `DEFAULT` to use 
the default encoding.</dd>
+
+<dt>-O, -\\\-owner \<username\>  </dt>
+<dd>The name of the database user who will own the new database. Defaults to 
the user executing this command.</dd>
+
+<dt>-T, -\\\-template \<template\>  </dt>
+<dd>The name of the template from which to create the new database. Defaults 
to `template1`.</dd>
+
+**\<connection_options\>**
+ 
+<dt>-h, -\\\-host \<hostname\>  </dt>
+<dd>The host name of the machine on which the HAWQ master database server is 
running. If not specified, reads from the environment variable `PGHOST` or 
defaults to localhost.</dd>
+
+<dt>-p, -\\\-port \<port\>  </dt>
+<dd>The TCP port on which the HAWQ master database server is listening for 
connections. If not specified, reads from the environment variable `PGPORT` or 
defaults to 5432.</dd>
+
+<dt>-U, -\\\-username \<username\>  </dt>
+<dd>The database role name to connect as. If not specified, reads from the 
environment variable `PGUSER` or defaults to the current system role name.</dd>
+
+<dt>-w, -\\\-no-password  </dt>
+<dd>Never issue a password prompt. If the server requires password 
authentication and a password is not available by other means such as a 
`.pgpass` file, the connection attempt will fail. This option can be useful in 
batch jobs and scripts where no user is present to enter a password.</dd>
+
+<dt>-W, -\\\-password  </dt>
+<dd>Force a password prompt.</dd>
+
+
+**Other Options**
+
+<dt>-\\\-help  </dt>
+<dd>Displays the online help.</dd>
+
+<dt>-\\\-version  </dt>
+<dd>Displays the version of this utility.</dd>
+
+## <a id="topic1__section6"></a>Examples
+
+To create the database `testdb` using the default options:
+
+``` shell
+$ createdb testdb
+```
+
+To create the database `demo` using the HAWQ master on host `gpmaster`, port 
`54321`, using the `LATIN1` encoding scheme:
+
+``` shell
+$ createdb -p 54321 -h gpmaster -E LATIN1 demo
+```

