Repository: incubator-hawq-docs
Updated Branches:
  refs/heads/develop 97317c4df -> 459e3bc7d


changing to use relative links instead of root links, to account for different 
versions of the docs
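The change applied throughout this commit is a mechanical substitution: the version-rooted prefix `/20/` becomes the relative prefix `../`, so the same page source works under any versioned doc root. The actual tooling used is not recorded in the commit; the `sed` invocation below is a hypothetical sketch of such a rewrite. Note the hazard of a blind substitution: it also matches non-link text that happens to contain `/20/`, such as the date `01/20/2013`, which this commit's pljava hunk shows becoming `01../2013`.

```shell
# Hypothetical reproduction of the link rewrite (assumed tooling, not from the commit):
# replace the version-rooted prefix /20/ with a relative ../ prefix.
echo 'See [ALTER TABLE](/20/reference/sql/ALTER-TABLE.html).' \
  | sed 's|/20/|../|g'

# Hazard: a blind substitution also rewrites dates, e.g. 01/20/2013 -> 01../2013.
# Anchoring on the markdown link's opening "(" avoids that:
echo 'Build-Date: 01/20/2013 and [link](/20/admin/startstop.html)' \
  | sed 's|(/20/|(../|g'
```

A safer batch variant over the repo would anchor the same way, e.g. `sed -i 's|(/20/|(../|g' $(git grep -l '(/20/')`, though link targets in HTML attributes (as in the monitoring-tasks table's `href="/20/..."`) would need a matching rule for `"/20/` as well.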


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/commit/459e3bc7
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/tree/459e3bc7
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/diff/459e3bc7

Branch: refs/heads/develop
Commit: 459e3bc7dc45994718960e2f33b09b7969dea1ac
Parents: 97317c4
Author: David Yozie <[email protected]>
Authored: Wed Sep 28 14:33:02 2016 -0700
Committer: David Yozie <[email protected]>
Committed: Wed Sep 28 14:33:02 2016 -0700

----------------------------------------------------------------------
 admin/ClusterExpansion.html.md.erb                   | 10 +++++-----
 admin/RecommendedMonitoringTasks.html.md.erb         |  4 ++--
 admin/ambari-admin.html.md.erb                       | 10 +++++-----
 admin/startstop.html.md.erb                          |  4 ++--
 ddl/ddl-partition.html.md.erb                        |  2 +-
 ddl/ddl-table.html.md.erb                            |  4 ++--
 overview/HAWQArchitecture.html.md.erb                |  4 ++--
 overview/ManagementTools.html.md.erb                 |  2 +-
 overview/RedundancyFailover.html.md.erb              |  2 +-
 overview/ResourceManagement.html.md.erb              |  2 +-
 overview/TableDistributionStorage.html.md.erb        |  6 +++---
 plext/using_pljava.html.md.erb                       |  4 ++--
 reference/HAWQEnvironmentVariables.html.md.erb       |  2 +-
 reference/guc/parameter_definitions.html.md.erb      |  4 ++--
 reference/sql/ALTER-RESOURCE-QUEUE.html.md.erb       |  4 ++--
 reference/sql/CREATE-RESOURCE-QUEUE.html.md.erb      |  4 ++--
 resourcemgmt/ConfigureResourceManagement.html.md.erb |  6 +++---
 resourcemgmt/HAWQResourceManagement.html.md.erb      |  2 +-
 resourcemgmt/ResourceQueues.html.md.erb              |  8 ++++----
 resourcemgmt/YARNIntegration.html.md.erb             |  2 +-
 resourcemgmt/best-practices.html.md.erb              |  4 ++--
 troubleshooting/Troubleshooting.html.md.erb          | 10 +++++-----
 22 files changed, 50 insertions(+), 50 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/459e3bc7/admin/ClusterExpansion.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/ClusterExpansion.html.md.erb b/admin/ClusterExpansion.html.md.erb
index d74881e..d99c760 100644
--- a/admin/ClusterExpansion.html.md.erb
+++ b/admin/ClusterExpansion.html.md.erb
@@ -4,7 +4,7 @@ title: Expanding a Cluster
 
 Apache HAWQ supports dynamic node expansion. You can add segment nodes while 
HAWQ is running without having to suspend or terminate cluster operations.
 
-**Note:** This topic describes how to expand a cluster using the command-line 
interface. If you are using Ambari to manage your HAWQ cluster, see [Expanding 
the HAWQ Cluster](/20/admin/ambari-admin.html#amb-expand) in [Managing HAWQ 
Using Ambari](/20/admin/ambari-admin.html)
+**Note:** This topic describes how to expand a cluster using the command-line 
interface. If you are using Ambari to manage your HAWQ cluster, see [Expanding 
the HAWQ Cluster](../admin/ambari-admin.html#amb-expand) in [Managing HAWQ 
Using Ambari](../admin/ambari-admin.html)
 
 ## <a id="topic_kkc_tgb_h5"></a>Guidelines for Cluster Expansion 
 
@@ -15,12 +15,12 @@ There are several recommendations to keep in mind when 
modifying the size of you
 -   When you add a new node, install both a DataNode and a physical segment on 
the new node.
 -   After adding a new node, you should always rebalance HDFS data to maintain 
cluster performance.
 -   Adding or removing a node also necessitates an update to the HDFS metadata 
cache. This update will happen eventually, but can take some time. To speed the 
update of the metadata cache, execute **`select gp_metadata_cache_clear();`**.
--   Note that for hash distributed tables, expanding the cluster will not 
immediately improve performance since hash distributed tables use a fixed 
number of virtual segments. In order to obtain better performance with hash 
distributed tables, you must redistribute the table to the updated cluster by 
either the [ALTER TABLE](/20/reference/sql/ALTER-TABLE.html) or [CREATE TABLE 
AS](/20/reference/sql/CREATE-TABLE-AS.html) command.
+-   Note that for hash distributed tables, expanding the cluster will not 
immediately improve performance since hash distributed tables use a fixed 
number of virtual segments. In order to obtain better performance with hash 
distributed tables, you must redistribute the table to the updated cluster by 
either the [ALTER TABLE](../reference/sql/ALTER-TABLE.html) or [CREATE TABLE 
AS](../reference/sql/CREATE-TABLE-AS.html) command.
 -   If you are using hash tables, consider updating the 
`default_hash_table_bucket_number` server configuration parameter to a larger 
value after expanding the cluster but before redistributing the hash tables.
 
 ## <a id="task_hawq_expand"></a>Adding a New Node to an Existing HAWQ Cluster 
 
-The following procedure describes the steps required to add a node to an 
existing HAWQ cluster.  First ensure that the new node has been configured per 
the instructions found in [Apache HAWQ System 
Requirements](/20/requirements/system-requirements.html) and [Select HAWQ Host 
Machines](/20/install/select-hosts.html).
+The following procedure describes the steps required to add a node to an 
existing HAWQ cluster.  First ensure that the new node has been configured per 
the instructions found in [Apache HAWQ System 
Requirements](../requirements/system-requirements.html) and [Select HAWQ Host 
Machines](../install/select-hosts.html).
 
 For example purposes in this procedure, we are adding a new node named `sdw4`.
 
@@ -71,7 +71,7 @@ For example purposes in this procedure, we are adding a new 
node named `sdw4`.
         $ hawq ssh-exkeys -e hawq_hosts -x new_hosts
         ```
 
-    8.  (Optional) If you enabled temporary password-based authentication 
while preparing/configuring your new HAWQ host system, turn off password-based 
authentication as described in [Apache HAWQ System 
Requirements](/20/requirements/system-requirements.html#topic_pwdlessssh).
+    8.  (Optional) If you enabled temporary password-based authentication 
while preparing/configuring your new HAWQ host system, turn off password-based 
authentication as described in [Apache HAWQ System 
Requirements](../requirements/system-requirements.html#topic_pwdlessssh).
 
     8.  After setting up passwordless ssh, you can execute the following hawq 
command to check the target machine's configuration.
 
@@ -220,7 +220,7 @@ For example purposes in this procedure, we are adding a new 
node named `sdw4`.
        |\> 256 and <= 512|1 \* \#nodes|
        |\> 512|512| 
    
-18. If you are using hash distributed tables and wish to take advantage of the 
performance benefits of using a larger cluster, redistribute the data in all 
hash-distributed tables by using either the [ALTER 
TABLE](/20/reference/sql/ALTER-TABLE.html) or [CREATE TABLE 
AS](/20/reference/sql/CREATE-TABLE-AS.html) command. You should redistribute 
the table data if you modified the `default_hash_table_bucket_number` 
configuration parameter. 
+18. If you are using hash distributed tables and wish to take advantage of the 
performance benefits of using a larger cluster, redistribute the data in all 
hash-distributed tables by using either the [ALTER 
TABLE](../reference/sql/ALTER-TABLE.html) or [CREATE TABLE 
AS](../reference/sql/CREATE-TABLE-AS.html) command. You should redistribute the 
table data if you modified the `default_hash_table_bucket_number` configuration 
parameter. 
 
 
        **Note:** The redistribution of table data can take a significant 
amount of time.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/459e3bc7/admin/RecommendedMonitoringTasks.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/RecommendedMonitoringTasks.html.md.erb b/admin/RecommendedMonitoringTasks.html.md.erb
index 3007aee..5083b44 100644
--- a/admin/RecommendedMonitoringTasks.html.md.erb
+++ b/admin/RecommendedMonitoringTasks.html.md.erb
@@ -188,7 +188,7 @@ GROUP BY 1;
   </tr>
   <tr>
     <td>
-    <p>Vacuum all system catalogs (tables in the <code>pg_catalog</code> 
schema) that are approaching <a 
href="/20/reference/guc/parameter_definitions.html">vacuum_freeze_min_age</a>.</p>
+    <p>Vacuum all system catalogs (tables in the <code>pg_catalog</code> 
schema) that are approaching <a 
href="../reference/guc/parameter_definitions.html">vacuum_freeze_min_age</a>.</p>
     <p>Recommended frequency: daily</p>
     <p>Severity: CRITICAL</p>
     </td>
@@ -196,7 +196,7 @@ GROUP BY 1;
       <p><p>Vacuum an individual system catalog table:</p>
       <pre><code>VACUUM &lt;<i>table</i>&gt;;</code></pre>
     </td>
-    <td>After the <a 
href="/20/reference/guc/parameter_definitions.html">vacuum_freeze_min_age</a> 
value is reached, VACUUM will no longer replace transaction IDs with 
<code>FrozenXID</code> while scanning a table. Perform vacuum on these tables 
before the limit is reached.</td>
+    <td>After the <a 
href="../reference/guc/parameter_definitions.html">vacuum_freeze_min_age</a> 
value is reached, VACUUM will no longer replace transaction IDs with 
<code>FrozenXID</code> while scanning a table. Perform vacuum on these tables 
before the limit is reached.</td>
   </tr>
     <td>
       <p>Update table statistics.</p>

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/459e3bc7/admin/ambari-admin.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/ambari-admin.html.md.erb b/admin/ambari-admin.html.md.erb
index 6b5f1e1..e41adc6 100644
--- a/admin/ambari-admin.html.md.erb
+++ b/admin/ambari-admin.html.md.erb
@@ -8,7 +8,7 @@ Ambari provides an easy interface to perform some of the most 
common HAWQ and PX
 
 HAWQ supports integration with YARN for global resource management. In a YARN 
managed environment, HAWQ can request resources (containers) dynamically from 
YARN, and return resources when HAWQ’s workload is not heavy.
 
-See also [Integrating YARN with HAWQ](/20/resourcemgmt/YARNIntegration.html) 
for command-line instructions and additional details about using HAWQ with YARN.
+See also [Integrating YARN with HAWQ](../resourcemgmt/YARNIntegration.html) 
for command-line instructions and additional details about using HAWQ with YARN.
 
 ### When to Perform
 
@@ -156,11 +156,11 @@ There are several recommendations to keep in mind when 
modifying the size of you
 -  When you add a new node, install both a DataNode and a HAWQ segment on the 
new node.
 -  After adding a new node, you should always rebalance HDFS data to maintain 
cluster performance.
 -  Adding or removing a node also necessitates an update to the HDFS metadata 
cache. This update will happen eventually, but can take some time. To speed the 
update of the metadata cache, select the **Service Actions > Clear HAWQ's HDFS 
Metadata Cache** option in Ambari.
--  Note that for hash distributed tables, expanding the cluster will not 
immediately improve performance since hash distributed tables use a fixed 
number of virtual segments. In order to obtain better performance with hash 
distributed tables, you must redistribute the table to the updated cluster by 
either the [ALTER TABLE](/20/reference/sql/ALTER-TABLE.html) or [CREATE TABLE 
AS](/20/reference/sql/CREATE-TABLE-AS.html) command.
+-  Note that for hash distributed tables, expanding the cluster will not 
immediately improve performance since hash distributed tables use a fixed 
number of virtual segments. In order to obtain better performance with hash 
distributed tables, you must redistribute the table to the updated cluster by 
either the [ALTER TABLE](../reference/sql/ALTER-TABLE.html) or [CREATE TABLE 
AS](../reference/sql/CREATE-TABLE-AS.html) command.
 -  If you are using hash tables, consider updating the 
`default_hash_table_bucket_number` server configuration parameter to a larger 
value after expanding the cluster but before redistributing the hash tables.
 
 ### Procedure
-First ensure that the new node(s) has been configured per the instructions 
found in [Apache HAWQ System 
Requirements](/20/requirements/system-requirements.html) and [Select HAWQ Host 
Machines](/20/install/select-hosts.html).
+First ensure that the new node(s) has been configured per the instructions 
found in [Apache HAWQ System 
Requirements](../requirements/system-requirements.html) and [Select HAWQ Host 
Machines](../install/select-hosts.html).
 
 1.  If you have any user-defined function (UDF) libraries installed in your 
existing HAWQ cluster, install them on the new node(s) that you want to add to 
the HAWQ cluster.
 2.  Access the Ambari web console at http://ambari.server.hostname:8080, and 
login as the "admin" user. \(The default password is also "admin".\)
@@ -199,12 +199,12 @@ First ensure that the new node(s) has been configured per 
the instructions found
 19.  Consider the impact of rebalancing HDFS to other components, such as 
HBase, before you complete this step.
     <br/><br/>Rebalance your HDFS data by selecting the **HDFS** service and 
then choosing **Service Actions > Rebalance HDFS**. Follow the Ambari 
instructions to complete the rebalance action.
 20.  Speed up the clearing of the metadata cache by first selecting the 
**HAWQ** service and then selecting **Service Actions > Clear HAWQ's HDFS 
Metadata Cache**.
-21.  If you are using hash distributed tables and wish to take advantage of 
the performance benefits of using a larger cluster, redistribute the data in 
all hash-distributed tables by using either the [ALTER 
TABLE](/20/reference/sql/ALTER-TABLE.html) or [CREATE TABLE 
AS](/20/reference/sql/CREATE-TABLE-AS.html) command. You should redistribute 
the table data if you modified the `default_hash_table_bucket_number` 
configuration parameter.
+21.  If you are using hash distributed tables and wish to take advantage of 
the performance benefits of using a larger cluster, redistribute the data in 
all hash-distributed tables by using either the [ALTER 
TABLE](../reference/sql/ALTER-TABLE.html) or [CREATE TABLE 
AS](../reference/sql/CREATE-TABLE-AS.html) command. You should redistribute the 
table data if you modified the `default_hash_table_bucket_number` configuration 
parameter.
 
     **Note:** The redistribution of table data can take a significant amount 
of time.
 22.  (Optional.) If you changed the **Exchange SSH Keys** property value 
before adding the host(s), change the value back to `false` after Ambari 
exchanges keys with the new hosts. This prevents Ambari from exchanging keys 
with all hosts every time the HAWQ master is started or restarted.
 
-23.  (Optional.) If you enabled temporary password-based authentication while 
preparing/configuring your HAWQ host systems, turn off password-based 
authentication as described in [Apache HAWQ System 
Requirements](/20/requirements/system-requirements.html#topic_pwdlessssh).
+23.  (Optional.) If you enabled temporary password-based authentication while 
preparing/configuring your HAWQ host systems, turn off password-based 
authentication as described in [Apache HAWQ System 
Requirements](../requirements/system-requirements.html#topic_pwdlessssh).
 
 #### <a id="manual-config-steps"></a>Manually Updating the HAWQ Configuration
 If you need to expand your HAWQ cluster without restarting the HAWQ service, 
follow these steps to manually apply the new HAWQ configuration. (Use these 
steps *instead* of following Step 7 in the above procedure.):

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/459e3bc7/admin/startstop.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/startstop.html.md.erb b/admin/startstop.html.md.erb
index 106fa7d..5c07f96 100644
--- a/admin/startstop.html.md.erb
+++ b/admin/startstop.html.md.erb
@@ -12,7 +12,7 @@ Use the `hawq start `*`object`* and `hawq stop `*`object`* 
commands to start and
 
 Do not issue a `KILL` command to end any Postgres process. Instead, use the 
database command `pg_cancel_backend()`.
 
-For information about [hawq 
start](/20/reference/cli/admin_utilities/hawqstart.html) and [hawq 
stop](/20/reference/cli/admin_utilities/hawqstop.html), see the appropriate 
pages in the HAWQ Management Utility Reference or enter `hawq start -h` or 
`hawq stop -h` on the command line.
+For information about [hawq 
start](../reference/cli/admin_utilities/hawqstart.html) and [hawq 
stop](../reference/cli/admin_utilities/hawqstop.html), see the appropriate 
pages in the HAWQ Management Utility Reference or enter `hawq start -h` or 
`hawq stop -h` on the command line.
 
 ## <a id="task_g1y_xtm_s5"></a>Initialize HAWQ 
 
@@ -68,7 +68,7 @@ The `hawq restart` command with the appropriate cluster or 
node command can stop
 
 Reload changes to the HAWQ configuration files without interrupting the system.
 
-The `hawq stop` command can reload changes to the pg\_hba.conf configuration 
file and to *runtime* parameters in the hawq-site.xml file and pg\_hba.conf 
file without service interruption. Active sessions pick up changes when they 
reconnect to the database. Many server configuration parameters require a full 
system restart \(`hawq restart cluster`\) to activate. For information about 
server configuration parameters, see the [Server Configuration Parameter 
Reference](/20/reference/guc/guc_config.html).
+The `hawq stop` command can reload changes to the pg\_hba.conf configuration 
file and to *runtime* parameters in the hawq-site.xml file and pg\_hba.conf 
file without service interruption. Active sessions pick up changes when they 
reconnect to the database. Many server configuration parameters require a full 
system restart \(`hawq restart cluster`\) to activate. For information about 
server configuration parameters, see the [Server Configuration Parameter 
Reference](../reference/guc/guc_config.html).
 
 -   Reload configuration file changes without shutting down the system using 
the `hawq stop` command:
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/459e3bc7/ddl/ddl-partition.html.md.erb
----------------------------------------------------------------------
diff --git a/ddl/ddl-partition.html.md.erb b/ddl/ddl-partition.html.md.erb
index 39fd43c..f790161 100644
--- a/ddl/ddl-partition.html.md.erb
+++ b/ddl/ddl-partition.html.md.erb
@@ -37,7 +37,7 @@ Not all tables are good candidates for partitioning. If the 
answer is *yes* to a
 
 Do not create more partitions than are needed. Creating too many partitions 
can slow down management and maintenance jobs, such as vacuuming, recovering 
segments, expanding the cluster, checking disk usage, and others.
 
-Partitioning does not improve query performance unless the query optimizer can 
eliminate partitions based on the query predicates. Queries that scan every 
partition run slower than if the table were not partitioned, so avoid 
partitioning if few of your queries achieve partition elimination. Check the 
explain plan for queries to make sure that partitions are eliminated. See 
[Query Profiling](/20/query/query-profiling.html) for more about partition 
elimination.
+Partitioning does not improve query performance unless the query optimizer can 
eliminate partitions based on the query predicates. Queries that scan every 
partition run slower than if the table were not partitioned, so avoid 
partitioning if few of your queries achieve partition elimination. Check the 
explain plan for queries to make sure that partitions are eliminated. See 
[Query Profiling](../query/query-profiling.html) for more about partition 
elimination.
 
 Be very careful with multi-level partitioning because the number of partition 
files can grow very quickly. For example, if a table is partitioned by both day 
and city, and there are 1,000 days of data and 1,000 cities, the total number 
of partitions is one million. Column-oriented tables store each column in a 
physical table, so if this table has 100 columns, the system would be required 
to manage 100 million files for the table.
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/459e3bc7/ddl/ddl-table.html.md.erb
----------------------------------------------------------------------
diff --git a/ddl/ddl-table.html.md.erb b/ddl/ddl-table.html.md.erb
index 7120031..62ece36 100644
--- a/ddl/ddl-table.html.md.erb
+++ b/ddl/ddl-table.html.md.erb
@@ -12,7 +12,7 @@ The `CREATE TABLE` command creates a table and defines its 
structure. When you c
 -   Any table constraints to limit the data that a column or table can 
contain. See [Setting Table Constraints](#topic28).
 -   The distribution policy of the table, which determines how HAWQ divides 
data is across the segments. See [Choosing the Table Distribution 
Policy](#topic34).
 -   The way the table is stored on disk.
--   The table partitioning strategy for large tables, which specifies how the 
data should be divided. See [Creating and Managing 
Databases](/20/ddl/ddl-database.html).
+-   The table partitioning strategy for large tables, which specifies how the 
data should be divided. See [Creating and Managing 
Databases](../ddl/ddl-database.html).
 
 ### <a id="topic27"></a>Choosing Column Data Types 
 
@@ -126,7 +126,7 @@ For hash tables, the `SELECT INTO` function always uses 
random distribution.
 
 `CREATE TABLE`'s optional clause `DISTRIBUTED BY` specifies the distribution 
policy for a table. The default is a random distribution policy. You can also 
choose to distribute data as a hash-based policy, where the `bucketnum` 
attribute sets the number of hash buckets used by a hash-distributed table. 
HASH distributed tables are created with the number of hash buckets specified 
by the `default_hash_table_bucket_number` parameter.
 
-Policies for different application scenarios can be specified to optimize 
performance. The number of virtual segments used for query execution can now be 
tuned using the `hawq_rm_nvseg_perquery_limit `and 
`hawq_rm_nvseg_perquery_perseg_limit` parameters, in connection with the 
`default_hash_table_bucket_number` parameter, which sets the default 
`bucketnum`. For more information, see the guidelines for Virtual Segments in 
the next section and in [Query 
Performance](/20/query/query-performance.html#topic38).
+Policies for different application scenarios can be specified to optimize 
performance. The number of virtual segments used for query execution can now be 
tuned using the `hawq_rm_nvseg_perquery_limit `and 
`hawq_rm_nvseg_perquery_perseg_limit` parameters, in connection with the 
`default_hash_table_bucket_number` parameter, which sets the default 
`bucketnum`. For more information, see the guidelines for Virtual Segments in 
the next section and in [Query 
Performance](../query/query-performance.html#topic38).
 
 #### <a id="topic_wff_mqm_gv"></a>Performance Tuning 
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/459e3bc7/overview/HAWQArchitecture.html.md.erb
----------------------------------------------------------------------
diff --git a/overview/HAWQArchitecture.html.md.erb b/overview/HAWQArchitecture.html.md.erb
index 84fc3c2..d42d241 100755
--- a/overview/HAWQArchitecture.html.md.erb
+++ b/overview/HAWQArchitecture.html.md.erb
@@ -52,7 +52,7 @@ By default, the interconnect uses UDP \(User Datagram 
Protocol\) to send message
 
 The HAWQ resource manager obtains resources from YARN and responds to resource 
requests. Resources are buffered by the HAWQ resource manager to support low 
latency queries. The HAWQ resource manager can also run in standalone mode. In 
these deployments, HAWQ manages resources by itself without YARN.
 
-See [How HAWQ Manages Resources](/20/resourcemgmt/HAWQResourceManagement.html) 
for more details on HAWQ resource management.
+See [How HAWQ Manages Resources](../resourcemgmt/HAWQResourceManagement.html) 
for more details on HAWQ resource management.
 
 ## <a id="topic_mrl_psq_f5"></a>HAWQ Catalog Service 
 
@@ -62,7 +62,7 @@ The HAWQ catalog service stores all metadata, such as UDF/UDT 
information, relat
 
 The HAWQ fault tolerance service \(FTS\) is responsible for detecting segment 
failures and accepting heartbeats from segments.
 
-See [Understanding the Fault Tolerance Service](/20/admin/FaultTolerance.html) 
for more information on this service.
+See [Understanding the Fault Tolerance Service](../admin/FaultTolerance.html) 
for more information on this service.
 
 ## <a id="topic_jtc_nkm_g5"></a>HAWQ Dispatcher 
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/459e3bc7/overview/ManagementTools.html.md.erb
----------------------------------------------------------------------
diff --git a/overview/ManagementTools.html.md.erb b/overview/ManagementTools.html.md.erb
index 39f072b..0c7439d 100755
--- a/overview/ManagementTools.html.md.erb
+++ b/overview/ManagementTools.html.md.erb
@@ -6,4 +6,4 @@ HAWQ management tools are consolidated into one `hawq` command.
 
 The `hawq` command can init, start and stop each segment separately, and 
supports dynamic expansion of the cluster.
 
-See [HAWQ Management Tools Reference](/20/reference/cli/management_tools.html) 
for a list of all tools available in HAWQ.
+See [HAWQ Management Tools Reference](../reference/cli/management_tools.html) 
for a list of all tools available in HAWQ.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/459e3bc7/overview/RedundancyFailover.html.md.erb
----------------------------------------------------------------------
diff --git a/overview/RedundancyFailover.html.md.erb b/overview/RedundancyFailover.html.md.erb
index bf439f6..90eec63 100755
--- a/overview/RedundancyFailover.html.md.erb
+++ b/overview/RedundancyFailover.html.md.erb
@@ -13,7 +13,7 @@ HAWQ employs several mechanisms for ensuring high 
availability. The foremost mec
 * Master mirroring. Clusters have a standby master in the event of failure of 
the primary master.
 * Dual clusters. Administrators can create a secondary cluster and 
synchronizes its data with the primary cluster either through dual ETL or 
backup and restore mechanisms.
 
-In addition to high availability managed on the HAWQ level, you can enable 
high availability in HDFS for HAWQ by implementing the high availability 
feature for NameNodes. See [HAWQ Filespaces and High Availability Enabled 
HDFS](/20/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html).
+In addition to high availability managed on the HAWQ level, you can enable 
high availability in HDFS for HAWQ by implementing the high availability 
feature for NameNodes. See [HAWQ Filespaces and High Availability Enabled 
HDFS](../admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html).
 
 
 ## <a id="aboutsegmentfailover"></a>About Segment Fault Tolerance 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/459e3bc7/overview/ResourceManagement.html.md.erb
----------------------------------------------------------------------
diff --git a/overview/ResourceManagement.html.md.erb b/overview/ResourceManagement.html.md.erb
index 5a5adda..8f7e2fd 100755
--- a/overview/ResourceManagement.html.md.erb
+++ b/overview/ResourceManagement.html.md.erb
@@ -11,4 +11,4 @@ HAWQ has the ability to manage resources by using the 
following mechanisms:
 -   Dynamic resource allocation at query runtime. HAWQ dynamically allocates 
resources based on resource queue definitions. HAWQ automatically distributes 
resources based on running \(or queued\) queries and resource queue capacities.
 -   Resource limitations on virtual segments and queries. You can configure 
HAWQ to enforce limits on CPU and memory usage both for virtual segments and 
the resource queues used by queries.
 
-For more details on resource management in HAWQ and how it works, see 
[Managing Resources](/20/resourcemgmt/HAWQResourceManagement.html).
+For more details on resource management in HAWQ and how it works, see 
[Managing Resources](../resourcemgmt/HAWQResourceManagement.html).

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/459e3bc7/overview/TableDistributionStorage.html.md.erb
----------------------------------------------------------------------
diff --git a/overview/TableDistributionStorage.html.md.erb b/overview/TableDistributionStorage.html.md.erb
index 8bf6542..aa03b59 100755
--- a/overview/TableDistributionStorage.html.md.erb
+++ b/overview/TableDistributionStorage.html.md.erb
@@ -16,7 +16,7 @@ Randomly distributed tables have some benefits over hash 
distributed tables. For
 
 On the other hand, for some queries, hash distributed tables are faster than 
randomly distributed tables. For example, hash distributed tables have some 
performance benefits for some TPC-H queries. You should choose the distribution 
policy that is best suited for your application's scenario.
 
-See [Choosing the Table Distribution Policy](/20/ddl/ddl-table.html) for more 
details.
+See [Choosing the Table Distribution Policy](../ddl/ddl-table.html) for more 
details.
 
 ## Data Locality
 
@@ -33,9 +33,9 @@ PXF is an extensible framework that allows HAWQ to access 
data in external
 sources as readable or writable HAWQ tables. PXF has built-in connectors for
 accessing data inside HDFS files, Hive tables, and HBase tables. PXF also
 integrates with HCatalog to query Hive tables directly. See [Working with PXF
-and External Data](/20/pxf/HawqExtensionFrameworkPXF.html) for more
+and External Data](../pxf/HawqExtensionFrameworkPXF.html) for more
 details.
 
 Users can create custom PXF connectors to access other parallel data stores or
 processing engines. Connectors are Java plug-ins that use the PXF API. For more
-information see [PXF External Tables and 
API](/20/pxf/PXFExternalTableandAPIReference.html).
+information see [PXF External Tables and 
API](../pxf/PXFExternalTableandAPIReference.html).

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/459e3bc7/plext/using_pljava.html.md.erb
----------------------------------------------------------------------
diff --git a/plext/using_pljava.html.md.erb b/plext/using_pljava.html.md.erb
index 3cce857..d19fbbe 100644
--- a/plext/using_pljava.html.md.erb
+++ b/plext/using_pljava.html.md.erb
@@ -59,7 +59,7 @@ HAWQ uses the `pljava_classpath` server configuration 
parameter in place of the
 
 The following server configuration parameters are used by PL/Java in HAWQ. 
These parameters replace the `pljava.*` parameters that are used in the 
standard PostgreSQL PL/Java implementation.
 
-<p class="note"><b>Note:</b> See the <a 
href="/20/reference/hawq-reference.html">HAWQ Reference</a> for information 
about HAWQ server configuration parameters.</p>
+<p class="note"><b>Note:</b> See the <a 
href="../reference/hawq-reference.html">HAWQ Reference</a> for information 
about HAWQ server configuration parameters.</p>
 
 #### pljava\_classpath
 
@@ -597,7 +597,7 @@ Main-Class: Example
 Specification-Title: "Example"
 Specification-Version: "1.0"
 Created-By: 1.6.0_35-b10-428-11M3811
-Build-Date: 01/20/2013 10:09 AM
+Build-Date: 01../2013 10:09 AM
 ```
 
 Compile the Java code:

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/459e3bc7/reference/HAWQEnvironmentVariables.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/HAWQEnvironmentVariables.html.md.erb b/reference/HAWQEnvironmentVariables.html.md.erb
index 8061781..ce21798 100644
--- a/reference/HAWQEnvironmentVariables.html.md.erb
+++ b/reference/HAWQEnvironmentVariables.html.md.erb
@@ -66,7 +66,7 @@ The password used if the server demands password authentication. Use of this env
 
 The name of the password file to use for lookups. If not set, it defaults to `~/.pgpass`.
 
-See The Password File under [Configuring Client Authentication](/20/clientaccess/client_auth.html).
+See The Password File under [Configuring Client Authentication](../clientaccess/client_auth.html).
 
 ### <a id="pgoptions"></a>PGOPTIONS
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/459e3bc7/reference/guc/parameter_definitions.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/guc/parameter_definitions.html.md.erb 
b/reference/guc/parameter_definitions.html.md.erb
index 645335b..2012155 100644
--- a/reference/guc/parameter_definitions.html.md.erb
+++ b/reference/guc/parameter_definitions.html.md.erb
@@ -2458,7 +2458,7 @@ Sets the maximum number of append-only relations that can be written to or loade
 
 ## <a name="max_connections"></a>max\_connections
 
-The maximum number of concurrent connections allowed on master. In a HAWQ system, user client connections go through the HAWQ master instance only. Segment instances should allow 5-10 times the amount as the master. When you increase this parameter, you must increase [max\_prepared\_transactions](#max_prepared_transactions) as well. For more information about limiting concurrent connections, see [Configuring Client Authentication](/20/clientaccess/client_auth.html).
+The maximum number of concurrent connections allowed on master. In a HAWQ system, user client connections go through the HAWQ master instance only. Segment instances should allow 5-10 times the amount as the master. When you increase this parameter, you must increase [max\_prepared\_transactions](#max_prepared_transactions) as well. For more information about limiting concurrent connections, see [Configuring Client Authentication](../../clientaccess/client_auth.html).
 
 Increasing this parameter may cause HAWQ to request more shared memory. See [shared\_buffers](#shared_buffers) for information about HAWQ server instance shared memory buffers.
 
@@ -2828,7 +2828,7 @@ Specifies the order in which schemas are searched when an object is referenced b
 
 ## <a name="seg_max_connections"></a>seg\_max\_connections
 
-The maximum number of concurrent connections on a segment. In a HAWQ system, user client connections go through the HAWQ master instance only. Segment instances should allow 5-10 times the amount of connections allowed on the master (see [max\_connections](#max_connections).) When you increase this parameter, you must increase [max\_prepared\_transactions](#max_prepared_transactions) as well. For more information about limiting concurrent connections, see [Configuring Client Authentication](/20/clientaccess/client_auth.html).
+The maximum number of concurrent connections on a segment. In a HAWQ system, user client connections go through the HAWQ master instance only. Segment instances should allow 5-10 times the amount of connections allowed on the master (see [max\_connections](#max_connections).) When you increase this parameter, you must increase [max\_prepared\_transactions](#max_prepared_transactions) as well. For more information about limiting concurrent connections, see [Configuring Client Authentication](../../clientaccess/client_auth.html).
 
 Increasing this parameter may cause HAWQ to request more shared memory. See [shared\_buffers](#shared_buffers) for information about HAWQ server instance shared memory buffers.
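
The 5-10x guidance in the two hunks above is simple arithmetic; a hypothetical helper (not a HAWQ utility) makes the relationship between the master and segment settings explicit:

```python
def seg_connection_range(master_max_connections):
    """Suggested seg_max_connections window: 5-10x the master's max_connections."""
    return 5 * master_max_connections, 10 * master_max_connections

# With max_connections = 250 on the master, segments should allow 1250-2500.
seg_connection_range(250)
```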
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/459e3bc7/reference/sql/ALTER-RESOURCE-QUEUE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/ALTER-RESOURCE-QUEUE.html.md.erb 
b/reference/sql/ALTER-RESOURCE-QUEUE.html.md.erb
index e1a31db..ec051e8 100644
--- a/reference/sql/ALTER-RESOURCE-QUEUE.html.md.erb
+++ b/reference/sql/ALTER-RESOURCE-QUEUE.html.md.erb
@@ -40,9 +40,9 @@ When modifying the resource queue, use the MEMORY\_LIMIT\_CLUSTER and CORE\_LIMI
 
 To modify the role associated with the resource queue, use the [ALTER ROLE](ALTER-ROLE.html) or [CREATE ROLE](CREATE-ROLE.html) command. You can only assign roles to the leaf-level resource queues (resource queues that do not have any children.)
 
-The default memory allotment can be overridden on a per-query basis by using `hawq_rm_stmt_vseg_memory` and` hawq_rm_stmt_nvseg` configuration parameters. See [Configuring Resource Quotas for Query Statements](/20/resourcemgmt/ConfigureResourceManagement.html#topic_g2p_zdq_15).
+The default memory allotment can be overridden on a per-query basis by using `hawq_rm_stmt_vseg_memory` and` hawq_rm_stmt_nvseg` configuration parameters. See [Configuring Resource Quotas for Query Statements](../../resourcemgmt/ConfigureResourceManagement.html#topic_g2p_zdq_15).
 
-To see the status of a resource queue, see [Checking Existing Resource Queues](/20/resourcemgmt/ResourceQueues.html#topic_lqy_gls_zt).
+To see the status of a resource queue, see [Checking Existing Resource Queues](../../resourcemgmt/ResourceQueues.html#topic_lqy_gls_zt).
 
 See also [Best Practices for Using Resource Queues](../../bestpractices/managing_resources_bestpractices.html#topic_hvd_pls_wv).
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/459e3bc7/reference/sql/CREATE-RESOURCE-QUEUE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/CREATE-RESOURCE-QUEUE.html.md.erb 
b/reference/sql/CREATE-RESOURCE-QUEUE.html.md.erb
index 8fa6e89..8f9fe93 100644
--- a/reference/sql/CREATE-RESOURCE-QUEUE.html.md.erb
+++ b/reference/sql/CREATE-RESOURCE-QUEUE.html.md.erb
@@ -108,11 +108,11 @@ By default, both limits are set to **-1**, which means the limits are disabled.
 
 ## <a id="topic1__section5"></a>Notes
 
-To check on the configuration of a resource queue, you can query the `pg_resqueue` catalog table. To see the runtime status of all resource queues, you can use the `pg_resqueue_status`. See [Checking Existing Resource Queues](/20/resourcemgmt/ResourceQueues.html#topic_lqy_gls_zt).
+To check on the configuration of a resource queue, you can query the `pg_resqueue` catalog table. To see the runtime status of all resource queues, you can use the `pg_resqueue_status`. See [Checking Existing Resource Queues](../../resourcemgmt/ResourceQueues.html#topic_lqy_gls_zt).
 
 `CREATE RESOURCE QUEUE` cannot be run within a transaction.
 
-To see the status of a resource queue, see [Checking Existing Resource Queues](/20/resourcemgmt/ResourceQueues.html#topic_lqy_gls_zt).
+To see the status of a resource queue, see [Checking Existing Resource Queues](../../resourcemgmt/ResourceQueues.html#topic_lqy_gls_zt).
 
 ## <a id="topic1__section6"></a>Examples
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/459e3bc7/resourcemgmt/ConfigureResourceManagement.html.md.erb
----------------------------------------------------------------------
diff --git a/resourcemgmt/ConfigureResourceManagement.html.md.erb 
b/resourcemgmt/ConfigureResourceManagement.html.md.erb
index cc17e06..23fe860 100644
--- a/resourcemgmt/ConfigureResourceManagement.html.md.erb
+++ b/resourcemgmt/ConfigureResourceManagement.html.md.erb
@@ -81,8 +81,8 @@ In some cases, you may want to specify additional resource quotas on the query s
 
 The following configuration properties allow a user to control resource quotas without altering corresponding resource queues.
 
--   [hawq\_rm\_stmt\_vseg\_memory](/20/reference/guc/parameter_definitions.html)
--   [hawq\_rm\_stmt\_nvseg](/20/reference/guc/parameter_definitions.html)
+-   [hawq\_rm\_stmt\_vseg\_memory](../reference/guc/parameter_definitions.html)
+-   [hawq\_rm\_stmt\_nvseg](../reference/guc/parameter_definitions.html)
 
 However, the changed resource quota for the virtual segment cannot exceed the resource queue’s maximum capacity in HAWQ.
 
@@ -117,4 +117,4 @@ To alleviate the load on NameNode, you can limit V, the number of virtual segmen
 -   `hawq_rm_nvseg_perquery_limit` limits the maximum number of virtual segments that can be used for one statement execution on a cluster-wide level.  The hash buckets defined in `default_hash_table_bucket_number` cannot exceed this number. The default value is 512.
 -   `default_hash_table_bucket_number` defines the number of buckets used by default when you create a hash table. When you query a hash table, the query's virtual segment resources are fixed and allocated based on the bucket number defined for the table. A best practice is to tune this configuration parameter after you expand the cluster.
 
-You can also limit the number of virtual segments used by queries when configuring your resource queues. \(See [CREATE RESOURCE QUEUE](/20/reference/sql/CREATE-RESOURCE-QUEUE.html).\) The global configuration parameters are a hard limit, however, and any limits set on the resource queue or on the statement-level cannot be larger than these limits set on the cluster-wide level.
+You can also limit the number of virtual segments used by queries when configuring your resource queues. \(See [CREATE RESOURCE QUEUE](../reference/sql/CREATE-RESOURCE-QUEUE.html).\) The global configuration parameters are a hard limit, however, and any limits set on the resource queue or on the statement-level cannot be larger than these limits set on the cluster-wide level.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/459e3bc7/resourcemgmt/HAWQResourceManagement.html.md.erb
----------------------------------------------------------------------
diff --git a/resourcemgmt/HAWQResourceManagement.html.md.erb 
b/resourcemgmt/HAWQResourceManagement.html.md.erb
index 095bc9d..dd5c9b3 100644
--- a/resourcemgmt/HAWQResourceManagement.html.md.erb
+++ b/resourcemgmt/HAWQResourceManagement.html.md.erb
@@ -66,4 +66,4 @@ Resource manager adjusts segment localhost original resource capacity from (8192
 Resource manager adjusts segment localhost original global resource manager resource capacity from (8192 MB, 5 CORE) to (5120 MB, 5 CORE)
 ```
 
-See [Viewing the Database Server Log Files](/20/admin/monitor.html#topic28) for more information on working with HAWQ log files.
+See [Viewing the Database Server Log Files](../admin/monitor.html#topic28) for more information on working with HAWQ log files.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/459e3bc7/resourcemgmt/ResourceQueues.html.md.erb
----------------------------------------------------------------------
diff --git a/resourcemgmt/ResourceQueues.html.md.erb 
b/resourcemgmt/ResourceQueues.html.md.erb
index ab71547..2c9ea48 100644
--- a/resourcemgmt/ResourceQueues.html.md.erb
+++ b/resourcemgmt/ResourceQueues.html.md.erb
@@ -30,7 +30,7 @@ The HAWQ resource manager follows several principles when allocating resources t
 
 **Enforcing Limits on Resources**
 
-You can configure HAWQ to enforce limits on resource usage by setting memory and CPU usage limits on both segments and resource queues. See [Configuring Segment Resource Capacity](ConfigureResourceManagement.html) and [Creating Resource Queues](ResourceQueues.html). For some best practices on designing and using resource queues in HAWQ, see [Best Practices for Managing Resources](/20/bestpractices/managing_resources_bestpractices.html).
+You can configure HAWQ to enforce limits on resource usage by setting memory and CPU usage limits on both segments and resource queues. See [Configuring Segment Resource Capacity](ConfigureResourceManagement.html) and [Creating Resource Queues](ResourceQueues.html). For some best practices on designing and using resource queues in HAWQ, see [Best Practices for Managing Resources](../bestpractices/managing_resources_bestpractices.html).
 
 For a high-level overview of how resource management works in HAWQ, see [Managing Resources](HAWQResourceManagement.html).
 
@@ -68,7 +68,7 @@ postgres=# show hawq_rm_nresqueue_limit;
 
 Use CREATE RESOURCE QUEUE to create a new resource queue. Only a superuser can run this DDL statement.
 
-Creating a resource queue involves giving it a name, a parent, setting the CPU and memory limits for the queue, and optionally a limit to the number of active statements on the resource queue. See [CREATE RESOURCE QUEUE](/20/reference/sql/CREATE-RESOURCE-QUEUE.html).
+Creating a resource queue involves giving it a name, a parent, setting the CPU and memory limits for the queue, and optionally a limit to the number of active statements on the resource queue. See [CREATE RESOURCE QUEUE](../reference/sql/CREATE-RESOURCE-QUEUE.html).
 
 **Note:** You can only associate roles and queries with leaf-level resource queues. Leaf-level resource queues are resource queues that do not have any children.
 
@@ -100,7 +100,7 @@ However, when you alter a resource queue, queued resource requests may encounter
 
 To prevent conflicts, HAWQ cancels by default all resource requests that are in conflict with the new resource queue definition. This behavior is controlled by the `hawq_rm_force_alterqueue_cancel_queued_request` server configuration parameter, which is by default set to true \(`on`\). If you set the server configuration parameter `hawq_rm_force_alterqueue_cancel_queued_request` to false, the actions specified in ALTER RESOURCE QUEUE are canceled if the resource manager finds at least one resource request that is in conflict with the new resource definitions supplied in the altering command.
 
-For more information, see [ALTER RESOURCE QUEUE](/20/reference/sql/ALTER-RESOURCE-QUEUE.html).
+For more information, see [ALTER RESOURCE QUEUE](../reference/sql/ALTER-RESOURCE-QUEUE.html).
 
 **Note:** To change the roles \(users\) assigned to a resource queue, use the ALTER ROLE command.
 
@@ -159,7 +159,7 @@ FROM pg_resqueue WHERE rsqname='test_queue_1';
 test_queue_1 |      9800 |         100 | 50%         | 50%       |            2 | even        | mem:128mb         | 0               | 0               | 0                    |1
 ```
 
-The query displays all the attributes and their values of the selected resource queue. See [CREATE RESOURCE QUEUE](/20/reference/sql/CREATE-RESOURCE-QUEUE.html) for a description of these attributes.
+The query displays all the attributes and their values of the selected resource queue. See [CREATE RESOURCE QUEUE](../reference/sql/CREATE-RESOURCE-QUEUE.html) for a description of these attributes.
 
 You can also check the runtime status of existing resource queues by querying the `pg_resqueue_status` view:
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/459e3bc7/resourcemgmt/YARNIntegration.html.md.erb
----------------------------------------------------------------------
diff --git a/resourcemgmt/YARNIntegration.html.md.erb 
b/resourcemgmt/YARNIntegration.html.md.erb
index 790c099..6898f6c 100644
--- a/resourcemgmt/YARNIntegration.html.md.erb
+++ b/resourcemgmt/YARNIntegration.html.md.erb
@@ -143,7 +143,7 @@ However, if you had set `yarn.scheduler.minimum-allocation-mb` to 4GB, then it w
 
 **Note:** If you are specifying 1GB or under for `yarn.scheduler.minimum-allocation-mb` in `yarn-site.xml`, then make sure that the property is an equal subdivision of 1GB. For example, 1024, 512.
 
-See [Handling Segment Resource Fragmentation](/20/troubleshooting/Troubleshooting.html) for general information on resource fragmentation.
+See [Handling Segment Resource Fragmentation](../troubleshooting/Troubleshooting.html) for general information on resource fragmentation.
 
 ## <a id="topic_rtd_cjh_15"></a>Enabling YARN Mode in HAWQ 
 

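The "equal subdivision of 1GB" rule in the hunk above simply means the value must divide 1024 MB evenly; a hedged sketch of the check (the function is illustrative, not part of YARN or HAWQ):

```python
def divides_1gb_evenly(mb):
    """True if an MB value is an equal subdivision of 1GB (1024 MB)."""
    return mb > 0 and 1024 % mb == 0

# 1024 and 512 pass; 500 would leave a remainder and risk fragmentation.
[m for m in (1024, 512, 500) if divides_1gb_evenly(m)]
```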
http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/459e3bc7/resourcemgmt/best-practices.html.md.erb
----------------------------------------------------------------------
diff --git a/resourcemgmt/best-practices.html.md.erb 
b/resourcemgmt/best-practices.html.md.erb
index db69871..74bd815 100644
--- a/resourcemgmt/best-practices.html.md.erb
+++ b/resourcemgmt/best-practices.html.md.erb
@@ -6,10 +6,10 @@ When configuring resource management, you can apply certain best practices to en
 
 The following is a list of high-level best practices for optimal resource management:
 
--   Make sure segments do not have identical IP addresses. See [Segments Do Not Appear in gp\_segment\_configuration](/20/troubleshooting/Troubleshooting.html) for an explanation of this problem.
+-   Make sure segments do not have identical IP addresses. See [Segments Do Not Appear in gp\_segment\_configuration](../troubleshooting/Troubleshooting.html) for an explanation of this problem.
 -   Configure all segments to have the same resource capacity. See [Configuring Segment Resource Capacity](ConfigureResourceManagement.html).
 -   To prevent resource fragmentation, ensure that your deployment's segment resource capacity \(standalone mode\) or YARN node resource capacity \(YARN mode\) is a multiple of all virtual segment resource quotas. See [Configuring Segment Resource Capacity](ConfigureResourceManagement.html) \(HAWQ standalone mode\) and [Setting HAWQ Segment Resource Capacity in YARN](YARNIntegration.html).
--   Ensure that enough registered segments are available and usable for query resource requests. If the number of unavailable or unregistered segments is higher than a set limit, then query resource requests are rejected. Also ensure that the variance of dispatched virtual segments across physical segments is not greater than the configured limit. See [Rejection of Query Resource Requests](/20/troubleshooting/Troubleshooting.html).
+-   Ensure that enough registered segments are available and usable for query resource requests. If the number of unavailable or unregistered segments is higher than a set limit, then query resource requests are rejected. Also ensure that the variance of dispatched virtual segments across physical segments is not greater than the configured limit. See [Rejection of Query Resource Requests](../troubleshooting/Troubleshooting.html).
 -   Use multiple master and segment temporary directories on separate, large disks (2TB or greater) to load balance writes to temporary files (for example, `/disk1/tmp /disk2/tmp`). For a given query, HAWQ will use a separate temp directory (if available) for each virtual segment to store spill files. Multiple HAWQ sessions will also use separate temp directories where available to avoid disk contention. If you configure too few temp directories, or you place multiple temp directories on the same disk, you increase the risk of disk contention or running out of disk space when multiple virtual segments target the same disk. 
 -   Configure minimum resource levels in YARN, and tune the timeout of when idle resources are returned to YARN. See [Tune HAWQ Resource Negotiations with YARN](YARNIntegration.html).
 -   Make sure that the property `yarn.scheduler.minimum-allocation-mb` in `yarn-site.xml` is an equal subdivision of 1GB. For example, 1024, 512. See [Setting HAWQ Segment Resource Capacity in YARN](YARNIntegration.html#topic_pzf_kqn_c5).

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/459e3bc7/troubleshooting/Troubleshooting.html.md.erb
----------------------------------------------------------------------
diff --git a/troubleshooting/Troubleshooting.html.md.erb 
b/troubleshooting/Troubleshooting.html.md.erb
index 3589ce2..2b7414b 100644
--- a/troubleshooting/Troubleshooting.html.md.erb
+++ b/troubleshooting/Troubleshooting.html.md.erb
@@ -21,12 +21,12 @@ A query is not executing as quickly as you would expect. Here is how to investig
     2.  Are there many failed disks?
 
 2.  Check table statistics. Have the tables involved in the query been analyzed?
-3.  Check the plan of the query and run [`EXPLAIN ANALYZE`](/20/reference/sql/EXPLAIN.html) to determine the bottleneck. 
+3.  Check the plan of the query and run [`EXPLAIN ANALYZE`](../reference/sql/EXPLAIN.html) to determine the bottleneck. 
     Sometimes, there is not enough memory for some operators, such as Hash Join, or spill files are used. If an operator cannot perform all of its work in the memory allocated to it, it caches data on disk in *spill files*. Compared with no spill files, a query will run much slower.
 
-4.  Check data locality statistics using [`EXPLAIN ANALYZE`](/20/reference/sql/EXPLAIN.html). Alternately you can check the logs. Data locality result for every query could also be found in the log of HAWQ. See [Data Locality Statistics](../query/query-performance.html#topic_amk_drc_d5) for information on the statistics.
-5.  Check resource queue status. You can query view `pg_resqueue_status` to check if the target queue has already dispatched some resource to the queries, or if the target queue is lacking resources. See [Checking Existing Resource Queues](/20/resourcemgmt/ResourceQueues.html#topic_lqy_gls_zt).
-6.  Analyze a dump of the resource manager's status to see more resource queue status. See [Analyzing Resource Manager Status](/20/resourcemgmt/ResourceQueues.html#topic_zrh_pkc_f5).
+4.  Check data locality statistics using [`EXPLAIN ANALYZE`](../reference/sql/EXPLAIN.html). Alternately you can check the logs. Data locality result for every query could also be found in the log of HAWQ. See [Data Locality Statistics](../query/query-performance.html#topic_amk_drc_d5) for information on the statistics.
+5.  Check resource queue status. You can query view `pg_resqueue_status` to check if the target queue has already dispatched some resource to the queries, or if the target queue is lacking resources. See [Checking Existing Resource Queues](../resourcemgmt/ResourceQueues.html#topic_lqy_gls_zt).
+6.  Analyze a dump of the resource manager's status to see more resource queue status. See [Analyzing Resource Manager Status](../resourcemgmt/ResourceQueues.html#topic_zrh_pkc_f5).
 
 ## <a id="topic_vm5_znx_15"></a>Rejection of Query Resource Requests
 
@@ -94,7 +94,7 @@ Different HAWQ resource queues can have different virtual segment resource quota
 
 In standalone mode, the segment resources are all exclusively occupied by HAWQ. Resource fragmentation can occur when segment capacity is not a multiple of a virtual segment resource quota. For example, a segment has 15GB memory capacity, but the virtual segment resource quota is set to 2GB. The maximum possible memory consumption in a segment is 14GB. Therefore, you should configure segment resource capacity as a multiple of all virtual segment resource quotas.
 
-In YARN mode, resources are allocated from the YARN resource manager. The HAWQ resource manager acquires a YARN container by 1 vcore. For example, if YARN reports that a segment having 64GB memory and 16 vcore is configured for YARN applications, HAWQ requests YARN containers by 4GB memory and 1 vcore. In this manner, HAWQ resource manager acquires YARN containers on demand. If the capacity of the YARN container is not a multiple of the virtual segment resource quota, resource fragmentation may occur. For example, if the YARN container resource capacity is 3GB memory 1 vcore, one segment may have 1 or 3 YARN containers for HAWQ query execution. In this situation, if the virtual segment resource quota is 2GB memory, then HAWQ will always have 1 GB memory that cannot be utilized. Therefore, it is recommended to configure YARN node resource capacity carefully to make YARN container resource quota as a multiple of all virtual segment resource quotas. In addition, make sure your CPU to memory ratio is a multiple of the amount configured for `yarn.scheduler.minimum-allocation-mb`. See [Setting HAWQ Segment Resource Capacity in YARN](/20/resourcemgmt/YARNIntegration.html#topic_pzf_kqn_c5) for more information.
+In YARN mode, resources are allocated from the YARN resource manager. The HAWQ resource manager acquires a YARN container by 1 vcore. For example, if YARN reports that a segment having 64GB memory and 16 vcore is configured for YARN applications, HAWQ requests YARN containers by 4GB memory and 1 vcore. In this manner, HAWQ resource manager acquires YARN containers on demand. If the capacity of the YARN container is not a multiple of the virtual segment resource quota, resource fragmentation may occur. For example, if the YARN container resource capacity is 3GB memory 1 vcore, one segment may have 1 or 3 YARN containers for HAWQ query execution. In this situation, if the virtual segment resource quota is 2GB memory, then HAWQ will always have 1 GB memory that cannot be utilized. Therefore, it is recommended to configure YARN node resource capacity carefully to make YARN container resource quota as a multiple of all virtual segment resource quotas. In addition, make sure your CPU to memory ratio is a multiple of the amount configured for `yarn.scheduler.minimum-allocation-mb`. See [Setting HAWQ Segment Resource Capacity in YARN](../resourcemgmt/YARNIntegration.html#topic_pzf_kqn_c5) for more information.
 
 If resource fragmentation occurs, queued requests are not processed until either some running queries return resources or the global resource manager provides more resources. If you encounter resource fragmentation, you should double check the configured capacities of the resource queues for any errors. For example, an error might be that the global resource manager container's memory to core ratio is not a multiple of virtual segment resource quota.
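
The fragmentation arithmetic this hunk describes (15GB capacity, 2GB quota, 14GB usable) can be sketched as follows; the helper name is illustrative only, not a HAWQ API:

```python
def usable_memory_gb(capacity_gb, vseg_quota_gb):
    """Largest multiple of the virtual segment quota that fits in a segment."""
    return (capacity_gb // vseg_quota_gb) * vseg_quota_gb

# 15GB capacity with a 2GB quota strands 1GB: only 14GB is ever used.
usable_memory_gb(15, 2)
# A capacity that is a multiple of the quota wastes nothing:
usable_memory_gb(16, 2)
```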
 
