http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/resourcemgmt/ConfigureResourceManagement.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/resourcemgmt/ConfigureResourceManagement.html.md.erb 
b/markdown/resourcemgmt/ConfigureResourceManagement.html.md.erb
new file mode 100644
index 0000000..1b66068
--- /dev/null
+++ b/markdown/resourcemgmt/ConfigureResourceManagement.html.md.erb
@@ -0,0 +1,120 @@
+---
+title: Configuring Resource Management
+---
+
+This topic provides configuration information for system administrators and 
database superusers responsible for managing resources in a HAWQ system.
+
+To configure resource management in HAWQ, follow these high-level steps:
+
+1.  Decide which kind of resource management you need in your HAWQ deployment. 
HAWQ supports two modes of global resource management:
+    -   Standalone mode, or no global resource management. When configured to 
run in standalone mode, HAWQ consumes cluster node resources without 
considering the resource requirements of co-existing applications, and the HAWQ 
resource manager assumes it can use all the resources from registered segments, 
unless configured otherwise. See [Using Standalone Mode](#topic_url_pls_zt).
+    -   External global resource manager mode. Currently HAWQ supports YARN as 
a global resource manager. When you configure YARN as the global resource 
manager in a HAWQ cluster, HAWQ becomes an unmanaged YARN application. HAWQ 
negotiates resources with the YARN resource manager to consume YARN cluster 
resources.
+2.  If you are using standalone mode for HAWQ resource management, decide whether to limit the amount of memory and CPU usage allocated per HAWQ segment. See [Configuring Segment Resource Capacity](#topic_htk_fxh_15).
+3.  If you are using YARN as your global resource manager, configure the 
resource queue in YARN where HAWQ will register itself as a YARN application. 
Then configure HAWQ with the location and configuration requirements for 
communicating with YARN's resource manager. See [Integrating YARN with 
HAWQ](YARNIntegration.html) for details.
+4.  In HAWQ, create and define resource queues. See [Working with Hierarchical 
Resource Queues](ResourceQueues.html).
+
+## <a id="topic_url_pls_zt"></a>Using Standalone Mode 
+
+Standalone mode means that the HAWQ resource manager assumes it can use all 
resources from registered segments unless configured otherwise.
+
+To configure HAWQ to run without a global resource manager, add the following 
property configuration to your `hawq-site.xml` file:
+
+``` xml
+<property>
+      <name>hawq_global_rm_type</name>
+      <value>none</value>
+</property>
+```
+
+### <a id="id_wgb_44m_q5"></a>hawq\_global\_rm\_type 
+
+HAWQ global resource manager type. Valid values are `yarn` and `none`. Setting this parameter to `none` indicates that the HAWQ resource manager manages its own resources. Setting the value to `yarn` means that HAWQ negotiates with YARN for resources.
+
+|Value Range|Default|Set Classifications|
+|-----------|-------|-------------------|
+|yarn or none|none|master<br/><br/>system<br/><br/>restart|
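Conversely, a minimal sketch of enabling YARN mode uses the same property. Note that YARN mode also requires the additional connection properties described in [Integrating YARN with HAWQ](YARNIntegration.html); this fragment alone is not a complete configuration:

``` xml
<property>
      <name>hawq_global_rm_type</name>
      <value>yarn</value>
</property>
```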
+
+## <a id="topic_htk_fxh_15"></a>Configuring Segment Resource Capacity 
+
+When you run the HAWQ resource manager in standalone mode \(`hawq_global_rm_type=none`\), you can set limits on the resources used by each HAWQ cluster segment.
+
+In `hawq-site.xml`, add the following parameters:
+
+``` xml
+<property>
+   <name>hawq_rm_memory_limit_perseg</name>
+   <value>8GB</value>
+</property>
+<property>
+   <name>hawq_rm_nvcore_limit_perseg</name>
+   <value>4</value>
+</property>
+```
+
+**Note:** Due to XML configuration validation, you must set these properties in either mode, even though they do not apply when you are using YARN mode.
+
+You must configure all segments with identical resource capacities. Memory 
should be set as a multiple of 1GB, such as 1GB per core, 2GB per core or 4GB 
per core. For example, if you want to use the ratio of 4GB per core, then you 
must configure all segments to use a 4GB per core resource capacity.
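For example, a sketch of a 4GB-per-core configuration for segments with 4 cores might look like the following (the values shown are illustrative; size them to your hardware):

``` xml
<property>
   <name>hawq_rm_memory_limit_perseg</name>
   <value>16GB</value>
</property>
<property>
   <name>hawq_rm_nvcore_limit_perseg</name>
   <value>4</value>
</property>
```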
+
+After you set limits on the segments, you can then use resource queues to 
configure additional resource management rules in HAWQ.
+
+**Note:** To reduce the likelihood of resource fragmentation, you should make 
sure that the segment resource capacity configured for HAWQ 
\(`hawq_rm_memory_limit_perseg`\) is a multiple of the resource quotas for all 
virtual segments.
+
+### <a id="id_qqq_s4m_q5"></a>hawq\_rm\_memory\_limit\_perseg 
+
+Limit of memory usage by a HAWQ segment when `hawq_global_rm_type` is set to 
`none`. For example, `8GB`.
+
+|Value Range|Default|Set Classifications|
+|-----------|-------|-------------------|
+| no specific lower or upper limit | 64GB |session<br/><br/>reload|
+
+### <a id="id_xpv_t4m_q5"></a>hawq\_rm\_nvcore\_limit\_perseg 
+
+Maximum number of virtual cores that can be used for query execution in a HAWQ 
segment when `hawq_global_rm_type` is set to `none`. For example, `2.0`.
+
+|Value Range|Default|Set Classifications|
+|-----------|-------|-------------------|
+|1.0 to maximum integer|1.0|master<br/><br/>session<br/><br/>reload|
+
+## <a id="topic_g2p_zdq_15"></a>Configuring Resource Quotas for Query 
Statements 
+
+In some cases, you may want to specify additional resource quotas at the query statement level.
+
+The following configuration properties allow a user to control resource quotas 
without altering corresponding resource queues.
+
+-   [hawq\_rm\_stmt\_vseg\_memory](../reference/guc/parameter_definitions.html)
+-   [hawq\_rm\_stmt\_nvseg](../reference/guc/parameter_definitions.html)
+
+However, the changed resource quota for the virtual segment cannot exceed the 
resource queue’s maximum capacity in HAWQ.
+
+In the following example, when executing the subsequent query statements, the HAWQ resource manager attempts to allocate 10 virtual segments, each with a 256MB memory quota.
+
+``` sql
+postgres=# SET hawq_rm_stmt_vseg_memory='256mb';
+SET
+postgres=# SET hawq_rm_stmt_nvseg=10;
+SET
+postgres=# CREATE TABLE t(i integer);
+CREATE TABLE
+postgres=# INSERT INTO t VALUES(1);
+INSERT 0 1
+```
+
+Note that given the dynamic nature of resource allocation in HAWQ, you cannot expect that each segment has reserved resources for every query. The HAWQ resource manager only attempts to allocate those resources. In addition, the number of virtual segments allocated for the query statement cannot exceed the values set in the global configuration parameters `hawq_rm_nvseg_perquery_limit` and `hawq_rm_nvseg_perquery_perseg_limit`.
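To check the current cluster-wide limits in a session, you can use `SHOW` (the values returned depend on your configuration):

``` sql
postgres=# SHOW hawq_rm_nvseg_perquery_limit;
postgres=# SHOW hawq_rm_nvseg_perquery_perseg_limit;
```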
+
+## <a id="topic_tl5_wq1_f5"></a>Configuring the Maximum Number of Virtual 
Segments 
+
+You can limit the number of virtual segments used during statement execution 
on a cluster-wide level.
+
+Limiting the number of virtual segments used during statement execution is useful for preventing resource bottlenecks during data load and the overconsumption of resources without performance benefits. The number of files that can be opened concurrently for write on both the NameNode and DataNodes is limited. Consider the following scenario:
+
+-   You need to load data into a table with P partitions
+-   There are N nodes in the cluster and V virtual segments per node started 
for the load query
+
+Then there will be P \* V files opened per DataNode and at least P \* V threads started in the DataNode. If the number of partitions and the number of virtual segments per node are very high, the DataNode becomes a bottleneck. On the NameNode side, there will be V \* N connections. If the number of nodes is very high, then the NameNode can become a bottleneck.
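The arithmetic above can be sketched with illustrative numbers (P, N, and V here are hypothetical values, not recommendations):

``` python
# Hypothetical cluster figures chosen only to illustrate the scaling described above.
P = 128  # partitions in the target table
N = 10   # nodes in the cluster
V = 6    # virtual segments started per node for the load query

files_per_datanode = P * V    # files opened (and threads started) per DataNode
namenode_connections = V * N  # connections seen by the NameNode

print(files_per_datanode)    # 768
print(namenode_connections)  # 60
```

Lowering V directly reduces both counts, which is why the parameters below limit the number of virtual segments.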
+
+To alleviate the load on NameNode, you can limit V, the number of virtual 
segments started per node. Use the following server configuration parameters:
+
+-   `hawq_rm_nvseg_perquery_limit` limits the maximum number of virtual 
segments that can be used for one statement execution on a cluster-wide level.  
The hash buckets defined in `default_hash_table_bucket_number` cannot exceed 
this number. The default value is 512.
+-   `default_hash_table_bucket_number` defines the number of buckets used by 
default when you create a hash table. When you query a hash table, the query's 
virtual segment resources are fixed and allocated based on the bucket number 
defined for the table. A best practice is to tune this configuration parameter 
after you expand the cluster.
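Both parameters can be set in `hawq-site.xml`; a sketch follows (the values shown are illustrative, not tuning recommendations):

``` xml
<property>
   <name>hawq_rm_nvseg_perquery_limit</name>
   <value>512</value>
</property>
<property>
   <name>default_hash_table_bucket_number</name>
   <value>24</value>
</property>
```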
+
+You can also limit the number of virtual segments used by queries when 
configuring your resource queues. \(See [CREATE RESOURCE 
QUEUE](../reference/sql/CREATE-RESOURCE-QUEUE.html).\) The global configuration 
parameters are a hard limit, however, and any limits set on the resource queue 
or on the statement-level cannot be larger than these limits set on the 
cluster-wide level.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/resourcemgmt/HAWQResourceManagement.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/resourcemgmt/HAWQResourceManagement.html.md.erb 
b/markdown/resourcemgmt/HAWQResourceManagement.html.md.erb
new file mode 100644
index 0000000..dd5c9b3
--- /dev/null
+++ b/markdown/resourcemgmt/HAWQResourceManagement.html.md.erb
@@ -0,0 +1,69 @@
+---
+title: How HAWQ Manages Resources
+---
+
+HAWQ manages resources (CPU, memory, I/O and file handles) using a variety of 
mechanisms including global resource management, resource queues and the 
enforcement of limits on resource usage.
+
+## <a id="global-env"></a>Globally Managed Environments
+
+In Hadoop clusters, resources are frequently managed globally by YARN. YARN 
provides resources to MapReduce jobs and any other applications that are 
configured to work with YARN. In this type of environment, resources are 
allocated in units called containers. In a HAWQ environment, segments and node 
managers control the consumption of resources and enforce resource limits on 
each node.
+
+The following diagram depicts the layout of a HAWQ cluster in a YARN-managed 
Hadoop environment:
+
+![](../mdimages/hawq_high_level_architecture.png)
+
+When you run HAWQ natively in a Hadoop cluster, you can configure HAWQ to 
register as an application in YARN. After configuration, HAWQ's resource 
manager communicates with YARN to acquire resources \(when needed to execute 
queries\) and return resources \(when no longer needed\) back to YARN.
+
+Resources obtained from YARN are then managed in a distributed fashion by 
HAWQ's resource manager, which is hosted on the HAWQ master.
+
+## <a id="section_w4f_vx4_15"></a>HAWQ Resource Queues 
+
+Resource queues are the main tool for managing the degree of concurrency in a 
HAWQ system. Resource queues are database objects that you create with the 
CREATE RESOURCE QUEUE SQL statement. You can use them to manage the number of 
active queries that may execute concurrently, and the maximum amount of memory 
and CPU usage each type of query is allocated. Resource queues can also guard 
against queries that would consume too many resources and degrade overall 
system performance.
+
+Internally, HAWQ manages its resources dynamically based on a system of hierarchical resource queues. HAWQ uses resource queues to allocate resources efficiently to concurrently running queries. Resource queues are organized as an n-ary tree, as depicted in the diagram below.
+
+![](../mdimages/svg/hawq_resource_queues.svg)
+
+When HAWQ is initialized, there is always one queue named `pg_root` at the 
root of the tree and one queue named `pg_default`. If YARN is configured, 
HAWQ's resource manager automatically fetches the capacity of this root queue 
from the global resource manager. When you create a new resource queue, you 
must specify a parent queue. This forces all resource queues to organize into a 
tree.
+
+When a query comes in, after query parsing and semantic analysis, the optimizer coordinates with the HAWQ resource manager on the resource usage for the query and gets an optimized plan given the resources available for the query. The resource allocation for each query is sent to the segments together with the plan. Consequently, each query executor \(QE\) knows the resource quota for the current query and enforces resource consumption during the whole execution. When query execution finishes or is cancelled, the resource is returned to the HAWQ resource manager.
+
+**About Branch Queues and Leaf Queues**
+
+In the hierarchical resource queue tree depicted in the diagram, there are branch queues \(rectangles outlined in dashed lines\) and leaf queues \(rectangles drawn with solid lines\). Only leaf queues can be associated with roles and accept queries.
+
+**Query Resource Allocation Policy**
+
+The HAWQ resource manager follows several principles when allocating resources 
to queries:
+
+-   Resources are allocated only to queues that have running or queued queries.
+-   When multiple queues are busy, the resource manager balances resources 
among queues based on resource queue capacities.
+-   In one resource queue, when multiple queries are waiting for resources, resources are distributed evenly to each query in a best-effort manner.
+
+## Enforcing Limits on Resources
+
+You can configure HAWQ to enforce limits on resource usage by setting memory 
and CPU usage limits on both segments and resource queues. See [Configuring 
Segment Resource Capacity](ConfigureResourceManagement.html) and [Creating 
Resource Queues](ResourceQueues.html).
+
+**Cluster Memory to Core Ratio**
+
+The HAWQ resource manager chooses a cluster memory-to-core ratio when most segments have registered and when the resource manager has received a cluster report from YARN \(if the resource manager is running in YARN mode\). The HAWQ resource manager selects the ratio based on the amount of memory available in the cluster and the number of cores available on registered segments. The resource manager selects the smallest ratio possible in order to minimize the waste of resources.
+
+HAWQ trims each segment's resource capacity automatically to match the 
selected ratio. For example, if the resource manager chooses 1GB per core as 
the ratio, then a segment with 5GB of memory and 8 cores will have 3 cores cut. 
These cores will not be used by HAWQ. If a segment has 12GB and 10 cores, then 
2GB of memory will be cut by HAWQ.
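The trimming rule described above can be sketched as follows. This is a simplified model of the documented behavior, assuming the resource manager has chosen a ratio of 1GB per core:

``` python
RATIO_MB_PER_CORE = 1024  # assumed cluster ratio: 1GB per core

def trim(memory_mb, cores):
    # Usable capacity is the largest (memory, cores) pair that fits the ratio exactly;
    # anything beyond it is cut and left unused by HAWQ.
    usable_cores = min(cores, memory_mb // RATIO_MB_PER_CORE)
    usable_memory = usable_cores * RATIO_MB_PER_CORE
    return usable_memory, usable_cores

print(trim(5 * 1024, 8))    # (5120, 5): 3 of 8 cores are cut
print(trim(12 * 1024, 10))  # (10240, 10): 2GB of memory is cut
```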
+
+After the HAWQ resource manager has selected its ratio, the ratio does not change until you restart the HAWQ master node. Therefore, memory and core resources for any segments added dynamically to the cluster are automatically cut based on the fixed ratio.
+
+To find out the cluster memory to core ratio selected by the resource manager, 
check the HAWQ master database logs for messages similar to the following:
+
+```
+Resource manager chooses ratio 1024 MB per core as cluster level memory to 
core ratio, there are 3072 MB memory 0 CORE resource unable to be utilized.
+```
+
+You can also check the master logs to see how resources are being cut from 
individual segments due to the cluster memory to core ratio. For example:
+
+```
+Resource manager adjusts segment localhost original resource capacity from 
(8192 MB, 5 CORE) to (5120 MB, 5 CORE)
+
+Resource manager adjusts segment localhost original global resource manager 
resource capacity from (8192 MB, 5 CORE) to (5120 MB, 5 CORE)
+```
+
+See [Viewing the Database Server Log Files](../admin/monitor.html#topic28) for 
more information on working with HAWQ log files.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/resourcemgmt/ResourceManagerStatus.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/resourcemgmt/ResourceManagerStatus.html.md.erb 
b/markdown/resourcemgmt/ResourceManagerStatus.html.md.erb
new file mode 100644
index 0000000..4029642
--- /dev/null
+++ b/markdown/resourcemgmt/ResourceManagerStatus.html.md.erb
@@ -0,0 +1,152 @@
+---
+title: Analyzing Resource Manager Status
+--- 
+
+You can use several queries to force the resource manager to dump more details 
about active resource context status, current resource queue status, and HAWQ 
segment status.
+
+## <a id="topic_zrh_pkc_f5"></a>Connection Track Status 
+
+
+Any query execution requiring resource allocation from the HAWQ resource manager has one connection track instance that tracks the whole resource usage lifecycle. You can find all resource requests and allocated resources in this dump.
+
+The following is an example query to obtain connection track status:
+
+``` sql
+postgres=# SELECT * FROM dump_resource_manager_status(1);
+```
+
+``` pre
+                              dump_resource_manager_status
+----------------------------------------------------------------------------------------
+ Dump resource manager connection track status to 
/tmp/resource_manager_conntrack_status
+(1 row)
+```
+
+The following output is an example of resource context \(connection track\) 
status.
+
+``` pre
+Number of free connection ids : 65535
+Number of connection tracks having requests to handle : 0
+Number of connection tracks having responses to send : 0
+SOCK(client=192.0.2.0:37396:time=2015-11-15-20:54:35.379006),
+CONN(id=44:user=role_2:queue=queue2:prog=3:time=2015-11-15-20:54:35.378631:lastact=2015-11-15-20:54:35.378631:
+headqueue=2015-11-15-20:54:35.378631),ALLOC(session=89:resource=(1024 MB, 
0.250000 CORE)x(1:min=1:act=-1):
+slicesize=5:io bytes size=3905568:vseg limit per seg=8:vseg limit per 
query=1000:fixsegsize=1:reqtime=2015-11-15-20:54:35.379144:
+alloctime=2015-11-15-20:54:35.379144:stmt=128 MB x 
0),LOC(size=3:host(sdw3:3905568):host(sdw2:3905568):
+host(sdw1:3905568)),RESOURCE(hostsize=0),MSG(id=259:size=96:contsize=96:recvtime=1969-12-31-16:00:00.0,
+client=192.0.2.0:37396),COMMSTAT(fd=5:readbuffer=0:writebuffer=0
+buffers:toclose=false:forceclose=false)
+```
+
+|Output Field|Description|
+|------------|-----------|
+|`Number of free connection ids`|Provides the connection track id resource. The HAWQ resource manager supports a maximum of 65536 live connection track instances.|
+|`Number of connection tracks having requests to handle`|Counts the number of 
requests accepted by resource manager but not processed yet.|
+|`Number of connection tracks having responses to send`|Counts the number of 
responses generated by resource manager but not sent out yet.|
+|`SOCK`|Provides the request socket connection information.|
+|`CONN`|Provides information about the role name, target queue, and current status of the request:<br/><br/>`prog=1` means the connection is established<br/><br/>`prog=2` means the connection is registered by role id<br/><br/>`prog=3` means the connection is waiting for resource in the target queue<br/><br/>`prog=4` means the resource has been allocated to this connection<br/><br/>`prog>5` means some failure or abnormal statuses|
+|`ALLOC`|Provides session id information, resource expectation, session level 
resource limits, statement level resource settings, estimated query workload by 
slice number, and so on.|
+|`LOC`|Provides query scan HDFS data locality information.|
+|`RESOURCE`|Provides information on the already allocated resource.|
+|`MSG`|Provides the latest received message information.|
+|`COMMSTAT`|Shows current socket communication buffer status.|
+
+## <a id="resourcqueuestatus"></a>Resource Queue Status 
+
+You can obtain more details about the status of resource queues.
+
+Besides the information provided in pg\_resqueue\_status, you can also obtain the YARN resource queue maximum capacity report, the total number of HAWQ resource queues, and the HAWQ resource queues' derived resource capacities.
+
+The following is a query to obtain resource queue status:
+
+``` sql
+postgres=# SELECT * FROM dump_resource_manager_status(2);
+```
+
+``` pre
+                            dump_resource_manager_status
+-------------------------------------------------------------------------------------
+ Dump resource manager resource queue status to 
/tmp/resource_manager_resqueue_status
+(1 row)
+```
+
+Example output of resource queue status is shown below.
+
+``` pre
+Maximum capacity of queue in global resource manager cluster 1.000000
+
+Number of resource queues : 4
+
+QUEUE(name=pg_root:parent=NULL:children=3:busy=0:paused=0),
+REQ(conn=0:request=0:running=0),
+SEGCAP(ratio=4096:ratioidx=-1:segmem=128MB:segcore=0.031250:segnum=1536:segnummax=1536),
+QUECAP(memmax=196608:coremax=48.000000:memper=100.000000:mempermax=100.000000:coreper=100.000000:corepermax=100.000000),
+QUEUSE(alloc=(0 MB,0.000000 CORE):request=(0 MB,0.000000 CORE):inuse=(0 
MB,0.000000 CORE))
+
+QUEUE(name=pg_default:parent=pg_root:children=0:busy=0:paused=0),
+REQ(conn=0:request=0:running=0),
+SEGCAP(ratio=4096:ratioidx=-1:segmem=1024MB:segcore=0.250000:segnum=38:segnummax=76),
+QUECAP(memmax=78643:coremax=19.000000:memper=20.000000:mempermax=40.000000:coreper=20.000000:corepermax=40.000000),
+QUEUSE(alloc=(0 MB,0.000000 CORE):request=(0 MB,0.000000 CORE):inuse=(0 
MB,0.000000 CORE))
+```
+
+|Output Field|Description|
+|------------|-----------|
+|`Maximum capacity of queue in global resource manager cluster`|YARN maximum 
capacity report for the resource queue.|
+|`Number of resource queues`|Total number of HAWQ resource queues.|
+|`QUEUE`|Provides basic structural information about the resource queue and 
whether it is busy dispatching resources to some queries.|
+|`REQ`|Provides concurrency counter and the status of waiting queues.|
+|`SEGCAP`|Provides the virtual segment resource quota and dispatchable number 
of virtual segments.|
+|`QUECAP`|Provides derived resource queue capacity and actual percentage of 
the cluster resource a queue can use.|
+|`QUEUSE`|Provides information about queue resource usage.|
+
+## <a id="segmentstatus"></a>HAWQ Segment Status 
+
+Use the following query to obtain the status of a HAWQ segment.
+
+``` sql
+postgres=# SELECT * FROM dump_resource_manager_status(3);
+```
+
+``` pre
+                           dump_resource_manager_status
+-----------------------------------------------------------------------------------
+ Dump resource manager resource pool status to 
/tmp/resource_manager_respool_status
+(1 row)
+```
+
+The following output shows the status of a HAWQ segment. This example describes a host named `sdw1` that has a resource capacity of 64GB memory and 16 vcores. It currently has 64GB of available resources ready for use and holds 16 resource containers.
+
+``` pre
+HOST_ID(id=0:hostname:sdw1)
+HOST_INFO(FTSTotalMemoryMB=65536:FTSTotalCore=16:GRMTotalMemoryMB=0:GRMTotalCore=0)
+HOST_AVAILABLITY(HAWQAvailable=true:GLOBAvailable=false)
+HOST_RESOURCE(AllocatedMemory=65536:AllocatedCores=16.000000:AvailableMemory=65536:
+AvailableCores=16.000000:IOBytesWorkload=0:SliceWorkload=0:LastUpdateTime=1447661681125637:
+RUAlivePending=false)
+HOST_RESOURCE_CONTAINERSET(ratio=4096:AllocatedMemory=65536:AvailableMemory=65536:
+AllocatedCore=16.000000:AvailableCore:16.000000)
+        RESOURCE_CONTAINER(ID=0:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=1:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=2:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=3:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=4:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=5:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=6:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=7:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=8:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=9:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=10:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=11:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=12:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=13:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=14:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+        RESOURCE_CONTAINER(ID=15:MemoryMB=4096:Core=1:Life=0:HostName=sdw1)
+```
+
+|Output Field|Description|
+|------------|-----------|
+|`HOST_ID`|Provides the recognized segment name and internal id.|
+|`HOST_INFO`|Provides the configured segment resource capacities. GRMTotalMemoryMB and GRMTotalCore show the limits reported by YARN; FTSTotalMemoryMB and FTSTotalCore show the limits configured in HAWQ.|
+|`HOST_AVAILABILITY`|Shows whether the segment is available from the HAWQ fault tolerance service \(FTS\) view or the YARN view.|
+|`HOST_RESOURCE`|Shows current allocated and available resource. Estimated 
workload counters are also shown here.|
+|`HOST_RESOURCE_CONTAINERSET`|Shows each held resource container.|

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/resourcemgmt/ResourceQueues.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/resourcemgmt/ResourceQueues.html.md.erb 
b/markdown/resourcemgmt/ResourceQueues.html.md.erb
new file mode 100644
index 0000000..cd019c6
--- /dev/null
+++ b/markdown/resourcemgmt/ResourceQueues.html.md.erb
@@ -0,0 +1,204 @@
+---
+title: Working with Hierarchical Resource Queues
+---
+
+This section describes how administrators can define and work with resource 
queues in order to allocate resource usage within HAWQ. By designing 
hierarchical resource queues, system administrators can balance system 
resources to queries as needed.
+
+## <a id="resource_queues"></a>HAWQ Resource Queues 
+
+Resource queues are the main tool for managing the degree of concurrency in a 
HAWQ system. Resource queues are database objects that you create with the 
CREATE RESOURCE QUEUE SQL statement. You can use them to manage the number of 
active queries that may execute concurrently, and the maximum amount of memory 
and CPU usage each type of query is allocated. Resource queues can also guard 
against queries that would consume too many resources and degrade overall 
system performance.
+
+Internally, HAWQ manages its resources dynamically based on a system of hierarchical resource queues. HAWQ uses resource queues to allocate resources efficiently to concurrently running queries. Resource queues are organized as an n-ary tree, as depicted in the diagram below.
+
+![](../mdimages/svg/hawq_resource_queues.svg)
+
+When HAWQ is initialized, there is always one queue named `pg_root` at the 
root of the tree and one queue named `pg_default`. If YARN is configured, 
HAWQ's resource manager automatically fetches the capacity of this root queue 
from the global resource manager. When you create a new resource queue, you 
must specify a parent queue. This forces all resource queues to organize into a 
tree.
+
+When a query comes in, after query parsing and semantic analysis, the optimizer coordinates with the HAWQ resource manager on the resource usage for the query and gets an optimized plan given the resources available for the query. The resource allocation for each query is sent to the segments together with the plan. Consequently, each query executor \(QE\) knows the resource quota for the current query and enforces resource consumption during the whole execution. When query execution finishes or is cancelled, the resource is returned to the HAWQ resource manager.
+
+**About Branch Queues and Leaf Queues**
+
+In the hierarchical resource queue tree depicted in the diagram, there are branch queues \(rectangles outlined in dashed lines\) and leaf queues \(rectangles drawn with solid lines\). Only leaf queues can be associated with roles and accept queries.
+
+**Query Resource Allocation Policy**
+
+The HAWQ resource manager follows several principles when allocating resources 
to queries:
+
+-   Resources are allocated only to queues that have running or queued queries.
+-   When multiple queues are busy, the resource manager balances resources 
among queues based on resource queue capacities.
+-   In one resource queue, when multiple queries are waiting for resources, resources are distributed evenly to each query in a best-effort manner.
+
+**Enforcing Limits on Resources**
+
+You can configure HAWQ to enforce limits on resource usage by setting memory 
and CPU usage limits on both segments and resource queues. See [Configuring 
Segment Resource Capacity](ConfigureResourceManagement.html) and [Creating 
Resource Queues](ResourceQueues.html). For some best practices on designing and 
using resource queues in HAWQ, see [Best Practices for Managing 
Resources](../bestpractices/managing_resources_bestpractices.html).
+
+For a high-level overview of how resource management works in HAWQ, see 
[Managing Resources](HAWQResourceManagement.html).
+
+## <a id="topic_dyy_pfp_15"></a>Setting the Maximum Number of Resource Queues 
+
+You can configure the maximum number of resource queues allowed in your HAWQ 
cluster.
+
+By default, the maximum number of resource queues that you can create in HAWQ 
is 128.
+
+You can configure this property in `hawq-site.xml`. The new maximum takes 
effect when HAWQ restarts. For example, the configuration below sets this value 
to 50.
+
+``` xml
+<property>
+   <name>hawq_rm_nresqueue_limit</name>
+   <value>50</value>
+</property>
+```
+
+The minimum value that can be configured is 3, and the maximum is 1024.
+
+To check the currently configured limit, you can execute the following command:
+
+``` sql
+postgres=# SHOW hawq_rm_nresqueue_limit;
+```
+
+``` pre
+ hawq_rm_nresqueue_limit
+----------------------------------------------
+128
+(1 row)
+```
+
+## <a id="topic_p4l_dls_zt"></a>Creating Resource Queues 
+
+Use CREATE RESOURCE QUEUE to create a new resource queue. Only a superuser can 
run this DDL statement.
+
+Creating a resource queue involves giving it a name, specifying a parent queue, setting the CPU and memory limits for the queue, and optionally setting a limit on the number of active statements on the resource queue. See [CREATE RESOURCE QUEUE](../reference/sql/CREATE-RESOURCE-QUEUE.html).
+
+**Note:** You can only associate roles and queries with leaf-level resource 
queues. Leaf-level resource queues are resource queues that do not have any 
children.
+
+### Examples
+
+Create a resource queue as a child of `pg_root` with an active query limit of 
20 and memory and core limits of 50%:
+
+``` sql
+CREATE RESOURCE QUEUE myqueue WITH (PARENT='pg_root', ACTIVE_STATEMENTS=20,
+MEMORY_LIMIT_CLUSTER=50%, CORE_LIMIT_CLUSTER=50%);
+```
+
+Create a resource queue as a child of pg\_root with memory and CPU limits and 
a resource overcommit factor:
+
+``` sql
+CREATE RESOURCE QUEUE test_queue_1 WITH (PARENT='pg_root',
+MEMORY_LIMIT_CLUSTER=50%, CORE_LIMIT_CLUSTER=50%, 
RESOURCE_OVERCOMMIT_FACTOR=2);
+```
+
+## <a id="topic_e1b_2ls_zt"></a>Altering Resource Queues 
+
+Use ALTER RESOURCE QUEUE to modify an existing resource queue. Only a 
superuser can run this DDL statement.
+
+The ALTER RESOURCE QUEUE statement allows you to modify resource limits and 
the number of active statements allowed in the queue. You cannot change the 
parent queue of an existing resource queue, and you are subject to the same 
constraints that apply to the creation of resource queues.
+
+You can modify an existing resource queue even when it is active or when one of its descendants is active. All queued resource requests are adjusted based on the modifications to the resource queue.
+
+However, when you alter a resource queue, queued resource requests may 
encounter some conflicts. For example, a resource deadlock can occur or some 
requests cannot be satisfied based on the newly modified resource queue 
capacity.
+
+To prevent conflicts, by default HAWQ cancels all queued resource requests that are in conflict with the new resource queue definition. This behavior is controlled by the `hawq_rm_force_alterqueue_cancel_queued_request` server configuration parameter, which is set to true \(`on`\) by default. If you set `hawq_rm_force_alterqueue_cancel_queued_request` to false, the actions specified in ALTER RESOURCE QUEUE are canceled if the resource manager finds at least one resource request that is in conflict with the new resource definitions supplied in the altering command.
+
+For more information, see [ALTER RESOURCE 
QUEUE](../reference/sql/ALTER-RESOURCE-QUEUE.html).
+
+**Note:** To change the roles \(users\) assigned to a resource queue, use the 
ALTER ROLE command.
+
+### Examples
+
+Change the memory and core limit of a resource queue:
+
+``` sql
+ALTER RESOURCE QUEUE test_queue_1 WITH (MEMORY_LIMIT_CLUSTER=40%,
+CORE_LIMIT_CLUSTER=40%);
+```
+
+Change the active statements maximum for the resource queue:
+
+``` sql
+ALTER RESOURCE QUEUE test_queue_1 WITH (ACTIVE_STATEMENTS=50);
+```
+
+## <a id="topic_hbp_fls_zt"></a>Dropping Resource Queues 
+
+Use DROP RESOURCE QUEUE to remove an existing resource queue.
+
+DROP RESOURCE QUEUE drops an existing resource queue. Only a superuser can run this DDL statement, and only when the queue is not busy. You cannot drop a resource queue that has at least one child resource queue or a role assigned to it.
+
+The default resource queues `pg_root` and `pg_default` cannot be dropped.
+
+### Examples
+
+Remove a role from a resource queue \(and move the role to the default 
resource queue, `pg_default`\):
+
+``` sql
+ALTER ROLE bob RESOURCE QUEUE NONE;
+```
+
+Remove the resource queue named `adhoc`:
+
+``` sql
+DROP RESOURCE QUEUE adhoc;
+```
+
+## <a id="topic_lqy_gls_zt"></a>Checking Existing Resource Queues 
+
+The HAWQ catalog table `pg_resqueue` stores all existing resource queues.
+
+The following example shows the data selected from `pg_resqueue`.
+
+``` sql
+postgres=# SELECT rsqname, parentoid, activestats, memorylimit, corelimit, resovercommit,
+allocpolicy, vsegresourcequota, nvsegupperlimit, nvseglowerlimit, nvsegupperlimitperseg, nvseglowerlimitperseg
+FROM pg_resqueue WHERE rsqname='test_queue_1';
+```
+
+``` pre
+   rsqname    | parentoid | activestats | memorylimit | corelimit | resovercommit | allocpolicy | vsegresourcequota | nvsegupperlimit | nvseglowerlimit | nvsegupperlimitperseg | nvseglowerlimitperseg
+--------------+-----------+-------------+-------------+-----------+---------------+-------------+-------------------+-----------------+-----------------+-----------------------+-----------------------
+ test_queue_1 |      9800 |         100 | 50%         | 50%       |             2 | even        | mem:128mb         | 0               | 0               | 0                     | 1
+```
+
+The query displays all of the selected resource queue's attributes and their values. See [CREATE RESOURCE QUEUE](../reference/sql/CREATE-RESOURCE-QUEUE.html) for a description of these attributes.
+
+You can also check the runtime status of existing resource queues by querying 
the `pg_resqueue_status` view:
+
+``` sql
+postgres=# SELECT * FROM pg_resqueue_status;
+```
+
+
+``` pre
+  rsqname   | segmem | segcore  | segsize | segsizemax | inusemem | inusecore | rsqholders | rsqwaiters | paused
+------------+--------+----------+---------+------------+----------+-----------+------------+------------+--------
+ pg_root    | 128    | 0.125000 | 64      | 64         | 0        | 0.000000  | 0          | 0          | F
+ pg_default | 128    | 0.125000 | 32      | 64         | 0        | 0.000000  | 0          | 0          | F
+(2 rows)
+```
+
+The query returns the following pieces of data about the resource queue's 
runtime status:
+
+|Resource Queue Runtime|Description|
+|----------------------|-----------|
+|rsqname|HAWQ resource queue name|
+|segmem|Virtual segment memory quota in MB|
+|segcore|Virtual segment vcore quota|
+|segsize|Number of virtual segments the resource queue can dispatch for query 
execution|
+|segsizemax|Maximum number of virtual segments the resource queue can dispatch for query execution when overcommitting the other queues' resource quota|
+|inusemem|Accumulated memory in use in MB by currently running statements|
+|inusecore|Accumulated vcores in use by currently running statements|
+|rsqholders|Total number of concurrently running statements|
+|rsqwaiters|Total number of queued statements|
+|paused|Indicates whether the resource queue is temporarily paused due to no resource status changes. 'F' means false, 'T' means true, and 'R' indicates that the resource queue may have encountered a resource fragmentation problem|
+
+## <a id="topic_scr_3ls_zt"></a>Assigning Roles to Resource Queues 
+
+By default, a role is assigned to the `pg_default` resource queue. Assigning a role to a branch \(non-leaf\) queue is not allowed.
+
+The following are some examples of creating and assigning a role to a resource 
queue:
+
+``` sql
+CREATE ROLE rmtest1 WITH LOGIN RESOURCE QUEUE pg_default;
+
+ALTER ROLE rmtest1 RESOURCE QUEUE test_queue_1;
+```
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/resourcemgmt/YARNIntegration.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/resourcemgmt/YARNIntegration.html.md.erb 
b/markdown/resourcemgmt/YARNIntegration.html.md.erb
new file mode 100644
index 0000000..6898f6c
--- /dev/null
+++ b/markdown/resourcemgmt/YARNIntegration.html.md.erb
@@ -0,0 +1,252 @@
+---
+title: Integrating YARN with HAWQ
+---
+
+HAWQ supports integration with YARN for global resource management. In a YARN-managed environment, HAWQ can request resources \(containers\) dynamically from YARN, and return resources when HAWQ's workload is not heavy. This feature makes HAWQ a native citizen of the Hadoop ecosystem.
+
+To integrate YARN with HAWQ, use the following high-level steps.
+
+1.  Install YARN, if you have not already done so.
+
+    **Note:** If you are using HDP 2.3, you must set 
`yarn.resourcemanager.system-metrics-publisher.enabled` to `false`. See the 
Release Notes for additional YARN workaround configurations.
+
+2.  Configure YARN using CapacityScheduler and reserve one application queue 
exclusively for HAWQ. See [Configuring YARN for HAWQ](#hawqinputformatexample) 
and [Setting HAWQ Segment Resource Capacity in YARN](#topic_pzf_kqn_c5).
+3.  If desired, enable high availability in YARN. See your Ambari or Hadoop 
documentation for details.
+4.  Enable YARN mode within HAWQ. See [Enabling YARN Mode in HAWQ](#topic_rtd_cjh_15).
+5.  After you integrate YARN with HAWQ, adjust HAWQ's resource usage as needed by doing any of the following:
+    -   Change the capacity of the corresponding YARN resource queue for HAWQ. For example, see the properties described for CapacityScheduler configuration. You can then refresh the YARN queues without having to restart or reload HAWQ. See [Configuring YARN for HAWQ](#hawqinputformatexample) and [Setting HAWQ Segment Resource Capacity in YARN](#topic_pzf_kqn_c5).
+    -   Change resource consumption within HAWQ on a finer grained level by 
altering HAWQ's resource queues. See [Working with Hierarchical Resource 
Queues](ResourceQueues.html).
+    -   \(Optional\) Tune HAWQ and YARN resource negotiations. For example, 
you can set a minimum number of YARN containers per segment or modify the idle 
timeout for YARN resources in HAWQ. See [Tune HAWQ Resource Negotiations with 
YARN](#topic_wp3_4bx_15).
+
+## <a id="hawqinputformatexample"></a>Configuring YARN for HAWQ 
+
+This topic describes how to configure YARN to manage HAWQ's resources.
+
+When HAWQ has queries that require resources to execute, the HAWQ resource 
manager negotiates with YARN's resource scheduler to allocate resources. Then, 
when HAWQ is not busy, HAWQ's resource manager returns resources to YARN's 
resource scheduler.
+
+To integrate YARN with HAWQ, you must define one YARN application resource 
queue exclusively for HAWQ. YARN resource queues are configured for a specific 
YARN resource scheduler. The YARN resource scheduler uses resource queue 
configuration to allocate resources to applications. There are several 
available YARN resource schedulers; however, HAWQ currently only supports using 
CapacityScheduler to manage YARN resources.
+
+### <a id="capacity_scheduler"></a>Using CapacityScheduler for YARN Resource 
Scheduling 
+
+The following example demonstrates how to configure CapacityScheduler as the 
YARN resource scheduler. In `yarn-site.xml`, use the following configuration to 
enable CapacityScheduler.
+
+``` xml
+<property>
+   <name>yarn.resourcemanager.scheduler.class</name>
+   
<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
+</property>
+```
+
+Then, define the queues in CapacityScheduler's configuration. In 
`capacity-scheduler.xml`, you could define the queues as follows:
+
+``` xml
+<property>
+   <name>yarn.scheduler.capacity.root.queues</name>
+   <value>mrque1,mrque2,hawqque</value>
+</property>
+
+```
+
+In the above example configuration, CapacityScheduler has two MapReduce queues 
\(`mrque1` and `mrque2`\) and one HAWQ queue \(`hawqque`\) configured under the 
root queue. Only `hawqque` is defined for HAWQ usage, and it coexists with the 
other two MapReduce queues. These three queues share the resources of the 
entire cluster.
+
+In the following configuration within `capacity-scheduler.xml`, we configure additional properties to control the capacity of each queue. The HAWQ resource queue can utilize from 20% up to a maximum of 80% of the whole cluster's resources.
+
+``` xml
+<property>
+   <name>yarn.scheduler.capacity.root.hawqque.maximum-applications</name>
+   <value>1</value>
+</property>
+
+<property>
+  <name>yarn.scheduler.capacity.root.hawqque.capacity</name>
+  <value>20</value>
+</property>
+
+<property>
+  <name>yarn.scheduler.capacity.root.hawqque.maximum-capacity</name>
+  <value>80</value>
+</property>
+
+<property>
+  <name>yarn.scheduler.capacity.root.hawqque.user-limit-factor</name>
+  <value>2</value>
+</property>
+
+<property>
+  <name>yarn.scheduler.capacity.root.mrque1.capacity</name>
+  <value>30</value>
+</property>
+
+<property>
+  <name>yarn.scheduler.capacity.root.mrque1.maximum-capacity</name>
+  <value>50</value>
+</property>
+
+<property>
+  <name>yarn.scheduler.capacity.root.mrque2.capacity</name>
+  <value>50</value>
+</property>
+
+<property>
+  <name>yarn.scheduler.capacity.root.mrque2.maximum-capacity</name>
+  <value>50</value>
+</property>
+```
+
+|Item|Description|
+|----|-----------|
+|yarn.scheduler.capacity.*\<queue\_name\>*.maximum-applications|Maximum number of HAWQ applications in the system that can be concurrently active \(both running and pending\). The current recommendation is to let one HAWQ instance exclusively use one resource queue.|
+|yarn.scheduler.capacity.*\<queue\_name\>*.capacity|Queue capacity in 
percentage \(%\) as a float \(e.g. 12.5\). The sum of capacities for all 
queues, at each level, must equal 100. Applications in the queue may consume 
more resources than the queue's capacity if there are free resources, which 
provides elasticity.|
+|yarn.scheduler.capacity.*\<queue\_name\>*.maximum-capacity|Maximum queue 
capacity in percentage \(%\) as a float. This limits the elasticity for 
applications in the queue. Defaults to -1 which disables it.|
+|yarn.scheduler.capacity.*\<queue\_name\>*.user-limit-factor|Multiple of the queue capacity, which can be configured to allow a single user to acquire more resources. By default this is set to 1, which ensures that a single user can never take more than the queue's configured capacity irrespective of how idle the cluster is. The value is specified as a float.<br/><br/>Setting this to a value higher than 1 allows overcommitment of resources at the application level. For example, to give the HAWQ application twice the queue capacity, set this value to 2.|
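
The interplay of these three properties can be sketched with a few lines of arithmetic. The function below is an illustration only (not YARN's actual scheduler logic), using the `hawqque` values from the example configuration above on a hypothetical 102400MB (100GB) cluster:

``` python
def queue_bounds(cluster_mb, capacity, maximum_capacity, user_limit_factor):
    """Illustrative arithmetic for a CapacityScheduler queue:
    - guaranteed share = capacity% of the cluster
    - elastic hard cap = maximum-capacity% of the cluster
    - per-user cap     = user-limit-factor * guaranteed share,
                         never exceeding the hard cap
    """
    guaranteed = cluster_mb * capacity / 100.0
    hard_cap = cluster_mb * maximum_capacity / 100.0
    user_cap = min(guaranteed * user_limit_factor, hard_cap)
    return guaranteed, user_cap, hard_cap

# hawqque from the example above: capacity=20, maximum-capacity=80,
# user-limit-factor=2, on a hypothetical 102400MB cluster.
print(queue_bounds(102400, 20, 80, 2))  # (20480.0, 40960.0, 81920.0)
```

With `user-limit-factor=2`, the single HAWQ application can elastically grow to twice the queue's guaranteed 20% share when the cluster has free resources.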
+
+## <a id="topic_pzf_kqn_c5"></a>Setting HAWQ Segment Resource Capacity in YARN 
+
+Similar to how you can set segment resource capacity in HAWQ's standalone 
mode, you can do the same for HAWQ segments managed by YARN.
+
+In HAWQ standalone mode, you can configure the resource capacity of individual 
segments as described in [Configuring Segment Resource 
Capacity](ConfigureResourceManagement.html). If you are using YARN to manage 
HAWQ resources, then you configure the resource capacity of segments by 
configuring YARN. We recommend that you configure all segments with identical 
resource capacity. In `yarn-site.xml`, set the following properties:
+
+``` xml
+<property>
+  <name>yarn.nodemanager.resource.memory-mb</name>
+  <value>4096</value>
+</property>
+<property>
+  <name>yarn.nodemanager.resource.cpu-vcores</name>
+  <value>1</value>
+</property>
+```
+
+We recommend a memory-to-core ratio in which memory is a multiple of 1GB, such as 1GB per core, 2GB per core, or 4GB per core.
+
+After you set limits on the segments, you can use resource queues to configure 
additional resource management rules in HAWQ.
+
+### <a id="avoid_fragmentation"></a>Avoiding Resource Fragmentation with YARN 
Managed Resources 
+
+To reduce the likelihood of resource fragmentation in deployments where 
resources are managed by YARN, ensure that you have configured the following:
+
+-   Segment resource capacity configured in `yarn.nodemanager.resource.memory-mb` must be a multiple of the virtual segment resource quotas that you configure in your resource queues.
+-   The memory-to-core ratio must be a multiple of the amount configured for `yarn.scheduler.minimum-allocation-mb`.
+
+For example, if you have the following properties set in YARN:
+
+-   `yarn.scheduler.minimum-allocation-mb=1gb`
+
+    **Note:** This is the default value set by Ambari in some cases.
+
+-   `yarn.nodemanager.resource.memory-mb=48gb`
+-   `yarn.nodemanager.resource.cpu-vcores=16`
+
+Then the memory-to-core ratio calculated by HAWQ equals 3GB per vcore \(48GB divided by 16\). Since `yarn.scheduler.minimum-allocation-mb` is set to 1GB, each YARN container will be allocated in multiples of 1GB. Since 3GB is a multiple of 1GB, you should not encounter fragmentation.
+
+However, if you had set `yarn.scheduler.minimum-allocation-mb` to 4GB, then it would leave 1GB of fragmented space \(4GB minus 3GB\). To prevent fragmentation in this scenario, you could reconfigure `yarn.nodemanager.resource.memory-mb=64gb` \(or you could set `yarn.scheduler.minimum-allocation-mb=1gb`\).
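
The fragmentation arithmetic described above can be checked with a short sketch. This is an illustration of the calculation only, not HAWQ's internal code:

``` python
def fragmented_mb_per_vcore(node_mb, node_vcores, min_alloc_mb):
    """HAWQ's memory-to-core ratio is node memory divided by vcores.
    YARN rounds each allocation up to a multiple of minimum-allocation-mb,
    so any remainder in that rounding is fragmented (wasted) space."""
    ratio_mb = node_mb // node_vcores                      # e.g. 48GB / 16 = 3GB
    rounded = -(-ratio_mb // min_alloc_mb) * min_alloc_mb  # ceiling to container size
    return rounded - ratio_mb

# 48GB node, 16 vcores, 1GB minimum allocation: 3GB is a multiple of 1GB.
print(fragmented_mb_per_vcore(49152, 16, 1024))  # 0
# The same node with a 4GB minimum allocation leaves 1GB (1024MB) fragmented.
print(fragmented_mb_per_vcore(49152, 16, 4096))  # 1024
```

Raising node memory to 64GB (`fragmented_mb_per_vcore(65536, 16, 4096)`) brings the ratio to 4GB per vcore and the fragmentation back to zero, matching the remedy described above.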
+
+**Note:** If you are specifying 1GB or less for `yarn.scheduler.minimum-allocation-mb` in `yarn-site.xml`, make sure that the value is an even divisor of 1GB; for example, 1024 or 512.
+
+See [Handling Segment Resource 
Fragmentation](../troubleshooting/Troubleshooting.html) for general information 
on resource fragmentation.
+
+## <a id="topic_rtd_cjh_15"></a>Enabling YARN Mode in HAWQ 
+
+After you have properly configured YARN, you can enable YARN as HAWQ's global 
resource manager.
+
+To configure YARN as the global resource manager in a HAWQ cluster, add the 
following property configuration to your `hawq-site.xml` file:
+
+``` xml
+<property>
+      <name>hawq_global_rm_type</name>
+      <value>yarn</value>
+</property>
+```
+
+When enabled, the HAWQ resource manager only uses resources allocated from 
YARN.
+
+### Configuring HAWQ in YARN Environments
+
+If you set the global resource manager to YARN, you must also configure the 
following properties in `hawq-site.xml`:
+
+``` xml
+<property>
+      <name>hawq_rm_yarn_address</name>
+      <value>localhost:8032</value>
+</property>
+<property>
+      <name>hawq_rm_yarn_scheduler_address</name>
+      <value>localhost:8030</value>
+</property>
+<property>
+      <name>hawq_rm_yarn_queue_name</name>
+      <value>hawqque</value>
+</property>
+<property>
+      <name>hawq_rm_yarn_app_name</name>
+      <value>hawq</value>
+</property>
+```
+**Note:** If you have enabled high availability for your YARN resource managers, then you must instead configure `yarn.resourcemanager.ha` and `yarn.resourcemanager.scheduler.ha` in `yarn-client.xml`, located in `$GPHOME/etc`. The values specified for `hawq_rm_yarn_address` and `hawq_rm_yarn_scheduler_address` are then ignored. See [Configuring HAWQ in High Availability-Enabled YARN Environments](#highlyavailableyarn).
+
+#### <a id="id_uvp_3pm_q5"></a>hawq\_rm\_yarn\_address 
+
+Server address \(host and port\) of the YARN resource manager server \(the value of `yarn.resourcemanager.address`\). You must define this if `hawq_global_rm_type` is set to `yarn`. For example, `localhost:8032`.
+
+|Value Range|Default|Set Classifications|
+|-----------|-------|-------------------|
+|valid hostname and port| none set |master|
+
+#### <a id="id_ocq_jpm_q5"></a>hawq\_rm\_yarn\_scheduler\_address 
+
+Server address \(host and port\) of the YARN resource manager scheduler \(the value of `yarn.resourcemanager.scheduler.address`\). You must define this if `hawq_global_rm_type` is set to `yarn`. For example, `localhost:8030`.
+
+|Value Range|Default|Set Classifications|
+|-----------|-------|-------------------|
+|valid hostname and port| none set |master|
+
+#### <a id="id_y23_kpm_q5"></a>hawq\_rm\_yarn\_queue\_name 
+
+The name of the YARN resource queue to register with HAWQ's resource manager.
+
+|Value Range|Default|Set Classifications|
+|-----------|-------|-------------------|
+|string|default|master|
+
+#### <a id="id_h1c_lpm_q5"></a>hawq\_rm\_yarn\_app\_name 
+
+The name of the YARN application registered with HAWQ's resource manager. For 
example, `hawq`.
+
+|Value Range|Default|Set Classifications|
+|-----------|-------|-------------------|
+|string|hawq|master|
+
+### <a id="highlyavailableyarn"></a>Configuring HAWQ in High Availability-Enabled YARN Environments 
+
+If you have enabled high availability for your YARN resource managers, then specify the following parameters in `yarn-client.xml`, located in `$GPHOME/etc`, instead. 
+
+**Note:** When you use high availability in YARN, HAWQ ignores the values 
specified for `hawq_rm_yarn_address` and `hawq_rm_yarn_scheduler_address` in 
`hawq-site.xml` and uses the values specified in `yarn-client.xml` instead.
+
+``` xml
+    <property>
+      <name>yarn.resourcemanager.ha</name>
+      <value>{0}:8032,{1}:8032</value>
+    </property>
+    
+    <property>
+      <name>yarn.resourcemanager.scheduler.ha</name>
+      <value>{0}:8030,{1}:8030</value>
+    </property>
+```
+
+where `{0}` and `{1}` are substituted with the fully qualified hostnames of the YARN resource manager host machines.
+
+## <a id="topic_wp3_4bx_15"></a>Tune HAWQ Resource Negotiations with YARN 
+
+To ensure efficient resource management and the highest performance, you can configure some aspects of how HAWQ's resource manager negotiates resources from YARN.
+
+### <a id="min_yarn_containers"></a>Minimum Number of YARN Containers Per 
Segment 
+
+When HAWQ is integrated with YARN and has no workload, HAWQ does not acquire any resources right away. HAWQ's resource manager only requests resources from YARN when HAWQ receives its first query request. In order to guarantee optimal resource allocation for subsequent queries and to avoid frequent YARN resource negotiation, you can adjust `hawq_rm_min_resource_perseg` so HAWQ receives at least some number of YARN containers per segment regardless of the size of the initial query. The default value is 2, which means HAWQ's resource manager acquires at least 2 YARN containers for each segment even if the first query's resource request is small.
+
+This configuration property cannot exceed the capacity of HAWQ’s YARN queue. 
For example, if HAWQ's queue capacity in YARN is no more than 50% of the whole 
cluster, and each YARN node has a maximum of 64GB memory and 16 vcores, then 
`hawq_rm_min_resource_perseg` in HAWQ cannot be set to more than 8 since HAWQ's 
resource manager acquires YARN containers by vcore. In the case above, the HAWQ 
resource manager acquires a YARN container quota of 4GB memory and 1 vcore.
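
The upper bound described above reduces to simple arithmetic. The sketch below illustrates the constraint only; it is not HAWQ's implementation:

``` python
def max_min_resource_perseg(queue_capacity_pct, node_vcores):
    """HAWQ acquires YARN containers by vcore, so the minimum number of
    containers held per segment cannot exceed the HAWQ queue's share of
    the node's vcores."""
    return int(node_vcores * queue_capacity_pct / 100)

# 50% queue capacity in YARN, 16 vcores per node:
# hawq_rm_min_resource_perseg can be at most 8.
print(max_min_resource_perseg(50, 16))  # 8
```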
+
+### <a id="set_yarn_timeout"></a>Setting a Timeout for YARN Resources 
+
+If HAWQ's workload decreases, then HAWQ's resource manager may be holding some idle YARN resources. You can adjust `hawq_rm_resource_idle_timeout` to let the HAWQ resource manager return idle resources more quickly or more slowly.
+
+For example, when HAWQ's resource manager has to reacquire resources, it can cause latency for query resource requests. To let the HAWQ resource manager retain resources longer in anticipation of an upcoming workload, increase the value of `hawq_rm_resource_idle_timeout`. The default value of `hawq_rm_resource_idle_timeout` is 300 seconds.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/resourcemgmt/best-practices.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/resourcemgmt/best-practices.html.md.erb 
b/markdown/resourcemgmt/best-practices.html.md.erb
new file mode 100644
index 0000000..74bd815
--- /dev/null
+++ b/markdown/resourcemgmt/best-practices.html.md.erb
@@ -0,0 +1,15 @@
+---
+title: Best Practices for Configuring Resource Management
+---
+
+When configuring resource management, you can apply certain best practices to 
ensure that resources are managed both efficiently and for best system 
performance.
+
+The following is a list of high-level best practices for optimal resource 
management:
+
+-   Make sure segments do not have identical IP addresses. See [Segments Do 
Not Appear in 
gp\_segment\_configuration](../troubleshooting/Troubleshooting.html) for an 
explanation of this problem.
+-   Configure all segments to have the same resource capacity. See 
[Configuring Segment Resource Capacity](ConfigureResourceManagement.html).
+-   To prevent resource fragmentation, ensure that your deployment's segment 
resource capacity \(standalone mode\) or YARN node resource capacity \(YARN 
mode\) is a multiple of all virtual segment resource quotas. See [Configuring 
Segment Resource Capacity](ConfigureResourceManagement.html) \(HAWQ standalone 
mode\) and [Setting HAWQ Segment Resource Capacity in 
YARN](YARNIntegration.html).
+-   Ensure that enough registered segments are available and usable for query 
resource requests. If the number of unavailable or unregistered segments is 
higher than a set limit, then query resource requests are rejected. Also ensure 
that the variance of dispatched virtual segments across physical segments is 
not greater than the configured limit. See [Rejection of Query Resource 
Requests](../troubleshooting/Troubleshooting.html).
+-   Use multiple master and segment temporary directories on separate, large 
disks (2TB or greater) to load balance writes to temporary files (for example, 
`/disk1/tmp /disk2/tmp`). For a given query, HAWQ will use a separate temp 
directory (if available) for each virtual segment to store spill files. 
Multiple HAWQ sessions will also use separate temp directories where available 
to avoid disk contention. If you configure too few temp directories, or you 
place multiple temp directories on the same disk, you increase the risk of disk 
contention or running out of disk space when multiple virtual segments target 
the same disk. 
+-   Configure minimum resource levels in YARN, and tune the timeout of when 
idle resources are returned to YARN. See [Tune HAWQ Resource Negotiations with 
YARN](YARNIntegration.html).
+-   Make sure that the property `yarn.scheduler.minimum-allocation-mb` in 
`yarn-site.xml` is an equal subdivision of 1GB. For example, 1024, 512. See 
[Setting HAWQ Segment Resource Capacity in 
YARN](YARNIntegration.html#topic_pzf_kqn_c5).

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/resourcemgmt/index.md.erb
----------------------------------------------------------------------
diff --git a/markdown/resourcemgmt/index.md.erb 
b/markdown/resourcemgmt/index.md.erb
new file mode 100644
index 0000000..7efb756
--- /dev/null
+++ b/markdown/resourcemgmt/index.md.erb
@@ -0,0 +1,12 @@
+---
+title: Managing Resources
+---
+
+This section describes how to use HAWQ's resource management features:
+
+*  <a class="subnav" href="./HAWQResourceManagement.html">How HAWQ Manages 
Resources</a>
+*  <a class="subnav" href="./best-practices.html">Best Practices for 
Configuring Resource Management</a>
+*  <a class="subnav" href="./ConfigureResourceManagement.html">Configuring 
Resource Management</a>
+*  <a class="subnav" href="./YARNIntegration.html">Integrating YARN with 
HAWQ</a>
+*  <a class="subnav" href="./ResourceQueues.html">Working with Hierarchical 
Resource Queues</a>
+*  <a class="subnav" href="./ResourceManagerStatus.html">Analyzing Resource 
Manager Status</a>

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/troubleshooting/Troubleshooting.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/troubleshooting/Troubleshooting.html.md.erb 
b/markdown/troubleshooting/Troubleshooting.html.md.erb
new file mode 100644
index 0000000..2b7414b
--- /dev/null
+++ b/markdown/troubleshooting/Troubleshooting.html.md.erb
@@ -0,0 +1,101 @@
+---
+title: Troubleshooting
+---
+
+This chapter describes how to resolve common problems and errors that occur in 
a HAWQ system.
+
+
+
+## <a id="topic_dwd_rnx_15"></a>Query Performance Issues
+
+**Problem:** Query performance is slow.
+
+**Cause:** There can be multiple reasons why a query might be performing 
slowly. For example, the locality of data distribution, the number of virtual 
segments, or the number of hosts used to execute the query can all affect its 
performance. The following procedure describes how to investigate query 
performance issues.
+
+### <a id="task_ayl_pbw_c5"></a>How to Investigate Query Performance Issues
+
+A query is not executing as quickly as you would expect. Here is how to 
investigate possible causes of slowdown:
+
+1.  Check the health of the cluster.
+    1.  Are any DataNodes, segments or nodes down?
+    2.  Are there many failed disks?
+
+2.  Check table statistics. Have the tables involved in the query been 
analyzed?
+3.  Check the plan of the query and run [`EXPLAIN ANALYZE`](../reference/sql/EXPLAIN.html) to determine the bottleneck. 
+    Sometimes there is not enough memory for some operators, such as Hash Join, and spill files are used. If an operator cannot perform all of its work in the memory allocated to it, it caches data on disk in *spill files*. A query that uses spill files runs much more slowly than one that does not.
+
+4.  Check data locality statistics using [`EXPLAIN ANALYZE`](../reference/sql/EXPLAIN.html). Alternatively, the data locality results for every query can be found in the HAWQ log. See [Data Locality Statistics](../query/query-performance.html#topic_amk_drc_d5) for information on the statistics.
+5.  Check resource queue status. You can query the `pg_resqueue_status` view to check whether the target queue has already dispatched some resources to the queries, or whether the target queue is lacking resources. See [Checking Existing Resource Queues](../resourcemgmt/ResourceQueues.html#topic_lqy_gls_zt).
+6.  Analyze a dump of the resource manager's status to see more resource queue 
status. See [Analyzing Resource Manager 
Status](../resourcemgmt/ResourceQueues.html#topic_zrh_pkc_f5).
+
+## <a id="topic_vm5_znx_15"></a>Rejection of Query Resource Requests
+
+**Problem:** HAWQ resource manager is rejecting query resource allocation 
requests.
+
+**Cause:** The HAWQ resource manager will reject query resource allocation requests under the following conditions:
+
+-   **Too many physical segments are unavailable.**
+
+    The HAWQ resource manager expects that the physical segments listed in the file `$GPHOME/etc/slaves` are already registered and can be queried from the `gp_segment_configuration` table.
+
+    If the resource manager determines that the number of unregistered or unavailable HAWQ physical segments is greater than [hawq\_rm\_rejectrequest\_nseg\_limit](../reference/guc/parameter_definitions.html#hawq_rm_rejectrequest_nseg_limit), then the resource manager rejects query resource requests directly. The purpose of rejecting the query is to guarantee that queries are run in a full-size cluster. This makes diagnosing query performance problems easier. The default value of `hawq_rm_rejectrequest_nseg_limit` is 0.25, which means that if more than 0.25 \* the number of segments listed in `$GPHOME/etc/slaves` are found to be unavailable or unregistered, then the resource manager rejects the query's request for resources. For example, if there are 15 segments listed in the slaves file, the resource manager calculates that no more than 4 segments \(0.25 \* 15\) can be unavailable.
+
+    In most cases, you do not need to modify this default value.
+
+-   **There are unused physical segments with virtual segments allocated for 
the query.**
+
+    The limit defined in 
[hawq\_rm\_tolerate\_nseg\_limit](../reference/guc/parameter_definitions.html#hawq_rm_tolerate_nseg_limit)
 has been exceeded.
+
+-   **Virtual segments have been dispatched too unevenly across physical 
segments.**
+
+    To ensure best query performance, HAWQ resource manager tries to allocate 
virtual segments for query execution as evenly as possible across physical 
segments. However, there can be variance in allocations. HAWQ will reject query 
resource allocation requests that have a variance greater than the value set in 
[hawq\_rm\_nvseg\_variance\_amon\_seg\_limit](../reference/guc/parameter_definitions.html#hawq_rm_nvseg_variance_amon_seg_limit)
+
+    For example, one query execution causes nine \(9\) virtual segments to be dispatched to two \(2\) physical segments. Assume that one segment has been allocated seven \(7\) virtual segments and the other has been allocated two \(2\) virtual segments. The variance between the segments is then five \(5\). If `hawq_rm_nvseg_variance_amon_seg_limit` is set to the default of one \(1\), then the allocation of resources for this query is rejected and the resources will be reallocated later. However, if one physical segment has five \(5\) virtual segments and the other has four \(4\), then this resource allocation is accepted.
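
The variance check in this example can be sketched as follows (illustrative only; the function name is hypothetical):

``` python
def dispatch_allowed(vsegs_per_segment, variance_limit=1):
    """The spread between the most- and least-loaded physical segments
    must not exceed hawq_rm_nvseg_variance_amon_seg_limit."""
    return max(vsegs_per_segment) - min(vsegs_per_segment) <= variance_limit

# 9 virtual segments dispatched over 2 physical segments:
print(dispatch_allowed([7, 2]))  # False: variance 5 exceeds the default limit of 1
print(dispatch_allowed([5, 4]))  # True: variance 1 is within the limit
```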
+
+**Solution:** Check on the status of the nodes in the cluster. Restart existing nodes, if necessary, or add new nodes. You can also modify [hawq\_rm\_nvseg\_variance\_amon\_seg\_limit](../reference/guc/parameter_definitions.html#hawq_rm_nvseg_variance_amon_seg_limit) \(although note that this can affect query performance\).
+
+## <a id="topic_qq4_rkl_wv"></a>Queries Cancelled Due to High VMEM Usage
+
+**Problem:** Certain queries are cancelled due to high virtual memory usage. 
Example error message:
+
+``` pre
+ERROR: Canceling query because of high VMEM usage. Used: 1748MB, available 
480MB, red zone: 9216MB (runaway_cleaner.c:135) (seg74 bcn-w3:5532 pid=33619) 
(dispatcher.c:1681)
+```
+
+**Cause:** This error occurs when the virtual memory usage on a segment exceeds the virtual memory threshold, which can be configured as a percentage through the [runaway\_detector\_activation\_percent](../reference/guc/parameter_definitions.html#runaway_detector_activation_percent) server configuration parameter.
+
+If the amount of virtual memory utilized by a physical segment exceeds the 
calculated threshold, then HAWQ begins terminating queries based on memory 
usage, starting with the query that is consuming the largest amount of memory. 
Queries are terminated until the percentage of utilized virtual memory is below 
the specified percentage.
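
The termination order described above can be illustrated with a short sketch. The numbers are hypothetical and this is not the actual runaway cleaner implementation:

``` python
def victims_for_cleanup(query_mem_mb, vmem_limit_mb, threshold_pct):
    """Terminate the largest memory consumers first until utilized
    virtual memory drops below the configured percentage of the limit."""
    used = sum(query_mem_mb.values())
    threshold = vmem_limit_mb * threshold_pct / 100.0
    victims = []
    for query, mem in sorted(query_mem_mb.items(), key=lambda kv: -kv[1]):
        if used < threshold:
            break
        victims.append(query)
        used -= mem
    return victims

# Hypothetical segment with a 10240MB vmem limit and a 95% threshold:
# terminating the largest query (q1) is enough to drop below the red zone.
print(victims_for_cleanup({"q1": 6000, "q2": 3000, "q3": 1500}, 10240, 95))  # ['q1']
```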
+
+**Solution:** Try temporarily increasing the value of 
`hawq_re_memory_overcommit_max` to allow specific queries to run without error.
+
+Check `pg_log` files for more memory usage details on session and QE 
processes. HAWQ logs terminated query information such as memory allocation 
history and context information as well as query plan operator memory usage 
information. This information is sent to the master and segment instance log 
files.
+
+## <a id="topic_hlj_zxx_15"></a>Segments Do Not Appear in 
gp\_segment\_configuration
+
+**Problem:** Segments have successfully started, but cannot be found in table 
`gp_segment_configuration`.
+
+**Cause:** Your segments may have been assigned identical IP addresses.
+
+Some software and projects use virtualized network interfaces with 
auto-configured IP addresses. This may cause some HAWQ segments to obtain 
identical IP addresses. The resource manager's fault tolerance service 
component will recognize only one of the segments that share an IP address.
+
+**Solution:** Change your network's configuration to disallow identical IP 
addresses before starting up the HAWQ cluster.
+
+## <a id="investigatedownsegment"></a>Investigating Segments Marked As Down 
+
+**Problem:** The [HAWQ fault tolerance service 
(FTS)](../admin/FaultTolerance.html) has marked a segment as down in the 
[gp_segment_configuration](../reference/catalog/gp_segment_configuration.html) 
catalog table.
+
+**Cause:**  FTS marks a segment as down when a segment encounters a critical 
error. For example, a temporary directory on the segment fails due to a 
hardware error. Other causes might include network or communication errors, 
resource manager errors, or simply a heartbeat timeout. The segment reports 
critical failures to the HAWQ master through a heartbeat report.
+
+**Solution:** The actions required for recovering a segment vary depending 
upon the cause. In some cases, the segment is only marked as down temporarily, 
until the next heartbeat interval rechecks the segment's status. To investigate 
why a segment was marked down, check the gp_configuration_history 
catalog table for the corresponding reason. See [Viewing the Current Status of a 
Segment](../admin/FaultTolerance.html#view_segment_status) for a description of 
the various reasons that the fault tolerance service may mark a segment as down.
+
+## <a id="topic_mdz_q2y_15"></a>Handling Segment Resource Fragmentation
+
+Different HAWQ resource queues can have different virtual segment resource 
quotas, which can result in resource fragmentation. For example, a HAWQ cluster 
has 4GB of memory available for a currently queued query, but the resource 
queues are configured so that the memory is split into four 512MB blocks on 
four different segments. It is then impossible to allocate two 1GB memory 
virtual segments.
+
+In standalone mode, the segment resources are all exclusively occupied by 
HAWQ. Resource fragmentation can occur when segment capacity is not a multiple 
of a virtual segment resource quota. For example, a segment has 15GB memory 
capacity, but the virtual segment resource quota is set to 2GB. The maximum 
possible memory consumption in a segment is 14GB. Therefore, you should 
configure segment resource capacity as a multiple of all virtual segment 
resource quotas.
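+
+The arithmetic above can be made concrete with a small hypothetical helper:
+
``` python
# Worked version of the standalone-mode example above (hypothetical
# helper): the usable memory on a segment is the largest multiple of
# the virtual segment quota that fits within the segment's capacity.
def usable_memory_gb(segment_capacity_gb, vseg_quota_gb):
    full_vsegs = segment_capacity_gb // vseg_quota_gb
    return full_vsegs * vseg_quota_gb

# 15GB capacity with a 2GB quota: at most 14GB can ever be used.
print(usable_memory_gb(15, 2))  # 14
# Sizing capacity as a multiple of the quota avoids the waste.
print(usable_memory_gb(16, 2))  # 16
```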
+
+In YARN mode, resources are allocated from the YARN resource manager. The HAWQ 
resource manager acquires YARN containers in units of one vcore. For example, if 
YARN reports that a segment has 64GB memory and 16 vcores configured for YARN 
applications, HAWQ requests YARN containers of 4GB memory and 1 vcore each. In 
this manner, the HAWQ resource manager acquires YARN containers on demand. If 
the capacity of the YARN container is not a multiple of the virtual segment 
resource quota, resource fragmentation may occur. For example, if the YARN 
container resource capacity is 3GB memory and 1 vcore, a segment may have 1 or 3 
YARN containers for HAWQ query execution. In this situation, if the virtual 
segment resource quota is 2GB memory, then HAWQ will always have 1GB of memory 
per container that cannot be utilized. Therefore, it is recommended that you 
configure YARN node resource capacity carefully so that the YARN container 
resource quota is a multiple of all virtual segment resource quotas. In 
addition, make sure your CPU to memory ratio is a multiple of the amount 
configured for `yarn.scheduler.minimum-allocation-mb`. See [Setting HAWQ Segment 
Resource Capacity in YARN](../resourcemgmt/YARNIntegration.html#topic_pzf_kqn_c5) 
for more information.
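+
+The per-container waste in the example above can be computed with a hypothetical helper:
+
``` python
# Sketch of the YARN-mode fragmentation example above (hypothetical
# helper): memory stranded inside each YARN container when the
# container size is not a multiple of the virtual segment quota.
def stranded_per_container_gb(container_gb, vseg_quota_gb):
    return container_gb % vseg_quota_gb

# 3GB containers with a 2GB virtual segment quota: 1GB per container
# can never be assigned to a virtual segment.
print(stranded_per_container_gb(3, 2))  # 1
# A 4GB container (a multiple of the quota) strands nothing.
print(stranded_per_container_gb(4, 2))  # 0
```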
+
+If resource fragmentation occurs, queued requests are not processed until 
either some running queries return resources or the global resource manager 
provides more resources. If you encounter resource fragmentation, you should 
double-check the configured capacities of the resource queues for any errors. 
For example, one such error is a global resource manager container 
memory-to-core ratio that is not a multiple of the virtual segment resource 
quota.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/02-pipeline.png
----------------------------------------------------------------------
diff --git a/mdimages/02-pipeline.png b/mdimages/02-pipeline.png
deleted file mode 100644
index 26fec1b..0000000
Binary files a/mdimages/02-pipeline.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/03-gpload-files.jpg
----------------------------------------------------------------------
diff --git a/mdimages/03-gpload-files.jpg b/mdimages/03-gpload-files.jpg
deleted file mode 100644
index d50435f..0000000
Binary files a/mdimages/03-gpload-files.jpg and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/1-assign-masters.tiff
----------------------------------------------------------------------
diff --git a/mdimages/1-assign-masters.tiff b/mdimages/1-assign-masters.tiff
deleted file mode 100644
index b5c4cb4..0000000
Binary files a/mdimages/1-assign-masters.tiff and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/1-choose-services.tiff
----------------------------------------------------------------------
diff --git a/mdimages/1-choose-services.tiff b/mdimages/1-choose-services.tiff
deleted file mode 100644
index d21b706..0000000
Binary files a/mdimages/1-choose-services.tiff and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/3-assign-slaves-and-clients.tiff
----------------------------------------------------------------------
diff --git a/mdimages/3-assign-slaves-and-clients.tiff 
b/mdimages/3-assign-slaves-and-clients.tiff
deleted file mode 100644
index 93ea3bd..0000000
Binary files a/mdimages/3-assign-slaves-and-clients.tiff and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/4-customize-services-hawq.tiff
----------------------------------------------------------------------
diff --git a/mdimages/4-customize-services-hawq.tiff 
b/mdimages/4-customize-services-hawq.tiff
deleted file mode 100644
index c6bfee8..0000000
Binary files a/mdimages/4-customize-services-hawq.tiff and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/5-customize-services-pxf.tiff
----------------------------------------------------------------------
diff --git a/mdimages/5-customize-services-pxf.tiff 
b/mdimages/5-customize-services-pxf.tiff
deleted file mode 100644
index 3812aa1..0000000
Binary files a/mdimages/5-customize-services-pxf.tiff and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/6-review.tiff
----------------------------------------------------------------------
diff --git a/mdimages/6-review.tiff b/mdimages/6-review.tiff
deleted file mode 100644
index be7debb..0000000
Binary files a/mdimages/6-review.tiff and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/7-install-start-test.tiff
----------------------------------------------------------------------
diff --git a/mdimages/7-install-start-test.tiff 
b/mdimages/7-install-start-test.tiff
deleted file mode 100644
index b556e9a..0000000
Binary files a/mdimages/7-install-start-test.tiff and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/ext-tables-xml.png
----------------------------------------------------------------------
diff --git a/mdimages/ext-tables-xml.png b/mdimages/ext-tables-xml.png
deleted file mode 100644
index f208828..0000000
Binary files a/mdimages/ext-tables-xml.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/ext_tables.jpg
----------------------------------------------------------------------
diff --git a/mdimages/ext_tables.jpg b/mdimages/ext_tables.jpg
deleted file mode 100644
index d5a0940..0000000
Binary files a/mdimages/ext_tables.jpg and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/ext_tables_multinic.jpg
----------------------------------------------------------------------
diff --git a/mdimages/ext_tables_multinic.jpg b/mdimages/ext_tables_multinic.jpg
deleted file mode 100644
index fcf09c4..0000000
Binary files a/mdimages/ext_tables_multinic.jpg and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/gangs.jpg
----------------------------------------------------------------------
diff --git a/mdimages/gangs.jpg b/mdimages/gangs.jpg
deleted file mode 100644
index 0d14585..0000000
Binary files a/mdimages/gangs.jpg and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/gp_orca_fallback.png
----------------------------------------------------------------------
diff --git a/mdimages/gp_orca_fallback.png b/mdimages/gp_orca_fallback.png
deleted file mode 100644
index 000a6af..0000000
Binary files a/mdimages/gp_orca_fallback.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/gpfdist_instances.png
----------------------------------------------------------------------
diff --git a/mdimages/gpfdist_instances.png b/mdimages/gpfdist_instances.png
deleted file mode 100644
index 6fae2d4..0000000
Binary files a/mdimages/gpfdist_instances.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/gpfdist_instances_backup.png
----------------------------------------------------------------------
diff --git a/mdimages/gpfdist_instances_backup.png 
b/mdimages/gpfdist_instances_backup.png
deleted file mode 100644
index 7cd3e1a..0000000
Binary files a/mdimages/gpfdist_instances_backup.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/gporca.png
----------------------------------------------------------------------
diff --git a/mdimages/gporca.png b/mdimages/gporca.png
deleted file mode 100644
index 2909443..0000000
Binary files a/mdimages/gporca.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/hawq_architecture_components.png
----------------------------------------------------------------------
diff --git a/mdimages/hawq_architecture_components.png 
b/mdimages/hawq_architecture_components.png
deleted file mode 100644
index cea50b0..0000000
Binary files a/mdimages/hawq_architecture_components.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/hawq_hcatalog.png
----------------------------------------------------------------------
diff --git a/mdimages/hawq_hcatalog.png b/mdimages/hawq_hcatalog.png
deleted file mode 100644
index 35b74c3..0000000
Binary files a/mdimages/hawq_hcatalog.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/hawq_high_level_architecture.png
----------------------------------------------------------------------
diff --git a/mdimages/hawq_high_level_architecture.png 
b/mdimages/hawq_high_level_architecture.png
deleted file mode 100644
index d88bf7a..0000000
Binary files a/mdimages/hawq_high_level_architecture.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/partitions.jpg
----------------------------------------------------------------------
diff --git a/mdimages/partitions.jpg b/mdimages/partitions.jpg
deleted file mode 100644
index d366e21..0000000
Binary files a/mdimages/partitions.jpg and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/piv-opt.png
----------------------------------------------------------------------
diff --git a/mdimages/piv-opt.png b/mdimages/piv-opt.png
deleted file mode 100644
index f8f192b..0000000
Binary files a/mdimages/piv-opt.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/resource_queues.jpg
----------------------------------------------------------------------
diff --git a/mdimages/resource_queues.jpg b/mdimages/resource_queues.jpg
deleted file mode 100644
index 7f5a54c..0000000
Binary files a/mdimages/resource_queues.jpg and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/slice_plan.jpg
----------------------------------------------------------------------
diff --git a/mdimages/slice_plan.jpg b/mdimages/slice_plan.jpg
deleted file mode 100644
index ad8da83..0000000
Binary files a/mdimages/slice_plan.jpg and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/source/gporca.graffle
----------------------------------------------------------------------
diff --git a/mdimages/source/gporca.graffle b/mdimages/source/gporca.graffle
deleted file mode 100644
index fb835d5..0000000
Binary files a/mdimages/source/gporca.graffle and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/source/hawq_hcatalog.graffle
----------------------------------------------------------------------
diff --git a/mdimages/source/hawq_hcatalog.graffle 
b/mdimages/source/hawq_hcatalog.graffle
deleted file mode 100644
index f46bfb2..0000000
Binary files a/mdimages/source/hawq_hcatalog.graffle and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/mdimages/standby_master.jpg
----------------------------------------------------------------------
diff --git a/mdimages/standby_master.jpg b/mdimages/standby_master.jpg
deleted file mode 100644
index ef195ab..0000000
Binary files a/mdimages/standby_master.jpg and /dev/null differ
