cluster expansion - include NodeManager config if using YARN
Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/commit/d4d42834
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/tree/d4d42834
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/diff/d4d42834

Branch: refs/heads/develop
Commit: d4d42834dd3020fde0ddaa205581ad0be8231bbe
Parents: cb62d8a
Author: Lisa Owen <lo...@pivotal.io>
Authored: Wed Oct 12 09:56:31 2016 -0700
Committer: Lisa Owen <lo...@pivotal.io>
Committed: Wed Oct 12 09:56:31 2016 -0700

----------------------------------------------------------------------
 admin/ClusterExpansion.html.md.erb | 2 +-
 admin/ambari-admin.html.md.erb     | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/d4d42834/admin/ClusterExpansion.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/ClusterExpansion.html.md.erb b/admin/ClusterExpansion.html.md.erb
index d99c760..e4800d3 100644
--- a/admin/ClusterExpansion.html.md.erb
+++ b/admin/ClusterExpansion.html.md.erb
@@ -12,7 +12,7 @@
 This topic provides some guidelines around expanding your HAWQ cluster.
 
 There are several recommendations to keep in mind when modifying the size of your running HAWQ cluster:
-- When you add a new node, install both a DataNode and a physical segment on the new node.
+- When you add a new node, install both a DataNode and a physical segment on the new node. If you are using YARN to manage HAWQ resources, you must also configure a YARN NodeManager on the new node.
 - After adding a new node, you should always rebalance HDFS data to maintain cluster performance.
 - Adding or removing a node also necessitates an update to the HDFS metadata cache. This update will happen eventually, but can take some time. To speed the update of the metadata cache, execute **`select gp_metadata_cache_clear();`**.
 - Note that for hash distributed tables, expanding the cluster will not immediately improve performance since hash distributed tables use a fixed number of virtual segments. In order to obtain better performance with hash distributed tables, you must redistribute the table to the updated cluster by either the [ALTER TABLE](../reference/sql/ALTER-TABLE.html) or [CREATE TABLE AS](../reference/sql/CREATE-TABLE-AS.html) command.


http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/d4d42834/admin/ambari-admin.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/ambari-admin.html.md.erb b/admin/ambari-admin.html.md.erb
index e41adc6..a5b2169 100644
--- a/admin/ambari-admin.html.md.erb
+++ b/admin/ambari-admin.html.md.erb
@@ -153,7 +153,7 @@
 This topic provides some guidelines around expanding your HAWQ cluster.
 
 There are several recommendations to keep in mind when modifying the size of your running HAWQ cluster:
-- When you add a new node, install both a DataNode and a HAWQ segment on the new node.
+- When you add a new node, install both a DataNode and a HAWQ segment on the new node. If you are using YARN to manage HAWQ resources, you must also configure a YARN NodeManager on the new node.
 - After adding a new node, you should always rebalance HDFS data to maintain cluster performance.
 - Adding or removing a node also necessitates an update to the HDFS metadata cache. This update will happen eventually, but can take some time. To speed the update of the metadata cache, select the **Service Actions > Clear HAWQ's HDFS Metadata Cache** option in Ambari.
 - Note that for hash distributed tables, expanding the cluster will not immediately improve performance since hash distributed tables use a fixed number of virtual segments. In order to obtain better performance with hash distributed tables, you must redistribute the table to the updated cluster by either the [ALTER TABLE](../reference/sql/ALTER-TABLE.html) or [CREATE TABLE AS](../reference/sql/CREATE-TABLE-AS.html) command.
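Both changed files direct readers to ALTER TABLE or CREATE TABLE AS for redistributing a hash-distributed table after expansion. As a rough sketch of that step (not part of the commit): the table name `sales`, its distribution column `id`, and the intermediate name `sales_redist` are all hypothetical, and this assumes the Greenplum-style `DISTRIBUTED BY` clause that HAWQ's CREATE TABLE AS accepts.

```sql
-- Hypothetical redistribution after cluster expansion: rebuild the
-- hash-distributed table so its data is re-hashed across the new layout.
CREATE TABLE sales_redist AS
    SELECT * FROM sales
    DISTRIBUTED BY (id);

-- Swap the rebuilt table into place.
DROP TABLE sales;
ALTER TABLE sales_redist RENAME TO sales;
```

Because redistribution rewrites the whole table, it is typically scheduled during a maintenance window rather than immediately after adding nodes.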