clean up update, delete, vacuum references

Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/commit/85e8a5da
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/tree/85e8a5da
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/diff/85e8a5da

Branch: refs/heads/master
Commit: 85e8a5da28b810649921f482c56d7530d1e5ac2c
Parents: 37857ea
Author: Lisa Owen <[email protected]>
Authored: Thu Sep 22 14:37:13 2016 -0700
Committer: Lisa Owen <[email protected]>
Committed: Fri Sep 23 12:01:04 2016 -0700

----------------------------------------------------------------------
 admin/RecommendedMonitoringTasks.html.md.erb            |  6 ++----
 admin/monitor.html.md.erb                               |  2 +-
 clientaccess/roles_privs.html.md.erb                    |  2 +-
 datamgmt/BasicDataOperations.html.md.erb                | 12 +++++++-----
 datamgmt/ConcurrencyControl.html.md.erb                 |  7 -------
 datamgmt/Transactions.html.md.erb                       | 10 +++-------
 datamgmt/about_statistics.html.md.erb                   |  8 +++-----
 ...timizing-data-load-and-query-performance.html.md.erb |  6 +++---
 ddl/ddl-partition.html.md.erb                           |  4 ++--
 ddl/ddl-storage.html.md.erb                             |  9 +++------
 reference/catalog/pg_class.html.md.erb                  |  2 +-
 reference/catalog/pg_index.html.md.erb                  |  2 +-
 reference/cli/admin_utilities/hawqload.html.md.erb      |  2 +-
 reference/cli/client_utilities/vacuumdb.html.md.erb     |  2 ++
 reference/guc/guc_category-list.html.md.erb             |  2 +-
 reference/guc/parameter_definitions.html.md.erb         |  6 +-----
 reference/sql/ALTER-TABLE.html.md.erb                   |  2 +-
 reference/sql/BEGIN.html.md.erb                         |  2 +-
 reference/sql/COPY.html.md.erb                          |  2 +-
 reference/sql/CREATE-EXTERNAL-TABLE.html.md.erb         |  2 +-
 reference/sql/CREATE-TABLE.html.md.erb                  |  4 ++--
 reference/sql/CREATE-VIEW.html.md.erb                   |  2 +-
 reference/sql/DROP-TABLE.html.md.erb                    |  2 +-
 reference/sql/GRANT.html.md.erb                         |  2 +-
 reference/sql/PREPARE.html.md.erb                       |  2 +-
 reference/sql/VACUUM.html.md.erb                        | 10 +++++++---
 reference/toolkit/hawq_toolkit.html.md.erb              |  2 +-
 27 files changed, 50 insertions(+), 64 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/85e8a5da/admin/RecommendedMonitoringTasks.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/RecommendedMonitoringTasks.html.md.erb 
b/admin/RecommendedMonitoringTasks.html.md.erb
index a01ea55..3007aee 100644
--- a/admin/RecommendedMonitoringTasks.html.md.erb
+++ b/admin/RecommendedMonitoringTasks.html.md.erb
@@ -193,7 +193,7 @@ GROUP BY 1;
     <p>Severity: CRITICAL</p>
     </td>
     <td>
-      <p><p>Vacuum an individual table:</p>
+      <p><p>Vacuum an individual system catalog table:</p>
       <pre><code>VACUUM &lt;<i>table</i>&gt;;</code></pre>
     </td>
     <td>After the <a 
href="/20/reference/guc/parameter_definitions.html">vacuum_freeze_min_age</a> 
value is reached, VACUUM will no longer replace transaction IDs with 
<code>FrozenXID</code> while scanning a table. Perform vacuum on these tables 
before the limit is reached.</td>
@@ -224,9 +224,7 @@ GROUP BY 1;
       <p>Recommended frequency: weekly, or more often if database objects are 
created and dropped frequently</p>
     </td>
     <td>
-      <ol>
-        <li><code>VACUUM</code> the system tables in each database.</li>
-      </ol>
+      <p><code>VACUUM</code> the system tables in each database.</p>
     </td>
     <td>The optimizer retrieves information from the system tables to create 
query plans. If system tables and indexes are allowed to become bloated over 
time, scanning the system tables increases query execution time.</td>
   </tr>

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/85e8a5da/admin/monitor.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/monitor.html.md.erb b/admin/monitor.html.md.erb
index 1e464e2..d1fbf31 100644
--- a/admin/monitor.html.md.erb
+++ b/admin/monitor.html.md.erb
@@ -78,7 +78,7 @@ HAWQ tracks various metadata information in its system 
catalogs about the object
 
 #### <a id="topic25"></a>Viewing the Last Operation Performed 
 
-You can use the system views *pg\_stat\_operations* and 
*pg\_stat\_partition\_operations* to look up actions performed on an object, 
such as a table. For example, to see the actions performed on a table, such as 
when it was created and when it was last vacuumed and analyzed:
+You can use the system views *pg\_stat\_operations* and 
*pg\_stat\_partition\_operations* to look up actions performed on an object, 
such as a table. For example, to see the actions performed on a table, such as 
when it was created and when it was last analyzed:
 
 ```sql
 => SELECT schemaname as schema, objname as table,

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/85e8a5da/clientaccess/roles_privs.html.md.erb
----------------------------------------------------------------------
diff --git a/clientaccess/roles_privs.html.md.erb 
b/clientaccess/roles_privs.html.md.erb
index bd2de0a..4503951 100644
--- a/clientaccess/roles_privs.html.md.erb
+++ b/clientaccess/roles_privs.html.md.erb
@@ -16,7 +16,7 @@ In order to bootstrap the HAWQ system, a freshly initialized 
system always conta
 
 ## <a id="topic2"></a>Security Best Practices for Roles and Privileges 
 
--   **Secure the gpadmin system user.** HAWQ requires a UNIX user id to 
install and initialize the HAWQ system. This system user is referred to as 
`gpadmin` in the HAWQ documentation. This `gpadmin` user is the default 
database superuser in HAWQ, as well as the file system owner of the HAWQ 
installation and its underlying data files. This default administrator account 
is fundamental to the design of HAWQ. The system cannot run without it, and 
there is no way to limit the access of this gpadmin user id. Use roles to 
manage who has access to the database for specific purposes. You should only 
use the `gpadmin` account for system maintenance tasks such as expansion and 
upgrade. Anyone who logs on to a HAWQ host as this user id can read, alter or 
delete any data; including system catalog data and database access rights. 
Therefore, it is very important to secure the gpadmin user id and only provide 
access to essential system administrators. Administrators should only log in to 
HAWQ as `gpadmin` when performing certain system maintenance tasks \(such as upgrade or 
expansion\). Database users should never log on as `gpadmin`, and ETL or 
production workloads should never run as `gpadmin`.
+-   **Secure the gpadmin system user.** HAWQ requires a UNIX user id to 
install and initialize the HAWQ system. This system user is referred to as 
`gpadmin` in the HAWQ documentation. This `gpadmin` user is the default 
database superuser in HAWQ, as well as the file system owner of the HAWQ 
installation and its underlying data files. This default administrator account 
is fundamental to the design of HAWQ. The system cannot run without it, and 
there is no way to limit the access of this gpadmin user id. Use roles to 
manage who has access to the database for specific purposes. You should only 
use the `gpadmin` account for system maintenance tasks such as expansion and 
upgrade. Anyone who logs on to a HAWQ host as this user id can read, alter or 
delete any data; specifically system catalog data and database access rights. 
Therefore, it is very important to secure the gpadmin user id and only provide 
access to essential system administrators. Administrators should only log in to 
HAWQ as
  `gpadmin` when performing certain system maintenance tasks \(such as upgrade 
or expansion\). Database users should never log on as `gpadmin`, and ETL or 
production workloads should never run as `gpadmin`.
 -   **Assign a distinct role to each user that logs in.** For logging and 
auditing purposes, each user that is allowed to log in to HAWQ should be given 
their own database role. For applications or web services, consider creating a 
distinct role for each application or service. See [Creating New Roles 
\(Users\)](#topic3).
 -   **Use groups to manage access privileges.** See [Role Membership](#topic5).
 -   **Limit users who have the SUPERUSER role attribute.** Roles that are 
superusers bypass all access privilege checks in HAWQ, as well as resource 
queuing. Only system administrators should be given superuser rights. See 
[Altering Role Attributes](#topic4).
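An illustrative session for the best practices above (the role name and password are hypothetical, not part of this commit):

``` sql
-- give each login user a distinct, non-superuser role,
-- then manage privileges through group roles
CREATE ROLE etl_user WITH LOGIN PASSWORD 'changeme';
GRANT sales_readers TO etl_user;
```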

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/85e8a5da/datamgmt/BasicDataOperations.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/BasicDataOperations.html.md.erb 
b/datamgmt/BasicDataOperations.html.md.erb
index cd2cf16..6c3323c 100644
--- a/datamgmt/BasicDataOperations.html.md.erb
+++ b/datamgmt/BasicDataOperations.html.md.erb
@@ -40,16 +40,18 @@ To insert data into a partitioned table, you specify the 
root partitioned table,
 
 To insert large amounts of data, use external tables or the `COPY` command. 
These load mechanisms are more efficient than `INSERT` for inserting large 
quantities of rows. See [Loading and Unloading 
Data](load/g-loading-and-unloading-data.html#topic1) for more information about 
bulk data loading.
 
-## <a id="topic9"></a>Vacuuming the Database
+## <a id="topic9"></a>Vacuuming the System Catalog Tables
 
-Deleted or updated data rows occupy physical space on disk even though new 
transactions cannot see them. Periodically running the `VACUUM` command removes 
these expired rows. For example:
+Only HAWQ system catalog tables use multiversion concurrency control. Deleted 
or updated data rows in the catalog tables occupy physical space on disk even 
though new transactions cannot see them.
+
+Periodically running the `VACUUM` command on system catalog tables removes 
these expired rows. The `VACUUM` command also collects table-level statistics, 
such as the number of rows and pages.
+
+For example:
 
 ``` sql
-VACUUM mytable;
+VACUUM pg_class;
 ```
 
-The `VACUUM` command collects table-level statistics such as the number of 
rows and pages. Vacuum all tables after loading data.
-
 ### <a id="topic10"></a>Configuring the Free Space Map
 
 Expired rows are held in the *free space map*. The free space map must be 
sized large enough to hold all expired rows in your database. If not, a regular 
`VACUUM` command cannot reclaim space occupied by expired rows that overflow 
the free space map.
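A quick illustration of the hunk above (hypothetical session, not part of this commit): vacuum a catalog table, then view the table-level statistics that `VACUUM` refreshes.

``` sql
VACUUM pg_class;
-- reltuples and relpages are the statistics VACUUM collects
SELECT relname, reltuples, relpages
FROM pg_class
WHERE relname = 'pg_class';
```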

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/85e8a5da/datamgmt/ConcurrencyControl.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/ConcurrencyControl.html.md.erb 
b/datamgmt/ConcurrencyControl.html.md.erb
index 0291ddb..2ced135 100644
--- a/datamgmt/ConcurrencyControl.html.md.erb
+++ b/datamgmt/ConcurrencyControl.html.md.erb
@@ -17,15 +17,8 @@ HAWQ provides multiple lock modes to control concurrent 
access to data in tables
 | Lock Mode              | Associated SQL Commands                             
                                | Conflicts With                                
                                                                          |
 
|------------------------|-------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------|
 | ACCESS SHARE           | `SELECT`                                            
                                | ACCESS EXCLUSIVE                              
                                                                          |
-| ROW SHARE              | `SELECT FOR UPDATE`, `SELECT FOR                 
SHARE`                             | EXCLUSIVE, ACCESS EXCLUSIVE                
                                                                             |
 | ROW EXCLUSIVE          | `INSERT`, `COPY`                                    
                                | SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, ACCESS 
EXCLUSIVE                                                                 |
 | SHARE UPDATE EXCLUSIVE | `VACUUM` (without `FULL`), `ANALYZE`                
                                | SHARE UPDATE EXCLUSIVE, SHARE, SHARE ROW 
EXCLUSIVE, EXCLUSIVE, ACCESS EXCLUSIVE                                         |
 | SHARE                  | `CREATE INDEX`                                      
                                | ROW EXCLUSIVE, SHARE UPDATE EXCLUSIVE, SHARE 
ROW EXCLUSIVE, EXCLUSIVE, ACCESS EXCLUSIVE                                 |
 | SHARE ROW EXCLUSIVE    |                                                    
                                 | ROW EXCLUSIVE, SHARE UPDATE EXCLUSIVE, 
SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, ACCESS EXCLUSIVE                         
 |
-| EXCLUSIVE              | `DELETE`, `UPDATE` See 
[Note](#topic_f5l_qnh_kr__lock_note) | ROW SHARE, ROW EXCLUSIVE, SHARE UPDATE 
EXCLUSIVE, SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, ACCESS EXCLUSIVE              
 |
 | ACCESS EXCLUSIVE       | `ALTER TABLE`, `DROP TABLE`, `TRUNCATE`, `REINDEX`, 
`CLUSTER`, `VACUUM FULL`        | ACCESS SHARE, ROW SHARE, ROW EXCLUSIVE, SHARE 
UPDATE EXCLUSIVE, SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, ACCESS EXCLUSIVE |
-
-
-**Note:** In HAWQ, `UPDATE` and `DELETE` acquire the more restrictive lock 
EXCLUSIVE rather than ROW EXCLUSIVE.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/85e8a5da/datamgmt/Transactions.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/Transactions.html.md.erb 
b/datamgmt/Transactions.html.md.erb
index b5d7e97..dfc9a5e 100644
--- a/datamgmt/Transactions.html.md.erb
+++ b/datamgmt/Transactions.html.md.erb
@@ -24,7 +24,7 @@ HAWQ accepts the standard SQL transaction levels as follows:
 
 The following information describes the behavior of the HAWQ transaction 
levels:
 
--   **read committed/read uncommitted** — Provides fast, simple, partial 
transaction isolation. With read committed and read uncommitted transaction 
isolation, `SELECT`, `UPDATE`, and `DELETE` transactions operate on a snapshot 
of the database taken when the query started.
+-   **read committed/read uncommitted** — Provides fast, simple, partial 
transaction isolation. With read committed and read uncommitted transaction 
isolation, `SELECT` transactions operate on a snapshot of the database taken 
when the query started.
 
 A `SELECT` query:
 
@@ -33,9 +33,9 @@ A `SELECT` query:
 -   Does not see uncommitted data outside the transaction.
 -   Can possibly see changes that concurrent transactions made if the 
concurrent transaction is committed after the initial read in its own 
transaction.
 
-Successive `SELECT` queries in the same transaction can see different data if 
other concurrent transactions commit changes before the queries start. `UPDATE` 
and `DELETE` commands find only rows committed before the commands started.
+Successive `SELECT` queries in the same transaction can see different data if 
other concurrent transactions commit changes before the queries start.
 
-Read committed or read uncommitted transaction isolation allows concurrent 
transactions to modify or lock a row before `UPDATE` or `DELETE` finds the row. 
Read committed or read uncommitted transaction isolation may be inadequate for 
applications that perform complex queries and updates and require a consistent 
view of the database.
+Read committed or read uncommitted transaction isolation may be inadequate for 
applications that perform complex queries and require a consistent view of the 
database.
 
 -   **serializable/repeatable read** — Provides strict transaction isolation 
in which transactions execute as if they run one after another rather than 
concurrently. Applications on the serializable or repeatable read level must be 
designed to retry transactions in case of serialization failures.
 
@@ -49,10 +49,6 @@ A `SELECT` query:
 
     Successive `SELECT` commands within a single transaction always see the 
same data.
 
-    `UPDATE`, `DELETE, SELECT FOR UPDATE,` and `SELECT FOR SHARE` commands 
find only rows committed before the command started. If a concurrent 
transaction has already updated, deleted, or locked a target row when the row 
is found, the serializable or repeatable read transaction waits for the 
concurrent transaction to update the row, delete the row, or roll back.
-
-    If the concurrent transaction updates or deletes the row, the serializable 
or repeatable read transaction rolls back. If the concurrent transaction rolls 
back, then the serializable or repeatable read transaction updates or deletes 
the row.
-
 The default transaction isolation level in HAWQ is *read committed*. To change 
the isolation level for a transaction, declare the isolation level when you 
`BEGIN` the transaction or use the `SET TRANSACTION` command after the 
transaction starts.
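An illustrative session for the paragraph above (the `sales` table is hypothetical, not part of this commit):

``` sql
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- successive SELECTs in this transaction see the same snapshot
SELECT count(*) FROM sales;
COMMIT;
```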
 
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/85e8a5da/datamgmt/about_statistics.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/about_statistics.html.md.erb 
b/datamgmt/about_statistics.html.md.erb
index d4b7665..25b3b24 100644
--- a/datamgmt/about_statistics.html.md.erb
+++ b/datamgmt/about_statistics.html.md.erb
@@ -20,7 +20,7 @@ Calculating statistics consumes time and resources, so HAWQ 
produces estimates b
 
 ### <a id="tablesize"></a>Table Size
 
-The query planner seeks to minimize the disk I/O and network traffic required 
to execute a query, using estimates of the number of rows that must be 
processed and the number of disk pages the query must access. The data from 
which these estimates are derived are the `pg_class` system table columns 
`reltuples` and `relpages`, which contain the number of rows and pages at the 
time a `VACUUM` or `ANALYZE` command was last run. As rows are added or 
deleted, the numbers become less accurate. However, an accurate count of disk 
pages is always available from the operating system, so as long as the ratio of 
`reltuples` to `relpages` does not change significantly, the optimizer can 
produce an estimate of the number of rows that is sufficiently accurate to 
choose the correct query execution plan.
+The query planner seeks to minimize the disk I/O and network traffic required 
to execute a query, using estimates of the number of rows that must be 
processed and the number of disk pages the query must access. The data from 
which these estimates are derived are the `pg_class` system table columns 
`reltuples` and `relpages`, which contain the number of rows and pages at the 
time a `VACUUM` or `ANALYZE` command was last run. As rows are added, the 
numbers become less accurate. However, an accurate count of disk pages is 
always available from the operating system, so as long as the ratio of 
`reltuples` to `relpages` does not change significantly, the optimizer can 
produce an estimate of the number of rows that is sufficiently accurate to 
choose the correct query execution plan.
 
 In append-optimized tables, the number of tuples is kept up-to-date in the 
system catalogs, so the `reltuples` statistic is not an estimate. Non-visible 
tuples in the table are subtracted from the total. The `relpages` value is 
estimated from the append-optimized block sizes.
 
@@ -122,8 +122,6 @@ If a sample table is created, the number of rows in the 
sample is calculated to
 
 Running `ANALYZE` with no arguments updates statistics for all tables in the 
database. This could take a very long time, so it is better to analyze tables 
selectively after data has changed. You can also analyze a subset of the 
columns in a table, for example columns used in joins, `WHERE` clauses, `SORT` 
clauses, `GROUP BY` clauses, or `HAVING` clauses.
 
-Analyzing a severely bloated table can generate poor statistics if the sample 
contains empty pages, so it is good practice to vacuum a bloated table before 
analyzing it.
-
 See the SQL Command Reference for details of running the `ANALYZE` command.
 
 Refer to the Management Utility Reference for details of running the 
`analyzedb` command.
@@ -150,7 +148,7 @@ Set the system default statistics target to a different 
value by setting the `de
 $ hawq config -c default_statistics_target -v 50
 ```
 
-The statististics target for individual columns can be set with the `ALTER     
        TABLE` command. For example, some queries can be improved by increasing 
the target for certain columns, especially columns that have irregular 
distributions. You can set the target to zero for columns that never contribute 
to query otpimization. When the target is 0, `ANALYZE` ignores the column. For 
example, the following `ALTER TABLE` command sets the statistics target for the 
`notes` column in the `emp` table to zero:
+The statistics target for individual columns can be set with the `ALTER TABLE` 
command. For example, some queries can be improved by increasing the target 
for certain columns, especially columns that have irregular distributions. You 
can set the target to zero for columns that never contribute to query 
optimization. When the target is 0, `ANALYZE` ignores the column. For example, 
the following `ALTER TABLE` command sets the statistics target for the `notes` 
column in the `emp` table to zero:
 
 ``` sql
 ALTER TABLE emp ALTER COLUMN notes SET STATISTICS 0;
@@ -168,7 +166,7 @@ Automatic statistics collection has three modes:
 
 -   `none` disables automatic statistics collection.
 -   `on_no_stats` triggers an analyze operation for a table with no existing 
statistics when any of the commands `CREATE TABLE AS SELECT`, `INSERT`, or 
`COPY` are executed on the table.
--   `on_change` triggers an analyze operation when any of the commands `CREATE 
TABLE AS SELECT`, `UPDATE`, `DELETE`, `INSERT`, or `COPY` are executed on the 
table and the number of rows affected exceeds the threshold defined by the 
`gp_autostats_on_change_threshold` configuration parameter.
+-   `on_change` triggers an analyze operation when any of the commands `CREATE 
TABLE AS SELECT`, `INSERT`, or `COPY` are executed on the table and the number 
of rows affected exceeds the threshold defined by the 
`gp_autostats_on_change_threshold` configuration parameter.
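A sketch of configuring the modes listed above (assuming the mode parameter is named `gp_autostats_mode`, which this hunk does not state; only `gp_autostats_on_change_threshold` appears above):

``` sql
SET gp_autostats_mode = on_change;
-- analyze only when more than this many rows are affected
SET gp_autostats_on_change_threshold = 100000;
```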
 
 The automatic statistics collection mode is set separately for commands that 
occur within a procedural language function and commands that execute outside 
of a function:
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/85e8a5da/datamgmt/load/g-optimizing-data-load-and-query-performance.html.md.erb
----------------------------------------------------------------------
diff --git 
a/datamgmt/load/g-optimizing-data-load-and-query-performance.html.md.erb 
b/datamgmt/load/g-optimizing-data-load-and-query-performance.html.md.erb
index 203efc5..ff1c230 100644
--- a/datamgmt/load/g-optimizing-data-load-and-query-performance.html.md.erb
+++ b/datamgmt/load/g-optimizing-data-load-and-query-performance.html.md.erb
@@ -2,9 +2,9 @@
 title: Optimizing Data Load and Query Performance
 ---
 
-Use the following tips to help optimize your data load and subsequent query 
performance.
+Use the following tip to help optimize your data load and subsequent query 
performance.
+
+-   Run `ANALYZE` after loading data. If you significantly altered the data in 
a table, run `ANALYZE` or `VACUUM ANALYZE` (system catalog tables only) to 
update table statistics for the query optimizer. Current statistics ensure 
that the optimizer makes the best decisions during query planning and avoids 
poor performance due to inaccurate or nonexistent statistics.
 
--   Run `ANALYZE` after loading data. If you significantly altered the data in 
a table, run `ANALYZE` or `VACUUM                     ANALYZE` to update table 
statistics for the query optimizer. Current statistics ensure that the 
optimizer makes the best decisions during query planning and avoids poor 
performance due to inaccurate or nonexistent statistics.
--   Run `VACUUM` after load errors. If the load operation does not run in 
single row error isolation mode, the operation stops at the first error. The 
target table contains the rows loaded before the error occurred. You cannot 
access these rows, but they occupy disk space. Use the `VACUUM` command to 
recover the wasted space.
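An illustrative follow-up to a bulk load, per the tip above (the `sales` table is hypothetical, not part of this commit):

``` sql
ANALYZE sales;
-- VACUUM ANALYZE applies to system catalog tables only
VACUUM ANALYZE pg_class;
```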
 
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/85e8a5da/ddl/ddl-partition.html.md.erb
----------------------------------------------------------------------
diff --git a/ddl/ddl-partition.html.md.erb b/ddl/ddl-partition.html.md.erb
index 42b7ebe..39fd43c 100644
--- a/ddl/ddl-partition.html.md.erb
+++ b/ddl/ddl-partition.html.md.erb
@@ -23,7 +23,7 @@ HAWQ uses the partition criteria defined during table 
creation to create each pa
 
 The HAWQ system catalog stores partition hierarchy information so that rows 
inserted into the top-level parent table propagate correctly to the child table 
partitions. To change the partition design or table structure, alter the parent 
table using `ALTER TABLE` with the `PARTITION` clause.
 
-To insert data into a partitioned table, you specify the root partitioned 
table, the table created with the `CREATE TABLE` command. You also can specify 
a leaf child table of the partitioned table in an `INSERT` command. An error is 
returned if the data is not valid for the specified leaf child table. 
Specifying a child table that is not a leaf child table in the `INSERT` command 
is not supported. Execution of other DML commands such as `UPDATE` and `DELETE` 
on any child table of a partitioned table is not supported. These commands must 
be executed on the root partitioned table, the table created with the `CREATE 
TABLE` command.
+To insert data into a partitioned table, you specify the root partitioned 
table, the table created with the `CREATE TABLE` command. You also can specify 
a leaf child table of the partitioned table in an `INSERT` command. An error is 
returned if the data is not valid for the specified leaf child table. 
Specifying a child table that is not a leaf child table in the `INSERT` command 
is not supported.
 
 ## <a id="topic65"></a>Deciding on a Table Partitioning Strategy 
 
@@ -258,7 +258,7 @@ WHERE tablename='sales';
 
 The following table and views show information about partitioned tables.
 
--   *pg\_partition*- Tracks partitioned tables and their inheritance level 
relationships.
+-   *pg\_partition* - Tracks partitioned tables and their inheritance level 
relationships.
 -   *pg\_partition\_templates* - Shows the subpartitions created using a 
subpartition template.
 -   *pg\_partition\_columns* - Shows the partition key columns used in a 
partition design.
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/85e8a5da/ddl/ddl-storage.html.md.erb
----------------------------------------------------------------------
diff --git a/ddl/ddl-storage.html.md.erb b/ddl/ddl-storage.html.md.erb
index 4711d00..264e552 100644
--- a/ddl/ddl-storage.html.md.erb
+++ b/ddl/ddl-storage.html.md.erb
@@ -16,7 +16,6 @@ HAWQ provides storage orientation models of either 
row-oriented or Parquet table
 
 Row-oriented storage provides the best options for the following situations:
 
--   **Updates of table data.** Where you load and update the table data 
frequently.
 -   **Frequent INSERTs.** Where rows are frequently inserted into the table
 -   **Number of columns requested in queries.** Where you typically request 
all or the majority of columns in the `SELECT` list or `WHERE` clause of your 
queries, choose a row-oriented model. 
 -   **Number of columns in the table.** Row-oriented storage is most efficient 
when many columns are required at the same time, or when the row-size of a 
table is relatively small. 
@@ -63,12 +62,10 @@ The`DROP TABLE`command removes tables from the database. 
For example:
 DROP TABLE mytable;
 ```
 
-To empty a table of rows without removing the table definition, use `DELETE` 
or `TRUNCATE`. For example:
+`DROP TABLE` always removes any indexes, rules, triggers, and constraints that 
exist for the target table. Specify `CASCADE` to drop a table that is 
referenced by a view. `CASCADE` removes dependent views.
 
-``` sql
-DELETE FROM mytable;
+To empty a table of rows without removing the table definition, use 
`TRUNCATE`. For example:
 
+``` sql
 TRUNCATE mytable;
 ```
-
-`DROP TABLE`always removes any indexes, rules, triggers, and constraints that 
exist for the target table. Specify `CASCADE`to drop a table that is referenced 
by a view. `CASCADE` removes dependent views.
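An illustrative use of the `CASCADE` behavior described in this file's diff (hypothetical objects, not part of this commit):

``` sql
-- a view depends on mytable; CASCADE removes the view as well
DROP TABLE mytable CASCADE;
```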

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/85e8a5da/reference/catalog/pg_class.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/catalog/pg_class.html.md.erb 
b/reference/catalog/pg_class.html.md.erb
index 5dff1f2..c0244b1 100644
--- a/reference/catalog/pg_class.html.md.erb
+++ b/reference/catalog/pg_class.html.md.erb
@@ -69,7 +69,7 @@ The system catalog table `pg_class` catalogs tables and most 
everything else tha
 <td><code class="ph codeph">relpages</code></td>
 <td>integer</td>
 <td> </td>
-<td>Size of the on-disk representation of this table in pages (of 32K each). 
This is only an estimate used by the planner. It is updated by <code class="ph 
codeph">VACUUM</code>, <code class="ph codeph">ANALYZE</code>, and a few DDL 
commands.</td>
+<td>Size of the on-disk representation of this table in pages (of 32K each). 
This is only an estimate used by the planner. It is updated by <code class="ph 
codeph">ANALYZE</code> and a few DDL commands.</td>
 </tr>
 <tr class="odd">
 <td><code class="ph codeph">reltuples</code></td>

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/85e8a5da/reference/catalog/pg_index.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/catalog/pg_index.html.md.erb 
b/reference/catalog/pg_index.html.md.erb
index cd1a054..7abe921 100644
--- a/reference/catalog/pg_index.html.md.erb
+++ b/reference/catalog/pg_index.html.md.erb
@@ -14,7 +14,7 @@ The `pg_index` system catalog table contains part of the 
information about index
 | `indnatts`       | smallint   |                      | The number of 
columns in the index (duplicates pg\_class.relnatts).                           
                                                                                
                                                                                
                                                                                
          |
 | `indisunique`    | boolean    |                      | If true, this is a 
unique index.                                                                   
                                                                                
                                                                                
                                                                                
     |
 | `indisclustered` | boolean    |                      | If true, the table 
was last clustered on this index via the `CLUSTER` command.                     
                                                                                
                                                                                
                                                                                
     |
-| `indisvalid`     | boolean    |                      | If true, the index 
is currently valid for queries. False means the index is possibly incomplete: 
it must still be modified by `INSERT`/`UPDATE` operations, but it cannot safely 
be used for queries.                                                            
                                                                                
       |
+| `indisvalid`     | boolean    |                      | If true, the index 
is currently valid for queries. False means the index is possibly incomplete: 
it must still be modified by `INSERT` operations, but it cannot safely be used 
for queries.                                                                    
                                                                               |
 | `indkey`         | int2vector | pg\_attribute.attnum | This is an array of 
indnatts values that indicate which table columns this index indexes. For 
example a value of 1 3 would mean that the first and the third table columns 
make up the index key. A zero in this array indicates that the corresponding 
index attribute is an expression over the table columns, rather than a simple 
column reference. |
 | `indclass`       | oidvector  | pg\_opclass.oid      | For each column in 
the index key this contains the OID of the operator class to use.               
                                                                                
                                                                                
                                                                                
     |
 | `indexprs`       | text       |                      | Expression trees (in 
`nodeToString()` representation) for index attributes that are not simple 
column references. This is a list with one element for each zero entry in 
indkey. NULL if all index attributes are simple references.                     
                                                                                
               |

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/85e8a5da/reference/cli/admin_utilities/hawqload.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/cli/admin_utilities/hawqload.html.md.erb 
b/reference/cli/admin_utilities/hawqload.html.md.erb
index 8d7d984..b9fe441 100644
--- a/reference/cli/admin_utilities/hawqload.html.md.erb
+++ b/reference/cli/admin_utilities/hawqload.html.md.erb
@@ -42,7 +42,7 @@ The client machine where `hawq load` is executed must have 
the following:
 
 ## <a id="topic1__section4"></a>Description
 
-`hawq load` is a data loading utility that acts as an interface to HAWQ's 
external table parallel loading feature. Using a load specification defined in 
a YAML formatted control file, `hawq                     load` executes a load 
by invoking the HAWQ parallel file server ([gpfdist](gpfdist.html#topic1)), 
creating an external table definition based on the source data defined, and 
executing an `INSERT`, `UPDATE` or `MERGE` operation to load the source data 
into the target table in the database.
+`hawq load` is a data loading utility that acts as an interface to HAWQ's 
external table parallel loading feature. Using a load specification defined in 
a YAML formatted control file, `hawq load` executes a load by invoking the 
HAWQ parallel file server ([gpfdist](gpfdist.html#topic1)), creating an 
external table definition based on the source data defined, and executing an 
`INSERT` operation to load the source data into the target table in the 
database.
 
 The operation, including any SQL commands specified in the `SQL` collection of 
the YAML control file (see [Control File Format](#topic1__section7)), is 
performed as a single transaction to prevent inconsistent data when performing 
multiple, simultaneous load operations on a target table.
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/85e8a5da/reference/cli/client_utilities/vacuumdb.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/cli/client_utilities/vacuumdb.html.md.erb 
b/reference/cli/client_utilities/vacuumdb.html.md.erb
index 30617df..cbc37f3 100644
--- a/reference/cli/client_utilities/vacuumdb.html.md.erb
+++ b/reference/cli/client_utilities/vacuumdb.html.md.erb
@@ -4,6 +4,8 @@ title: vacuumdb
 
 Garbage-collects and analyzes a database.
 
+`vacuumdb` is typically run on system catalog tables. It has no effect when 
run on HAWQ user tables.
+
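
For catalog maintenance, the behavior described in the added note can be 
exercised with an invocation like the following (a sketch; it assumes a 
reachable HAWQ master and a database named `testdb`, and relies on the 
standard `vacuumdb` flags inherited from PostgreSQL):

```shell
# Vacuum and analyze the system catalog of database testdb;
# HAWQ user tables are unaffected by this operation.
vacuumdb --analyze --verbose testdb
```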
 ## <a id="topic1__section2"></a>Synopsis
 
 ``` pre

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/85e8a5da/reference/guc/guc_category-list.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/guc/guc_category-list.html.md.erb 
b/reference/guc/guc_category-list.html.md.erb
index 715583b..e77f925 100644
--- a/reference/guc/guc_category-list.html.md.erb
+++ b/reference/guc/guc_category-list.html.md.erb
@@ -344,7 +344,7 @@ These parameters adjust the amount of data sampled by an 
`ANALYZE` operation. Ad
 
 ### <a id="topic_qvz_nz3_yv"></a>Automatic Statistics Collection
 
-When automatic statistics collection is enabled, you can run `ANALYZE` 
automatically in the same transaction as an `INSERT`, `UPDATE`, `DELETE`, 
`COPY` or `CREATE TABLE...AS SELECT` statement when a certain threshold of rows 
is affected (`on_change`), or when a newly generated table has no statistics 
(`on_no_stats`). To enable this feature, set the following server configuration 
parameters in your HAWQ `hawq-site.xml` file by using the `hawq config` utility 
and restart HAWQ:
+When automatic statistics collection is enabled, you can run `ANALYZE` 
automatically in the same transaction as an `INSERT`, `COPY` or `CREATE 
TABLE...AS SELECT` statement when a certain threshold of rows is affected 
(`on_change`), or when a newly generated table has no statistics 
(`on_no_stats`). To enable this feature, set the following server configuration 
parameters in your HAWQ `hawq-site.xml` file by using the `hawq config` utility 
and restart HAWQ:
 
 -   [gp\_autostats\_mode](parameter_definitions.html#gp_autostats_mode)
 -   [log\_autostats](parameter_definitions.html#log_autostats)
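
Enabling collection for newly created tables that lack statistics might look 
like the following sketch (it assumes the `hawq config` utility's `-c`/`-v` 
flags and the `hawq restart` syntax documented elsewhere in this guide; not 
verified against a live cluster):

```shell
# Set the autostats mode cluster-wide in hawq-site.xml, then restart
hawq config -c gp_autostats_mode -v on_no_stats
hawq restart cluster -a
```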

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/85e8a5da/reference/guc/parameter_definitions.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/guc/parameter_definitions.html.md.erb 
b/reference/guc/parameter_definitions.html.md.erb
index c792d1b..645335b 100644
--- a/reference/guc/parameter_definitions.html.md.erb
+++ b/reference/guc/parameter_definitions.html.md.erb
@@ -1076,10 +1076,6 @@ The `on_change` option triggers statistics collection 
only when the number of ro
 
 `CREATE TABLE AS SELECT`
 
-`UPDATE`
-
-`DELETE`
-
 `INSERT`
 
 `COPY`
@@ -2391,7 +2387,7 @@ The maximum value is INT\_MAX/1024. If an invalid value 
is specified, the defaul
 
 ## <a name="log_statement"></a>log\_statement
 
-Controls which SQL statements are logged. DDL logs all data definition 
commands like CREATE, ALTER, and DROP commands. MOD logs all DDL statements, 
plus INSERT, UPDATE, DELETE, TRUNCATE, and COPY FROM. PREPARE and EXPLAIN 
ANALYZE statements are also logged if their contained command is of an 
appropriate type.
+Controls which SQL statements are logged. DDL logs all data definition 
commands like CREATE, ALTER, and DROP commands. MOD logs all DDL statements, 
plus INSERT, TRUNCATE, and COPY FROM. PREPARE and EXPLAIN ANALYZE statements 
are also logged if their contained command is of an appropriate type.
 
 <table>
 <colgroup>

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/85e8a5da/reference/sql/ALTER-TABLE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/ALTER-TABLE.html.md.erb 
b/reference/sql/ALTER-TABLE.html.md.erb
index 7b1d74d..4303f0c 100644
--- a/reference/sql/ALTER-TABLE.html.md.erb
+++ b/reference/sql/ALTER-TABLE.html.md.erb
@@ -314,7 +314,7 @@ When a column is added with `ADD COLUMN`, all existing rows 
in the table are ini
 
 You can specify multiple changes in a single `ALTER TABLE` command, which will 
be done in a single pass over the table.
 
-The `DROP COLUMN` form does not physically remove the column, but simply makes 
it invisible to SQL operations. Subsequent insert and update operations in the 
table will store a null value for the column. Thus, dropping a column is quick 
but it will not immediately reduce the on-disk size of your table, as the space 
occupied by the dropped column is not reclaimed. The space will be reclaimed 
over time as existing rows are updated.
+The `DROP COLUMN` form does not physically remove the column, but simply makes 
it invisible to SQL operations. Subsequent insert operations in the table will 
store a null value for the column. Thus, dropping a column is quick but it will 
not immediately reduce the on-disk size of your table, as the space occupied by 
the dropped column is not reclaimed.
 
 The fact that `ALTER TYPE` requires rewriting the whole table is sometimes an 
advantage, because the rewriting process eliminates any dead space in the 
table. For example, to reclaim the space occupied by a dropped column 
immediately, the fastest way is: `ALTER TABLE <table> ALTER COLUMN <anycol> 
TYPE <sametype>;` Where \<anycol\> is any remaining table column and 
\<sametype\> is the same type that column already has. This results in no 
semantically-visible change in the table, but the command forces rewriting, 
which gets rid of no-longer-useful data.
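
As a concrete sketch of the rewrite trick above (the table and column names 
are hypothetical):

```sql
-- 'sales' still carries dead space from a previously dropped column.
-- Re-declaring an existing column with the type it already has forces a
-- full table rewrite, which discards the dropped column's data.
ALTER TABLE sales ALTER COLUMN amount TYPE numeric;
-- No semantically visible change: 'amount' was already numeric.
```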
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/85e8a5da/reference/sql/BEGIN.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/BEGIN.html.md.erb b/reference/sql/BEGIN.html.md.erb
index 5c2a9bb..265e66e 100644
--- a/reference/sql/BEGIN.html.md.erb
+++ b/reference/sql/BEGIN.html.md.erb
@@ -31,7 +31,7 @@ READ UNCOMMITTED  </dt>
 
 <dt>READ WRITE  
 READ ONLY  </dt>
-<dd>Determines whether the transaction is read/write or read-only. Read/write 
is the default. When a transaction is read-only, the following SQL commands are 
disallowed: `INSERT`, `UPDATE`, `DELETE`, and `COPY FROM` if the table they 
would write to is not a temporary table; all `CREATE`, `ALTER`, and `DROP` 
commands; `GRANT`, `REVOKE`, `TRUNCATE`; and `EXPLAIN ANALYZE` and `EXECUTE` if 
the command they would execute is among those listed.</dd>
+<dd>Determines whether the transaction is read/write or read-only. Read/write 
is the default. When a transaction is read-only, the following SQL commands are 
disallowed: `INSERT` and `COPY FROM` if the table they would write to is not a 
temporary table; all `CREATE`, `ALTER`, and `DROP` commands; `GRANT`, `REVOKE`, 
`TRUNCATE`; and `EXPLAIN ANALYZE` and `EXECUTE` if the command they would 
execute is among those listed.</dd>
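
A minimal illustration of the read-only behavior described above (hypothetical 
table name):

```sql
BEGIN READ ONLY;
SELECT count(*) FROM orders;    -- allowed in a read-only transaction
INSERT INTO orders VALUES (1);  -- raises an error: read-only transaction
ROLLBACK;
```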
 
 ## <a id="topic1__section5"></a>Notes
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/85e8a5da/reference/sql/COPY.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/COPY.html.md.erb b/reference/sql/COPY.html.md.erb
index 6069aa5..aaa2270 100644
--- a/reference/sql/COPY.html.md.erb
+++ b/reference/sql/COPY.html.md.erb
@@ -139,7 +139,7 @@ Files named in a `COPY` command are read or written 
directly by the database ser
 
 `COPY` input and output is affected by `DateStyle`. To ensure portability to 
other HAWQ installations that might use non-default `DateStyle` settings, 
`DateStyle` should be set to ISO before using `COPY TO`.
 
-By default, `COPY` stops operation at the first error. This should not lead to 
problems in the event of a `COPY TO`, but the target table will already have 
received earlier rows in a `COPY FROM`. These rows will not be visible or 
accessible, but they still occupy disk space. This may amount to a considerable 
amount of wasted disk space if the failure happened well into a large `COPY 
FROM` operation. You may wish to invoke `VACUUM` to recover the wasted space. 
Another option would be to use single row error isolation mode to filter out 
error rows while still loading good rows.
+By default, `COPY` stops operation at the first error. This should not lead to 
problems in the event of a `COPY TO`, but the target table will already have 
received earlier rows in a `COPY FROM`. These rows will not be visible or 
accessible, but they still occupy disk space. This may amount to a considerable 
amount of wasted disk space if the failure happened well into a large `COPY 
FROM` operation. You may wish to use single row error isolation mode to filter 
out error rows while still loading good rows.
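
Single row error isolation, as mentioned above, might be invoked as in this 
sketch (it assumes the Greenplum-style error-table and reject-limit clauses; 
the table and file names are hypothetical):

```sql
COPY ratings FROM '/data/ratings.csv'
  WITH DELIMITER ','
  LOG ERRORS INTO ratings_errs
  SEGMENT REJECT LIMIT 50 ROWS;
-- Rows that fail parsing are recorded in ratings_errs; the load
-- continues unless more than 50 rows are rejected.
```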
 
 COPY supports creating readable foreign tables with error tables. The default 
for concurrently inserting into the error table is 127. You can use error 
tables with foreign tables under the following circumstances:
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/85e8a5da/reference/sql/CREATE-EXTERNAL-TABLE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/CREATE-EXTERNAL-TABLE.html.md.erb 
b/reference/sql/CREATE-EXTERNAL-TABLE.html.md.erb
index 2b164dc..2f19eab 100644
--- a/reference/sql/CREATE-EXTERNAL-TABLE.html.md.erb
+++ b/reference/sql/CREATE-EXTERNAL-TABLE.html.md.erb
@@ -109,7 +109,7 @@ where \<pxf parameters\> is:
 
 ## <a id="topic1__section3"></a>Description
 
-`CREATE EXTERNAL TABLE` or `CREATE EXTERNAL WEB TABLE` creates a new readable 
external table definition in HAWQ. Readable external tables are typically used 
for fast, parallel data loading. Once an external table is defined, you can 
query its data directly (and in parallel) using SQL commands. For example, you 
can select, join, or sort external table data. You can also create views for 
external tables. DML operations (`UPDATE`, `INSERT`, `DELETE`, or`           
TRUNCATE`) are not allowed on readable external tables.
+`CREATE EXTERNAL TABLE` or `CREATE EXTERNAL WEB TABLE` creates a new readable 
external table definition in HAWQ. Readable external tables are typically used 
for fast, parallel data loading. Once an external table is defined, you can 
query its data directly (and in parallel) using SQL commands. For example, you 
can select, join, or sort external table data. You can also create views for 
external tables. DML operations (`UPDATE`, `INSERT`, `DELETE`, or `TRUNCATE`) 
are not permitted on readable external tables.
 
 `CREATE WRITABLE EXTERNAL TABLE` or `CREATE WRITABLE EXTERNAL WEB           
TABLE` creates a new writable external table definition in HAWQ. Writable 
external tables are typically used for unloading data from the database into a 
set of files or named pipes.
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/85e8a5da/reference/sql/CREATE-TABLE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/CREATE-TABLE.html.md.erb 
b/reference/sql/CREATE-TABLE.html.md.erb
index 99ff35e..162a438 100644
--- a/reference/sql/CREATE-TABLE.html.md.erb
+++ b/reference/sql/CREATE-TABLE.html.md.erb
@@ -161,7 +161,7 @@ where \<storage\_parameter\> for a partition is:
 
 `CREATE TABLE` creates a new, initially empty table in the current database. 
The table is owned by the user issuing the command. If a schema name is given 
then the table is created in the specified schema. Otherwise it is created in 
the current schema. Temporary tables exist in a special schema, so a schema 
name may not be given when creating a temporary table. The name of the table 
must be distinct from the name of any other table, external table, sequence, or 
view in the same schema.
 
-The optional constraint clauses specify conditions that new or updated rows 
must satisfy for an insert or update operation to succeed. A constraint is an 
SQL object that helps define the set of valid values in the table in various 
ways. Constraints apply to tables, not to partitions. You cannot add a 
constraint to a partition or subpartition.
+The optional constraint clauses specify conditions that new rows must satisfy 
for an insert operation to succeed. A constraint is an SQL object that helps 
define the set of valid values in the table in various ways. Constraints apply 
to tables, not to partitions. You cannot add a constraint to a partition or 
subpartition.
 
 There are two ways to define constraints: table constraints and column 
constraints. A column constraint is defined as part of a column definition. A 
table constraint definition is not tied to a particular column, and it can 
encompass more than one column. Every column constraint can also be written as 
a table constraint; a column constraint is only a notational convenience for 
use when the constraint only affects one column.
 
@@ -213,7 +213,7 @@ Note also that unlike `INHERITS`, copied columns and 
constraints are not merged
 <dd>Specifies if the column is or is not allowed to contain null values. 
`NULL` is the default.</dd>
 
 <dt>CHECK ( \<expression\> )  </dt>
-<dd>The `CHECK` clause specifies an expression producing a Boolean result 
which new or updated rows must satisfy for an insert or update operation to 
succeed. Expressions evaluating to `TRUE` or `UNKNOWN` succeed. Should any row 
of an insert or update operation produce a `FALSE` result an error exception is 
raised and the insert or update does not alter the database. A check constraint 
specified as a column constraint should reference that column's value only, 
while an expression appearing in a table constraint may reference multiple 
columns. `CHECK` expressions cannot contain subqueries nor refer to variables 
other than columns of the current row.</dd>
+<dd>The `CHECK` clause specifies an expression producing a Boolean result 
which new rows must satisfy for an insert operation to succeed. Expressions 
evaluating to `TRUE` or `UNKNOWN` succeed. Should any row of an insert 
operation produce a `FALSE` result an error exception is raised and the insert 
does not alter the database. A check constraint specified as a column 
constraint should reference that column's value only, while an expression 
appearing in a table constraint may reference multiple columns. `CHECK` 
expressions cannot contain subqueries nor refer to variables other than columns 
of the current row.</dd>
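
A short sketch of a column-level and a table-level `CHECK` constraint as 
described above (hypothetical table):

```sql
CREATE TABLE products (
    price    numeric CHECK (price > 0),          -- column constraint
    discount numeric,
    CHECK (discount >= 0 AND discount <= price)  -- table constraint
);
```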
 
 <dt>WITH ( \<storage\_option\>=\<value\> )  </dt>
 <dd>The `WITH` clause can be used to set storage options for the table or its 
indexes. Note that you can also set storage parameters on a particular 
partition or subpartition by declaring the `WITH` clause in the partition 
specification.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/85e8a5da/reference/sql/CREATE-VIEW.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/CREATE-VIEW.html.md.erb 
b/reference/sql/CREATE-VIEW.html.md.erb
index 4a24bb7..e39d8d3 100644
--- a/reference/sql/CREATE-VIEW.html.md.erb
+++ b/reference/sql/CREATE-VIEW.html.md.erb
@@ -77,7 +77,7 @@ names, rank WHERE rank < '11' AND names.id=rank.id;
 
 The SQL standard specifies some additional capabilities for the `CREATE        
   VIEW` statement that are not in HAWQ. The optional clauses for the full SQL 
command in the standard are:
 
--   **CHECK OPTION** — This option has to do with updatable views. All 
`INSERT` and `UPDATE` commands on the view will be checked to ensure data 
satisfy the view-defining condition (that is, the new data would be visible 
through the view). If they do not, the update will be rejected.
+-   **CHECK OPTION** — This option has to do with updatable views. All 
`INSERT` commands on the view will be checked to ensure data satisfy the 
view-defining condition (that is, the new data would be visible through the 
view). If they do not, the insert will be rejected.
 -   **LOCAL** — Check for integrity on this view.
 -   **CASCADED** — Check for integrity on this view and on any dependent 
view. `CASCADED` is assumed if neither `CASCADED` nor `LOCAL` is specified.
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/85e8a5da/reference/sql/DROP-TABLE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/DROP-TABLE.html.md.erb 
b/reference/sql/DROP-TABLE.html.md.erb
index 98022ce..b277273 100644
--- a/reference/sql/DROP-TABLE.html.md.erb
+++ b/reference/sql/DROP-TABLE.html.md.erb
@@ -12,7 +12,7 @@ DROP TABLE [IF EXISTS] <name> [, ...] [CASCADE | RESTRICT]
 
 ## <a id="topic1__section3"></a>Description
 
-`DROP TABLE` removes tables from the database. Only its owner may drop a 
table. To empty a table of rows without removing the table definition, use 
`DELETE` or `TRUNCATE`.
+`DROP TABLE` removes tables from the database. Only its owner may drop a 
table. To empty a table of rows without removing the table definition, use 
`TRUNCATE`.
 
 `DROP TABLE` always removes any indexes, rules, and constraints that exist for 
the target table. However, to drop a table that is referenced by a view, 
`CASCADE` must be specified. `CASCADE` will remove a dependent view entirely.
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/85e8a5da/reference/sql/GRANT.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/GRANT.html.md.erb b/reference/sql/GRANT.html.md.erb
index 4baed38..1673df5 100644
--- a/reference/sql/GRANT.html.md.erb
+++ b/reference/sql/GRANT.html.md.erb
@@ -167,7 +167,7 @@ GRANT admins TO joe;
 
 The `PRIVILEGES` key word in is required in the SQL standard, but optional in 
HAWQ. The SQL standard does not support setting the privileges on more than one 
object per command.
 
-HAWQ allows an object owner to revoke his own ordinary privileges: for 
example, a table owner can make the table read-only to himself by revoking his 
own `INSERT`, `UPDATE`, and `DELETE` privileges. This is not possible according 
to the SQL standard. HAWQ treats the owner's privileges as having been granted 
by the owner to himself; therefore he can revoke them too. In the SQL standard, 
the owner's privileges are granted by an assumed *system* entity.
+HAWQ allows an object owner to revoke his own ordinary privileges: for 
example, a table owner can make the table read-only to himself by revoking his 
own `INSERT` privileges. This is not possible according to the SQL standard. 
HAWQ treats the owner's privileges as having been granted by the owner to 
himself; therefore he can revoke them too. In the SQL standard, the owner's 
privileges are granted by an assumed *system* entity.
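
The self-revocation described above can be sketched as follows (hypothetical 
names):

```sql
-- alice owns table 'ledger'; she makes it read-only to herself
REVOKE INSERT ON ledger FROM alice;
-- and can restore the privilege later
GRANT INSERT ON ledger TO alice;
```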
 
 The SQL standard allows setting privileges for individual columns within a 
table.
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/85e8a5da/reference/sql/PREPARE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/PREPARE.html.md.erb 
b/reference/sql/PREPARE.html.md.erb
index 6f6836c..c633f14 100644
--- a/reference/sql/PREPARE.html.md.erb
+++ b/reference/sql/PREPARE.html.md.erb
@@ -30,7 +30,7 @@ Prepared statements have the largest performance advantage 
when a single session
 <dd>The data type of a parameter to the prepared statement. If the data type 
of a particular parameter is unspecified or is specified as unknown, it will be 
inferred from the context in which the parameter is used. To refer to the 
parameters in the prepared statement itself, use `$1`, `$2`, etc.</dd>
 
 <dt> \<statement\>   </dt>
-<dd>Any `SELECT`, `INSERT`, `UPDATE`, `DELETE`, or `VALUES` statement.</dd>
+<dd>Any `SELECT`, `INSERT`, or `VALUES` statement.</dd>
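
For instance, a prepared insert with typed parameters (the table name is 
hypothetical):

```sql
PREPARE add_event (int, text) AS
    INSERT INTO events VALUES ($1, $2);
EXECUTE add_event(1, 'login');
DEALLOCATE add_event;
```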
 
 ## <a id="topic1__section5"></a>Notes
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/85e8a5da/reference/sql/VACUUM.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/VACUUM.html.md.erb b/reference/sql/VACUUM.html.md.erb
index 2db5757..9b39a32 100644
--- a/reference/sql/VACUUM.html.md.erb
+++ b/reference/sql/VACUUM.html.md.erb
@@ -2,7 +2,9 @@
 title: VACUUM
 ---
 
-Garbage-collects and optionally analyzes a database.
+Garbage-collects and optionally analyzes a database. 
+
+**Note:** HAWQ `VACUUM` support is provided only for system catalog tables. 
Running `VACUUM` on a HAWQ user table has no effect.
 
 ## <a id="topic1__section2"></a>Synopsis
 
@@ -14,11 +16,13 @@ VACUUM [FULL] [FREEZE] [VERBOSE] ANALYZE
 
 ## <a id="topic1__section3"></a>Description
 
-`VACUUM` reclaims storage occupied by deleted tuples. In normal HAWQ 
operation, tuples that are deleted or obsoleted by an update are not physically 
removed from their table; they remain present on disk until a `VACUUM` is done. 
Therefore it is necessary to do `VACUUM` periodically, especially on 
frequently-updated catalog tables. `VACUUM` has no effect on a normal HAWQ 
table, since the delete or update operations are not supported on normal HAWQ 
table.
+`VACUUM` reclaims storage occupied by deleted tuples. In normal HAWQ 
operation, tuples that are deleted or obsoleted by an update are not physically 
removed from their table; they remain present on disk until a `VACUUM` is done. 
Therefore it is necessary to do `VACUUM` periodically, especially on 
frequently-updated catalog tables. (`VACUUM` has no effect on a normal HAWQ 
table, since delete and update operations are not supported on normal HAWQ 
tables.)
 
 With no parameter, `VACUUM` processes every table in the current database. 
With a parameter, `VACUUM` processes only that table. `VACUUM ANALYZE` performs 
a `VACUUM` and then an `ANALYZE` for each selected table. This is a handy 
combination form for routine maintenance scripts. See [ANALYZE](ANALYZE.html) 
for more details about its processing.
 
-Plain `VACUUM` (without `FULL`) simply reclaims space and makes it available 
for re-use. This form of the command can operate in parallel with normal 
reading and writing of the table, as an exclusive lock is not obtained. `VACUUM 
FULL` does more extensive processing, including moving of tuples across blocks 
to try to compact the table to the minimum number of disk blocks. This form is 
much slower and requires an exclusive lock on each table while it is being 
processed.
+Plain `VACUUM` (without `FULL`) simply reclaims space and makes it available 
for re-use. This form of the command can operate in parallel with normal 
reading and writing of the table, as an exclusive lock is not obtained. `VACUUM 
FULL` does more extensive processing, including moving of tuples across blocks 
to try to compact the table to the minimum number of disk blocks. This form is 
much slower and requires an exclusive lock on each table while it is being 
processed.  
+
+**Note:** `VACUUM FULL` is not recommended in HAWQ.
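
Per the note above, routine catalog maintenance would use plain `VACUUM`, for 
example:

```sql
-- reclaim space in a frequently-updated catalog table
VACUUM pg_catalog.pg_class;
-- vacuum and refresh statistics together
VACUUM ANALYZE pg_catalog.pg_attribute;
```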
 
 **Outputs**
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/85e8a5da/reference/toolkit/hawq_toolkit.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/toolkit/hawq_toolkit.html.md.erb 
b/reference/toolkit/hawq_toolkit.html.md.erb
index f76963c..45a8a56 100644
--- a/reference/toolkit/hawq_toolkit.html.md.erb
+++ b/reference/toolkit/hawq_toolkit.html.md.erb
@@ -16,7 +16,7 @@ The following views can help identify tables that need 
routine table maintenance
 
 -   [hawq\_stats\_missing](#topic4)
 
-The `VACUUM` command reclaims disk space occupied by deleted or obsolete rows. 
Because of the MVCC transaction concurrency model used in HAWQ, data rows that 
are deleted or updated still occupy physical space on disk even though they are 
not visible to any new transactions. Expired rows increase table size on disk 
and eventually slow down scans of the table.
+The `VACUUM` command is applicable only to system catalog tables. The `VACUUM` 
command reclaims disk space occupied by deleted or obsolete rows. Because of 
the MVCC transaction concurrency model used in HAWQ, data rows that are deleted 
or updated still occupy physical space on disk even though they are not visible 
to any new transactions. Expired rows increase table size on disk and 
eventually slow down scans of the table.
 
 **Note:** VACUUM FULL is not recommended in HAWQ. See 
[VACUUM](../sql/VACUUM.html#topic1).
 
