upd topic & xref titles for new PXF, Managing Data section names

Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/commit/20882b73
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/tree/20882b73
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/diff/20882b73

Branch: refs/heads/tutorial-proto
Commit: 20882b73b24f7a90788d5c5156a1959621c8288d
Parents: 01f3f8e
Author: Lisa Owen <[email protected]>
Authored: Thu Oct 27 16:16:38 2016 -0700
Committer: Lisa Owen <[email protected]>
Committed: Thu Oct 27 16:16:38 2016 -0700

----------------------------------------------------------------------
 admin/BackingUpandRestoringHAWQDatabases.html.md.erb | 2 +-
 datamgmt/dml.html.md.erb                             | 4 ++--
 overview/TableDistributionStorage.html.md.erb        | 4 ++--
 pxf/HawqExtensionFrameworkPXF.html.md.erb            | 2 +-
 reference/guc/parameter_definitions.html.md.erb      | 2 +-
 reference/sql/CREATE-EXTERNAL-TABLE.html.md.erb      | 2 +-
 6 files changed, 8 insertions(+), 8 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/20882b73/admin/BackingUpandRestoringHAWQDatabases.html.md.erb
----------------------------------------------------------------------
diff --git a/admin/BackingUpandRestoringHAWQDatabases.html.md.erb b/admin/BackingUpandRestoringHAWQDatabases.html.md.erb
index f7031ed..e9bd526 100644
--- a/admin/BackingUpandRestoringHAWQDatabases.html.md.erb
+++ b/admin/BackingUpandRestoringHAWQDatabases.html.md.erb
@@ -274,7 +274,7 @@ Also, make sure that your `CREATE EXTERNAL TABLE` definition has the correct hos
 
 ## <a id="usingpxf"></a>Using PXF 
 
-HAWQ Extension Framework \(PXF\) is an extensible framework that allows HAWQ to query external system data. The details of how to install and use PXF can be found in [Working with PXF and External Data](../pxf/HawqExtensionFrameworkPXF.html).
+HAWQ Extension Framework \(PXF\) is an extensible framework that allows HAWQ to query external system data. The details of how to install and use PXF can be found in [Using PXF with Unmanaged Data](../pxf/HawqExtensionFrameworkPXF.html).
 
 ### <a id="usingpxftobackupthetpchdatabase"></a>Using PXF to Back Up the tpch Database
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/20882b73/datamgmt/dml.html.md.erb
----------------------------------------------------------------------
diff --git a/datamgmt/dml.html.md.erb b/datamgmt/dml.html.md.erb
index 8951db2..681883a 100644
--- a/datamgmt/dml.html.md.erb
+++ b/datamgmt/dml.html.md.erb
@@ -1,5 +1,5 @@
 ---
-title: Managing Data
+title: Managing Data with HAWQ
 ---
 
 This chapter provides information about manipulating data and concurrent access in HAWQ.
@@ -24,7 +24,7 @@ This chapter provides information about manipulating data and concurrent access
 
     The topics in this section describe methods for loading and writing data into and out of HAWQ, and how to format data files.
 
--   **[Working with PXF and External Data](../pxf/HawqExtensionFrameworkPXF.html)**
+-   **[Using PXF with Unmanaged Data](../pxf/HawqExtensionFrameworkPXF.html)**
 
 HAWQ Extension Framework (PXF) is an extensible framework that allows HAWQ to query external system data.
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/20882b73/overview/TableDistributionStorage.html.md.erb
----------------------------------------------------------------------
diff --git a/overview/TableDistributionStorage.html.md.erb b/overview/TableDistributionStorage.html.md.erb
index 58f20f2..ec1d8b5 100755
--- a/overview/TableDistributionStorage.html.md.erb
+++ b/overview/TableDistributionStorage.html.md.erb
@@ -32,8 +32,8 @@ HAWQ can access data in external files using the HAWQ Extension Framework (PXF).
 PXF is an extensible framework that allows HAWQ to access data in external
 sources as readable or writable HAWQ tables. PXF has built-in connectors for
 accessing data inside HDFS files, Hive tables, and HBase tables. PXF also
-integrates with HCatalog to query Hive tables directly. See [Working with PXF
-and External Data](../pxf/HawqExtensionFrameworkPXF.html) for more
+integrates with HCatalog to query Hive tables directly. See [Using PXF
+with Unmanaged Data](../pxf/HawqExtensionFrameworkPXF.html) for more
 details.
 
 Users can create custom PXF connectors to access other parallel data stores or

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/20882b73/pxf/HawqExtensionFrameworkPXF.html.md.erb
----------------------------------------------------------------------
diff --git a/pxf/HawqExtensionFrameworkPXF.html.md.erb b/pxf/HawqExtensionFrameworkPXF.html.md.erb
index c0d7c0f..578d13f 100644
--- a/pxf/HawqExtensionFrameworkPXF.html.md.erb
+++ b/pxf/HawqExtensionFrameworkPXF.html.md.erb
@@ -1,5 +1,5 @@
 ---
-title: Working with PXF and External Data
+title: Using PXF with Unmanaged Data
 ---
 
 HAWQ Extension Framework (PXF) is an extensible framework that allows HAWQ to query external system data.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/20882b73/reference/guc/parameter_definitions.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/guc/parameter_definitions.html.md.erb b/reference/guc/parameter_definitions.html.md.erb
index 2012155..f476f74 100644
--- a/reference/guc/parameter_definitions.html.md.erb
+++ b/reference/guc/parameter_definitions.html.md.erb
@@ -3148,7 +3148,7 @@ The estimated cost for vacuuming a buffer that has to be read from disk. This re
 
 Specifies the cutoff age (in transactions) that `VACUUM` should use to decide whether to replace transaction IDs with *FrozenXID* while scanning a table.
 
-For information about `VACUUM` and transaction ID management, see [Managing Data](../../datamgmt/dml.html#topic1) and the [PostgreSQL documentation](http://www.postgresql.org/docs/8.2/static/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND).
+For information about `VACUUM` and transaction ID management, see [Managing Data with HAWQ](../../datamgmt/dml.html#topic1) and the [PostgreSQL documentation](http://www.postgresql.org/docs/8.2/static/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND).
 
 | Value Range            | Default   | Set Classifications    |
 |------------------------|-----------|------------------------|

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/20882b73/reference/sql/CREATE-EXTERNAL-TABLE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/CREATE-EXTERNAL-TABLE.html.md.erb b/reference/sql/CREATE-EXTERNAL-TABLE.html.md.erb
index b26ac5c..3479e3e 100644
--- a/reference/sql/CREATE-EXTERNAL-TABLE.html.md.erb
+++ b/reference/sql/CREATE-EXTERNAL-TABLE.html.md.erb
@@ -164,7 +164,7 @@ For writable external tables, specifies the URI location of the `gpfdist` proces
 
 With two `gpfdist` locations listed as in the above example, half of the segments would send their output data to the `data1.out` file and the other half to the `data2.out` file.
 
-For the `pxf` protocol, the `LOCATION` string specifies the \<host\> and \<port\> of the PXF service, the location of the data, and the PXF plug-ins (Java classes) used to convert the data between storage format and HAWQ format. If the \<port\> is omitted, the \<host\> is taken to be the logical name for the high availability name service and the \<port\> is the value of the `pxf_service_port` configuration variable, 51200 by default. The URL parameters `FRAGMENTER`, `ACCESSOR`, and `RESOLVER` are the names of PXF plug-ins (Java classes) that convert between the external data format and HAWQ data format. The `FRAGMENTER` parameter is only used with readable external tables. PXF allows combinations of these parameters to be configured as profiles so that a single `PROFILE` parameter can be specified to access external data, for example `?PROFILE=Hive`. Additional \<custom-options\>` can be added to the LOCATION URI to further describe the external data format or storage options. For details about the plug-ins and profiles provided with PXF and information about creating custom plug-ins for other data sources see [Working with PXF and External Data](../../pxf/HawqExtensionFrameworkPXF.html).</dd>
+For the `pxf` protocol, the `LOCATION` string specifies the \<host\> and \<port\> of the PXF service, the location of the data, and the PXF plug-ins (Java classes) used to convert the data between storage format and HAWQ format. If the \<port\> is omitted, the \<host\> is taken to be the logical name for the high availability name service and the \<port\> is the value of the `pxf_service_port` configuration variable, 51200 by default. The URL parameters `FRAGMENTER`, `ACCESSOR`, and `RESOLVER` are the names of PXF plug-ins (Java classes) that convert between the external data format and HAWQ data format. The `FRAGMENTER` parameter is only used with readable external tables. PXF allows combinations of these parameters to be configured as profiles so that a single `PROFILE` parameter can be specified to access external data, for example `?PROFILE=Hive`. Additional \<custom-options\>` can be added to the LOCATION URI to further describe the external data format or storage options. For details about the plug-ins and profiles provided with PXF and information about creating custom plug-ins for other data sources see [Using PXF with Unmanaged Data](../../pxf/HawqExtensionFrameworkPXF.html).</dd>
 
 <dt>EXECUTE '\<command\>' ON ...  </dt>
 <dd>Allowed for readable web external tables or writable external tables only. For readable web external tables, specifies the OS command to be executed by the segment instances. The \<command\> can be a single OS command or a script. If \<command\> executes a script, that script must reside in the same location on all of the segment hosts and be executable by the HAWQ superuser (`gpadmin`).
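
For reference, the `pxf` protocol `LOCATION` syntax described in the CREATE-EXTERNAL-TABLE diff above can be sketched as follows. This is an illustrative fragment only: the host name `namenode`, the HDFS path, and the table definition are hypothetical, while `HdfsTextSimple` is one of the built-in PXF profiles that bundles a `FRAGMENTER`/`ACCESSOR`/`RESOLVER` combination.

```sql
-- Hypothetical readable external table using the pxf protocol.
-- "namenode" and "/data/sales" are placeholder host/path values;
-- port 51200 is the pxf_service_port default mentioned above.
CREATE EXTERNAL TABLE ext_sales (id int, total float8)
LOCATION ('pxf://namenode:51200/data/sales?PROFILE=HdfsTextSimple')
FORMAT 'TEXT' (DELIMITER ',');
```

The same location could instead spell out the individual `FRAGMENTER`, `ACCESSOR`, and `RESOLVER` URL parameters rather than a single `PROFILE`.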
