[jira] [Commented] (HAWQ-1435) docs - add usage info for pxf jdbc plug-in

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066788#comment-16066788
 ] 

ASF GitHub Bot commented on HAWQ-1435:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/124#discussion_r124581749
  
--- Diff: markdown/pxf/JdbcPXF.html.md.erb ---
@@ -0,0 +1,213 @@
+---
+title: Accessing External SQL Databases with JDBC (Beta)
+---
+
+
+
+Some of your data may already reside in an external SQL database. The PXF 
JDBC plug-in reads data stored in SQL databases including MySQL, ORACLE, 
PostgreSQL, and Hive.
+
+This section describes how to use PXF with JDBC, including an example of 
creating and querying an external table that accesses data in a MySQL database 
table.
+
+## Prerequisites
+
+Before accessing external SQL databases using HAWQ and PXF, ensure that:
+
+-   The JDBC plug-in is installed on all cluster nodes. See [Installing 
PXF Plug-ins](InstallPXFPlugins.html) for PXF plug-in installation information.
+-   The JDBC driver JAR files for the external SQL database are installed 
on all cluster nodes.
+-   The file locations of the external SQL database JDBC JAR files are added to `pxf-public.classpath`. If you manage your HAWQ cluster with Ambari, add the JARs via the Ambari UI. If you manage your cluster from the command line, edit the `/etc/pxf/conf/pxf-public.classpath` file directly.
+
+
+## Querying External SQL Data
+The PXF JDBC plug-in supports a single profile named `Jdbc`.
+
+Use the following syntax to create a HAWQ external table representing 
external SQL database tables you access via JDBC: 
+
+``` sql
+CREATE [READABLE | WRITABLE] EXTERNAL TABLE <table_name>
+    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
+LOCATION ('pxf://<host>[:<port>]/<jdbc-schema-name>.<jdbc-database-name>.<jdbc-table-name>
+    ?PROFILE=Jdbc[&<custom-option>=<value>[...]]')
+FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
+```
+
+JDBC-plug-in-specific keywords and values used in the [CREATE EXTERNAL 
TABLE](../reference/sql/CREATE-EXTERNAL-TABLE.html) call are described in the 
table below.
+
+| Keyword  | Value |
+|---|---|
+| \<table\_name\> | The name of the PXF external table. |
+| \<column\_name\> | The name of the PXF external table column. The PXF \<column\_name\> must exactly match the \<column\_name\> used in the external SQL table.|
+| \<data\_type\> | The data type of the PXF external table column. The PXF \<data\_type\> must be equivalent to the data type used for \<column\_name\> in the SQL table.|
+| \<host\> | The PXF host. While \<host\> may identify any PXF agent node, use the HDFS NameNode as it is guaranteed to be available in a running HDFS cluster. If HDFS High Availability is enabled, \<host\> must identify the HDFS NameService. |
+| \<port\> | The PXF port. If \<port\> is omitted, PXF assumes \<host\> identifies a High Availability HDFS Nameservice and connects to the port number designated by the `pxf_service_port` server configuration parameter value. Default is 51200. |
+| \<jdbc-schema-name\> | The schema name. The default schema name is `default`. |
+| \<jdbc-database-name\> | The database name. The default database name is determined by the external SQL server. |
+| \<jdbc-table-name\> | The table name. |
+| PROFILE | The `PROFILE` keyword must specify `Jdbc`. |
+| \<custom-option\> | The custom options supported by the `Jdbc` profile are discussed later in this section.|
+| FORMAT 'CUSTOM' | The JDBC `CUSTOM` `FORMAT` supports only the built-in `'pxfwritable_import'` `FORMATTER` property. |
+
+*Note*: When creating PXF external tables, you cannot use the `HEADER` 
option in your `FORMAT` specification.
--- End diff --

Extremely minor, but "Note:" here should be bolded:  **Note:**


> docs - add usage info for pxf jdbc plug-in
> --
>
> Key: HAWQ-1435
> URL: https://issues.apache.org/jira/browse/HAWQ-1435
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> create usage info for the new jdbc plug-in.  there is some good info in the 
> pxf-jdbc README.md. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1435) docs - add usage info for pxf jdbc plug-in

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066790#comment-16066790
 ] 

ASF GitHub Bot commented on HAWQ-1435:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/124#discussion_r124582363
  
--- Diff: markdown/pxf/JdbcPXF.html.md.erb ---
@@ -0,0 +1,213 @@
+[...]
+
+### JDBC Custom Options
+
+You may include one or more custom options in the `LOCATION` URI. Preface 
each option with an ampersand `&`. 
+
+The JDBC plug-in `Jdbc` profile supports the following \<custom-option\>s:
--- End diff --

Aren't most of these custom options required in order to setup a JDBC 
connection?  If so, docs should just indicate that up-front, as otherwise it 
seems like these are all optional.
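
For illustration of the reviewer's point, the connection-related options do work together in practice. The following is a hypothetical sketch only; the host names, database name, driver class, and credentials are invented, and option names are taken from the table under review:

``` sql
-- Sketch: a readable external table whose JDBC connection is fully
-- specified via custom options in the LOCATION URI (all values invented).
CREATE EXTERNAL TABLE pxf_jdbc_demo (id int, name text)
LOCATION ('pxf://namenode:51200/default.demodb.demotable?PROFILE=Jdbc&JDBC_DRIVER=com.mysql.jdbc.Driver&DB_URL=jdbc:mysql://mysqlhost:3306/demodb&USER=demo&PASS=demo')
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
```

As sketched, omitting JDBC_DRIVER or DB_URL would leave the plug-in with no way to open a connection, which supports documenting these options as required rather than optional.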




[jira] [Commented] (HAWQ-1435) docs - add usage info for pxf jdbc plug-in

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066789#comment-16066789
 ] 

ASF GitHub Bot commented on HAWQ-1435:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/124#discussion_r124582946
  
--- Diff: markdown/pxf/JdbcPXF.html.md.erb ---
@@ -0,0 +1,213 @@
+[...]
+### JDBC Custom Options
+
+You may include one or more custom options in the `LOCATION` URI. Preface 
each option with an ampersand `&`. 
+
+The JDBC plug-in `Jdbc` profile supports the following \<custom-option\>s:
+
+| Option Name | Description |
+|---|---|
+| JDBC_DRIVER | The JDBC driver class name. |
+| DB_URL | The URL to the database; includes the hostname, port, and database name. |
+| USER | The database user name. |
+| PASS | The database password for USER. |
+| PARTITION_BY | The partition column, \<column-name\>:\<column-type\>. The JDBC plug-in supports `date`, `int`, and `enum` \<column-type\>s. Use the `yyyy-MM-dd` format for the `date` \<column-type\>. A null `PARTITION_BY` defaults to a single fragment. |
+| RANGE | The query range, \<start-value\>[:\<end-value\>]. \<end-value\> may be empty for an `int` \<column-type\>. |
--- End diff --

Should probably clarify that RANGE and INTERVAL are only used with 
PARTITION_BY?
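
A sketch of the reviewer's point that RANGE and INTERVAL only make sense together with PARTITION_BY. Everything here is invented for illustration (host, database, column, and credential values); the option names come from the table under review:

``` sql
-- Sketch: partitioned reads over a date column (all values invented).
-- PARTITION_BY names the column and type; RANGE bounds the query;
-- INTERVAL sets the width of each fragment. RANGE and INTERVAL are
-- meaningless without PARTITION_BY.
CREATE EXTERNAL TABLE pxf_jdbc_part (id int, cdate date, amt float8)
LOCATION ('pxf://namenode:51200/default.demodb.sales?PROFILE=Jdbc&JDBC_DRIVER=com.mysql.jdbc.Driver&DB_URL=jdbc:mysql://mysqlhost:3306/demodb&USER=demo&PASS=demo&PARTITION_BY=cdate:date&RANGE=2017-01-01:2017-12-31&INTERVAL=1:month')
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
```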



[jira] [Commented] (HAWQ-1435) docs - add usage info for pxf jdbc plug-in

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066791#comment-16066791
 ] 

ASF GitHub Bot commented on HAWQ-1435:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/124#discussion_r124583252
  
--- Diff: markdown/pxf/JdbcPXF.html.md.erb ---
@@ -0,0 +1,213 @@
+[...]
+| INTERVAL | The interval, \<interval-value\>[:\<interval-unit\>], of one fragment. `INTERVAL` may be empty for an `enum` \<column-type\>. \<interval-unit\> may be 

[jira] [Commented] (HAWQ-1435) docs - add usage info for pxf jdbc plug-in

2017-06-12 Thread Lisa Owen (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047197#comment-16047197
 ] 

Lisa Owen commented on HAWQ-1435:
-

[~michael.andre.pearce] - please review.






[jira] [Commented] (HAWQ-1435) docs - add usage info for pxf jdbc plug-in

2017-06-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047191#comment-16047191
 ] 

ASF GitHub Bot commented on HAWQ-1435:
--

GitHub user lisakowen opened a pull request:

https://github.com/apache/incubator-hawq-docs/pull/124

HAWQ-1435 document new pxf jdbc plug-in

document the community-contributed PXF JDBC plug-in.  include a simple 
mysql example.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lisakowen/incubator-hawq-docs feature/pxf-jdbc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq-docs/pull/124.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #124


commit a008125b2864c3acbc3b630030cb614a5ea2679f
Author: Lisa Owen 
Date:   2017-04-19T00:13:57Z

document new pxf jdbc plug-in






