[jira] [Commented] (HAWQ-1638) Issues with website

2018-07-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/HAWQ-1638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16539647#comment-16539647
 ] 

ASF GitHub Bot commented on HAWQ-1638:
--

Github user jiny2 commented on the issue:

https://github.com/apache/incubator-hawq-site/pull/17
  
+1, LGTM. Thank you.


> Issues with website
> ---
>
> Key: HAWQ-1638
> URL: https://issues.apache.org/jira/browse/HAWQ-1638
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Sebb
>Assignee: Radar Lei
>Priority: Major
>
> The HAWQ page looks nice; however, there are a few problems with it.
> The phrase
> "Plus, HAWQ® works Apache MADlib) machine learning libraries"
> does not read well. Something missing?
> The first/main references to ASF products such as Hadoop, YARN, etc. must use
> the full name, i.e. Apache Hadoop, etc.
> The download section does not have any link to the KEYS file, nor any
> instructions on how to use the KEYS+sig or hashes to validate downloads.
> The download section still includes references to MD5 hashes.
> These are deprecated and can be removed for older releases that have other
> hashes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1638) Issues with website

2018-07-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/HAWQ-1638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16538359#comment-16538359
 ] 

ASF GitHub Bot commented on HAWQ-1638:
--

Github user radarwave commented on the issue:

https://github.com/apache/incubator-hawq-site/pull/17
  
Add @dyozie 


> Issues with website
> ---
>
> Key: HAWQ-1638
> URL: https://issues.apache.org/jira/browse/HAWQ-1638
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Sebb
>Assignee: Radar Lei
>Priority: Major
>
> The HAWQ page looks nice; however, there are a few problems with it.
> The phrase
> "Plus, HAWQ® works Apache MADlib) machine learning libraries"
> does not read well. Something missing?
> The first/main references to ASF products such as Hadoop, YARN, etc. must use
> the full name, i.e. Apache Hadoop, etc.
> The download section does not have any link to the KEYS file, nor any
> instructions on how to use the KEYS+sig or hashes to validate downloads.
> The download section still includes references to MD5 hashes.
> These are deprecated and can be removed for older releases that have other
> hashes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1638) Issues with website

2018-07-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/HAWQ-1638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16538356#comment-16538356
 ] 

ASF GitHub Bot commented on HAWQ-1638:
--

Github user radarwave commented on the issue:

https://github.com/apache/incubator-hawq-site/pull/17
  
@changleicn @edespino @jiny2
Please help to review, thanks.


> Issues with website
> ---
>
> Key: HAWQ-1638
> URL: https://issues.apache.org/jira/browse/HAWQ-1638
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Sebb
>Assignee: Radar Lei
>Priority: Major
>
> The HAWQ page looks nice; however, there are a few problems with it.
> The phrase
> "Plus, HAWQ® works Apache MADlib) machine learning libraries"
> does not read well. Something missing?
> The first/main references to ASF products such as Hadoop, YARN, etc. must use
> the full name, i.e. Apache Hadoop, etc.
> The download section does not have any link to the KEYS file, nor any
> instructions on how to use the KEYS+sig or hashes to validate downloads.
> The download section still includes references to MD5 hashes.
> These are deprecated and can be removed for older releases that have other
> hashes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1638) Issues with website

2018-07-10 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/HAWQ-1638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16538353#comment-16538353
 ] 

ASF GitHub Bot commented on HAWQ-1638:
--

GitHub user radarwave opened a pull request:

https://github.com/apache/incubator-hawq-site/pull/17

HAWQ-1638. Add how to verify downloaded files section, removed md5 keys.

Added a section about how to verify downloaded files.

Removed MD5 references.
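
For reference, a minimal sketch of the kind of verification the new section describes, assuming the release tarball, its detached `.asc` signature, a `.sha512` hash file, and the project KEYS file have been downloaded (file names are illustrative):

``` shell
# Import the release managers' public keys from the project KEYS file
gpg --import KEYS

# Verify the detached signature against the downloaded tarball
gpg --verify apache-hawq-src-2.3.0.0-incubating.tar.gz.asc \
    apache-hawq-src-2.3.0.0-incubating.tar.gz

# Check the SHA-512 hash against the published value
sha512sum -c apache-hawq-src-2.3.0.0-incubating.tar.gz.sha512
```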

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/radarwave/incubator-hawq-site HAWQ-1638

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq-site/pull/17.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #17


commit fff3da1168b91ceec758dbfe6b1e0f8e6ae6fb4b
Author: rlei 
Date:   2018-07-10T10:05:01Z

HAWQ-1638. Add how to verify downloaded files section, removed md5 keys.




> Issues with website
> ---
>
> Key: HAWQ-1638
> URL: https://issues.apache.org/jira/browse/HAWQ-1638
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Sebb
>Assignee: Radar Lei
>Priority: Major
>
> The HAWQ page looks nice; however, there are a few problems with it.
> The phrase
> "Plus, HAWQ® works Apache MADlib) machine learning libraries"
> does not read well. Something missing?
> The first/main references to ASF products such as Hadoop, YARN, etc. must use
> the full name, i.e. Apache Hadoop, etc.
> The download section does not have any link to the KEYS file, nor any
> instructions on how to use the KEYS+sig or hashes to validate downloads.
> The download section still includes references to MD5 hashes.
> These are deprecated and can be removed for older releases that have other
> hashes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1638) Issues with website

2018-07-09 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/HAWQ-1638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16537839#comment-16537839
 ] 

ASF GitHub Bot commented on HAWQ-1638:
--

Github user changleicn commented on the issue:

https://github.com/apache/incubator-hawq-site/pull/16
  
looks good!


> Issues with website
> ---
>
> Key: HAWQ-1638
> URL: https://issues.apache.org/jira/browse/HAWQ-1638
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Sebb
>Assignee: Radar Lei
>Priority: Major
>
> The HAWQ page looks nice; however, there are a few problems with it.
> The phrase
> "Plus, HAWQ® works Apache MADlib) machine learning libraries"
> does not read well. Something missing?
> The first/main references to ASF products such as Hadoop, YARN, etc. must use
> the full name, i.e. Apache Hadoop, etc.
> The download section does not have any link to the KEYS file, nor any
> instructions on how to use the KEYS+sig or hashes to validate downloads.
> The download section still includes references to MD5 hashes.
> These are deprecated and can be removed for older releases that have other
> hashes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1638) Issues with website

2018-07-09 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/HAWQ-1638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16536746#comment-16536746
 ] 

ASF GitHub Bot commented on HAWQ-1638:
--

Github user radarwave commented on the issue:

https://github.com/apache/incubator-hawq-site/pull/16
  
The download links fix will be updated later.

@changleicn  @edespino @jiny2 
Would you help to review? Thanks.


> Issues with website
> ---
>
> Key: HAWQ-1638
> URL: https://issues.apache.org/jira/browse/HAWQ-1638
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Sebb
>Assignee: Radar Lei
>Priority: Major
>
> The HAWQ page looks nice; however, there are a few problems with it.
> The phrase
> "Plus, HAWQ® works Apache MADlib) machine learning libraries"
> does not read well. Something missing?
> The first/main references to ASF products such as Hadoop, YARN, etc. must use
> the full name, i.e. Apache Hadoop, etc.
> The download section does not have any link to the KEYS file, nor any
> instructions on how to use the KEYS+sig or hashes to validate downloads.
> The download section still includes references to MD5 hashes.
> These are deprecated and can be removed for older releases that have other
> hashes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1638) Issues with website

2018-07-09 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/HAWQ-1638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16536742#comment-16536742
 ] 

ASF GitHub Bot commented on HAWQ-1638:
--

GitHub user radarwave opened a pull request:

https://github.com/apache/incubator-hawq-site/pull/16

HAWQ-1638. Correct typo and use full name for ASF products 

Correct typo for madlib and use full name for ASF products with the 
first/main references.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/radarwave/incubator-hawq-site HAWQ-1638

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq-site/pull/16.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #16


commit c6a20e535d9a36d39977269b00b2e6cea05687f8
Author: rlei 
Date:   2018-07-09T09:52:04Z

HAWQ-1638. Correct typo and use full name for ASF products with the 
first/main references.




> Issues with website
> ---
>
> Key: HAWQ-1638
> URL: https://issues.apache.org/jira/browse/HAWQ-1638
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Sebb
>Assignee: Radar Lei
>Priority: Major
>
> The HAWQ page looks nice; however, there are a few problems with it.
> The phrase
> "Plus, HAWQ® works Apache MADlib) machine learning libraries"
> does not read well. Something missing?
> The first/main references to ASF products such as Hadoop, YARN, etc. must use
> the full name, i.e. Apache Hadoop, etc.
> The download section does not have any link to the KEYS file, nor any
> instructions on how to use the KEYS+sig or hashes to validate downloads.
> The download section still includes references to MD5 hashes.
> These are deprecated and can be removed for older releases that have other
> hashes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1562) Incorrect path to default log directory in documentation

2018-02-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16354164#comment-16354164
 ] 

ASF GitHub Bot commented on HAWQ-1562:
--

Github user asfgit closed the pull request at:

https://github.com/apache/incubator-hawq-docs/pull/136


> Incorrect path to default log directory in documentation
> 
>
> Key: HAWQ-1562
> URL: https://issues.apache.org/jira/browse/HAWQ-1562
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Shubham Sharma
>Assignee: David Yozie
>Priority: Major
>
> In the current documentation, six files point to the wrong location of the
> default directories. The default log directory of the management utilities is
> ~/hawqAdminLogs but the documentation specifies ~/hawq/Adminlogs/. The list
> can be seen
> [here|https://github.com/apache/incubator-hawq-docs/search?utf8=%E2%9C%93&q=Adminlogs&type=].
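
A quick way to confirm the correct default location on a live system (a sketch; assumes a standard installation where the management utilities have already been run at least once):

``` shell
# Logs land in ~/hawqAdminLogs, not ~/hawq/Adminlogs/
ls ~/hawqAdminLogs/
```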



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1368) normal user who doesn't have home directory may have problem when running hawq register

2018-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16341824#comment-16341824
 ] 

ASF GitHub Bot commented on HAWQ-1368:
--

Github user lisakowen commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/135#discussion_r164250784
  
--- Diff: markdown/reference/cli/admin_utilities/hawqregister.html.md.erb 
---
@@ -200,6 +200,8 @@ group {
 -\\\-version   
 Show the version of this utility, then exit.
 
+-l, -\\\-logdir \  
+Specifies the log directory that `hawq register` uses for logs. The 
default is `~/hawqAdminLogs/`.
--- End diff --

same comment as hawq extract ...  how about "Specifies the directory that 
`hawq register` uses for log files."?
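
A hedged usage sketch of the option under discussion (the database, file path, and table name are placeholders; only `-l`/`--logdir` is the subject of this diff):

``` shell
# Direct hawq register logs to a custom directory instead of ~/hawqAdminLogs
hawq register -l /tmp/hawq_logs -d postgres \
    -f hdfs://localhost:8020/hawq_data/my_file public.my_table
```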


> normal user who doesn't have home directory may have problem when running 
> hawq register
> ---
>
> Key: HAWQ-1368
> URL: https://issues.apache.org/jira/browse/HAWQ-1368
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Lili Ma
>Assignee: Radar Lei
>Priority: Major
> Fix For: backlog
>
>
> HAWQ register stores information in hawqregister_MMDD.log under the directory
> ~/hawqAdminLogs, and a normal user who doesn't have their own home directory
> may encounter failure when running hawq register.
> We can add a -l option in order to set the target log directory and file name
> of hawq register.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1553) User who doesn't have home directory can not run hawq extract command

2018-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16341820#comment-16341820
 ] 

ASF GitHub Bot commented on HAWQ-1553:
--

Github user lisakowen commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/134#discussion_r164250622
  
--- Diff: markdown/reference/cli/admin_utilities/hawqextract.html.md.erb ---
@@ -73,6 +73,9 @@ where:
 -\\\-version  
 Displays the version of this utility.
 
+-l, -\\\-logdir \  
+Specifies the log directory that `hawq extract` uses for logs. The 
default is `~/hawqAdminLogs/`.
--- End diff --

how about "Specifies the directory that `hawq extract` uses for log files."?


> User who doesn't have home directory can not run hawq extract command
> -
>
> Key: HAWQ-1553
> URL: https://issues.apache.org/jira/browse/HAWQ-1553
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Shubham Sharma
>Assignee: Radar Lei
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> HAWQ extract stores information in hawqextract_MMDD.log under the directory
> ~/hawqAdminLogs, and a user who doesn't have their own home directory
> encounters failure when running hawq extract.
> We can add a -l option in order to set the target log directory for hawq
> extract.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1562) Incorrect path to default log directory in documentation

2017-12-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16277017#comment-16277017
 ] 

ASF GitHub Bot commented on HAWQ-1562:
--

GitHub user outofmem0ry opened a pull request:

https://github.com/apache/incubator-hawq-docs/pull/136

HAWQ-1562 Fixed incorrect path to default log directory



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/outofmem0ry/incubator-hawq-docs 
document/HAWQ-1562

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq-docs/pull/136.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #136


commit 9c05af6f19c755526679fcd16792d3f272745e85
Author: Shubham Sharma 
Date:   2017-12-04T16:07:51Z

HAWQ-1562 Fixed incorrect path to default log directory




> Incorrect path to default log directory in documentation
> 
>
> Key: HAWQ-1562
> URL: https://issues.apache.org/jira/browse/HAWQ-1562
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Shubham Sharma
>Assignee: David Yozie
>
> In the current documentation, six files point to the wrong location of the
> default directories. The default log directory of the management utilities is
> ~/hawqAdminLogs but the documentation specifies ~/hawq/Adminlogs/. The list
> can be seen
> [here|https://github.com/apache/incubator-hawq-docs/search?utf8=%E2%9C%93&q=Adminlogs&type=].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1368) normal user who doesn't have home directory may have problem when running hawq register

2017-12-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16276975#comment-16276975
 ] 

ASF GitHub Bot commented on HAWQ-1368:
--

GitHub user outofmem0ry opened a pull request:

https://github.com/apache/incubator-hawq-docs/pull/135

HAWQ-1368 Add option to hawq register to specify log directory



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/outofmem0ry/incubator-hawq-docs 
document/HAWQ-1368

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq-docs/pull/135.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #135


commit 1ac0fb26d0c358876c3a17c678b761132ccc6941
Author: Shubham Sharma 
Date:   2017-12-04T15:42:15Z

HAWQ-1368 Add option to hawq register to specify log directory




> normal user who doesn't have home directory may have problem when running 
> hawq register
> ---
>
> Key: HAWQ-1368
> URL: https://issues.apache.org/jira/browse/HAWQ-1368
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Lili Ma
>Assignee: Radar Lei
> Fix For: backlog
>
>
> HAWQ register stores information in hawqregister_MMDD.log under the directory
> ~/hawqAdminLogs, and a normal user who doesn't have their own home directory
> may encounter failure when running hawq register.
> We can add a -l option in order to set the target log directory and file name
> of hawq register.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1553) User who doesn't have home directory can not run hawq extract command

2017-12-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16276972#comment-16276972
 ] 

ASF GitHub Bot commented on HAWQ-1553:
--

GitHub user outofmem0ry opened a pull request:

https://github.com/apache/incubator-hawq-docs/pull/134

HAWQ-1553 Add option to hawq extract to specify log directory



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/outofmem0ry/incubator-hawq-docs 
document/HAWQ-1553

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq-docs/pull/134.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #134


commit 7f89466967bfe3d4b3f8f348f83996f8048d5a33
Author: Shubham Sharma 
Date:   2017-12-04T15:31:33Z

HAWQ-1553 Add option to hawq extract to specify log directory




> User who doesn't have home directory can not run hawq extract command
> -
>
> Key: HAWQ-1553
> URL: https://issues.apache.org/jira/browse/HAWQ-1553
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Shubham Sharma
>Assignee: Radar Lei
>
> HAWQ extract stores information in hawqextract_MMDD.log under the directory
> ~/hawqAdminLogs, and a user who doesn't have their own home directory
> encounters failure when running hawq extract.
> We can add a -l option in order to set the target log directory for hawq
> extract.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1554) Add registered trademark symbol (®) to website.

2017-11-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16266501#comment-16266501
 ] 

ASF GitHub Bot commented on HAWQ-1554:
--

Github user asfgit closed the pull request at:

https://github.com/apache/incubator-hawq-site/pull/14


> Add registered trademark symbol (®) to website.
> ---
>
> Key: HAWQ-1554
> URL: https://issues.apache.org/jira/browse/HAWQ-1554
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Documentation
>Reporter: Ed Espino
>Assignee: Ed Espino
>
> Since HAWQ® is a registered trademark, we need to mark it
> appropriately to ensure that we can preserve our rights.
> This task tracks the update of the website's home page and download
> page (or section of the website) to add the ® character after the first
> and most prominent mentions of HAWQ in any text (i.e. not inside graphics).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1554) Add registered trademark symbol (®) to website.

2017-11-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16266497#comment-16266497
 ] 

ASF GitHub Bot commented on HAWQ-1554:
--

Github user radarwave commented on the issue:

https://github.com/apache/incubator-hawq-site/pull/14
  
LGTM +1


> Add registered trademark symbol (®) to website.
> ---
>
> Key: HAWQ-1554
> URL: https://issues.apache.org/jira/browse/HAWQ-1554
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Documentation
>Reporter: Ed Espino
>Assignee: Ed Espino
>
> Since HAWQ® is a registered trademark, we need to mark it
> appropriately to ensure that we can preserve our rights.
> This task tracks the update of the website's home page and download
> page (or section of the website) to add the ® character after the first
> and most prominent mentions of HAWQ in any text (i.e. not inside graphics).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1554) Add registered trademark symbol (®) to website.

2017-11-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16266486#comment-16266486
 ] 

ASF GitHub Bot commented on HAWQ-1554:
--

Github user huor commented on the issue:

https://github.com/apache/incubator-hawq-site/pull/14
  
Looks good, +1


> Add registered trademark symbol (®) to website.
> ---
>
> Key: HAWQ-1554
> URL: https://issues.apache.org/jira/browse/HAWQ-1554
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Documentation
>Reporter: Ed Espino
>Assignee: Ed Espino
>
> Since HAWQ® is a registered trademark, we need to mark it
> appropriately to ensure that we can preserve our rights.
> This task tracks the update of the website's home page and download
> page (or section of the website) to add the ® character after the first
> and most prominent mentions of HAWQ in any text (i.e. not inside graphics).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1554) Add registered trademark symbol (®) to website.

2017-11-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16266481#comment-16266481
 ] 

ASF GitHub Bot commented on HAWQ-1554:
--

GitHub user edespino opened a pull request:

https://github.com/apache/incubator-hawq-site/pull/14

HAWQ-1554. Add registered trademark symbol (®) to website.

This task tracks the update of the website's home page and download
page (or section of the website) to add the ® character after the first
and most prominent mentions of HAWQ in any text (i.e. not inside graphics).


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/edespino/incubator-hawq-site HAWQ-1554

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq-site/pull/14.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #14


commit bfcc79a5d5e1fc7ab945f57bfb77a368f8a3dd1e
Author: Ed Espino 
Date:   2017-11-27T08:00:08Z

HAWQ-1554. Add registered trademark symbol (®) to website.




> Add registered trademark symbol (®) to website.
> ---
>
> Key: HAWQ-1554
> URL: https://issues.apache.org/jira/browse/HAWQ-1554
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Documentation
>Reporter: Ed Espino
>Assignee: Ed Espino
>
> Since HAWQ® is a registered trademark, we need to mark it
> appropriately to ensure that we can preserve our rights.
> This task tracks the update of the website's home page and download
> page (or section of the website) to add the ® character after the first
> and most prominent mentions of HAWQ in any text (i.e. not inside graphics).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1480) Packing a core file in hawq

2017-10-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16202295#comment-16202295
 ] 

ASF GitHub Bot commented on HAWQ-1480:
--

Github user asfgit closed the pull request at:

https://github.com/apache/incubator-hawq-docs/pull/123


> Packing a core file in hawq
> ---
>
> Key: HAWQ-1480
> URL: https://issues.apache.org/jira/browse/HAWQ-1480
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Command Line Tools
>Reporter: Shubham Sharma
>Assignee: Radar Lei
> Fix For: 2.3.0.0-incubating
>
>
> Currently there is no way to pack a core file with its context – executable,
> application, and system shared libraries – in HAWQ. This information can
> later be unpacked on another system and helps in debugging. It is a useful
> feature to quickly gather all the data needed from a crash/core generated on
> the system to analyze it later.
> Another open source project, Greenplum, uses a script
> [https://github.com/greenplum-db/gpdb/blob/master/gpMgmt/sbin/packcore] to
> collect this information. I tested this script against HAWQ's installation
> and it collects the required information needed for debugging.
> Can this be merged into HAWQ? If yes, I can submit a pull request and test it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1480) Packing a core file in hawq

2017-10-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16202267#comment-16202267
 ] 

ASF GitHub Bot commented on HAWQ-1480:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/123#discussion_r144348667
  
--- Diff: book/master_middleman/source/subnavs/apache-hawq-nav.erb ---
@@ -117,6 +117,9 @@
   
 HAWQ 
Administrative Log Files
   
+  
+HAWQ packcore 
utility
--- End diff --

Thanks @outofmem0ry - merging this in now.


> Packing a core file in hawq
> ---
>
> Key: HAWQ-1480
> URL: https://issues.apache.org/jira/browse/HAWQ-1480
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Command Line Tools
>Reporter: Shubham Sharma
>Assignee: Radar Lei
> Fix For: 2.3.0.0-incubating
>
>
> Currently there is no way to pack a core file with its context – executable,
> application, and system shared libraries – in HAWQ. This information can
> later be unpacked on another system and helps in debugging. It is a useful
> feature to quickly gather all the data needed from a crash/core generated on
> the system to analyze it later.
> Another open source project, Greenplum, uses a script
> [https://github.com/greenplum-db/gpdb/blob/master/gpMgmt/sbin/packcore] to
> collect this information. I tested this script against HAWQ's installation
> and it collects the required information needed for debugging.
> Can this be merged into HAWQ? If yes, I can submit a pull request and test it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1480) Packing a core file in hawq

2017-10-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16201502#comment-16201502
 ] 

ASF GitHub Bot commented on HAWQ-1480:
--

Github user outofmem0ry commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/123#discussion_r144200526
  
--- Diff: book/master_middleman/source/subnavs/apache-hawq-nav.erb ---
@@ -117,6 +117,9 @@
   
 HAWQ 
Administrative Log Files
   
+  
+HAWQ packcore 
utility
--- End diff --

@lisakowen I changed the heading format to make it similar to the naming
convention of the other headings.


> Packing a core file in hawq
> ---
>
> Key: HAWQ-1480
> URL: https://issues.apache.org/jira/browse/HAWQ-1480
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Command Line Tools
>Reporter: Shubham Sharma
>Assignee: Radar Lei
> Fix For: 2.3.0.0-incubating
>
>
> Currently there is no way to pack a core file with its context – executable,
> application, and system shared libraries – in HAWQ. This information can
> later be unpacked on another system and helps in debugging. It is a useful
> feature to quickly gather all the data needed from a crash/core generated on
> the system to analyze it later.
> Another open source project, Greenplum, uses a script
> [https://github.com/greenplum-db/gpdb/blob/master/gpMgmt/sbin/packcore] to
> collect this information. I tested this script against HAWQ's installation
> and it collects the required information needed for debugging.
> Can this be merged into HAWQ? If yes, I can submit a pull request and test it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1480) Packing a core file in hawq

2017-10-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16197841#comment-16197841
 ] 

ASF GitHub Bot commented on HAWQ-1480:
--

Github user lisakowen commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/123#discussion_r143597015
  
--- Diff: book/master_middleman/source/subnavs/apache-hawq-nav.erb ---
@@ -117,6 +117,9 @@
   
 HAWQ 
Administrative Log Files
   
+  
+HAWQ packcore 
utility
--- End diff --

this is very minor - how about capitalizing the U in "utility"?


> Packing a core file in hawq
> ---
>
> Key: HAWQ-1480
> URL: https://issues.apache.org/jira/browse/HAWQ-1480
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Command Line Tools
>Reporter: Shubham Sharma
>Assignee: Radar Lei
> Fix For: 2.3.0.0-incubating
>
>
> Currently there is no way to pack a core file with its context – executable,
> application, and system shared libraries – in HAWQ. This information can
> later be unpacked on another system and helps in debugging. It is a useful
> feature to quickly gather all the data needed from a crash/core generated on
> the system to analyze it later.
> Another open source project, Greenplum, uses a script
> [https://github.com/greenplum-db/gpdb/blob/master/gpMgmt/sbin/packcore] to
> collect this information. I tested this script against HAWQ's installation
> and it collects the required information needed for debugging.
> Can this be merged into HAWQ? If yes, I can submit a pull request and test it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1480) Packing a core file in hawq

2017-09-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16180029#comment-16180029
 ] 

ASF GitHub Bot commented on HAWQ-1480:
--

Github user janebeckman commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/123#discussion_r140936494
  
--- Diff: markdown/admin/packcore.html.md.erb ---
@@ -0,0 +1,51 @@
+---
+title: HAWQ packcore utility
+---
+
+
+
+## Core file
+
+Core file is a disk file that records the image of a process' memory in 
case the process crashes or terminates abruptly. This image can be later used 
to debug the state of process at the time when it was terminated. This 
information can be very useful to debug the cause failure.
+
+## Packcore
+
+Packcore utility helps in packing a core file with its context – 
executable, application and system shared libraries from the current 
environment. This information can be later unpacked on a different system and 
can be used for debugging. Packcore takes a core file, extracts the name of the 
binary which generated the core and executes `ldd` (List Dynamic Dependencies) 
to get the required information into a single tar archive.
+
+### Using packcore 
+
+The packcore utility is located under `${GPHOME}/sbin`. Following are the 
options for packing a core file:
+
+```shell
+$GPHOME/sbin/packcore 
+
+or 
+
+$GPHOME/sbin/packcore -b|--binary  $GPHOME/bin/postgres 
+```
+
+Alternatively, you can navigate to `$GPHOME/sbin` and run the following:
+
+```shell
+./packcore 
+
+or 
+
+./packcore -b|--binary $GPHOME/bin/postgres 
+```
+Once finished the utility will create a tar file named 
`packcore-.tgz`. This file can be later unpacked on another system 
and used for debugging.
--- End diff --

When processing is completed, the utility creates a tar file in the format: 
`packcore-.tgz`. Unpack this file on another system to use it for 
debugging.
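
To illustrate the unpack side this text describes, a hedged sketch of using the archive on another system (archive, core, and binary names are placeholders; the exact layout depends on the packcore script):

``` shell
# On the debug host, unpack the archive produced by packcore
tar xzf packcore-core.12345.tgz
cd packcore-core.12345

# Open the core against the bundled binary and shared libraries
gdb ./postgres core.12345
```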


> Packing a core file in hawq
> ---
>
> Key: HAWQ-1480
> URL: https://issues.apache.org/jira/browse/HAWQ-1480
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Command Line Tools
>Reporter: Shubham Sharma
>Assignee: Radar Lei
> Fix For: 2.3.0.0-incubating
>
>
> Currently there is no way to pack a core file with its context – executable,
> application, and system shared libraries – in HAWQ. This information can
> later be unpacked on another system and helps in debugging. It is a useful
> feature to quickly gather all the data needed from a crash/core generated on
> the system to analyze it later.
> Another open source project, Greenplum, uses a script
> [https://github.com/greenplum-db/gpdb/blob/master/gpMgmt/sbin/packcore] to
> collect this information. I tested this script against HAWQ's installation
> and it collects the required information needed for debugging.
> Can this be merged into HAWQ? If yes, I can submit a pull request and test it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1480) Packing a core file in hawq

2017-09-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16180025#comment-16180025
 ] 

ASF GitHub Bot commented on HAWQ-1480:
--

Github user janebeckman commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/123#discussion_r140936073
  
--- Diff: markdown/admin/packcore.html.md.erb ---
@@ -0,0 +1,51 @@
+---
+title: HAWQ packcore utility
+---
+
+
+
+## Core file
+
+Core file is a disk file that records the image of a process' memory in 
case the process crashes or terminates abruptly. This image can be later used 
to debug the state of process at the time when it was terminated. This 
information can be very useful to debug the cause failure.
+
+## Packcore
+
+Packcore utility helps in packing a core file with its context – 
executable, application and system shared libraries from the current 
environment. This information can be later unpacked on a different system and 
can be used for debugging. Packcore takes a core file, extracts the name of the 
binary which generated the core and executes `ldd` (List Dynamic Dependencies) 
to get the required information into a single tar archive.
+
+### Using packcore 
+
+The packcore utility is located under `${GPHOME}/sbin`. Following are the 
options for packing a core file:
--- End diff --

Run one of the following commands to pack a core file:


> Packing a core file in hawq
> ---
>
> Key: HAWQ-1480
> URL: https://issues.apache.org/jira/browse/HAWQ-1480
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Command Line Tools
>Reporter: Shubham Sharma
>Assignee: Radar Lei
> Fix For: 2.3.0.0-incubating
>
>
> Currently there is no way to pack a core file with its context – executable,
> application, and system shared libraries – in HAWQ. This information can
> later be unpacked on another system and helps in debugging. It is a useful
> feature to quickly gather all the data needed from a crash/core generated on
> the system to analyze it later.
> Another open source project, Greenplum, uses a script
> [https://github.com/greenplum-db/gpdb/blob/master/gpMgmt/sbin/packcore] to
> collect this information. I tested this script against HAWQ's installation
> and it collects the required information needed for debugging.
> Can this be merged into HAWQ? If yes, I can submit a pull request and test it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1480) Packing a core file in hawq

2017-09-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16180006#comment-16180006
 ] 

ASF GitHub Bot commented on HAWQ-1480:
--

Github user janebeckman commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/123#discussion_r140933529
  
--- Diff: markdown/admin/packcore.html.md.erb ---
@@ -0,0 +1,51 @@
+---
+title: HAWQ packcore utility
+---
+
+
+
+## Core file
+
+Core file is a disk file that records the image of a process' memory in 
case the process crashes or terminates abruptly. This image can be later used 
to debug the state of process at the time when it was terminated. This 
information can be very useful to debug the cause failure.
--- End diff --

A core file is a disk file that records the image of a process' memory in 
case the process crashes or terminates abruptly. The information in this image 
is useful for debugging the state of a process at the time when it was 
terminated.


> Packing a core file in hawq
> ---
>
> Key: HAWQ-1480
> URL: https://issues.apache.org/jira/browse/HAWQ-1480
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Command Line Tools
>Reporter: Shubham Sharma
>Assignee: Radar Lei
> Fix For: 2.3.0.0-incubating
>
>
> Currently there is no way to pack a core file with its context – executable,
> application, and system shared libraries – in HAWQ. This information can
> later be unpacked on another system and helps in debugging. It is a useful
> feature to quickly gather all the data needed from a crash/core generated on
> the system to analyze it later.
> Another open source project, Greenplum, uses a script
> [https://github.com/greenplum-db/gpdb/blob/master/gpMgmt/sbin/packcore] to
> collect this information. I tested this script against HAWQ's installation
> and it collects the required information needed for debugging.
> Can this be merged into HAWQ? If yes, I can submit a pull request and test it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1480) Packing a core file in hawq

2017-09-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16180005#comment-16180005
 ] 

ASF GitHub Bot commented on HAWQ-1480:
--

Github user janebeckman commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/123#discussion_r140933386
  
--- Diff: markdown/admin/packcore.html.md.erb ---
@@ -0,0 +1,51 @@
+---
+title: HAWQ packcore utility
+---
+
+
+
+## Core file
+
+Core file is a disk file that records the image of a process' memory in 
case the process crashes or terminates abruptly. This image can be later used 
to debug the state of process at the time when it was terminated. This 
information can be very useful to debug the cause failure.
+
+## Packcore
+
+Packcore utility helps in packing a core file with its context – 
executable, application and system shared libraries from the current 
environment. This information can be later unpacked on a different system and 
can be used for debugging. Packcore takes a core file, extracts the name of the 
binary which generated the core and executes `ldd` (List Dynamic Dependencies) 
to get the required information into a single tar archive.
--- End diff --

The Packcore utility helps pack a core file with its context, including the 
executable, application, and shared system libraries from the current 
environment. This information can be unpacked for later debugging on a 
different system. Packcore extracts the name of the binary that generated the 
core from the core file, then executes `ldd` (List Dynamic Dependencies) to 
create a single tar archive containing the core file information.


> Packing a core file in hawq
> ---
>
> Key: HAWQ-1480
> URL: https://issues.apache.org/jira/browse/HAWQ-1480
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Command Line Tools
>Reporter: Shubham Sharma
>Assignee: Radar Lei
> Fix For: 2.3.0.0-incubating
>
>
> Currently there is no way to pack a core file with its context – executable,
> application, and system shared libraries – in HAWQ. This information can
> later be unpacked on another system and helps in debugging. It is a useful
> feature to quickly gather all the data needed from a crash/core generated on
> the system to analyze it later.
> Another open source project, Greenplum, uses a script
> [https://github.com/greenplum-db/gpdb/blob/master/gpMgmt/sbin/packcore] to
> collect this information. I tested this script against HAWQ's installation
> and it collects the required information needed for debugging.
> Can this be merged into HAWQ? If yes, I can submit a pull request and test it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1497) docs - refactor the kerberos sections

2017-07-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16090299#comment-16090299
 ] 

ASF GitHub Bot commented on HAWQ-1497:
--

Github user asfgit closed the pull request at:

https://github.com/apache/incubator-hawq-docs/pull/127


> docs - refactor the kerberos sections
> -
>
> Key: HAWQ-1497
> URL: https://issues.apache.org/jira/browse/HAWQ-1497
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> The Kerberos docs do not really distinguish between enabling Kerberos at the
> HDFS filesystem level vs. enabling Kerberos user authentication for HAWQ.
> They are also missing content for configuring HAWQ/PXF for secure HDFS.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1479) document hawq/ranger kerberos support

2017-07-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16087008#comment-16087008
 ] 

ASF GitHub Bot commented on HAWQ-1479:
--

Github user interma commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/128#discussion_r127405360
  
--- Diff: markdown/ranger/ranger-kerberos.html.md.erb ---
@@ -0,0 +1,209 @@
+---
+title: HAWQ Ranger Kerberos Integration
+---
+
+
+
+When you have enabled Ranger Authorization for HAWQ, your HAWQ 
installation includes the Ranger Administrative UI and HAWQ Ranger Plug-in 
Service.
+
+Specific HAWQ Ranger configuration is required when Kerberos 
authentication is enabled for HAWQ or for Ranger. You must configure Kerberos 
support for:
+
+- HAWQ resource lookup by the Ranger Administration host during HAWQ 
policy definition
+- HAWQ Ranger Plug-in Service communication with the Ranger Administration 
host for policy refresh
+
+Use the following procedures to configure Kerberos support for your 
Ranger-authorized HAWQ cluster.
+
+## Prerequisites 
+
+Before you configure Kerberos for your Ranger-authorized HAWQ cluster, 
ensure that you have:
+
+- Installed Java 1.7.0\_17 or later on all nodes in your cluster. Java 
1.7.0_17 is required to use Kerberos-authenticated JDBC on Red Hat Enterprise 
Linux 6.x or 7.x.
+- (Non-OpenJDK Java installations) Installed the Java Cryptography 
Extension (JCE) on all nodes in your cluster. 
+- If you manage your cluster with Ambari, you installed the JCE on 
each node before you enabled Kerberos with the Ambari **Kerberos Security 
Wizard**. 
+- If you manage your cluster from the command line, you must manually 
install the extension on these systems.
+- Noted the host name or IP address of your Ranger Administration host 
(\) and HAWQ master (\) nodes.
+- Identified an existing Kerberos Key Distribution Center (KDC) or set up 
your KDC as described in [Install and Configure a Kerberos KDC 
Server](../clientaccess/kerberos.html#task_setup_kdc).
+- Note the host name or IP address of your KDC (\).
+- Note the name of the Kerberos \ in which your cluster 
resides.
+- Enabled Ranger Authorization for HAWQ. See [Configuring HAWQ to use 
Ranger Policy Management](ranger-integration-config.html).
+
+
+## Configure Ranger for Kerberized HAWQ
+
+When you define HAWQ Ranger authorization policies, the Ranger 
Administration Host uses JDBC to connect to HAWQ during policy definition to 
look up policy resource names. When Kerberos user authentication is enabled for 
HAWQ, you must configure this connection for Kerberos.
+
+To configure Ranger access to a HAWQ cluster enabled with Kerberos user 
authentication, you must:
+
+- Identify an existing HAWQ administrative role or create a new HAWQ 
administrative role for Ranger lookup of HAWQ resources
+- Create a Kerberos principal for the lookup role
+- Update the Ranger HAWQ service definition
+
+### Procedure 
+
+Perform the following procedure to enable the Ranger Administration Host 
to look up resources in your kerberized HAWQ cluster. You will perform 
operations on the HAWQ \, \, and \ 
nodes.
+
+1. Log in to the HAWQ master node and set up your environment:
+
+``` shell
+$ ssh gpadmin@<master>
+gpadmin@master$ . /usr/local/hawq/greenplum_path.sh
+```
+
+2. Identify an existing HAWQ administrative role or create a new HAWQ 
administrative role for Ranger resource lookup. For example, to create a new 
administrative role:
+
+``` shell
+gpadmin@master$ psql -c 'CREATE ROLE "rangerlookup_hawq" with LOGIN 
SUPERUSER;' 
+```
+   
+You may choose a different name for the Ranger lookup role.
+
+3. Log in to the KDC server system and generate a principal for the HAWQ 
`rangerlookup_hawq` role. Substitute your Kerberos \. For example:
+
+``` shell
+$ ssh root@<kdc-server>
+root@kdc-server$ kadmin.local -q "addprinc -pw changeme 
rangerlookup_hawq@REALM.DOMAIN"
+```
+
+You do not need to generate a keytab file for the `rangerlookup_hawq` 
principal because you will provide the password in the HAWQ service definition 
of the Ranger Admin UI.
+
+4. Start the Ranger Admin UI in a supported web browser. The default URL 
is \:6080. 
+
+5. Locate the HAWQ service definition and click the **Edit** button. 
+
+6. Update the applicable **Config Properties** fields:
+
+**HAWQ User Name*** : Enter the HAWQ Ranger lookup role you identified 
or created in Step 2 above.  
+**HAWQ User 

[jira] [Commented] (HAWQ-1497) docs - refactor the kerberos sections

2017-07-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16086990#comment-16086990
 ] 

ASF GitHub Bot commented on HAWQ-1497:
--

Github user radarwave commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/127#discussion_r127403320
  
--- Diff: markdown/clientaccess/disable-kerberos.html.md.erb ---
@@ -21,43 +21,49 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Follow these steps to disable Kerberos security for HAWQ and PXF for 
manual installations.
+HAWQ supports Kerberos at both the HDFS and/or user authentication levels. 
You will perform different disable procedures for each.
 
-**Note:** If you install or manage your cluster using Ambari, then the 
HAWQ Ambari plug-in automatically disables security for HAWQ and PXF when you 
disable security for Hadoop. The following instructions are only necessary for 
manual installations, or when Hadoop security is disabled outside of Ambari.
 
-1.  Disable Kerberos on the Hadoop cluster on which you use HAWQ.
-2.  Disable security for HAWQ:
-1.  Login to the HAWQ database master server as the `gpadmin` user:
+## Disable Kerberized HDFS for HAWQ/PXF
 
-``` bash
-$ ssh hawq_master_fqdn
-```
+You will perform different procedures to disable HAWQ/PXF access to a 
previously-kerberized HDFS depending upon whether you manage your cluster from 
the command line or use Ambari to manage your cluster.
+
+### Procedure for Ambari-Managed 
Clusters
+
+If you manage your cluster using Ambari, you will disable Kerberos 
authentication for your cluster as described in the [How To Disable 
Kerberos](https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-user-guide/content/how_to_disable_kerberos.html)
 Hortonworks documentation. Ambari will guide you through the de-kerberization 
process, including removing/updating any authentication-related configuration 
in your cluster.
 
-2.  Run the following command to set up HAWQ environment variables:
+### Procedure for 
Command-Line-Managed Clusters
+
+If you manage your cluster from the command line, follow these 
instructions to disable HDFS Kerberos security for HAWQ and PXF.
+
+1.  Disable Kerberos on the Hadoop cluster on which you use HAWQ.
+2.  Disable security for HAWQ:
+1.  Login to the HAWQ database master server as the `gpadmin` user and 
set up your HAWQ environment:
 
 ``` bash
-$ source /usr/local/hawq/greenplum_path.sh
+$ ssh gpadmin@<master>
+gpadmin@master$ . /usr/local/hawq/greenplum_path.sh
 ```
 
-3.  Start HAWQ if necessary:
+2.  Start HAWQ if necessary:
 
 ``` bash
-$ hawq start -a
+gpadmin@master$ hawq start cluster -a
 ```
 
-4.  Run the following command to disable security:
+3.  Update HAWQ configuration to disable security:
 
 ``` bash
-$ hawq config --masteronly -c enable_secure_filesystem -v “off”
+gpadmin@master$ hawq config --masteronly -c 
enable_secure_filesystem -v “off”
--- End diff --

'--masteronly' is deprecated. We should not add this option.
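
For reference, the same step without the deprecated flag might read as follows (a sketch following the reviewer's suggestion; `hawq config -c <param> -v <value>` is the general form):

``` bash
gpadmin@master$ hawq config -c enable_secure_filesystem -v off
```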


> docs - refactor the kerberos sections
> -
>
> Key: HAWQ-1497
> URL: https://issues.apache.org/jira/browse/HAWQ-1497
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> The Kerberos docs do not really distinguish between enabling Kerberos at the
> HDFS filesystem level vs. enabling Kerberos user authentication for HAWQ.
> They are also missing content for configuring HAWQ/PXF for secure HDFS.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1497) docs - refactor the kerberos sections

2017-07-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16086989#comment-16086989
 ] 

ASF GitHub Bot commented on HAWQ-1497:
--

Github user radarwave commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/127#discussion_r127402619
  
--- Diff: markdown/clientaccess/disable-kerberos.html.md.erb ---
@@ -21,43 +21,49 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Follow these steps to disable Kerberos security for HAWQ and PXF for 
manual installations.
+HAWQ supports Kerberos at both the HDFS and/or user authentication levels. 
You will perform different disable procedures for each.
 
-**Note:** If you install or manage your cluster using Ambari, then the 
HAWQ Ambari plug-in automatically disables security for HAWQ and PXF when you 
disable security for Hadoop. The following instructions are only necessary for 
manual installations, or when Hadoop security is disabled outside of Ambari.
 
-1.  Disable Kerberos on the Hadoop cluster on which you use HAWQ.
-2.  Disable security for HAWQ:
-1.  Login to the HAWQ database master server as the `gpadmin` user:
+## Disable Kerberized HDFS for HAWQ/PXF
 
-``` bash
-$ ssh hawq_master_fqdn
-```
+You will perform different procedures to disable HAWQ/PXF access to a 
previously-kerberized HDFS depending upon whether you manage your cluster from 
the command line or use Ambari to manage your cluster.
+
+### Procedure for Ambari-Managed 
Clusters
+
+If you manage your cluster using Ambari, you will disable Kerberos 
authentication for your cluster as described in the [How To Disable 
Kerberos](https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-user-guide/content/how_to_disable_kerberos.html)
 Hortonworks documentation. Ambari will guide you through the de-kerberization 
process, including removing/updating any authentication-related configuration 
in your cluster.
 
-2.  Run the following command to set up HAWQ environment variables:
+### Procedure for 
Command-Line-Managed Clusters
+
+If you manage your cluster from the command line, follow these 
instructions to disable HDFS Kerberos security for HAWQ and PXF.
+
+1.  Disable Kerberos on the Hadoop cluster on which you use HAWQ.
+2.  Disable security for HAWQ:
+1.  Login to the HAWQ database master server as the `gpadmin` user and 
set up your HAWQ environment:
 
 ``` bash
-$ source /usr/local/hawq/greenplum_path.sh
+$ ssh gpadmin@<master>
+gpadmin@master$ . /usr/local/hawq/greenplum_path.sh
 ```
 
-3.  Start HAWQ if necessary:
+2.  Start HAWQ if necessary:
 
 ``` bash
-$ hawq start -a
+gpadmin@master$ hawq start cluster -a
--- End diff --

'hawq start cluster' does not need the '-a' option.
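
That is, the step could simply read:

``` bash
gpadmin@master$ hawq start cluster
```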


> docs - refactor the kerberos sections
> -
>
> Key: HAWQ-1497
> URL: https://issues.apache.org/jira/browse/HAWQ-1497
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> The Kerberos docs do not really distinguish between enabling Kerberos at the
> HDFS filesystem level vs. enabling Kerberos user authentication for HAWQ.
> They are also missing content for configuring HAWQ/PXF for secure HDFS.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1479) document hawq/ranger kerberos support

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16085733#comment-16085733
 ] 

ASF GitHub Bot commented on HAWQ-1479:
--

Github user lisakowen commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/128#discussion_r127225477
  
--- Diff: markdown/ranger/ranger-kerberos.html.md.erb ---
@@ -0,0 +1,209 @@
+---
+title: HAWQ Ranger Kerberos Integration
+---
+
+
+
+When you have enabled Ranger Authorization for HAWQ, your HAWQ 
installation includes the Ranger Administrative UI and HAWQ Ranger Plug-in 
Service.
+
+Specific HAWQ Ranger configuration is required when Kerberos 
authentication is enabled for HAWQ or for Ranger. You must configure Kerberos 
support for:
+
+- HAWQ resource lookup by the Ranger Administration host during HAWQ 
policy definition
+- HAWQ Ranger Plug-in Service communication with the Ranger Administration 
host for policy refresh
+
+Use the following procedures to configure Kerberos support for your 
Ranger-authorized HAWQ cluster.
+
+## Prerequisites 
+
+Before you configure Kerberos for your Ranger-authorized HAWQ cluster, 
ensure that you have:
+
+- Installed Java 1.7.0\_17 or later on all nodes in your cluster. Java 
1.7.0_17 is required to use Kerberos-authenticated JDBC on Red Hat Enterprise 
Linux 6.x or 7.x.
+- (Non-OpenJDK Java installations) Installed the Java Cryptography 
Extension (JCE) on all nodes in your cluster. 
+- If you manage your cluster with Ambari, you installed the JCE on 
each node before you enabled Kerberos with the Ambari **Kerberos Security 
Wizard**. 
+- If you manage your cluster from the command line, you must manually 
install the extension on these systems.
+- Noted the host name or IP address of your Ranger Administration host 
(\<ranger-admin-node\>) and HAWQ master (\<master\>) nodes.
+- Identified an existing Kerberos Key Distribution Center (KDC) or set up 
your KDC as described in [Install and Configure a Kerberos KDC 
Server](../clientaccess/kerberos.html#task_setup_kdc).
+- Noted the host name or IP address of your KDC (\<kdc-server\>).
+- Noted the name of the Kerberos \<realm\> in which your cluster 
resides.
+- Enabled Ranger Authorization for HAWQ. See [Configuring HAWQ to use 
Ranger Policy Management](ranger-integration-config.html).
+
+
+## Configure Ranger for Kerberized HAWQ
+
+When you define HAWQ Ranger authorization policies, the Ranger 
Administration Host uses JDBC to connect to HAWQ during policy definition to 
look up policy resource names. When Kerberos user authentication is enabled for 
HAWQ, you must configure this connection for Kerberos.
+
+To configure Ranger access to a HAWQ cluster enabled with Kerberos user 
authentication, you must:
+
+- Identify an existing HAWQ administrative role or create a new HAWQ 
administrative role for Ranger lookup of HAWQ resources
+- Create a Kerberos principal for the lookup role
+- Update the Ranger HAWQ service definition
+
+### Procedure 
+
+Perform the following procedure to enable the Ranger Administration Host 
to look up resources in your kerberized HAWQ cluster. You will perform 
operations on the HAWQ \<master\>, \<ranger-admin-node\>, and \<kdc-server\> 
nodes.
+
+1. Log in to the HAWQ master node and set up your environment:
+
+``` shell
+$ ssh gpadmin@<master>
+gpadmin@master$ . /usr/local/hawq/greenplum_path.sh
+```
+
+2. Identify an existing HAWQ administrative role or create a new HAWQ 
administrative role for Ranger resource lookup. For example, to create a new 
administrative role:
+
+``` shell
+gpadmin@master$ psql -c 'CREATE ROLE "rangerlookup_hawq" with LOGIN 
SUPERUSER;' 
+```
+   
+You may choose a different name for the Ranger lookup role.
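
If you prefer to reuse an existing administrative role instead, one way to 
list candidate superuser roles is a quick catalog query (a sketch; `pg_roles` 
is the standard PostgreSQL catalog view that HAWQ inherits):

``` shell
gpadmin@master$ psql -c "SELECT rolname FROM pg_roles WHERE rolsuper;"
```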
+
+3. Log in to the KDC server system and generate a principal for the HAWQ 
`rangerlookup_hawq` role. Substitute your Kerberos \<realm\>. For example:
+
+``` shell
+$ ssh root@<kdc-server>
+root@kdc-server$ kadmin.local -q "addprinc -pw changeme 
rangerlookup_hawq@REALM.DOMAIN"
+```
+
+You do not need to generate a keytab file for the `rangerlookup_hawq` 
principal because you will provide the password in the HAWQ service definition 
of the Ranger Admin UI.
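
To confirm the principal exists before moving on, a standard `kadmin.local` 
listing works (a sketch):

``` shell
root@kdc-server$ kadmin.local -q "listprincs" | grep rangerlookup_hawq
```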
+
+4. Start the Ranger Admin UI in a supported web browser. The default URL 
is \<ranger-admin-node\>:6080. 
+
+5. Locate the HAWQ service definition and click the **Edit** button. 
+
+6. Update the applicable **Config Properties** fields:
+
+**HAWQ User Name*** : Enter the HAWQ Ranger lookup role you identified 
or created in Step 2 above.  
+**HAWQ User 

[jira] [Commented] (HAWQ-1479) document hawq/ranger kerberos support

2017-07-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085361#comment-16085361
 ] 

ASF GitHub Bot commented on HAWQ-1479:
--

Github user interma commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/128#discussion_r127157041
  

[jira] [Commented] (HAWQ-1479) document hawq/ranger kerberos support

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084738#comment-16084738
 ] 

ASF GitHub Bot commented on HAWQ-1479:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/128#discussion_r127074846
  

[jira] [Commented] (HAWQ-1479) document hawq/ranger kerberos support

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084741#comment-16084741
 ] 

ASF GitHub Bot commented on HAWQ-1479:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/128#discussion_r127072711
  
+2. Identify an existing or create a new HAWQ administrative role for 
Ranger resource lookup. For example, to create a new administrative role:
--- End diff --

Small edit:  Identify an existing **HAWQ administrative role** or create a 
new ...


> document hawq/ranger kerberos support
> -
>
> Key: HAWQ-1479
> URL: https://issues.apache.org/jira/browse/HAWQ-1479
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> add some doc content addressing hawq/ranger/rps kerberos config and any other 
> considerations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1479) document hawq/ranger kerberos support

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084742#comment-16084742
 ] 

ASF GitHub Bot commented on HAWQ-1479:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/128#discussion_r127073559
  

[jira] [Commented] (HAWQ-1479) document hawq/ranger kerberos support

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084736#comment-16084736
 ] 

ASF GitHub Bot commented on HAWQ-1479:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/128#discussion_r127073872
  
+**HAWQ User Name*** - The HAWQ Ranger lookup role you identified or 
created in Step 2 above.  
--- End diff --

I think for consistency we should 

[jira] [Commented] (HAWQ-1479) document hawq/ranger kerberos support

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084737#comment-16084737
 ] 

ASF GitHub Bot commented on HAWQ-1479:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/128#discussion_r127073604
  

[jira] [Commented] (HAWQ-1479) document hawq/ranger kerberos support

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084739#comment-16084739
 ] 

ASF GitHub Bot commented on HAWQ-1479:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/128#discussion_r127075938
  

[jira] [Commented] (HAWQ-1479) document hawq/ranger kerberos support

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084744#comment-16084744
 ] 

ASF GitHub Bot commented on HAWQ-1479:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/128#discussion_r127078185
  

[jira] [Commented] (HAWQ-1479) document hawq/ranger kerberos support

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084745#comment-16084745
 ] 

ASF GitHub Bot commented on HAWQ-1479:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/128#discussion_r127074583
  

[jira] [Commented] (HAWQ-1479) document hawq/ranger kerberos support

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084740#comment-16084740
 ] 

ASF GitHub Bot commented on HAWQ-1479:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/128#discussion_r127075036
  

[jira] [Commented] (HAWQ-1479) document hawq/ranger kerberos support

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084743#comment-16084743
 ] 

ASF GitHub Bot commented on HAWQ-1479:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/128#discussion_r127075166
  

[jira] [Commented] (HAWQ-1479) document hawq/ranger kerberos support

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084735#comment-16084735
 ] 

ASF GitHub Bot commented on HAWQ-1479:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/128#discussion_r127073500
  
+5. Locate the HAWQ service definition and press the **Edit** button. 
--- End diff --

press -> click


> document hawq/ranger kerberos support
> -
>
> Key: HAWQ-1479
> URL: https://issues.apache.org/jira/browse/HAWQ-1479
>   

[jira] [Commented] (HAWQ-1491) docs - add usage info for HiveVectorizedORC profile

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16067457#comment-16067457
 ] 

ASF GitHub Bot commented on HAWQ-1491:
--

Github user asfgit closed the pull request at:

https://github.com/apache/incubator-hawq-docs/pull/126


> docs - add usage info for HiveVectorizedORC profile
> ---
>
> Key: HAWQ-1491
> URL: https://issues.apache.org/jira/browse/HAWQ-1491
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> add usage info and an example for the new HiveVectorizedORC profile to the 
> Hive plug-in page.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1491) docs - add usage info for HiveVectorizedORC profile

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066931#comment-16066931
 ] 

ASF GitHub Bot commented on HAWQ-1491:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/126#discussion_r124606922
  
--- Diff: markdown/pxf/HivePXF.html.md.erb ---
@@ -565,6 +577,44 @@ In the following example, you will create a Hive table 
stored in ORC format and
 Time: 425.416 ms
 ```
 
+### Example: Using the HiveVectorizedORC 
Profile
+
+In the following example, you will use the `HiveVectorizedORC` profile to 
query the `sales_info_ORC` Hive table you created in the previous example.
+
+**Note**: The `HiveVectorizedORC` profile does not support the timestamp 
data type and complex types.
--- End diff --

Just to avoid any potential confusion, let's change this to "**or** complex types."


> docs - add usage info for HiveVectorizedORC profile
> ---
>
> Key: HAWQ-1491
> URL: https://issues.apache.org/jira/browse/HAWQ-1491
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> add usage info and an example for the new HiveVectorizedORC profile to the 
> Hive plug-in page.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1435) docs - add usage info for pxf jdbc plug-in

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066788#comment-16066788
 ] 

ASF GitHub Bot commented on HAWQ-1435:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/124#discussion_r124581749
  
--- Diff: markdown/pxf/JdbcPXF.html.md.erb ---
@@ -0,0 +1,213 @@
+---
+title: Accessing External SQL Databases with JDBC (Beta)
+---
+
+
+
+Some of your data may already reside in an external SQL database. The PXF JDBC plug-in reads data stored in SQL databases including MySQL, Oracle, PostgreSQL, and Hive.
+
+This section describes how to use PXF with JDBC, including an example of 
creating and querying an external table that accesses data in a MySQL database 
table.
+
+## Prerequisites
+
+Before accessing external SQL databases using HAWQ and PXF, ensure that:
+
+-   The JDBC plug-in is installed on all cluster nodes. See [Installing 
PXF Plug-ins](InstallPXFPlugins.html) for PXF plug-in installation information.
+-   The JDBC driver JAR files for the external SQL database are installed 
on all cluster nodes.
+-   The file locations of external SQL database JDBC JAR files are added to `pxf-public.classpath`. If you manage your HAWQ cluster with Ambari, add the JARs via the Ambari UI. If you manage your cluster from the command line, edit the `/etc/pxf/conf/pxf-public.classpath` file directly.
+
+
+## Querying External SQL Data
+The PXF JDBC plug-in supports a single profile named `Jdbc`.
+
+Use the following syntax to create a HAWQ external table representing external SQL database tables you access via JDBC:
+
+``` sql
+CREATE [READABLE | WRITABLE] EXTERNAL TABLE <table-name>
+    ( <column-name> <data-type> [, ...] | LIKE <other-table> )
+LOCATION ('pxf://<host>[:<port>]/<schema-name>.<database-name>.<table-name>
+    ?PROFILE=Jdbc[&<custom-option>=<value>[...]]')
+FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
+```
+
+JDBC-plug-in-specific keywords and values used in the [CREATE EXTERNAL 
TABLE](../reference/sql/CREATE-EXTERNAL-TABLE.html) call are described in the 
table below.
+
+| Keyword  | Value |
+|---|-|
+| \<column-name\> | The name of the PXF external table column. The PXF \<column-name\> must exactly match the \<column-name\> used in the external SQL table. |
+| \<data-type\> | The data type of the PXF external table column. The PXF \<data-type\> must be equivalent to the data type used for \<column-name\> in the SQL table. |
+| \<host\> | The PXF host. While \<host\> may identify any PXF agent node, use the HDFS NameNode as it is guaranteed to be available in a running HDFS cluster. If HDFS High Availability is enabled, \<host\> must identify the HDFS NameService. |
+| \<port\> | The PXF port. If \<port\> is omitted, PXF assumes \<host\> identifies a High Availability HDFS Nameservice and connects to the port number designated by the `pxf_service_port` server configuration parameter value. Default is 51200. |
+| \<schema-name\> | The schema name. The default schema name is `default`. |
+| \<database-name\> | The database name. The default database name is determined by the external SQL server. |
+| \<table-name\> | The table name. |
+| PROFILE | The `PROFILE` keyword must specify `Jdbc`. |
+| \<custom-option\> | The custom options supported by the `Jdbc` profile are discussed later in this section. |
+| FORMAT 'CUSTOM' | The JDBC `CUSTOM` `FORMAT` supports only the built-in `'pxfwritable_import'` `FORMATTER` property. |
+
+*Note*: When creating PXF external tables, you cannot use the `HEADER` 
option in your `FORMAT` specification.
--- End diff --

Extremely minor, but "Note:" here should be bolded:  **Note:**
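A minimal sketch of what such a table definition looks like in practice, using the syntax and options from this page (the host, database, and credential values here are illustrative assumptions, not taken from the PR):

``` sql
CREATE EXTERNAL TABLE sales_ext (id int, amount float8)
LOCATION ('pxf://namenode:51200/default.salesdb.sales
    ?PROFILE=Jdbc&JDBC_DRIVER=com.mysql.jdbc.Driver
    &DB_URL=jdbc:mysql://mysqlhost:3306/salesdb&USER=hawq&PASS=changeme')
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
```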


> docs - add usage info for pxf jdbc plug-in
> --
>
> Key: HAWQ-1435
> URL: https://issues.apache.org/jira/browse/HAWQ-1435
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> create usage info for the new jdbc plug-in.  there is some good info in the 
> pxf-jdbc README.md. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1435) docs - add usage info for pxf jdbc plug-in

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066790#comment-16066790
 ] 

ASF GitHub Bot commented on HAWQ-1435:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/124#discussion_r124582363
  
--- Diff: markdown/pxf/JdbcPXF.html.md.erb ---
@@ -0,0 +1,213 @@
+---
+title: Accessing External SQL Databases with JDBC (Beta)
+---
+
+
+
+Some of your data may already reside in an external SQL database. The PXF JDBC plug-in reads data stored in SQL databases including MySQL, Oracle, PostgreSQL, and Hive.
+
+This section describes how to use PXF with JDBC, including an example of 
creating and querying an external table that accesses data in a MySQL database 
table.
+
+## Prerequisites
+
+Before accessing external SQL databases using HAWQ and PXF, ensure that:
+
+-   The JDBC plug-in is installed on all cluster nodes. See [Installing 
PXF Plug-ins](InstallPXFPlugins.html) for PXF plug-in installation information.
+-   The JDBC driver JAR files for the external SQL database are installed 
on all cluster nodes.
+-   The file locations of external SQL database JDBC JAR files are added to `pxf-public.classpath`. If you manage your HAWQ cluster with Ambari, add the JARs via the Ambari UI. If you manage your cluster from the command line, edit the `/etc/pxf/conf/pxf-public.classpath` file directly.
+
+
+## Querying External SQL Data
+The PXF JDBC plug-in supports a single profile named `Jdbc`.
+
+Use the following syntax to create a HAWQ external table representing external SQL database tables you access via JDBC:
+
+``` sql
+CREATE [READABLE | WRITABLE] EXTERNAL TABLE <table-name>
+    ( <column-name> <data-type> [, ...] | LIKE <other-table> )
+LOCATION ('pxf://<host>[:<port>]/<schema-name>.<database-name>.<table-name>
+    ?PROFILE=Jdbc[&<custom-option>=<value>[...]]')
+FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
+```
+
+JDBC-plug-in-specific keywords and values used in the [CREATE EXTERNAL 
TABLE](../reference/sql/CREATE-EXTERNAL-TABLE.html) call are described in the 
table below.
+
+| Keyword  | Value |
+|---|-|
+| \<column-name\> | The name of the PXF external table column. The PXF \<column-name\> must exactly match the \<column-name\> used in the external SQL table. |
+| \<data-type\> | The data type of the PXF external table column. The PXF \<data-type\> must be equivalent to the data type used for \<column-name\> in the SQL table. |
+| \<host\> | The PXF host. While \<host\> may identify any PXF agent node, use the HDFS NameNode as it is guaranteed to be available in a running HDFS cluster. If HDFS High Availability is enabled, \<host\> must identify the HDFS NameService. |
+| \<port\> | The PXF port. If \<port\> is omitted, PXF assumes \<host\> identifies a High Availability HDFS Nameservice and connects to the port number designated by the `pxf_service_port` server configuration parameter value. Default is 51200. |
+| \<schema-name\> | The schema name. The default schema name is `default`. |
+| \<database-name\> | The database name. The default database name is determined by the external SQL server. |
+| \<table-name\> | The table name. |
+| PROFILE | The `PROFILE` keyword must specify `Jdbc`. |
+| \<custom-option\> | The custom options supported by the `Jdbc` profile are discussed later in this section. |
+| FORMAT 'CUSTOM' | The JDBC `CUSTOM` `FORMAT` supports only the built-in `'pxfwritable_import'` `FORMATTER` property. |
+
+*Note*: When creating PXF external tables, you cannot use the `HEADER` 
option in your `FORMAT` specification.
+
+
+### JDBC Custom Options
+
+You may include one or more custom options in the `LOCATION` URI. Preface 
each option with an ampersand `&`. 
+
+The JDBC plug-in `Jdbc` profile supports the following \<custom-option\>s:
--- End diff --

Aren't most of these custom options required in order to set up a JDBC 
connection? If so, the docs should just indicate that up front, as otherwise it 
seems like these are all optional.


> docs - add usage info for pxf jdbc plug-in
> --
>
> Key: HAWQ-1435
> URL: https://issues.apache.org/jira/browse/HAWQ-1435
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> create usage info for the new jdbc plug-in.  there is some good info in the 
> pxf-jdbc README.md. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1435) docs - add usage info for pxf jdbc plug-in

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066789#comment-16066789
 ] 

ASF GitHub Bot commented on HAWQ-1435:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/124#discussion_r124582946
  
--- Diff: markdown/pxf/JdbcPXF.html.md.erb ---
@@ -0,0 +1,213 @@
+---
+title: Accessing External SQL Databases with JDBC (Beta)
+---
+
+
+
+Some of your data may already reside in an external SQL database. The PXF JDBC plug-in reads data stored in SQL databases including MySQL, Oracle, PostgreSQL, and Hive.
+
+This section describes how to use PXF with JDBC, including an example of 
creating and querying an external table that accesses data in a MySQL database 
table.
+
+## Prerequisites
+
+Before accessing external SQL databases using HAWQ and PXF, ensure that:
+
+-   The JDBC plug-in is installed on all cluster nodes. See [Installing 
PXF Plug-ins](InstallPXFPlugins.html) for PXF plug-in installation information.
+-   The JDBC driver JAR files for the external SQL database are installed 
on all cluster nodes.
+-   The file locations of external SQL database JDBC JAR files are added to `pxf-public.classpath`. If you manage your HAWQ cluster with Ambari, add the JARs via the Ambari UI. If you manage your cluster from the command line, edit the `/etc/pxf/conf/pxf-public.classpath` file directly.
+
+
+## Querying External SQL Data
+The PXF JDBC plug-in supports a single profile named `Jdbc`.
+
+Use the following syntax to create a HAWQ external table representing external SQL database tables you access via JDBC:
+
+``` sql
+CREATE [READABLE | WRITABLE] EXTERNAL TABLE <table-name>
+    ( <column-name> <data-type> [, ...] | LIKE <other-table> )
+LOCATION ('pxf://<host>[:<port>]/<schema-name>.<database-name>.<table-name>
+    ?PROFILE=Jdbc[&<custom-option>=<value>[...]]')
+FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
+```
+
+JDBC-plug-in-specific keywords and values used in the [CREATE EXTERNAL 
TABLE](../reference/sql/CREATE-EXTERNAL-TABLE.html) call are described in the 
table below.
+
+| Keyword  | Value |
+|---|-|
+| \<column-name\> | The name of the PXF external table column. The PXF \<column-name\> must exactly match the \<column-name\> used in the external SQL table. |
+| \<data-type\> | The data type of the PXF external table column. The PXF \<data-type\> must be equivalent to the data type used for \<column-name\> in the SQL table. |
+| \<host\> | The PXF host. While \<host\> may identify any PXF agent node, use the HDFS NameNode as it is guaranteed to be available in a running HDFS cluster. If HDFS High Availability is enabled, \<host\> must identify the HDFS NameService. |
+| \<port\> | The PXF port. If \<port\> is omitted, PXF assumes \<host\> identifies a High Availability HDFS Nameservice and connects to the port number designated by the `pxf_service_port` server configuration parameter value. Default is 51200. |
+| \<schema-name\> | The schema name. The default schema name is `default`. |
+| \<database-name\> | The database name. The default database name is determined by the external SQL server. |
+| \<table-name\> | The table name. |
+| PROFILE | The `PROFILE` keyword must specify `Jdbc`. |
+| \<custom-option\> | The custom options supported by the `Jdbc` profile are discussed later in this section. |
+| FORMAT 'CUSTOM' | The JDBC `CUSTOM` `FORMAT` supports only the built-in `'pxfwritable_import'` `FORMATTER` property. |
+
+*Note*: When creating PXF external tables, you cannot use the `HEADER` 
option in your `FORMAT` specification.
+
+
+### JDBC Custom Options
+
+You may include one or more custom options in the `LOCATION` URI. Preface 
each option with an ampersand `&`. 
+
+The JDBC plug-in `Jdbc` profile supports the following \<custom-option\>s:
+
+| Option Name   | Description |
+|---|---|
+| JDBC_DRIVER | The JDBC driver class name. |
+| DB_URL | The URL to the database; includes the hostname, port, and database name. |
+| USER | The database user name. |
+| PASS | The database password for USER. |
+| PARTITION_BY | The partition column, \<column-name\>:\<column-type\>. The JDBC plug-in supports `date`, `int`, and `enum` \<column-type\>s. Use the `yyyy-MM-dd` format for the `date` \<column-type\>. A null `PARTITION_BY` defaults to a single fragment. |
+| RANGE | The query range, \<start-value\>[:\<end-value\>]. \<end-value\> may be empty for an `int` \<column-type\>. |
--- End diff --

Should probably clarify that RANGE and INTERVAL are only used with 
PARTITION_BY?
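A sketch of how these options combine, partitioning a `date` column into one fragment per month (all names and values are illustrative assumptions):

``` sql
CREATE EXTERNAL TABLE orders_ext (id int, created date, total float8)
LOCATION ('pxf://namenode:51200/default.ordersdb.orders
    ?PROFILE=Jdbc&JDBC_DRIVER=com.mysql.jdbc.Driver
    &DB_URL=jdbc:mysql://mysqlhost:3306/ordersdb
    &PARTITION_BY=created:date&RANGE=2017-01-01:2018-01-01&INTERVAL=1:month')
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
```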


> docs - add usage info for pxf jdbc plug-in

[jira] [Commented] (HAWQ-1435) docs - add usage info for pxf jdbc plug-in

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16066791#comment-16066791
 ] 

ASF GitHub Bot commented on HAWQ-1435:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/124#discussion_r124583252
  
--- Diff: markdown/pxf/JdbcPXF.html.md.erb ---
@@ -0,0 +1,213 @@
+---
+title: Accessing External SQL Databases with JDBC (Beta)
+---
+
+
+
+Some of your data may already reside in an external SQL database. The PXF JDBC plug-in reads data stored in SQL databases including MySQL, Oracle, PostgreSQL, and Hive.
+
+This section describes how to use PXF with JDBC, including an example of 
creating and querying an external table that accesses data in a MySQL database 
table.
+
+## Prerequisites
+
+Before accessing external SQL databases using HAWQ and PXF, ensure that:
+
+-   The JDBC plug-in is installed on all cluster nodes. See [Installing 
PXF Plug-ins](InstallPXFPlugins.html) for PXF plug-in installation information.
+-   The JDBC driver JAR files for the external SQL database are installed 
on all cluster nodes.
+-   The file locations of external SQL database JDBC JAR files are added to `pxf-public.classpath`. If you manage your HAWQ cluster with Ambari, add the JARs via the Ambari UI. If you manage your cluster from the command line, edit the `/etc/pxf/conf/pxf-public.classpath` file directly.
+
+
+## Querying External SQL Data
+The PXF JDBC plug-in supports a single profile named `Jdbc`.
+
+Use the following syntax to create a HAWQ external table representing external SQL database tables you access via JDBC:
+
+``` sql
+CREATE [READABLE | WRITABLE] EXTERNAL TABLE <table-name>
+    ( <column-name> <data-type> [, ...] | LIKE <other-table> )
+LOCATION ('pxf://<host>[:<port>]/<schema-name>.<database-name>.<table-name>
+    ?PROFILE=Jdbc[&<custom-option>=<value>[...]]')
+FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
+```
+
+JDBC-plug-in-specific keywords and values used in the [CREATE EXTERNAL 
TABLE](../reference/sql/CREATE-EXTERNAL-TABLE.html) call are described in the 
table below.
+
+| Keyword  | Value |
+|---|-|
+| \<column-name\> | The name of the PXF external table column. The PXF \<column-name\> must exactly match the \<column-name\> used in the external SQL table. |
+| \<data-type\> | The data type of the PXF external table column. The PXF \<data-type\> must be equivalent to the data type used for \<column-name\> in the SQL table. |
+| \<host\> | The PXF host. While \<host\> may identify any PXF agent node, use the HDFS NameNode as it is guaranteed to be available in a running HDFS cluster. If HDFS High Availability is enabled, \<host\> must identify the HDFS NameService. |
+| \<port\> | The PXF port. If \<port\> is omitted, PXF assumes \<host\> identifies a High Availability HDFS Nameservice and connects to the port number designated by the `pxf_service_port` server configuration parameter value. Default is 51200. |
+| \<schema-name\> | The schema name. The default schema name is `default`. |
+| \<database-name\> | The database name. The default database name is determined by the external SQL server. |
+| \<table-name\> | The table name. |
+| PROFILE | The `PROFILE` keyword must specify `Jdbc`. |
+| \<custom-option\> | The custom options supported by the `Jdbc` profile are discussed later in this section. |
+| FORMAT 'CUSTOM' | The JDBC `CUSTOM` `FORMAT` supports only the built-in `'pxfwritable_import'` `FORMATTER` property. |
+
+*Note*: When creating PXF external tables, you cannot use the `HEADER` 
option in your `FORMAT` specification.
+
+
+### JDBC Custom Options
+
+You may include one or more custom options in the `LOCATION` URI. Preface 
each option with an ampersand `&`. 
+
+The JDBC plug-in `Jdbc` profile supports the following \<custom-option\>s:
+
+| Option Name   | Description |
+|---|---|
+| JDBC_DRIVER | The JDBC driver class name. |
+| DB_URL | The URL to the database; includes the hostname, port, and database name. |
+| USER | The database user name. |
+| PASS | The database password for USER. |
+| PARTITION_BY | The partition column, \<column-name\>:\<column-type\>. The JDBC plug-in supports `date`, `int`, and `enum` \<column-type\>s. Use the `yyyy-MM-dd` format for the `date` \<column-type\>. A null `PARTITION_BY` defaults to a single fragment. |
+| RANGE | The query range, \<start-value\>[:\<end-value\>]. \<end-value\> may be empty for an `int` \<column-type\>. |
+| INTERVAL | The interval, \<interval-num\>[:\<interval-unit\>], of one fragment. `INTERVAL` may be empty for an `enum` \<column-type\>. \<interval-unit\> may be 

[jira] [Commented] (HAWQ-1491) docs - add usage info for HiveVectorizedORC profile

2017-06-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16065669#comment-16065669
 ] 

ASF GitHub Bot commented on HAWQ-1491:
--

Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/126#discussion_r124423336
  
--- Diff: markdown/pxf/ReadWritePXF.html.md.erb ---
@@ -105,6 +105,18 @@ Note: The DELIMITER 
parameter is mandatory.
 org.apache.hawq.pxf.service.io.GPDBWritable
 
 
+
+HiveVectorizedORC
+Optimized block read of a Hive table where each partition is stored as 
an ORC file.
--- End diff --

People might confuse this with an HDFS block, so maybe we can use bulk/batch 
read instead.


> docs - add usage info for HiveVectorizedORC profile
> ---
>
> Key: HAWQ-1491
> URL: https://issues.apache.org/jira/browse/HAWQ-1491
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> add usage info and an example for the new HiveVectorizedORC profile to the 
> Hive plug-in page.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1491) docs - add usage info for HiveVectorizedORC profile

2017-06-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16065667#comment-16065667
 ] 

ASF GitHub Bot commented on HAWQ-1491:
--

Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/126#discussion_r124423164
  
--- Diff: markdown/pxf/HivePXF.html.md.erb ---
@@ -565,6 +577,44 @@ In the following example, you will create a Hive table 
stored in ORC format and
 Time: 425.416 ms
 ```
 
+### Example: Using the HiveVectorizedORC 
Profile
+
+In the following example, you will use the `HiveVectorizedORC` profile to 
query the `sales_info_ORC` Hive table you created in the previous example.
+
+**Note**: The `HiveVectorizedORC` profile does not support the timestamp 
data type and complex types.
+
+1. Start the `psql` subsystem:
+
+``` shell
+$ psql -d postgres
+```
+
+2. Use the PXF `HiveVectorizedORC` profile to create a queryable HAWQ 
external table from the Hive table named `sales_info_ORC` that you created in 
Step 1 of the previous example. The `FORMAT` clause must specify `'CUSTOM'`. 
The `HiveVectorizedORC` `CUSTOM` format supports only the built-in 
`'pxfwritable_import'` `formatter`.
--- End diff --

queryable - maybe readable?
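For context, the table definition the example describes would look roughly as follows (column names assumed from the earlier `sales_info` examples on the same page):

``` sql
CREATE EXTERNAL TABLE salesinfo_hivevectorc (location text, month text,
    number_of_orders int, total_sales float8)
LOCATION ('pxf://namenode:51200/default.sales_info_ORC?PROFILE=HiveVectorizedORC')
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
```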


> docs - add usage info for HiveVectorizedORC profile
> ---
>
> Key: HAWQ-1491
> URL: https://issues.apache.org/jira/browse/HAWQ-1491
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> add usage info and an example for the new HiveVectorizedORC profile to the 
> Hive plug-in page.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1491) docs - add usage info for HiveVectorizedORC profile

2017-06-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16065665#comment-16065665
 ] 

ASF GitHub Bot commented on HAWQ-1491:
--

Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/126#discussion_r124423088
  
--- Diff: markdown/pxf/HivePXF.html.md.erb ---
@@ -495,9 +500,16 @@ Use the `HiveORC` profile to access ORC format data. 
The `HiveORC` profile provi
 - `=`, `>`, `<`, `>=`, `<=`, `IS NULL`, and `IS NOT NULL` operators 
and comparisons between the `float8` and `float4` types
 - `IN` operator on arrays of `int2`, `int4`, `int8`, `boolean`, and 
`text`
 
-- Complex type support - You can access Hive tables composed of array, 
map, struct, and union data types. PXF serializes each of these complex types 
to `text`.
+When choosing an ORC-supporting profile, consider the following:
+
+- The `HiveORC` profile supports complex types. You can access Hive tables 
composed of array, map, struct, and union data types. PXF serializes each of 
these complex types to `text`.  
+
+The `HiveVectorizedORC` profile does not support complex types.
+
+- The `HiveVectorizedORC` profile reads 1024 rows of data, while the 
`HiveORC` profile reads only a single row at a time.
--- End diff --

profile reads 1024 rows of data at once


> docs - add usage info for HiveVectorizedORC profile
> ---
>
> Key: HAWQ-1491
> URL: https://issues.apache.org/jira/browse/HAWQ-1491
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> add usage info and an example for the new HiveVectorizedORC profile to the 
> Hive plug-in page.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1491) docs - add usage info for HiveVectorizedORC profile

2017-06-27 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16065576#comment-16065576
 ] 

ASF GitHub Bot commented on HAWQ-1491:
--

GitHub user lisakowen opened a pull request:

https://github.com/apache/incubator-hawq-docs/pull/126

HAWQ-1491 - create usage docs for HiveVectorizedORC profile

update hawq docs for new HiveVectorizedORC profile.
- add example to hive plug-in page
- include the profile and accessor/fragmenter/resolver classes in the 
appropriate tables in the other docs


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lisakowen/incubator-hawq-docs 
feature/pxf-hivevectorizedorc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq-docs/pull/126.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #126


commit dfd692cb355e9505e82669c311c413c06ae8518e
Author: Lisa Owen 
Date:   2017-06-27T19:56:45Z

HAWQ-1491 - create usage docs for HiveVectorizedORC profile




> docs - add usage info for HiveVectorizedORC profile
> ---
>
> Key: HAWQ-1491
> URL: https://issues.apache.org/jira/browse/HAWQ-1491
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> add usage info and an example for the new HiveVectorizedORC profile to the 
> Hive plug-in page.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1435) docs - add usage info for pxf jdbc plug-in

2017-06-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047191#comment-16047191
 ] 

ASF GitHub Bot commented on HAWQ-1435:
--

GitHub user lisakowen opened a pull request:

https://github.com/apache/incubator-hawq-docs/pull/124

HAWQ-1435 document new pxf jdbc plug-in

document the community-contributed PXF JDBC plug-in.  include a simple 
mysql example.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lisakowen/incubator-hawq-docs feature/pxf-jdbc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq-docs/pull/124.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #124


commit a008125b2864c3acbc3b630030cb614a5ea2679f
Author: Lisa Owen 
Date:   2017-04-19T00:13:57Z

document new pxf jdbc plug-in




> docs - add usage info for pxf jdbc plug-in
> --
>
> Key: HAWQ-1435
> URL: https://issues.apache.org/jira/browse/HAWQ-1435
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> create usage info for the new jdbc plug-in.  there is some good info in the 
> pxf-jdbc README.md. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1480) Packing a core file in hawq

2017-06-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045724#comment-16045724
 ] 

ASF GitHub Bot commented on HAWQ-1480:
--

GitHub user outofmem0ry opened a pull request:

https://github.com/apache/incubator-hawq-docs/pull/123

HAWQ-1480 - Added documentation for packcore utility



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/outofmem0ry/incubator-hawq-docs packcore-doc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq-docs/pull/123.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #123


commit de43867780a4f32592d7b16b13ed9253b52b8fe2
Author: Shubham Sharma 
Date:   2017-06-10T21:57:25Z

HAWQ-1480 - Added documentation for packcore utility




> Packing a core file in hawq
> ---
>
> Key: HAWQ-1480
> URL: https://issues.apache.org/jira/browse/HAWQ-1480
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Command Line Tools
>Reporter: Shubham Sharma
>Assignee: Radar Lei
>
> Currently there is no way to pack a core file with its context – 
> executable, application, and system shared libraries – in hawq. This information 
> can later be unpacked on another system and helps in debugging. It is a 
> useful feature to quickly gather all the data needed from a crash/core 
> generated on the system to analyze it later.
> Another open source project, greenplum, uses a script 
> [https://github.com/greenplum-db/gpdb/blob/master/gpMgmt/sbin/packcore] to 
> collect this information. Tested this script against Hawq's installation and 
> it collects the information needed for debugging.
> Can this be merged into Hawq? If yes, I can submit a pull request and test it.
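For a sense of what such a utility does, here is a minimal hand-rolled equivalent (illustrative only; this is not the packcore interface):

``` shell
CORE=/tmp/core.12345                      # hypothetical core file path
BIN=/usr/local/hawq/bin/postgres          # binary that produced the core
mkdir -p corepack && cp "$CORE" "$BIN" corepack/
# bundle every shared library the binary links against
for lib in $(ldd "$BIN" | awk '/=>/ {print $3}'); do
    [ -f "$lib" ] && cp "$lib" corepack/
done
tar czf corepack.tar.gz corepack
```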



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1473) document ranger plug-in service high availability

2017-06-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16033384#comment-16033384
 ] 

ASF GitHub Bot commented on HAWQ-1473:
--

Github user asfgit closed the pull request at:

https://github.com/apache/incubator-hawq-docs/pull/120


> document ranger plug-in service high availability
> -
>
> Key: HAWQ-1473
> URL: https://issues.apache.org/jira/browse/HAWQ-1473
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> add RPS high availability information to the docs.  include config info as 
> well as failover scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1473) document ranger plug-in service high availability

2017-05-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16032142#comment-16032142
 ] 

ASF GitHub Bot commented on HAWQ-1473:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/120#discussion_r119489636
  
--- Diff: markdown/ranger/ranger-integration-config.html.md.erb ---
@@ -129,6 +129,64 @@ Once the connection between HAWQ and Ranger is 
configured, you may choose to set
 5. Click **Save** to save your changes.
 6. Select **Service Actions > Restart All** and confirm that you want to 
restart the HAWQ cluster.
 
+## Step 3: (Optional) Register a Standby Ranger 
Plug-in Service
+
+The HAWQ Ranger Plug-in Service runs on the HAWQ master node. If this 
service goes down, all HAWQ database operations will fail. Configure a highly 
available HAWQ Ranger Plug-in Service to eliminate possible downtime should 
this situation occur.
+
+Ranger Admin high availability and HAWQ Ranger Plug-in Service high 
availability are independent; you can configure HAWQ Ranger Plug-in Service HA 
without configuring HA for Ranger Admin.
+
+### Prerequisites 
+
+Before you configure HAWQ Ranger authentication in high availability mode, 
ensure that you have:
+
+- Installed or upgraded to a version of HAWQ that includes support for 
HAWQ Ranger Authentication.
+
+- (Optional) Configured Ranger Admin for high availability.
+
+- Configured a HAWQ standby master node for your HAWQ cluster.
+
+You must configure a standby master for your HAWQ deployment before 
enabling HAWQ Ranger high availability mode. If you have not configured your 
HAWQ standby master, follow the instructions in [Adding a HAWQ Standby 
Master](../admin/ambari-admin.html#amb-add-standby) (if you manage your HAWQ 
cluster with Ambari) or [Configuring Master 
Mirroring](../admin/MasterMirroring.html#standby_master_configure) (for a 
command-line-managed HAWQ cluster).
+
+- Registered the HAWQ Ranger Plug-in Service on your HAWQ master node.
+
+The HAWQ Ranger Plug-in Service runs on the HAWQ master node. If you 
have not yet enabled the Ranger Plug-in Service, refer to [Install Ranger 
Connectivity to HAWQ](ranger-integration-config.html#jar) for registration 
instructions. (Optional) If you have configured Ranger Admin HA, make sure to 
identify the Ranger Admin HA proxy when you enable the plug-in.
+
+
+### Configuring the Standby Ranger Plug-in Service 
+
+The standby Ranger Plug-in Service runs on the HAWQ standby master node, using the same port number as the service on the master node. To enable HAWQ Ranger high availability, you must register the standby Ranger Plug-in Service on the standby master node, and then restart the standby.
--- End diff --

This paragraph should be merged with the intro paragraph directly under 
**Step 3**.  Right now it repeats some of that info.


> document ranger plug-in service high availability
> -
>
> Key: HAWQ-1473
> URL: https://issues.apache.org/jira/browse/HAWQ-1473
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> add RPS high availability information to the docs.  include config info as 
> well as failover scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1473) document ranger plug-in service high availability

2017-05-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16032138#comment-16032138
 ] 

ASF GitHub Bot commented on HAWQ-1473:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/120#discussion_r119487987
  
--- Diff: markdown/ranger/ranger-integration-config.html.md.erb ---
@@ -129,6 +129,64 @@ Once the connection between HAWQ and Ranger is 
configured, you may choose to set
 5. Click **Save** to save your changes.
 6. Select **Service Actions > Restart All** and confirm that you want to 
restart the HAWQ cluster.
 
+## Step 3: (Optional) Register a Standby Ranger 
Plug-in Service
+
+The HAWQ Ranger Plug-in Service runs on the HAWQ master node. If this 
service goes down, all HAWQ database operations will fail. Configure a highly 
available HAWQ Ranger Plug-in Service to eliminate possible downtime should 
this situation occur.
+
+Ranger Admin high availability and HAWQ Ranger Plug-in Service high 
availability are independent; you can configure HAWQ Ranger Plug-in Service HA 
without configuring HA for Ranger Admin.
--- End diff --

Might want to add that configuring HA for both is advised?  Also, this 
section uses the abbreviated term "Ranger Admin" throughout, which seems 
confusing to me.  Should probably stick with "Ranger Administration Host" to 
stay consistent with the first part of the doc.


> document ranger plug-in service high availability
> -
>
> Key: HAWQ-1473
> URL: https://issues.apache.org/jira/browse/HAWQ-1473
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> add RPS high availability information to the docs.  include config info as 
> well as failover scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1473) document ranger plug-in service high availability

2017-05-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16032139#comment-16032139
 ] 

ASF GitHub Bot commented on HAWQ-1473:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/120#discussion_r119488319
  
--- Diff: markdown/ranger/ranger-integration-config.html.md.erb ---
@@ -129,6 +129,64 @@ Once the connection between HAWQ and Ranger is 
configured, you may choose to set
 5. Click **Save** to save your changes.
 6. Select **Service Actions > Restart All** and confirm that you want to 
restart the HAWQ cluster.
 
+## Step 3: (Optional) Register a Standby Ranger 
Plug-in Service
+
+The HAWQ Ranger Plug-in Service runs on the HAWQ master node. If this 
service goes down, all HAWQ database operations will fail. Configure a highly 
available HAWQ Ranger Plug-in Service to eliminate possible downtime should 
this situation occur.
+
+Ranger Admin high availability and HAWQ Ranger Plug-in Service high 
availability are independent; you can configure HAWQ Ranger Plug-in Service HA 
without configuring HA for Ranger Admin.
+
+### Prerequisites 
+
+Before you configure HAWQ Ranger authentication in high availability mode, 
ensure that you have:
+
+- Installed or upgraded to a version of HAWQ that includes support for 
HAWQ Ranger Authentication.
--- End diff --

Wondering if this first bullet is really needed...


> document ranger plug-in service high availability
> -
>
> Key: HAWQ-1473
> URL: https://issues.apache.org/jira/browse/HAWQ-1473
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> add RPS high availability information to the docs.  include config info as 
> well as failover scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1473) document ranger plug-in service high availability

2017-05-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16032143#comment-16032143
 ] 

ASF GitHub Bot commented on HAWQ-1473:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/120#discussion_r119490263
  
--- Diff: markdown/ranger/ranger-integration-config.html.md.erb ---
@@ -129,6 +129,64 @@ Once the connection between HAWQ and Ranger is 
configured, you may choose to set
 5. Click **Save** to save your changes.
 6. Select **Service Actions > Restart All** and confirm that you want to 
restart the HAWQ cluster.
 
+## Step 3: (Optional) Register a Standby Ranger 
Plug-in Service
+
+The HAWQ Ranger Plug-in Service runs on the HAWQ master node. If this 
service goes down, all HAWQ database operations will fail. Configure a highly 
available HAWQ Ranger Plug-in Service to eliminate possible downtime should 
this situation occur.
+
+Ranger Admin high availability and HAWQ Ranger Plug-in Service high 
availability are independent; you can configure HAWQ Ranger Plug-in Service HA 
without configuring HA for Ranger Admin.
+
+### Prerequisites 
+
+Before you configure HAWQ Ranger authentication in high availability mode, 
ensure that you have:
+
+- Installed or upgraded to a version of HAWQ that includes support for 
HAWQ Ranger Authentication.
+
+- (Optional) Configured Ranger Admin for high availability.
+
+- Configured a HAWQ standby master node for your HAWQ cluster.
+
+You must configure a standby master for your HAWQ deployment before 
enabling HAWQ Ranger high availability mode. If you have not configured your 
HAWQ standby master, follow the instructions in [Adding a HAWQ Standby 
Master](../admin/ambari-admin.html#amb-add-standby) (if you manage your HAWQ 
cluster with Ambari) or [Configuring Master 
Mirroring](../admin/MasterMirroring.html#standby_master_configure) (for a 
command-line-managed HAWQ cluster).
+
+- Registered the HAWQ Ranger Plug-in Service on your HAWQ master node.
+
+The HAWQ Ranger Plug-in Service runs on the HAWQ master node. If you 
have not yet enabled the Ranger Plug-in Service, refer to [Install Ranger 
Connectivity to HAWQ](ranger-integration-config.html#jar) for registration 
instructions. (Optional) If you have configured Ranger Admin HA, make sure to 
identify the Ranger Admin HA proxy when you enable the plug-in.
+
+
+### Configuring the Standby Ranger Plug-in Service 
+
+The standby Ranger Plug-in Service runs on the HAWQ standby master node, using the same port number as the service on the master node. To enable HAWQ Ranger high availability, you must register the standby Ranger Plug-in Service on the standby master node, and then restart the standby.
+
+**Note**: If you configured and registered the master HAWQ Ranger Plug-in 
Service before you initialized your HAWQ standby master node, you do not need 
to perform the steps in this section.
--- End diff --

This note is confusing.  The last bullet under prerequisites specifically 
states that they should have already registered the master plug-in and standby 
master.  If it's true that setting up the master plug-in first and then 
configuring the standby master automatically registers the standby plug-in, 
then this note should probably appear much earlier (before the prerequisites). 


> document ranger plug-in service high availability
> -
>
> Key: HAWQ-1473
> URL: https://issues.apache.org/jira/browse/HAWQ-1473
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> add RPS high availability information to the docs.  include config info as 
> well as failover scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1473) document ranger plug-in service high availability

2017-05-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16032141#comment-16032141
 ] 

ASF GitHub Bot commented on HAWQ-1473:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/120#discussion_r119492058
  
--- Diff: markdown/reference/guc/parameter_definitions.html.md.erb ---
@@ -2147,6 +2149,14 @@ Identifies the port on which the HAWQ Ranger Plug-in 
Service runs. The `hawq_rps
 
|-|-|-|
 | valid port number | 8432 | master, reload |
 
+## hawq\_rps\_check\_local\_interval
+
+When HAWQ Ranger authentication high availability mode is enabled and the 
Ranger Plug-in Service is active on the standby master node, HAWQ attempts to 
switch back to the service located on the master node as soon as it becomes 
available. The HAWQ master periodically attempts to re-establish contact with 
the service on the local node, using `hawq_rps_check_local_interval` as the 
polling time interval (in seconds) for this contact.
--- End diff --

This wording is a bit unclear as to whether it's talking about just the 
standby ranger service being used (which is the intent) vs whether the entire 
master node is down and the standby plug-in is being used.  I think it would be 
better to repeat the wording used in `ranger-ha.html`, which starts "Should the 
HAWQ master node fail to communicate with the local Ranger Plug-in Service..."
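For reference, a sketch of tuning this parameter from the command line (the value is illustrative; a reload suffices because the parameter is classified master, reload):

``` shell
$ hawq config -c hawq_rps_check_local_interval -v 10
$ hawq stop cluster -u    # reload configuration without a full restart
```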


> document ranger plug-in service high availability
> -
>
> Key: HAWQ-1473
> URL: https://issues.apache.org/jira/browse/HAWQ-1473
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> add RPS high availability information to the docs.  include config info as 
> well as failover scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1473) document ranger plug-in service high availability

2017-05-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16032140#comment-16032140
 ] 

ASF GitHub Bot commented on HAWQ-1473:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/120#discussion_r119489263
  
--- Diff: markdown/ranger/ranger-integration-config.html.md.erb ---
@@ -129,6 +129,64 @@ Once the connection between HAWQ and Ranger is 
configured, you may choose to set
 5. Click **Save** to save your changes.
 6. Select **Service Actions > Restart All** and confirm that you want to 
restart the HAWQ cluster.
 
+## Step 3: (Optional) Register a Standby Ranger 
Plug-in Service
+
+The HAWQ Ranger Plug-in Service runs on the HAWQ master node. If this 
service goes down, all HAWQ database operations will fail. Configure a highly 
available HAWQ Ranger Plug-in Service to eliminate possible downtime should 
this situation occur.
+
+Ranger Admin high availability and HAWQ Ranger Plug-in Service high 
availability are independent; you can configure HAWQ Ranger Plug-in Service HA 
without configuring HA for Ranger Admin.
+
+### Prerequisites 
+
+Before you configure HAWQ Ranger authentication in high availability mode, 
ensure that you have:
+
+- Installed or upgraded to a version of HAWQ that includes support for 
HAWQ Ranger Authentication.
+
+- (Optional) Configured Ranger Admin for high availability.
+
+- Configured a HAWQ standby master node for your HAWQ cluster.
+
+You must configure a standby master for your HAWQ deployment before 
enabling HAWQ Ranger high availability mode. If you have not configured your 
HAWQ standby master, follow the instructions in [Adding a HAWQ Standby 
Master](../admin/ambari-admin.html#amb-add-standby) (if you manage your HAWQ 
cluster with Ambari) or [Configuring Master 
Mirroring](../admin/MasterMirroring.html#standby_master_configure) (for a 
command-line-managed HAWQ cluster).
+
+- Registered the HAWQ Ranger Plug-in Service on your HAWQ master node.
+
+The HAWQ Ranger Plug-in Service runs on the HAWQ master node. If you 
have not yet enabled the Ranger Plug-in Service, refer to [Install Ranger 
Connectivity to HAWQ](ranger-integration-config.html#jar) for registration 
instructions. (Optional) If you have configured Ranger Admin HA, make sure to 
identify the Ranger Admin HA proxy when you enable the plug-in.
+
+
+### Configuring the Standby Ranger Plug-in Service 
--- End diff --

This heading title should just be "Procedure".  


> document ranger plug-in service high availability
> -
>
> Key: HAWQ-1473
> URL: https://issues.apache.org/jira/browse/HAWQ-1473
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> add RPS high availability information to the docs.  include config info as 
> well as failover scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1474) docs - instructions to create/use a minimal psql client pkg

2017-05-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030289#comment-16030289
 ] 

ASF GitHub Bot commented on HAWQ-1474:
--

Github user asfgit closed the pull request at:

https://github.com/apache/incubator-hawq-docs/pull/121


> docs - instructions to create/use a minimal psql client pkg
> ---
>
> Key: HAWQ-1474
> URL: https://issues.apache.org/jira/browse/HAWQ-1474
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> add instructions to create, install, and use a minimal psql client pkg. One 
> would install this package on a compatible Linux client system outside of the HAWQ 
> cluster.
> this will basically provide instructions to:
> - package up the psql binary and libraries and an auto-generated 
> environment-setting file
> - install this on a client system
> - run the client
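A rough sketch of what such a package might contain, assuming a default /usr/local/hawq installation (all paths and names illustrative):

``` shell
# on a HAWQ node: collect the psql binary, its libraries, and an env file
mkdir -p hawq-psql-client/{bin,lib}
cp /usr/local/hawq/bin/psql hawq-psql-client/bin/
cp /usr/local/hawq/lib/libpq.so* hawq-psql-client/lib/
cat > hawq-psql-client/env.sh <<'EOF'
export PATH=$(pwd)/bin:$PATH
export LD_LIBRARY_PATH=$(pwd)/lib:$LD_LIBRARY_PATH
EOF
tar czf hawq-psql-client.tar.gz hawq-psql-client
```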



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1473) document ranger plug-in service high availability

2017-05-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027146#comment-16027146
 ] 

ASF GitHub Bot commented on HAWQ-1473:
--

Github user ictmalili commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/120#discussion_r118810915
  
--- Diff: markdown/admin/MasterMirroring.html.md.erb ---
@@ -103,6 +103,8 @@ Upon activation of the standby master, HAWQ 
reconstructs the state of the master
```

The newly-activated master's status should be **Active**. If you 
configured a new standby master, its status is **Passive**. When a standby 
master is not configured, the command displays `-No entries found`, a message 
indicating that no standby master instance is configured.
+   
+6. If you have enabled HAWQ Ranger Authentication in high availability 
mode in your cluster, you must manually update the Ranger HAWQ service 
definition to identify the new master node. Refer to [Failover to HAWQ Standby 
Master](../ranger/ranger-ha.html#rps_ha_cfg_masterfailover) in the HAWQ Ranger 
documentation for additional information.
 
 6. Query the `gp_segment_configuration` table to verify that segments have 
registered themselves to the new master:
--- End diff --

The index number for the next item should be incremented.
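For reference, the verification in question is a simple catalog query:

``` sql
SELECT * FROM gp_segment_configuration;
```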


> document ranger plug-in service high availability
> -
>
> Key: HAWQ-1473
> URL: https://issues.apache.org/jira/browse/HAWQ-1473
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> add RPS high availability information to the docs.  include config info as 
> well as failover scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1473) document ranger plug-in service high availability

2017-05-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027143#comment-16027143
 ] 

ASF GitHub Bot commented on HAWQ-1473:
--

Github user ictmalili commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/120#discussion_r118811043
  
--- Diff: markdown/ranger/ranger-integration-config.html.md.erb ---
@@ -129,6 +129,62 @@ Once the connection between HAWQ and Ranger is 
configured, you may choose to set
 5. Click **Save** to save your changes.
 6. Select **Service Actions > Restart All** and confirm that you want to 
restart the HAWQ cluster.
 
+## Step 3: (Optional) Register a Standby Ranger 
Plug-in Service
+
+The HAWQ Ranger Plug-in Service runs on the HAWQ master node. If this 
service goes down, all HAWQ database operations will fail. Configure a highly 
available HAWQ Ranger Plug-in Service to eliminate possible downtime should 
this situation occur.
+
+### Prerequisites 
+
+Before you configure HAWQ Ranger authentication in high availability mode, 
ensure that you have:
+
+- Installed or upgraded to a version of HAWQ that includes support for 
HAWQ Ranger Authentication.
+
+- Configured Ranger Admin for high availability.
+
+- Configured a HAWQ standby master node for your HAWQ cluster.
+
+You must configure a standby master for your HAWQ deployment before 
enabling HAWQ Ranger high availability mode. If you have not configured your 
HAWQ standby master, follow the instructions in [Adding a HAWQ Standby 
Master](../admin/ambari-admin.html#amb-add-standby) (if you manage your HAWQ 
cluster with Ambari) or [Configuring Master 
Mirroring](../admin/MasterMirroring.html#standby_master_configure) (for a 
command-line-managed HAWQ cluster).
+
+- Registered the HAWQ Ranger Plug-in Service on your HAWQ master node.
+
+The HAWQ Ranger Plug-in Service runs on the HAWQ master node. If you 
have not yet enabled the Ranger Plug-in Service, refer to [Install Ranger 
Connectivity to HAWQ](ranger-integration-config.html#jar) for registration 
instructions. Make sure to identify the Ranger Admin HA proxy when you enable 
the plug-in.
--- End diff --

This is not required. 


> document ranger plug-in service high availability
> -
>
> Key: HAWQ-1473
> URL: https://issues.apache.org/jira/browse/HAWQ-1473
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> add RPS high availability information to the docs.  include config info as 
> well as failover scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1473) document ranger plug-in service high availability

2017-05-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027144#comment-16027144
 ] 

ASF GitHub Bot commented on HAWQ-1473:
--

Github user ictmalili commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/120#discussion_r118811032
  
--- Diff: markdown/ranger/ranger-integration-config.html.md.erb ---
@@ -129,6 +129,62 @@ Once the connection between HAWQ and Ranger is 
configured, you may choose to set
 5. Click **Save** to save your changes.
 6. Select **Service Actions > Restart All** and confirm that you want to 
restart the HAWQ cluster.
 
+## Step 3: (Optional) Register a Standby Ranger 
Plug-in Service
+
+The HAWQ Ranger Plug-in Service runs on the HAWQ master node. If this 
service goes down, all HAWQ database operations will fail. Configure a highly 
available HAWQ Ranger Plug-in Service to eliminate possible downtime should 
this situation occur.
+
+### Prerequisites 
+
+Before you configure HAWQ Ranger authentication in high availability mode, 
ensure that you have:
+
+- Installed or upgraded to a version of HAWQ that includes support for 
HAWQ Ranger Authentication.
+
+- Configured Ranger Admin for high availability.
--- End diff --

Actually, Ranger HA and RPS HA are two separate and independent parts. Even if 
the user does not configure Ranger HA, they can still configure RPS HA on the 
HAWQ standby master.


> document ranger plug-in service high availability
> -
>
> Key: HAWQ-1473
> URL: https://issues.apache.org/jira/browse/HAWQ-1473
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> add RPS high availability information to the docs.  include config info as 
> well as failover scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1474) docs - instructions to create/use a minimal psql client pkg

2017-05-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026519#comment-16026519
 ] 

ASF GitHub Bot commented on HAWQ-1474:
--

GitHub user lisakowen opened a pull request:

https://github.com/apache/incubator-hawq-docs/pull/121

HAWQ-1474 - document minimal psql client pkg create/install/run

document how to create a minimal psql client "package" including 
instructions on installing and using the package.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lisakowen/incubator-hawq-docs 
feature/hawq-psql-client

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq-docs/pull/121.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #121


commit d8a2c5f4ff1bc3c56a239f90e3fb02dd577c73af
Author: Lisa Owen 
Date:   2017-05-23T22:33:33Z

document minimal psql client pkg create/install/run




> docs - instructions to create/use a minimal psql client pkg
> ---
>
> Key: HAWQ-1474
> URL: https://issues.apache.org/jira/browse/HAWQ-1474
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> add instructions to create, install, and use a minimal psql client pkg. One 
> would install this package on a compatible Linux client system outside of the HAWQ 
> cluster.
> this will basically provide instructions to:
> - package up the psql binary and libraries and an auto-generated 
> environment-setting file
> - install this on a client system
> - run the client



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1293) docs - tag the deprecated GUCs as such

2017-04-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15982149#comment-15982149
 ] 

ASF GitHub Bot commented on HAWQ-1293:
--

Github user asfgit closed the pull request at:

https://github.com/apache/incubator-hawq-docs/pull/91


> docs - tag the deprecated GUCs as such
> --
>
> Key: HAWQ-1293
> URL: https://issues.apache.org/jira/browse/HAWQ-1293
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>Priority: Minor
>
> DEPRECATED GUC: should be in doc and mention that they are deprecated; should 
> show in hawq config
> guc.c identifies the following documented GUCs as deprecated:
> - work_mem
> - gp_interconnect_setup_timeout
> the following deprecated GUCS are not currently documented:
> - gp_eager_hashtable_release
> - max_work_mem
> - gp_hashagg_compress_spill_files
> tag the documented deprecated GUCs as such.  deprecated GUCs that are not 
> documented will remain undocumented!
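
For instance, `work_mem` (listed above) can still be inspected from a psql
session; a minimal check, assuming the deprecated GUC remains visible to
`SHOW`:

```sql
-- work_mem is one of the deprecated-but-documented GUCs named above
SHOW work_mem;
```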



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1296) HAWQ "Getting Started" guide

2017-04-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15982133#comment-15982133
 ] 

ASF GitHub Bot commented on HAWQ-1296:
--

Github user lisakowen closed the pull request at:

https://github.com/apache/incubator-hawq-docs/pull/98


> HAWQ "Getting Started" guide
> 
>
> Key: HAWQ-1296
> URL: https://issues.apache.org/jira/browse/HAWQ-1296
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>Priority: Minor
>
> create a "HAWQ in 15 minutes" sort of guide/tutorial that introduces basic 
> HAWQ concepts including the runtime environment, admin, HAWQ-managed 
> databases and tables, and PXF access to external data.  the guide should 
> include relevant exercises that inform the user in each of these areas.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1432) document HAWQ's directory structure in HDFS

2017-04-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15982063#comment-15982063
 ] 

ASF GitHub Bot commented on HAWQ-1432:
--

Github user lisakowen closed the pull request at:

https://github.com/apache/incubator-hawq-docs/pull/118


> document HAWQ's directory structure in HDFS
> ---
>
> Key: HAWQ-1432
> URL: https://issues.apache.org/jira/browse/HAWQ-1432
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>Priority: Minor
> Fix For: 2.3.0.0-incubating
>
>
> add some documentation describing the HAWQ directory structure in HDFS and 
> how to locate the files for a specific table.
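
As a sketch of the kind of lookup such a topic would cover, the identifiers
that make up a table's HDFS path can be read from the catalog. The table name
is hypothetical, and the path shape is an assumption about the default
filespace layout:

```sql
-- find the OIDs that appear in a table's HDFS path (my_table is hypothetical)
SELECT d.oid AS database_oid, c.relfilenode
FROM pg_database d, pg_class c
WHERE d.datname = current_database()
  AND c.relname = 'my_table';
-- the table's files would then sit under a path shaped like:
--   <hawq_dfs_url>/<tablespace_oid>/<database_oid>/<relfilenode>
```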



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1432) document HAWQ's directory structure in HDFS

2017-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15968217#comment-15968217
 ] 

ASF GitHub Bot commented on HAWQ-1432:
--

GitHub user lisakowen opened a pull request:

https://github.com/apache/incubator-hawq-docs/pull/118

HAWQ-1432 - new topic for locating a HAWQ table's HDFS files

added a topic for locating the HDFS files associated with a specific HAWQ 
table.  input requested on the title and subnav placement. included this topic 
in the "Defining Database Objects" section for lack of a better location.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lisakowen/incubator-hawq-docs 
feature/locate-table-hdfs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq-docs/pull/118.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #118


commit d00233122b34af584b45e670058d3d6e9dfd2005
Author: Lisa Owen 
Date:   2017-04-13T00:59:15Z

add new topic for locating the HDFS files of a HAWQ tbl




> document HAWQ's directory structure in HDFS
> ---
>
> Key: HAWQ-1432
> URL: https://issues.apache.org/jira/browse/HAWQ-1432
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>Priority: Minor
>
> add some documentation describing the HAWQ directory structure in HDFS and 
> how to locate the files for a specific table.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-2) Build failed due to file missing during initial upload

2017-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-2?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15967840#comment-15967840
 ] 

ASF GitHub Bot commented on HAWQ-2:
---

Github user asfgit closed the pull request at:

https://github.com/apache/incubator-madlib/pull/114


> Build failed due to file missing during initial upload
> --
>
> Key: HAWQ-2
> URL: https://issues.apache.org/jira/browse/HAWQ-2
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Lei Chang
>Assignee: Lei Chang
> Fix For: 2.0.0.0-incubating
>
>
> Detailed info at: https://hub.docker.com/r/mayjojo/hawq-dev/
> Errors reported when doing "make install":
> cd /hawq/tools/bin/pythonSrc/PyGreSQL-4.0 && CC="gcc" 
> CFLAGS="-I/opt/hawq-dep/include -I/opt/hawq-dep/python-2.6.2/include 
> -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wendif-labels 
> -Wformat-security -fno-strict-aliasing -fwrapv -I/opt/hawq-dep/include" 
> LDFLAGS="-L../../src/interfaces/libpq -L../../src/port -L../../src/port 
> -L/opt/hawq-dep/lib -L/opt/hawq-dep/lib64 -L/opt/hawq-dep/python-2.6.2/lib 
> -Wl,--as-needed -Wl,-rpath,'/opt/hawq/lib',--enable-new-dtags" python 
> setup.py build
> /bin/sh: line 0: cd: /hawq/tools/bin/pythonSrc/PyGreSQL-4.0: No such file or 
> directory



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-2) Build failed due to file missing during initial upload

2017-04-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-2?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966589#comment-15966589
 ] 

ASF GitHub Bot commented on HAWQ-2:
---

GitHub user njayaram2 opened a pull request:

https://github.com/apache/incubator-madlib/pull/114

Bugfix: Install check failure on gpdb5 and HAWQ-2.2

MADLIB-1088

- Fixes concurrent delete issue with GPDB 5 on install check. This
also fixes the elastic net failure on cross validation, whose root
cause was drop and create table within the same query string.
- Fixes elastic net failure with IGD optimizer. Accessing warmup
lambdas was incorrect.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/njayaram2/incubator-madlib 
bugfix/elastic_net/igd

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-madlib/pull/114.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #114


commit 8430111ae2ac7b7d2c53f5a178d920b3afb74856
Author: Nandish Jayaram 
Date:   2017-04-11T22:45:33Z

potential bug fix in elastic net igd

commit 0e86538f25e96afed283f635627cbd267fd9450e
Author: Nandish Jayaram 
Date:   2017-04-12T20:43:26Z

Bugfix: Install check for elastic net fails on gpdb5

MADLIB-1088

- Fixes concurrent delete issue with GPDB 5 on install check. This
also fixes the elastic net failure on cross validation, whose root
cause was drop and create table within the same query string.
- Fixes elastic net failure with IGD optimizer. Accessing warmup
lambdas was incorrect.




> Build failed due to file missing during initial upload
> --
>
> Key: HAWQ-2
> URL: https://issues.apache.org/jira/browse/HAWQ-2
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Lei Chang
>Assignee: Lei Chang
> Fix For: 2.0.0.0-incubating
>
>
> Detailed info at: https://hub.docker.com/r/mayjojo/hawq-dev/
> Errors reported when doing "make install":
> cd /hawq/tools/bin/pythonSrc/PyGreSQL-4.0 && CC="gcc" 
> CFLAGS="-I/opt/hawq-dep/include -I/opt/hawq-dep/python-2.6.2/include 
> -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wendif-labels 
> -Wformat-security -fno-strict-aliasing -fwrapv -I/opt/hawq-dep/include" 
> LDFLAGS="-L../../src/interfaces/libpq -L../../src/port -L../../src/port 
> -L/opt/hawq-dep/lib -L/opt/hawq-dep/lib64 -L/opt/hawq-dep/python-2.6.2/lib 
> -Wl,--as-needed -Wl,-rpath,'/opt/hawq/lib',--enable-new-dtags" python 
> setup.py build
> /bin/sh: line 0: cd: /hawq/tools/bin/pythonSrc/PyGreSQL-4.0: No such file or 
> directory



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1424) docs - create ALTER SEQUENCE reference page

2017-04-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15961075#comment-15961075
 ] 

ASF GitHub Bot commented on HAWQ-1424:
--

Github user lisakowen closed the pull request at:

https://github.com/apache/incubator-hawq-docs/pull/117


> docs - create ALTER SEQUENCE reference page
> ---
>
> Key: HAWQ-1424
> URL: https://issues.apache.org/jira/browse/HAWQ-1424
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> docs are missing a reference page for the SQL "ALTER SEQUENCE" command.
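
The kind of statement such a reference page would document, with a
hypothetical sequence name:

```sql
-- restart an existing sequence at a new value
ALTER SEQUENCE myseq RESTART WITH 100;
```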



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1394) docs - PXF ORC support no longer beta

2017-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949309#comment-15949309
 ] 

ASF GitHub Bot commented on HAWQ-1394:
--

Github user lisakowen closed the pull request at:

https://github.com/apache/incubator-hawq-docs/pull/104


> docs - PXF ORC support no longer beta
> -
>
> Key: HAWQ-1394
> URL: https://issues.apache.org/jira/browse/HAWQ-1394
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
> Fix For: 2.2.0.0-incubating
>
>
> pxf orc file format support is no longer beta.  also integrate into the 
> mainline docs.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1394) docs - PXF ORC support no longer beta

2017-03-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15933422#comment-15933422
 ] 

ASF GitHub Bot commented on HAWQ-1394:
--

GitHub user lisakowen opened a pull request:

https://github.com/apache/incubator-hawq-docs/pull/104

HAWQ-1394 document HiveORC profile

include HiveORC profile info on the "Accessing Hive Data" page.
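
A sketch of what a HiveORC example on that page might look like; the host,
port, and table names are placeholders, not values from the PR:

```sql
-- read a Hive-managed ORC table through PXF via the HiveORC profile
CREATE EXTERNAL TABLE sales_orc_ext (id int, amount float8)
LOCATION ('pxf://<pxf-host>:51200/default.sales_orc?PROFILE=HiveORC')
FORMAT 'CUSTOM' (formatter='pxfwritable_import');
```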

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lisakowen/incubator-hawq-docs 
feature/pxforc-mainline

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq-docs/pull/104.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #104


commit 0a4aa42ab15beebfe6b926636f72ff9dcf1b5597
Author: Lisa Owen 
Date:   2017-03-20T19:39:57Z

document HiveORC profile




> docs - PXF ORC support no longer beta
> -
>
> Key: HAWQ-1394
> URL: https://issues.apache.org/jira/browse/HAWQ-1394
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> pxf orc file format support is no longer beta.  also integrate into the 
> mainline docs.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1382) Specify necessary setup for hawq_rm_stmt_nvseg and hawq_rm_stmt_vseg_memory

2017-03-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929144#comment-15929144
 ] 

ASF GitHub Bot commented on HAWQ-1382:
--

Github user janebeckman closed the pull request at:

https://github.com/apache/incubator-hawq-docs/pull/103


> Specify necessary setup for hawq_rm_stmt_nvseg and  hawq_rm_stmt_vseg_memory
> 
>
> Key: HAWQ-1382
> URL: https://issues.apache.org/jira/browse/HAWQ-1382
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Jane Beckman
>Assignee: David Yozie
> Fix For: 2.2.0.0-incubating
>
>
> You must set hawq_rm_stmt_nvseg before setting hawq_rm_stmt_vseg_memory.
> To enable statement-level resource quotas, users must set hawq_rm_stmt_nvseg 
> to a number larger than 0, run the desired query, then set it back to 0.
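
Put together, the sequence the description calls for looks like this; the
query and table name are illustrative:

```sql
SET hawq_rm_stmt_nvseg = 10;            -- must be > 0 first
SET hawq_rm_stmt_vseg_memory = '256mb'; -- per-virtual-segment memory quota
SELECT count(*) FROM my_table;          -- my_table is a hypothetical table
SET hawq_rm_stmt_nvseg = 0;             -- reset to disable the quota
```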



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1382) Specify necessary setup for hawq_rm_stmt_nvseg and hawq_rm_stmt_vseg_memory

2017-03-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925309#comment-15925309
 ] 

ASF GitHub Bot commented on HAWQ-1382:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/103#discussion_r106057640
  
--- Diff: 
markdown/bestpractices/managing_resources_bestpractices.html.md.erb ---
@@ -110,11 +110,14 @@ ALTER RESOURCE queue1 WITH 
(VSEG_RESOURCE_QUOTA='mem:2gb');
 If there are only occasional queries on Parquet tables with a large page 
size, use a statement level specification instead of altering the resource 
queue. For example:
 
 ```sql
-SET HAWQ_RM_STMT_NVSEG=10;
-SET HAWQ_RM_STMT_VSEG_MEMORY='2gb';
-query1;
-SET HAWQ_RM_STMT_NVSEG=0;
+postgres=# SET hawq_rm_stmt_nvseg=10;
+postgres=# SET hawq_rm_stmt_vseg_memory='2gb';
+postgres=# query1;
+postgres=# SET hawq_rm_stmt_nvseg=0;
+postgres=# SET
 ```
+***Note:*** To enable the statement level, you must set 
`hawq_rm_stmt_nvseg` to a value greater than 0 before setting 
`hawq_rm_stmt_vseg_memory`. After running a query or completing the session, 
the value of `hawq_rm_stmt_nvseg` should be reset to 0,  as shown in the 
example above.
--- End diff --

Change "To enable the statement level" to "To set the parameters at the 
statement level"


> Specify necessary setup for hawq_rm_stmt_nvseg and  hawq_rm_stmt_vseg_memory
> 
>
> Key: HAWQ-1382
> URL: https://issues.apache.org/jira/browse/HAWQ-1382
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Jane Beckman
>Assignee: David Yozie
> Fix For: 2.2.0.0-incubating, 2.1.0.0-incubating
>
>
> You must set hawq_rm_stmt_nvseg before setting hawq_rm_stmt_vseg_memory.
> To enable statement-level resource quotas, users must set hawq_rm_stmt_nvseg 
> to a number larger than 0, run the desired query, then set it back to 0.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1382) Specify necessary setup for hawq_rm_stmt_nvseg and hawq_rm_stmt_vseg_memory

2017-03-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925306#comment-15925306
 ] 

ASF GitHub Bot commented on HAWQ-1382:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/103#discussion_r106058647
  
--- Diff: 
markdown/bestpractices/managing_resources_bestpractices.html.md.erb ---
@@ -110,11 +110,14 @@ ALTER RESOURCE queue1 WITH 
(VSEG_RESOURCE_QUOTA='mem:2gb');
 If there are only occasional queries on Parquet tables with a large page 
size, use a statement level specification instead of altering the resource 
queue. For example:
 
 ```sql
-SET HAWQ_RM_STMT_NVSEG=10;
-SET HAWQ_RM_STMT_VSEG_MEMORY='2gb';
-query1;
-SET HAWQ_RM_STMT_NVSEG=0;
+postgres=# SET hawq_rm_stmt_nvseg=10;
+postgres=# SET hawq_rm_stmt_vseg_memory='2gb';
+postgres=# query1;
+postgres=# SET hawq_rm_stmt_nvseg=0;
+postgres=# SET
--- End diff --

I'm pretty sure this SET command isn't actually a command to type at the 
postgres=# prompt, but output from psql.  See the example at the end of 
this PR (each SET command results in psql showing SET).  You should 
make both code blocks similar in this regard.
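
That is, the convention being suggested looks like the following, matching
the example quoted later in this thread, where the second line of each pair
is psql's echoed output rather than typed input:

```sql
postgres=# SET hawq_rm_stmt_nvseg=10;
SET
postgres=# SET hawq_rm_stmt_vseg_memory='2gb';
SET
```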


> Specify necessary setup for hawq_rm_stmt_nvseg and  hawq_rm_stmt_vseg_memory
> 
>
> Key: HAWQ-1382
> URL: https://issues.apache.org/jira/browse/HAWQ-1382
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Jane Beckman
>Assignee: David Yozie
> Fix For: 2.2.0.0-incubating, 2.1.0.0-incubating
>
>
> You must set hawq_rm_stmt_nvseg before setting hawq_rm_stmt_vseg_memory.
> To enable statement-level resource quotas, users must set hawq_rm_stmt_nvseg 
> to a number larger than 0, run the desired query, then set it back to 0.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1382) Specify necessary setup for hawq_rm_stmt_nvseg and hawq_rm_stmt_vseg_memory

2017-03-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925307#comment-15925307
 ] 

ASF GitHub Bot commented on HAWQ-1382:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/103#discussion_r106058017
  
--- Diff: markdown/resourcemgmt/ConfigureResourceManagement.html.md.erb ---
@@ -100,22 +100,25 @@ In some cases, you may want to specify additional 
resource quotas on the query s
 
 The following configuration properties allow a user to control resource 
quotas without altering corresponding resource queues.
 
--   
[hawq\_rm\_stmt\_vseg\_memory](../reference/guc/parameter_definitions.html)
 -   [hawq\_rm\_stmt\_nvseg](../reference/guc/parameter_definitions.html)
+-   
[hawq\_rm\_stmt\_vseg\_memory](../reference/guc/parameter_definitions.html)
 
 However, the changed resource quota for the virtual segment cannot exceed 
the resource queue’s maximum capacity in HAWQ.
 
+***Note:*** The value of `hawq_rm_stmt_vseg_memory` depends on 
`hawq_rm_stmt_nvseg`, which defines the number of virtual segments used for the 
query. You must set `hawq_rm_stmt_nvseg` to a value greater than 0 before 
setting the memory quota for each segment with  `hawq_rm_stmt_vseg_memory`. 
After running a query or completing the session, the value of 
`hawq_rm_stmt_nvseg` should be reset to 0 to disable the statement level 
resource quota.
--- End diff --

Change "the value of `hawq_rm_stmt_nvseg` should be reset to 0 " to "reset 
the value of `hawq_rm_stmt_nvseg` to 0"


> Specify necessary setup for hawq_rm_stmt_nvseg and  hawq_rm_stmt_vseg_memory
> 
>
> Key: HAWQ-1382
> URL: https://issues.apache.org/jira/browse/HAWQ-1382
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Jane Beckman
>Assignee: David Yozie
> Fix For: 2.2.0.0-incubating, 2.1.0.0-incubating
>
>
> You must set hawq_rm_stmt_nvseg before setting hawq_rm_stmt_vseg_memory.
> To enable statement-level resource quotas, users must set hawq_rm_stmt_nvseg 
> to a number larger than 0, run the desired query, then set it back to 0.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1382) Specify necessary setup for hawq_rm_stmt_nvseg and hawq_rm_stmt_vseg_memory

2017-03-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925308#comment-15925308
 ] 

ASF GitHub Bot commented on HAWQ-1382:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/103#discussion_r106058705
  
--- Diff: markdown/resourcemgmt/ConfigureResourceManagement.html.md.erb ---
@@ -100,22 +100,25 @@ In some cases, you may want to specify additional 
resource quotas on the query s
 
 The following configuration properties allow a user to control resource 
quotas without altering corresponding resource queues.
 
--   
[hawq\_rm\_stmt\_vseg\_memory](../reference/guc/parameter_definitions.html)
 -   [hawq\_rm\_stmt\_nvseg](../reference/guc/parameter_definitions.html)
+-   
[hawq\_rm\_stmt\_vseg\_memory](../reference/guc/parameter_definitions.html)
 
 However, the changed resource quota for the virtual segment cannot exceed 
the resource queue’s maximum capacity in HAWQ.
 
+***Note:*** The value of `hawq_rm_stmt_vseg_memory` depends on 
`hawq_rm_stmt_nvseg`, which defines the number of virtual segments used for the 
query. You must set `hawq_rm_stmt_nvseg` to a value greater than 0 before 
setting the memory quota for each segment with  `hawq_rm_stmt_vseg_memory`. 
After running a query or completing the session, the value of 
`hawq_rm_stmt_nvseg` should be reset to 0 to disable the statement level 
resource quota.
+
 In the following example, when executing the next query statement, the 
HAWQ resource manager will attempt to allocate 10 virtual segments and each 
segment has a 256MB memory quota.
 
+
 ``` sql
+postgres=# SET hawq_rm_stmt_nvseg=10;
+SET
 postgres=# SET hawq_rm_stmt_vseg_memory='256mb';
 SET
-postgres=# SET hawq_rm_stmt_nvseg=10;
+postgres=# SELECT <columns> FROM <table> WHERE <condition>
--- End diff --

Should at least terminate this example with a semicolon;


> Specify necessary setup for hawq_rm_stmt_nvseg and  hawq_rm_stmt_vseg_memory
> 
>
> Key: HAWQ-1382
> URL: https://issues.apache.org/jira/browse/HAWQ-1382
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Jane Beckman
>Assignee: David Yozie
> Fix For: 2.2.0.0-incubating, 2.1.0.0-incubating
>
>
> You must set hawq_rm_stmt_nvseg before setting hawq_rm_stmt_vseg_memory.
> To enable statement-level resource quotas, users must set hawq_rm_stmt_nvseg 
> to a number larger than 0, run the desired query, then set it back to 0.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1383) docs - pl/pgsql page cleanup

2017-03-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15907657#comment-15907657
 ] 

ASF GitHub Bot commented on HAWQ-1383:
--

Github user lisakowen closed the pull request at:

https://github.com/apache/incubator-hawq-docs/pull/101


> docs - pl/pgsql page cleanup
> 
>
> Key: HAWQ-1383
> URL: https://issues.apache.org/jira/browse/HAWQ-1383
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> various updates to pl/pgsql page:
> - move polymorphic types discussion to a more general section
> - remove dblink reference
> - use actual pl/pgsql examples, not SQL examples
> - other miscellaneous clean-up and clarifications



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1372) doc ambari hawq config change procedure that does not require cluster restart

2017-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905739#comment-15905739
 ] 

ASF GitHub Bot commented on HAWQ-1372:
--

Github user lisakowen closed the pull request at:

https://github.com/apache/incubator-hawq-docs/pull/97


> doc ambari hawq config change procedure that does not require cluster restart
> -
>
> Key: HAWQ-1372
> URL: https://issues.apache.org/jira/browse/HAWQ-1372
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>Priority: Minor
>
> document the workaround for updating hawq configuration via ambari (for 
> ambari-managed clusters) in cases where a complete cluster restart cannot be 
> tolerated; a rough sketch follows the steps:
> update config via ambari, do not restart
> update config via "hawq config -c xxx -v xxx"
> hawq stop cluster --reload
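
A shell sketch of those three steps; the GUC name and value are placeholders,
and only the two commands quoted above come from the workaround itself:

```shell
# 1. change the value in the Ambari UI, but skip the prompted restart
# 2. apply the same change directly on the cluster:
hawq config -c <guc_name> -v <value>
# 3. reload the configuration without a full cluster restart:
hawq stop cluster --reload
```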



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1383) docs - pl/pgsql page cleanup

2017-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905661#comment-15905661
 ] 

ASF GitHub Bot commented on HAWQ-1383:
--

Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/101#discussion_r105483266
  
--- Diff: markdown/plext/using_plpgsql.html.md.erb ---
@@ -19,143 +19,283 @@ software distributed under the License is distributed 
on an
 KIND, either express or implied.  See the License for the
 specific language governing permissions and limitations
 under the License.
--->
+--> 
 
-SQL is the language of most other relational databases use as query 
language. It is portable and easy to learn. But every SQL statement must be 
executed individually by the database server. 
+PL/pgSQL is a trusted procedural language that is automatically installed 
and registered in all HAWQ databases. With PL/pgSQL, you can:
 
-PL/pgSQL is a loadable procedural language. PL/SQL can do the following:
+-   Create functions
+-   Add control structures to the SQL language
+-   Perform complex computations
+-   Use all of the data types, functions, and operators defined in SQL
 
--   create functions
--   add control structures to the SQL language
--   perform complex computations
--   inherit all user-defined types, functions, and operators
--   be trusted by the server
+SQL is the language most relational databases use as a query language. 
While it is portable and easy to learn, every SQL statement is individually 
executed by the database server. Your client application sends each query to 
the database server, waits for it to be processed, receives and processes the 
results, does some computation, then sends further queries to the server. This 
back-and-forth requires interprocess communication and incurs network overhead 
if your client is on a different host than the HAWQ master.
 
-You can use functions created with PL/pgSQL with any database that 
supports built-in functions. For example, it is possible to create complex 
conditional computation functions and later use them to define operators or use 
them in index expressions.
+The PL/pgSQL language addresses some of these limitations. When creating 
functions with PL/pgSQL, you can group computation blocks and queries inside 
the database server, combining the power of a procedural language and the ease 
of use of SQL, but with considerable savings of client/server communication 
overhead. With PL/pgSQL:
 
-Every SQL statement must be executed individually by the database server. 
Your client application must send each query to the database server, wait for 
it to be processed, receive and process the results, do some computation, then 
send further queries to the server. This requires interprocess communication 
and incurs network overhead if your client is on a different machine than the 
database server.
+-   Extra round trips between client and server are eliminated
+-   Intermediate, and perhaps unneeded, results do not have to be 
marshaled or transferred between the server and client
+-   Re-using prepared queries avoids multiple rounds of query parsing
+ 
 
-With PL/pgSQL, you can group a block of computation and a series of 
queries inside the database server, thus having the power of a procedural 
language and the ease of use of SQL, but with considerable savings of 
client/server communication overhead.
+## PL/pgSQL Function Syntax
 
--   Extra round trips between client and server are eliminated
--   Intermediate results that the client does not need do not have to be 
marshaled or transferred between server and client
--   Multiple rounds of query parsing can be avoided
+PL/pgSQL is a block-structured language. The complete text of a function 
definition must be a block, which is defined as:
 
-This can result in a considerable performance increase as compared to an 
application that does not use stored functions.
+``` sql
+[ <<label>> ]
+[ DECLARE
+declarations ]
+BEGIN
+statements
+END [ label ];
+```
--- End diff --

got it


> docs - pl/pgsql page cleanup
> 
>
> Key: HAWQ-1383
> URL: https://issues.apache.org/jira/browse/HAWQ-1383
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> various updates to pl/pgsql page:
> - move polymorphic types discussion to a more general section
> - remove dblink reference
> - use actual pl/pgsql examples, not SQL examples
> - other miscellanous clean-up and clarifications



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1383) docs - pl/pgsql page cleanup

2017-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905660#comment-15905660
 ] 

ASF GitHub Bot commented on HAWQ-1383:
--

Github user lisakowen commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/101#discussion_r105483070
  
--- Diff: markdown/plext/using_plpgsql.html.md.erb ---
@@ -19,143 +19,283 @@ software distributed under the License is distributed 
on an
 KIND, either express or implied.  See the License for the
 specific language governing permissions and limitations
 under the License.
--->
+--> 
 
-SQL is the language of most other relational databases use as query 
language. It is portable and easy to learn. But every SQL statement must be 
executed individually by the database server. 
+PL/pgSQL is a trusted procedural language that is automatically installed 
and registered in all HAWQ databases. With PL/pgSQL, you can:
 
-PL/pgSQL is a loadable procedural language. PL/SQL can do the following:
+-   Create functions
+-   Add control structures to the SQL language
+-   Perform complex computations
+-   Use all of the data types, functions, and operators defined in SQL
 
--   create functions
--   add control structures to the SQL language
--   perform complex computations
--   inherit all user-defined types, functions, and operators
--   be trusted by the server
+SQL is the language most relational databases use as a query language. 
While it is portable and easy to learn, every SQL statement is individually 
executed by the database server. Your client application sends each query to 
the database server, waits for it to be processed, receives and processes the 
results, does some computation, then sends further queries to the server. This 
back-and-forth requires interprocess communication and incurs network overhead 
if your client is on a different host than the HAWQ master.
 
-You can use functions created with PL/pgSQL with any database that 
supports built-in functions. For example, it is possible to create complex 
conditional computation functions and later use them to define operators or use 
them in index expressions.
+The PL/pgSQL language addresses some of these limitations. When creating 
functions with PL/pgSQL, you can group computation blocks and queries inside 
the database server, combining the power of a procedural language and the ease 
of use of SQL, but with considerable savings of client/server communication 
overhead. With PL/pgSQL:
 
-Every SQL statement must be executed individually by the database server. 
Your client application must send each query to the database server, wait for 
it to be processed, receive and process the results, do some computation, then 
send further queries to the server. This requires interprocess communication 
and incurs network overhead if your client is on a different machine than the 
database server.
+-   Extra round trips between client and server are eliminated
+-   Intermediate, and perhaps unneeded, results do not have to be 
marshaled or transferred between the server and client
+-   Re-using prepared queries avoids multiple rounds of query parsing
+ 
 
-With PL/pgSQL, you can group a block of computation and a series of 
queries inside the database server, thus having the power of a procedural 
language and the ease of use of SQL, but with considerable savings of 
client/server communication overhead.
+## PL/pgSQL Function Syntax
 
--   Extra round trips between client and server are eliminated
--   Intermediate results that the client does not need do not have to be 
marshaled or transferred between server and client
--   Multiple rounds of query parsing can be avoided
+PL/pgSQL is a block-structured language. The complete text of a function 
definition must be a block, which is defined as:
 
-This can result in a considerable performance increase as compared to an 
application that does not use stored functions.
+``` sql
+[ <<label>> ]
+[ DECLARE
+declarations ]
+BEGIN
+statements
+END [ label ];
+```
--- End diff --

thanks for reviewing, @sansanichfb!  i used the block definition identified 
in the postgres pl/pgsql docs.  i will add a blurb about exceptions and error 
handling.  (i didn't want to duplicate all of the postgres info on this page, 
my goal was to provide some introductory info and examples to get the user up 
and running with using pl/pgsql in common use scenarios.)
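
For reference, a minimal PL/pgSQL function of the block shape discussed in
this thread; the function name and body are illustrative:

```sql
CREATE FUNCTION add_two(a int, b int) RETURNS int AS $$
DECLARE
    s int;        -- declarations
BEGIN
    s := a + b;   -- statements
    RETURN s;
END;
$$ LANGUAGE plpgsql;
```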


> docs - pl/pgsql page cleanup
> 
>
> Key: HAWQ-1383
> URL: https://issues.apache.org/jira/browse/HAWQ-1383
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
> 

[jira] [Commented] (HAWQ-1383) docs - pl/pgsql page cleanup

2017-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905621#comment-15905621
 ] 

ASF GitHub Bot commented on HAWQ-1383:
--

Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/101#discussion_r105477434
  
--- Diff: markdown/plext/using_plpgsql.html.md.erb ---
@@ -19,143 +19,283 @@ software distributed under the License is distributed 
on an
 KIND, either express or implied.  See the License for the
 specific language governing permissions and limitations
 under the License.
--->
+--> 
 
-SQL is the language of most other relational databases use as query 
language. It is portable and easy to learn. But every SQL statement must be 
executed individually by the database server. 
+PL/pgSQL is a trusted procedural language that is automatically installed 
and registered in all HAWQ databases. With PL/pgSQL, you can:
 
-PL/pgSQL is a loadable procedural language. PL/SQL can do the following:
+-   Create functions
+-   Add control structures to the SQL language
+-   Perform complex computations
+-   Use all of the data types, functions, and operators defined in SQL
 
--   create functions
--   add control structures to the SQL language
--   perform complex computations
--   inherit all user-defined types, functions, and operators
--   be trusted by the server
+SQL is the language most relational databases use as a query language. 
While it is portable and easy to learn, every SQL statement is individually 
executed by the database server. Your client application sends each query to 
the database server, waits for it to be processed, receives and processes the 
results, does some computation, then sends further queries to the server. This 
back-and-forth requires interprocess communication and incurs network overhead 
if your client is on a different host than the HAWQ master.
 
-You can use functions created with PL/pgSQL with any database that 
supports built-in functions. For example, it is possible to create complex 
conditional computation functions and later use them to define operators or use 
them in index expressions.
+The PL/pgSQL language addresses some of these limitations. When creating 
functions with PL/pgSQL, you can group computation blocks and queries inside 
the database server, combining the power of a procedural language and the ease 
of use of SQL, but with considerable savings of client/server communication 
overhead. With PL/pgSQL:
 
-Every SQL statement must be executed individually by the database server. 
Your client application must send each query to the database server, wait for 
it to be processed, receive and process the results, do some computation, then 
send further queries to the server. This requires interprocess communication 
and incurs network overhead if your client is on a different machine than the 
database server.
+-   Extra round trips between client and server are eliminated
+-   Intermediate, and perhaps unneeded, results do not have to be 
marshaled or transferred between the server and client
+-   Re-using prepared queries avoids multiple rounds of query parsing
+ 
 
-With PL/pgSQL, you can group a block of computation and a series of 
queries inside the database server, thus having the power of a procedural 
language and the ease of use of SQL, but with considerable savings of 
client/server communication overhead.
+## PL/pgSQL Function Syntax
 
--   Extra round trips between client and server are eliminated
--   Intermediate results that the client does not need do not have to be 
marshaled or transferred between server and client
--   Multiple rounds of query parsing can be avoided
+PL/pgSQL is a block-structured language. The complete text of a function 
definition must be a block, which is defined as:
 
-This can result in a considerable performance increase as compared to an 
application that does not use stored functions.
+``` sql
+[ <<label>> ]
+[ DECLARE
+declarations ]
+BEGIN
+statements
+END [ label ];
+```
--- End diff --

Maybe add EXCEPTION block as well.


> docs - pl/pgsql page cleanup
> 
>
> Key: HAWQ-1383
> URL: https://issues.apache.org/jira/browse/HAWQ-1383
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> various updates to pl/pgsql page:
> - move polymorphic types discussion to a more general section
> - remove dblink reference
> - use actual pl/pgsql examples, not SQL examples
> - other miscellaneous clean-up and clarifications



--
This message was sent by Atlassian 

[jira] [Commented] (HAWQ-1376) docs - better describe the pxf host and port settings

2017-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905614#comment-15905614
 ] 

ASF GitHub Bot commented on HAWQ-1376:
--

Github user lisakowen closed the pull request at:

https://github.com/apache/incubator-hawq-docs/pull/99


> docs - better describe the pxf host and port settings
> -
>
> Key: HAWQ-1376
> URL: https://issues.apache.org/jira/browse/HAWQ-1376
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> the pxf host and port settings as described some places in the docs as hdfs 
> namenode and port.  this is confusing - the host does not have to be the 
> namenode and the port should be the pxf port.
> clarify the docs in this area.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1383) docs - pl/pgsql page cleanup

2017-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905607#comment-15905607
 ] 

ASF GitHub Bot commented on HAWQ-1383:
--

Github user lisakowen commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/101#discussion_r105474539
  
--- Diff: markdown/plext/using_plpgsql.html.md.erb ---
@@ -19,143 +19,278 @@ software distributed under the License is distributed 
on an
 KIND, either express or implied.  See the License for the
 specific language governing permissions and limitations
 under the License.
--->
+--> 
 
-SQL is the language of most other relational databases use as query 
language. It is portable and easy to learn. But every SQL statement must be 
executed individually by the database server. 
+PL/pgSQL is a trusted procedural language that is automatically installed 
and registered in all HAWQ databases. With PL/pgSQL, you can:
 
-PL/pgSQL is a loadable procedural language. PL/SQL can do the following:
+-   Create functions
+-   Add control structures to the SQL language
+-   Perform complex computations
+-   Use all of the data types, functions, and operators defined in SQL
 
--   create functions
--   add control structures to the SQL language
--   perform complex computations
--   inherit all user-defined types, functions, and operators
--   be trusted by the server
+SQL is the language most relational databases use as a query language. 
While it is portable and easy to learn, every SQL statement is individually 
executed by the database server. Your client application sends each query to 
the database server, waits for it to be processed, receives and processes the 
results, does some computation, then sends further queries to the server. This 
back-and-forth requires interprocess communication and incurs network overhead 
if your client is on a different host than the HAWQ master.
 
-You can use functions created with PL/pgSQL with any database that 
supports built-in functions. For example, it is possible to create complex 
conditional computation functions and later use them to define operators or use 
them in index expressions.
+PL/pgSQL does not have these limitations. When creating functions with the 
PL/pgSQL language, you can group computation blocks and queries inside the 
database server, combining the power of a procedural language and the ease of 
use of SQL, but with considerable savings of client/server communication 
overhead. With PL/pgSQL:
 
-Every SQL statement must be executed individually by the database server. 
Your client application must send each query to the database server, wait for 
it to be processed, receive and process the results, do some computation, then 
send further queries to the server. This requires interprocess communication 
and incurs network overhead if your client is on a different machine than the 
database server.
+-   Extra round trips between client and server are eliminated
+-   Intermediate, and perhaps unneeded, results do not have to be 
marshaled or transferred between the server and client
+-   You avoid multiple rounds of query parsing
+ 
 
-With PL/pgSQL, you can group a block of computation and a series of 
queries inside the database server, thus having the power of a procedural 
language and the ease of use of SQL, but with considerable savings of 
client/server communication overhead.
+## PL/pgSQL Function Syntax
 
--   Extra round trips between client and server are eliminated
--   Intermediate results that the client does not need do not have to be 
marshaled or transferred between server and client
--   Multiple rounds of query parsing can be avoided
+PL/pgSQL is a block-structured language. The complete text of a function 
definition must be a block, which is defined as:
 
-This can result in a considerable performance increase as compared to an 
application that does not use stored functions.
+``` sql
+[ <<label>> ]
+[ DECLARE
+declarations ]
+BEGIN
+statements
+END [ label ];
+```
 
-PL/pgSQL supports all the data types, operators, and functions of SQL.
+Each declaration and each statement within a block is terminated by a 
semicolon. A block that appears within another block must have a semicolon 
after `END`, as shown above; however the final `END` that concludes a function 
body does not require a semicolon.
+
+You can specify all key words and identifiers in mixed upper and lower 
case. Identifiers are implicitly converted to lowercase unless double-quoted.
+
+PL/pgSQL supports two types of comments. A double dash (`--`) starts a 
comment that extends to the end of the line. A `/*` starts a block comment that 
extends to the next occurrence of `*/`. 

[jira] [Commented] (HAWQ-1383) docs - pl/pgsql page cleanup

2017-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905594#comment-15905594
 ] 

ASF GitHub Bot commented on HAWQ-1383:
--

Github user lisakowen commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/101#discussion_r105472315
  
--- Diff: markdown/plext/using_plpgsql.html.md.erb ---
@@ -19,143 +19,278 @@ software distributed under the License is distributed 
on an
 KIND, either express or implied.  See the License for the
 specific language governing permissions and limitations
 under the License.
--->
+--> 
 
-SQL is the language of most other relational databases use as query 
language. It is portable and easy to learn. But every SQL statement must be 
executed individually by the database server. 
+PL/pgSQL is a trusted procedural language that is automatically installed 
and registered in all HAWQ databases. With PL/pgSQL, you can:
 
-PL/pgSQL is a loadable procedural language. PL/SQL can do the following:
+-   Create functions
+-   Add control structures to the SQL language
+-   Perform complex computations
+-   Use all of the data types, functions, and operators defined in SQL
 
--   create functions
--   add control structures to the SQL language
--   perform complex computations
--   inherit all user-defined types, functions, and operators
--   be trusted by the server
+SQL is the language most relational databases use as a query language. 
While it is portable and easy to learn, every SQL statement is individually 
executed by the database server. Your client application sends each query to 
the database server, waits for it to be processed, receives and processes the 
results, does some computation, then sends further queries to the server. This 
back-and-forth requires interprocess communication and incurs network overhead 
if your client is on a different host than the HAWQ master.
 
-You can use functions created with PL/pgSQL with any database that 
supports built-in functions. For example, it is possible to create complex 
conditional computation functions and later use them to define operators or use 
them in index expressions.
+PL/pgSQL does not have these limitations. When creating functions with the 
PL/pgSQL language, you can group computation blocks and queries inside the 
database server, combining the power of a procedural language and the ease of 
use of SQL, but with considerable savings of client/server communication 
overhead. With PL/pgSQL:
--- End diff --

good point.  i will reword.


> docs - pl/pgsql page cleanup
> 
>
> Key: HAWQ-1383
> URL: https://issues.apache.org/jira/browse/HAWQ-1383
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> various updates to pl/pgsql page:
> - move polymorphic types discussion to a more general section
> - remove dblink reference
> - use actual pl/pgsql examples, not SQL examples
> - other miscellaneous clean-up and clarifications



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1383) docs - pl/pgsql page cleanup

2017-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905602#comment-15905602
 ] 

ASF GitHub Bot commented on HAWQ-1383:
--

Github user lisakowen commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/101#discussion_r105472939
  
--- Diff: markdown/plext/using_plpgsql.html.md.erb ---
@@ -19,143 +19,278 @@ software distributed under the License is distributed 
on an
 KIND, either express or implied.  See the License for the
 specific language governing permissions and limitations
 under the License.
--->
+--> 
 
-SQL is the language of most other relational databases use as query 
language. It is portable and easy to learn. But every SQL statement must be 
executed individually by the database server. 
+PL/pgSQL is a trusted procedural language that is automatically installed 
and registered in all HAWQ databases. With PL/pgSQL, you can:
 
-PL/pgSQL is a loadable procedural language. PL/SQL can do the following:
+-   Create functions
+-   Add control structures to the SQL language
+-   Perform complex computations
+-   Use all of the data types, functions, and operators defined in SQL
 
--   create functions
--   add control structures to the SQL language
--   perform complex computations
--   inherit all user-defined types, functions, and operators
--   be trusted by the server
+SQL is the language most relational databases use as a query language. 
While it is portable and easy to learn, every SQL statement is individually 
executed by the database server. Your client application sends each query to 
the database server, waits for it to be processed, receives and processes the 
results, does some computation, then sends further queries to the server. This 
back-and-forth requires interprocess communication and incurs network overhead 
if your client is on a different host than the HAWQ master.
 
-You can use functions created with PL/pgSQL with any database that 
supports built-in functions. For example, it is possible to create complex 
conditional computation functions and later use them to define operators or use 
them in index expressions.
+PL/pgSQL does not have these limitations. When creating functions with the 
PL/pgSQL language, you can group computation blocks and queries inside the 
database server, combining the power of a procedural language and the ease of 
use of SQL, but with considerable savings of client/server communication 
overhead. With PL/pgSQL:
 
-Every SQL statement must be executed individually by the database server. 
Your client application must send each query to the database server, wait for 
it to be processed, receive and process the results, do some computation, then 
send further queries to the server. This requires interprocess communication 
and incurs network overhead if your client is on a different machine than the 
database server.
+-   Extra round trips between client and server are eliminated
+-   Intermediate, and perhaps unneeded, results do not have to be 
marshaled or transferred between the server and client
+-   You avoid multiple rounds of query parsing
+ 
 
-With PL/pgSQL, you can group a block of computation and a series of 
queries inside the database server, thus having the power of a procedural 
language and the ease of use of SQL, but with considerable savings of 
client/server communication overhead.
+## PL/pgSQL Function Syntax
 
--   Extra round trips between client and server are eliminated
--   Intermediate results that the client does not need do not have to be 
marshaled or transferred between server and client
--   Multiple rounds of query parsing can be avoided
+PL/pgSQL is a block-structured language. The complete text of a function 
definition must be a block, which is defined as:
 
-This can result in a considerable performance increase as compared to an 
application that does not use stored functions.
+``` sql
+[ <<label>> ]
+[ DECLARE
+declarations ]
+BEGIN
+statements
+END [ label ];
+```
 
-PL/pgSQL supports all the data types, operators, and functions of SQL.
+Each declaration and each statement within a block is terminated by a 
semicolon. A block that appears within another block must have a semicolon 
after `END`, as shown above; however the final `END` that concludes a function 
body does not require a semicolon.
+
+You can specify all key words and identifiers in mixed upper and lower 
case. Identifiers are implicitly converted to lowercase unless double-quoted.
+
+PL/pgSQL supports two types of comments. A double dash (`--`) starts a 
comment that extends to the end of the line. A `/*` starts a block comment that 
extends to the next occurrence of `*/`. 

[jira] [Commented] (HAWQ-1383) docs - pl/pgsql page cleanup

2017-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905466#comment-15905466
 ] 

ASF GitHub Bot commented on HAWQ-1383:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/101#discussion_r105437227
  
--- Diff: markdown/plext/using_plpgsql.html.md.erb ---
@@ -19,143 +19,278 @@ software distributed under the License is distributed 
on an
 KIND, either express or implied.  See the License for the
 specific language governing permissions and limitations
 under the License.
--->
+--> 
 
-SQL is the language of most other relational databases use as query 
language. It is portable and easy to learn. But every SQL statement must be 
executed individually by the database server. 
+PL/pgSQL is a trusted procedural language that is automatically installed 
and registered in all HAWQ databases. With PL/pgSQL, you can:
 
-PL/pgSQL is a loadable procedural language. PL/SQL can do the following:
+-   Create functions
+-   Add control structures to the SQL language
+-   Perform complex computations
+-   Use all of the data types, functions, and operators defined in SQL
 
--   create functions
--   add control structures to the SQL language
--   perform complex computations
--   inherit all user-defined types, functions, and operators
--   be trusted by the server
+SQL is the language most relational databases use as a query language. 
While it is portable and easy to learn, every SQL statement is individually 
executed by the database server. Your client application sends each query to 
the database server, waits for it to be processed, receives and processes the 
results, does some computation, then sends further queries to the server. This 
back-and-forth requires interprocess communication and incurs network overhead 
if your client is on a different host than the HAWQ master.
 
-You can use functions created with PL/pgSQL with any database that 
supports built-in functions. For example, it is possible to create complex 
conditional computation functions and later use them to define operators or use 
them in index expressions.
+PL/pgSQL does not have these limitations. When creating functions with the 
PL/pgSQL language, you can group computation blocks and queries inside the 
database server, combining the power of a procedural language and the ease of 
use of SQL, but with considerable savings of client/server communication 
overhead. With PL/pgSQL:
 
-Every SQL statement must be executed individually by the database server. 
Your client application must send each query to the database server, wait for 
it to be processed, receive and process the results, do some computation, then 
send further queries to the server. This requires interprocess communication 
and incurs network overhead if your client is on a different machine than the 
database server.
+-   Extra round trips between client and server are eliminated
+-   Intermediate, and perhaps unneeded, results do not have to be 
marshaled or transferred between the server and client
+-   You avoid multiple rounds of query parsing
+ 
 
-With PL/pgSQL, you can group a block of computation and a series of 
queries inside the database server, thus having the power of a procedural 
language and the ease of use of SQL, but with considerable savings of 
client/server communication overhead.
+## PL/pgSQL Function Syntax
 
--   Extra round trips between client and server are eliminated
--   Intermediate results that the client does not need do not have to be 
marshaled or transferred between server and client
--   Multiple rounds of query parsing can be avoided
+PL/pgSQL is a block-structured language. The complete text of a function 
definition must be a block, which is defined as:
 
-This can result in a considerable performance increase as compared to an 
application that does not use stored functions.
+``` sql
+[ <<label>> ]
+[ DECLARE
+declarations ]
+BEGIN
+statements
+END [ label ];
+```
 
-PL/pgSQL supports all the data types, operators, and functions of SQL.
+Each declaration and each statement within a block is terminated by a 
semicolon. A block that appears within another block must have a semicolon 
after `END`, as shown above; however the final `END` that concludes a function 
body does not require a semicolon.
+
+You can specify all key words and identifiers in mixed upper and lower 
case. Identifiers are implicitly converted to lowercase unless double-quoted.
--- End diff --

edit:  "unless **they are** double-quoted."


> docs - pl/pgsql page cleanup
> 
>
> Key: HAWQ-1383
> URL: 

[jira] [Commented] (HAWQ-1383) docs - pl/pgsql page cleanup

2017-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905463#comment-15905463
 ] 

ASF GitHub Bot commented on HAWQ-1383:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/101#discussion_r105444363
  
--- Diff: markdown/plext/using_plpgsql.html.md.erb ---
@@ -19,143 +19,278 @@ software distributed under the License is distributed 
on an
 KIND, either express or implied.  See the License for the
 specific language governing permissions and limitations
 under the License.
--->
+--> 
 
-SQL is the language of most other relational databases use as query 
language. It is portable and easy to learn. But every SQL statement must be 
executed individually by the database server. 
+PL/pgSQL is a trusted procedural language that is automatically installed 
and registered in all HAWQ databases. With PL/pgSQL, you can:
 
-PL/pgSQL is a loadable procedural language. PL/SQL can do the following:
+-   Create functions
+-   Add control structures to the SQL language
+-   Perform complex computations
+-   Use all of the data types, functions, and operators defined in SQL
 
--   create functions
--   add control structures to the SQL language
--   perform complex computations
--   inherit all user-defined types, functions, and operators
--   be trusted by the server
+SQL is the language most relational databases use as a query language. 
While it is portable and easy to learn, every SQL statement is individually 
executed by the database server. Your client application sends each query to 
the database server, waits for it to be processed, receives and processes the 
results, does some computation, then sends further queries to the server. This 
back-and-forth requires interprocess communication and incurs network overhead 
if your client is on a different host than the HAWQ master.
 
-You can use functions created with PL/pgSQL with any database that 
supports built-in functions. For example, it is possible to create complex 
conditional computation functions and later use them to define operators or use 
them in index expressions.
+PL/pgSQL does not have these limitations. When creating functions with the 
PL/pgSQL language, you can group computation blocks and queries inside the 
database server, combining the power of a procedural language and the ease of 
use of SQL, but with considerable savings of client/server communication 
overhead. With PL/pgSQL:
 
-Every SQL statement must be executed individually by the database server. 
Your client application must send each query to the database server, wait for 
it to be processed, receive and process the results, do some computation, then 
send further queries to the server. This requires interprocess communication 
and incurs network overhead if your client is on a different machine than the 
database server.
+-   Extra round trips between client and server are eliminated
+-   Intermediate, and perhaps unneeded, results do not have to be 
marshaled or transferred between the server and client
+-   You avoid multiple rounds of query parsing
+ 
 
-With PL/pgSQL, you can group a block of computation and a series of 
queries inside the database server, thus having the power of a procedural 
language and the ease of use of SQL, but with considerable savings of 
client/server communication overhead.
+## PL/pgSQL Function Syntax
 
--   Extra round trips between client and server are eliminated
--   Intermediate results that the client does not need do not have to be 
marshaled or transferred between server and client
--   Multiple rounds of query parsing can be avoided
+PL/pgSQL is a block-structured language. The complete text of a function 
definition must be a block, which is defined as:
 
-This can result in a considerable performance increase as compared to an 
application that does not use stored functions.
+``` sql
+[ <<label>> ]
+[ DECLARE
+    declarations ]
+BEGIN
+    statements
+END [ label ];
+```
 
-PL/pgSQL supports all the data types, operators, and functions of SQL.
+Each declaration and each statement within a block is terminated by a 
semicolon. A block that appears within another block must have a semicolon 
after `END`, as shown above; however the final `END` that concludes a function 
body does not require a semicolon.
+
+You can specify all key words and identifiers in mixed upper and lower 
case. Identifiers are implicitly converted to lowercase unless double-quoted.
+
+PL/pgSQL supports two types of comments. A double dash (`--`) starts a 
comment that extends to the end of the line. A `/*` starts a block comment that 
extends to the next occurrence of `*/`. 
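
To make the quoted syntax concrete, here is a minimal sketch (the `add_tax` function and its numbers are invented for illustration, not part of the quoted page) showing the block label, declarations, both comment styles, and the semicolon rules described above:

``` sql
-- Hypothetical example; not part of the quoted page.
CREATE OR REPLACE FUNCTION add_tax(subtotal numeric) RETURNS numeric AS $$
<<fn_block>>
DECLARE
    tax_rate numeric := 0.08;  -- a double-dash comment runs to end of line
    total    numeric;
BEGIN
    /* a block comment
       may span several lines */
    total := subtotal * (1 + tax_rate);
    RETURN total;
END fn_block;
$$ LANGUAGE plpgsql;

-- Usage: SELECT add_tax(100.00);  -- returns 108.00
```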

[jira] [Commented] (HAWQ-1383) docs - pl/pgsql page cleanup

2017-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905464#comment-15905464
 ] 

ASF GitHub Bot commented on HAWQ-1383:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/101#discussion_r105438584
  
--- Diff: markdown/plext/using_plpgsql.html.md.erb ---

[jira] [Commented] (HAWQ-1383) docs - pl/pgsql page cleanup

2017-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905465#comment-15905465
 ] 

ASF GitHub Bot commented on HAWQ-1383:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/101#discussion_r105448115
  
--- Diff: markdown/plext/using_plpgsql.html.md.erb ---

[jira] [Commented] (HAWQ-1383) docs - pl/pgsql page cleanup

2017-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905462#comment-15905462
 ] 

ASF GitHub Bot commented on HAWQ-1383:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/101#discussion_r105447834
  
--- Diff: markdown/plext/using_plpgsql.html.md.erb ---

[jira] [Commented] (HAWQ-1383) docs - pl/pgsql page cleanup

2017-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905456#comment-15905456
 ] 

ASF GitHub Bot commented on HAWQ-1383:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/101#discussion_r105445401
  
--- Diff: markdown/plext/using_plpgsql.html.md.erb ---

[jira] [Commented] (HAWQ-1383) docs - pl/pgsql page cleanup

2017-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905458#comment-15905458
 ] 

ASF GitHub Bot commented on HAWQ-1383:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/101#discussion_r105444745
  
--- Diff: markdown/plext/using_plpgsql.html.md.erb ---

[jira] [Commented] (HAWQ-1383) docs - pl/pgsql page cleanup

2017-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905455#comment-15905455
 ] 

ASF GitHub Bot commented on HAWQ-1383:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/101#discussion_r105436546
  
--- Diff: markdown/plext/using_plpgsql.html.md.erb ---
+PL/pgSQL does not have these limitations. When creating functions with the 
PL/pgSQL language, you can group computation blocks and queries inside the 
database server, combining the power of a procedural language and the ease of 
use of SQL, but with considerable savings of client/server communication 
overhead. With PL/pgSQL:
--- End diff --

I see what you mean here, but am concerned about the blanket statement 
"Pl/pgsql does not have these limitations."  You're comparing db-executed 
functions with a client/server architecture, but the above statement is 
contrasting with a very general statement about SQL itself.  I don't think 
pgsql gets you around any inherent limitations of SQL as you've stated them.

I guess my concern comes from the recent limitations added regarding cursor 
support in PL languages.  It's natural to assume you can open a cursor and move 
through a query directly in the language, but really the cursor operations (and 
other SQL) are still exec'd on the db.
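
To illustrate that point with a sketch (the function below is hypothetical): even a PL/pgSQL `FOR` loop that appears to iterate "in the language" is driven by an implicit cursor, and every fetch still executes inside the database server:

``` sql
-- Hypothetical sketch: the FOR loop opens an implicit cursor; all fetching
-- and filtering below happen server-side, never in the client application.
CREATE OR REPLACE FUNCTION count_long_names() RETURNS integer AS $$
DECLARE
    rec  record;
    hits integer := 0;
BEGIN
    FOR rec IN SELECT usename FROM pg_user LOOP
        IF length(rec.usename) > 8 THEN
            hits := hits + 1;
        END IF;
    END LOOP;
    RETURN hits;
END;
$$ LANGUAGE plpgsql;
```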


> docs - pl/pgsql page cleanup
> 
>
> Key: HAWQ-1383
> URL: https://issues.apache.org/jira/browse/HAWQ-1383
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> various updates to pl/pgsql page:
> - move polymorphic types discussion to a more general section
> - remove dblink reference
> - use actual pl/pgsql examples, not SQL examples
> - other miscellaneous clean-up and clarifications



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1383) docs - pl/pgsql page cleanup

2017-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905457#comment-15905457
 ] 

ASF GitHub Bot commented on HAWQ-1383:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/101#discussion_r105445495
  
--- Diff: markdown/plext/using_plpgsql.html.md.erb ---

[jira] [Commented] (HAWQ-1383) docs - pl/pgsql page cleanup

2017-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905460#comment-15905460
 ] 

ASF GitHub Bot commented on HAWQ-1383:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/101#discussion_r105436857
  
--- Diff: markdown/plext/using_plpgsql.html.md.erb ---
+-   Extra round trips between client and server are eliminated
+-   Intermediate, and perhaps unneeded, results do not have to be 
marshaled or transferred between the server and client
+-   You avoid multiple rounds of query parsing
--- End diff --

Maybe edit to "Re-using prepared queries avoids multiple..."


> docs - pl/pgsql page cleanup
> 
>
> Key: HAWQ-1383
> URL: https://issues.apache.org/jira/browse/HAWQ-1383
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> various updates to pl/pgsql page:
> - move polymorphic types discussion to a more general section
> - remove dblink reference
> - use actual pl/pgsql examples, not SQL examples
> - other miscellaneous clean-up and clarifications



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1372) doc ambari hawq config change procedure that does not require cluster restart

2017-03-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15904263#comment-15904263
 ] 

ASF GitHub Bot commented on HAWQ-1372:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/97#discussion_r105317729
  
--- Diff: markdown/admin/ambari-admin.html.md.erb ---
@@ -368,6 +348,49 @@ All of the listed steps are mandatory. This ensures 
that HAWQ service remains fu
 4. Restart HAWQ service to propagate the configuration change to all 
Ambari agents.
 
 This will synchronize the password on the host machines with the 
password that you specified in Ambari.
+
+## Updating HAWQ Configuration Without 
Cluster Restart
+There may be circumstances, such as during dynamic cluster expansion, when 
you must update HAWQ configuration parameters but cannot tolerate cluster 
downtime in your Ambari-managed cluster. In these situations, perform the 
following workaround to update and reload HAWQ configuration changes without a 
complete cluster restart.
+
+**Note**: Use this procedure for setting or updating only those HAWQ 
server configuration parameters with a set classification of reload.
+
+1.  Use the Ambari console UI to set and save the desired HAWQ 
configuration parameter(s) via the **HAWQ** service **Settings**, 
**Configs > Settings**, and/or **Configs > Advanced** tabs. Make sure to note the 
configuration parameter names and new values. You may need to hover the mouse 
over the configuration field to identify the HAWQ parameter name.
+
+**Note**: After updating the configuration parameters, do *not* 
restart the HAWQ service. 
+
+1.  Use HAWQ command line utilities to update the same HAWQ configuration 
parameters:
+1. Log in to the HAWQ master host as a HAWQ administrator and source 
greenplum_path.sh:
+
+``` shell
+$ ssh gpadmin@<master_host>
+gpadmin@master$ source /usr/local/hawq/greenplum_path.sh
+```
+
+2. Identify the current value of the HAWQ configuration parameter:
+
+``` shell
+gpadmin@master$ hawq config -s <parameter_name>
+```
+
+3. Set the configuration parameter to the new value:
+
+``` shell
+gpadmin@master$ hawq config -c <parameter_name> -v <new_value>
+```
+
+Perform Steps 2 and 3 for each configuration parameter you set or 
updated via Ambari.
+
+3. Reload the HAWQ configuration; this operation does not restart the 
cluster:
+
+``` shell
+gpadmin@master$ hawq stop cluster --reload
+```
+
+4. Verify that each parameter was updated:
+
+```shell
+gpadmin@master$ hawq config -s <parameter_name>
+```
--- End diff --

I guess I was misunderstanding something from the earlier workaround:  
"When the HAWQ service is ready to be restarted via Ambari, Ambari will refresh 
the new configurations."  I guess I'm not entirely sure what that means either. 
 So when exactly is the change "complete"?
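
For reference, the quoted procedure collapses to a short session. A hypothetical run, assuming `log_min_messages` is one of the reload-classified parameters being changed and the master host is named `mdw`:

``` shell
$ ssh gpadmin@mdw
gpadmin@mdw$ source /usr/local/hawq/greenplum_path.sh
gpadmin@mdw$ hawq config -s log_min_messages        # show the current value
gpadmin@mdw$ hawq config -c log_min_messages -v warning
gpadmin@mdw$ hawq stop cluster --reload             # reload only; no restart
gpadmin@mdw$ hawq config -s log_min_messages        # verify the new value
```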


> doc ambari hawq config change procedure that does not require cluster restart
> -
>
> Key: HAWQ-1372
> URL: https://issues.apache.org/jira/browse/HAWQ-1372
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>Priority: Minor
>
> document the workaround for updating hawq configuration via ambari (for  
> ambari-managed clusters) in cases where a complete cluster restart cannot be 
> tolerated:
> update config via ambari, do not restart
> update config via "hawq config -c xxx -v xxx"
> hawq stop cluster --reload



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


  1   2   3   4   5   6   7   8   9   10   >