[jira] [Reopened] (HAWQ-925) Set default locale, timezone & datastyle before running sql command/file

2016-07-14 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo reopened HAWQ-925:
---
  Assignee: Paul Guo  (was: Lei Chang)

The patch is under code review.

> Set default locale, timezone & datastyle before running sql command/file
> 
>
> Key: HAWQ-925
> URL: https://issues.apache.org/jira/browse/HAWQ-925
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.0.0.0-incubating
>
>
> So that sql output could be consistent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-890) .gitignore files generated by python build

2016-07-14 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378865#comment-15378865
 ] 

Paul Guo commented on HAWQ-890:
---

Put them in .gitignore so that "git status" does not show them. A small but 
non-blocking issue.
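For reference, a sketch of the kind of .gitignore entries that could cover the generated files listed in the issue below; the exact patterns are an assumption, not the actual patch:

```
# Build output from the bundled python sources
tools/bin/pythonSrc/*/build/
# Compiled python bytecode
*.pyc
```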

> .gitignore files generated by python build
> --
>
> Key: HAWQ-890
> URL: https://issues.apache.org/jira/browse/HAWQ-890
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Lei Chang
> Fix For: backlog
>
>
>   ../../tools/bin/pythonSrc/PSI-0.3b2_gp/build/
>   ../../tools/bin/pythonSrc/PSI-0.3b2_gp/psi/_version.pyc
>   ../../tools/bin/pythonSrc/lockfile-0.9.1/build/
>   ../../tools/bin/pythonSrc/pychecker-0.8.18/build/
>   ../../tools/bin/pythonSrc/pycrypto-2.0.1/build/
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/build/
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/__init__.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/case.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/collector.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/compatibility.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/loader.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/main.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/result.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/runner.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/signals.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/suite.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/util.pyc





[jira] [Commented] (HAWQ-917) Refactor feature tests for data type check with new googletest framework

2016-07-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378863#comment-15378863
 ] 

ASF GitHub Bot commented on HAWQ-917:
-

Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/787#discussion_r70921905
  
--- Diff: src/test/feature/type/ans/int8.ans ---
@@ -175,131 +179,131 @@ SELECT '' AS to_char_4, to_char( (q1 * -1), 
'S'), to_char( (q2 *
 SELECT '' AS to_char_5,  to_char(q2, 'MI') FROM 
INT8_TBL  ;
  to_char_5 |  to_char  
 ---+---
-   |   123
|   456
|  4567890123456789
+   |   123
|  4567890123456789
| -4567890123456789
 (5 rows)
 
 SELECT '' AS to_char_6,  to_char(q2, 'FMS')FROM 
INT8_TBL  ;
  to_char_6 |  to_char  
 ---+---
-   | +123
| +456
-   | -4567890123456789
| +4567890123456789
+   | +123
| +4567890123456789
+   | -4567890123456789
 (5 rows)
 
 SELECT '' AS to_char_7,  to_char(q2, 'FMTHPR') FROM 
INT8_TBL ;
  to_char_7 |  to_char   
 ---+
-   | 123RD
-   | <4567890123456789>
+   | 456TH
| 4567890123456789TH
+   | 123RD
| 4567890123456789TH
-   | 456TH
+   | <4567890123456789>
 (5 rows)
 
 SELECT '' AS to_char_8,  to_char(q2, 'SGth')   FROM 
INT8_TBL ;
  to_char_8 |   to_char   
 ---+-
-   | + 123rd
-   | -4567890123456789
+   | + 456th
| +4567890123456789th
+   | + 123rd
| +4567890123456789th
-   | + 456th
+   | -4567890123456789
 (5 rows)
 
 SELECT '' AS to_char_9,  to_char(q2, '0999')   FROM 
INT8_TBL ;
  to_char_9 |  to_char  
 ---+---
-   |  0123
|  0456
|  4567890123456789
+   |  0123
|  4567890123456789
| -4567890123456789
 (5 rows)
 
 SELECT '' AS to_char_10, to_char(q2, 'S0999')  FROM 
INT8_TBL ;
  to_char_10 |  to_char  
 +---
-| +0123
 | +0456
-| -4567890123456789
 | +4567890123456789
+| +0123
 | +4567890123456789
+| -4567890123456789
 (5 rows)
 
 SELECT '' AS to_char_11, to_char(q2, 'FM0999') FROM 
INT8_TBL ;
  to_char_11 |  to_char  
 +---
-| 0123
 | 0456
 | 4567890123456789
+| 0123
 | 4567890123456789
 | -4567890123456789
 (5 rows)
 
 SELECT '' AS to_char_12, to_char(q2, 'FM.000') FROM 
INT8_TBL ;
  to_char_12 |to_char
 +---
-| 123.000
 | 456.000
-| -4567890123456789.000
 | 4567890123456789.000
+| 123.000
 | 4567890123456789.000
+| -4567890123456789.000
 (5 rows)
 
 SELECT '' AS to_char_13, to_char(q2, 'L.000')  FROM 
INT8_TBL ;
  to_char_13 |to_char 
 +
-|123.000
-|456.000
-|   4567890123456789.000
-|   4567890123456789.000
-|  -4567890123456789.000
+| $  456.000
+| $ 4567890123456789.000
+| $  123.000
+| $ 4567890123456789.000
+| $-4567890123456789.000
--- End diff --

I'm leaving this to another JIRA, HAWQ-925
("Set default locale, timezone & datastyle before running sql command/file").

That one applies to the whole feature test framework.


> Refactor feature tests for data type check with new googletest framework
> 
>
> Key: HAWQ-917
> URL: https://issues.apache.org/jira/browse/HAWQ-917
> Project: Apache HAWQ
>  

[GitHub] incubator-hawq pull request #787: HAWQ-917. Refactor feature tests for data ...

2016-07-14 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/787#discussion_r70921905
  
--- Diff: src/test/feature/type/ans/int8.ans ---
@@ -175,131 +179,131 @@ SELECT '' AS to_char_4, to_char( (q1 * -1), 
'S'), to_char( (q2 *
 SELECT '' AS to_char_5,  to_char(q2, 'MI') FROM 
INT8_TBL  ;
  to_char_5 |  to_char  
 ---+---
-   |   123
|   456
|  4567890123456789
+   |   123
|  4567890123456789
| -4567890123456789
 (5 rows)
 
 SELECT '' AS to_char_6,  to_char(q2, 'FMS')FROM 
INT8_TBL  ;
  to_char_6 |  to_char  
 ---+---
-   | +123
| +456
-   | -4567890123456789
| +4567890123456789
+   | +123
| +4567890123456789
+   | -4567890123456789
 (5 rows)
 
 SELECT '' AS to_char_7,  to_char(q2, 'FMTHPR') FROM 
INT8_TBL ;
  to_char_7 |  to_char   
 ---+
-   | 123RD
-   | <4567890123456789>
+   | 456TH
| 4567890123456789TH
+   | 123RD
| 4567890123456789TH
-   | 456TH
+   | <4567890123456789>
 (5 rows)
 
 SELECT '' AS to_char_8,  to_char(q2, 'SGth')   FROM 
INT8_TBL ;
  to_char_8 |   to_char   
 ---+-
-   | + 123rd
-   | -4567890123456789
+   | + 456th
| +4567890123456789th
+   | + 123rd
| +4567890123456789th
-   | + 456th
+   | -4567890123456789
 (5 rows)
 
 SELECT '' AS to_char_9,  to_char(q2, '0999')   FROM 
INT8_TBL ;
  to_char_9 |  to_char  
 ---+---
-   |  0123
|  0456
|  4567890123456789
+   |  0123
|  4567890123456789
| -4567890123456789
 (5 rows)
 
 SELECT '' AS to_char_10, to_char(q2, 'S0999')  FROM 
INT8_TBL ;
  to_char_10 |  to_char  
 +---
-| +0123
 | +0456
-| -4567890123456789
 | +4567890123456789
+| +0123
 | +4567890123456789
+| -4567890123456789
 (5 rows)
 
 SELECT '' AS to_char_11, to_char(q2, 'FM0999') FROM 
INT8_TBL ;
  to_char_11 |  to_char  
 +---
-| 0123
 | 0456
 | 4567890123456789
+| 0123
 | 4567890123456789
 | -4567890123456789
 (5 rows)
 
 SELECT '' AS to_char_12, to_char(q2, 'FM.000') FROM 
INT8_TBL ;
  to_char_12 |to_char
 +---
-| 123.000
 | 456.000
-| -4567890123456789.000
 | 4567890123456789.000
+| 123.000
 | 4567890123456789.000
+| -4567890123456789.000
 (5 rows)
 
 SELECT '' AS to_char_13, to_char(q2, 'L.000')  FROM 
INT8_TBL ;
  to_char_13 |to_char 
 +
-|123.000
-|456.000
-|   4567890123456789.000
-|   4567890123456789.000
-|  -4567890123456789.000
+| $  456.000
+| $ 4567890123456789.000
+| $  123.000
+| $ 4567890123456789.000
+| $-4567890123456789.000
--- End diff --

I'm leaving this to another JIRA, HAWQ-925
("Set default locale, timezone & datastyle before running sql command/file").

That one applies to the whole feature test framework.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (HAWQ-760) Hawq register

2016-07-14 Thread Lili Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378798#comment-15378798
 ] 

Lili Ma commented on HAWQ-760:
--

[~GodenYao] I noticed you have kindly helped close this JIRA.

Actually, this is an umbrella JIRA; some of its sub-tasks have been finished and 
their code already delivered, but other sub-tasks have not been developed yet and 
are postponed.

> Hawq register
> -
>
> Key: HAWQ-760
> URL: https://issues.apache.org/jira/browse/HAWQ-760
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Command Line Tools
>Reporter: Yangcheng Luo
>Assignee: Lili Ma
> Fix For: 2.0.0.0-incubating
>
>
> Users sometimes want to register data files generated by other systems, like 
> Hive, into HAWQ. We should add a register function to support registering 
> file(s) generated by other systems into HAWQ, so that users can integrate 
> their external file(s) into HAWQ conveniently.





[jira] [Commented] (HAWQ-256) Integrate Security with Apache Ranger

2016-07-14 Thread Lili Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378792#comment-15378792
 ] 

Lili Ma commented on HAWQ-256:
--

[~bosco] Thanks. Things are getting clearer now.

So for the interaction between HAWQ and Ranger, I think there are mainly two 
parts:

1. Set policy.  When HAWQ users invoke GRANT SQL in HAWQ, we need to pass that 
command to Ranger to set the policy.

2. Check authorization.  When a HAWQ user wants to operate on some objects, we 
need to contact Ranger to check whether the user has the privilege. 

Both parts of the interaction rely on the Ranger plugin. 

What we need to do next is detail the interface for the interaction and design 
the implementation on the HAWQ side.  

Please suggest if I missed something.  Thanks

> Integrate Security with Apache Ranger
> -
>
> Key: HAWQ-256
> URL: https://issues.apache.org/jira/browse/HAWQ-256
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF, Security
>Reporter: Michael Andre Pearce (IG)
>Assignee: Lili Ma
> Fix For: backlog
>
>
> Integrate security with Apache Ranger for a unified Hadoop security solution. 





[jira] [Commented] (HAWQ-927) Send Projection Info Data from HAWQ to PXF

2016-07-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378652#comment-15378652
 ] 

ASF GitHub Bot commented on HAWQ-927:
-

Github user kavinderd commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/796#discussion_r70905516
  
--- Diff: src/backend/access/external/pxfheaders.c ---
@@ -158,6 +165,29 @@ static void add_tuple_desc_httpheader(CHURL_HEADERS 
headers, Relation rel)
pfree(formatter.data);
 }
 
+static void add_projection_desc_httpheader(CHURL_HEADERS headers, 
ProjectionInfo *projInfo) {
+   int i;
+   char long_number[32];
--- End diff --

Since we are converting `list_length(projInfo->pi_targetlist)`, we need an 
array that's 32 chars long to match the bit length of `int`. 


> Send Projection Info Data from HAWQ to PXF
> --
>
> Key: HAWQ-927
> URL: https://issues.apache.org/jira/browse/HAWQ-927
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: External Tables, PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
> Fix For: backlog
>
>
> To achieve column projection at the level of PXF or the underlying readers we 
> need to first send this data as a Header/Param to PXF. Currently, PXF has no 
> knowledge whether a query requires all columns or a subset of columns.





[jira] [Commented] (HAWQ-927) Send Projection Info Data from HAWQ to PXF

2016-07-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378646#comment-15378646
 ] 

ASF GitHub Bot commented on HAWQ-927:
-

Github user kavinderd commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/796#discussion_r70904822
  
--- Diff: src/include/access/fileam.h ---
@@ -63,6 +63,17 @@ typedef struct ExternalInsertDescData
 
 typedef ExternalInsertDescData *ExternalInsertDesc;
 
+/*
+ * ExternalSelectDescData is used for storing state related
+ * to selecting data from an external table.
+ */
+typedef struct ExternalSelectDescData
--- End diff --

That's an option, but since we plan on passing other data (like the type of 
query, etc.) to PXF, I think it will be more maintainable to have this 
encapsulating struct so that we don't need to keep changing the method 
signatures.

What are your thoughts?


> Send Projection Info Data from HAWQ to PXF
> --
>
> Key: HAWQ-927
> URL: https://issues.apache.org/jira/browse/HAWQ-927
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: External Tables, PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
> Fix For: backlog
>
>
> To achieve column projection at the level of PXF or the underlying readers we 
> need to first send this data as a Header/Param to PXF. Currently, PXF has no 
> knowledge whether a query requires all columns or a subset of columns.





[GitHub] incubator-hawq pull request #796: HAWQ-927. Pass ProjectionInfo data to PXF

2016-07-14 Thread kavinderd
Github user kavinderd commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/796#discussion_r70904822
  
--- Diff: src/include/access/fileam.h ---
@@ -63,6 +63,17 @@ typedef struct ExternalInsertDescData
 
 typedef ExternalInsertDescData *ExternalInsertDesc;
 
+/*
+ * ExternalSelectDescData is used for storing state related
+ * to selecting data from an external table.
+ */
+typedef struct ExternalSelectDescData
--- End diff --

That's an option, but since we plan on passing other data (like the type of 
query, etc.) to PXF, I think it will be more maintainable to have this 
encapsulating struct so that we don't need to keep changing the method 
signatures.

What are your thoughts?




[jira] [Commented] (HAWQ-927) Send Projection Info Data from HAWQ to PXF

2016-07-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378599#comment-15378599
 ] 

ASF GitHub Bot commented on HAWQ-927:
-

Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/796#discussion_r70901193
  
--- Diff: src/include/access/fileam.h ---
@@ -63,6 +63,17 @@ typedef struct ExternalInsertDescData
 
 typedef ExternalInsertDescData *ExternalInsertDesc;
 
+/*
+ * ExternalSelectDescData is used for storing state related
+ * to selecting data from an external table.
+ */
+typedef struct ExternalSelectDescData
--- End diff --

Why do we introduce new struct if we can just use ProjectionInfo?


> Send Projection Info Data from HAWQ to PXF
> --
>
> Key: HAWQ-927
> URL: https://issues.apache.org/jira/browse/HAWQ-927
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: External Tables, PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
> Fix For: backlog
>
>
> To achieve column projection at the level of PXF or the underlying readers we 
> need to first send this data as a Header/Param to PXF. Currently, PXF has no 
> knowledge whether a query requires all columns or a subset of columns.





[GitHub] incubator-hawq pull request #796: HAWQ-927. Pass ProjectionInfo data to PXF

2016-07-14 Thread kavinderd
GitHub user kavinderd opened a pull request:

https://github.com/apache/incubator-hawq/pull/796

HAWQ-927. Pass ProjectionInfo data to PXF

This commit makes the necessary modifications to the HAWQ side of
the codebase to add a list of indices of projected columns

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kavinderd/incubator-hawq HAWQ-927

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/796.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #796


commit 68c6dd7ea6487b8362627fd0a4aa8c25a355abd3
Author: Kavinder Dhaliwal 
Date:   2016-07-12T01:20:16Z

HAWQ-927. Pass ProjectionInfo data to PXF

This commit makes the necessary modifications to the HAWQ side of
the codebase to add a list of indices of projected columns






[jira] [Created] (HAWQ-927) Send Projection Info Data from HAWQ to PXF

2016-07-14 Thread Kavinder Dhaliwal (JIRA)
Kavinder Dhaliwal created HAWQ-927:
--

 Summary: Send Projection Info Data from HAWQ to PXF
 Key: HAWQ-927
 URL: https://issues.apache.org/jira/browse/HAWQ-927
 Project: Apache HAWQ
  Issue Type: Sub-task
  Components: External Tables
Reporter: Kavinder Dhaliwal
Assignee: Lei Chang


To achieve column projection at the level of PXF or the underlying readers we 
need to first send this data as a Header/Param to PXF. Currently, PXF has no 
knowledge whether a query requires all columns or a subset of columns.





[jira] [Assigned] (HAWQ-927) Send Projection Info Data from HAWQ to PXF

2016-07-14 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal reassigned HAWQ-927:
--

Assignee: Kavinder Dhaliwal  (was: Lei Chang)

> Send Projection Info Data from HAWQ to PXF
> --
>
> Key: HAWQ-927
> URL: https://issues.apache.org/jira/browse/HAWQ-927
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: External Tables
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
> Fix For: backlog
>
>
> To achieve column projection at the level of PXF or the underlying readers we 
> need to first send this data as a Header/Param to PXF. Currently, PXF has no 
> knowledge whether a query requires all columns or a subset of columns.





[jira] [Assigned] (HAWQ-583) Extend PXF to allow plugins to support returning partial content of SELECT(column projection)

2016-07-14 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal reassigned HAWQ-583:
--

Assignee: Kavinder Dhaliwal  (was: Shivram Mani)

> Extend PXF to allow plugins to support returning partial content of 
> SELECT(column projection)
> -
>
> Key: HAWQ-583
> URL: https://issues.apache.org/jira/browse/HAWQ-583
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Michael Andre Pearce (IG)
>Assignee: Kavinder Dhaliwal
> Fix For: backlog
>
>
> Currently PXF supports pushing down the predicate WHERE logic to the 
> external system, to reduce the amount of data that needs to be retrieved.
> SELECT a, b FROM external_pxf_source WHERE z < 3 AND x > 6
> As such we can filter the rows returned, but we currently still have to 
> return all the fields / the complete row.
> This proposal is to return only the columns in the SELECT part.
> For data sources that use columnar storage, or are selectable (such as a 
> remote database that PXF can read or connect to), this has advantages in the 
> data that needs to be accessed or even transferred.
> As with the push-down filter, it should be optional, so that plugins that 
> provide support can use it while others that do not continue to work as they 
> do.
> The proposal would be to:
> 1) create an interface for plugins to optionally implement, through which the 
> columns that need to be returned are given to the plugin.
> 2) update the PXF API for HAWQ to send the columns defined in the SELECT, and 
> for PXF to invoke the plugin interface and pass this information on if 
> provided.
> 3) update the PXF integration within HAWQ itself so that HAWQ passes this 
> additional information to PXF.
> This ticket is off the back of the discussion on HAWQ-492.





[jira] [Closed] (HAWQ-765) Remove CTranslatorDXLToQuery Deadcode

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao closed HAWQ-765.
--
   Resolution: Fixed
Fix Version/s: (was: backlog)
   2.0.0.0-incubating

> Remove CTranslatorDXLToQuery Deadcode
> -
>
> Key: HAWQ-765
> URL: https://issues.apache.org/jira/browse/HAWQ-765
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Optimizer
>Reporter: Haisheng Yuan
>Assignee: Amr El-Helw
> Fix For: 2.0.0.0-incubating
>
>
> When we did not have optimization modules in Orca and just DXL, we used 
> CTranslatorDXLToQuery to test correctness of translation of Query to DXL.





[jira] [Closed] (HAWQ-764) Remove CTranslatorPlStmtToDXL Deadcode

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao closed HAWQ-764.
--
   Resolution: Fixed
Fix Version/s: 2.0.0.0-incubating

> Remove CTranslatorPlStmtToDXL Deadcode
> --
>
> Key: HAWQ-764
> URL: https://issues.apache.org/jira/browse/HAWQ-764
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Optimizer
>Reporter: Haisheng Yuan
>Assignee: Lei Chang
> Fix For: 2.0.0.0-incubating
>
>
> When we did not have optimization modules in Orca and just DXL, we used 
> CTranslatorPlStmtToDXL to test correctness of translation of DXL to PlStmt.





[jira] [Updated] (HAWQ-829) Register Hive generated parquet file into HAWQ

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-829:
---
Fix Version/s: backlog

> Register Hive generated parquet file into HAWQ
> --
>
> Key: HAWQ-829
> URL: https://issues.apache.org/jira/browse/HAWQ-829
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Storage
>Reporter: Lili Ma
>Assignee: Lei Chang
> Fix For: backlog
>
> Attachments: HAWQTypeMappingtoParquetType.pdf
>
>
> As a user, I can register the parquet files generated by Hive to HAWQ, so 
> that I can access Hive-generated files from HAWQ directly.





[jira] [Updated] (HAWQ-622) fix libhdfs3 readme

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-622:
---
Priority: Blocker  (was: Major)

> fix libhdfs3 readme
> ---
>
> Key: HAWQ-622
> URL: https://issues.apache.org/jira/browse/HAWQ-622
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: libhdfs
>Reporter: Lei Chang
>Assignee: Lei Chang
>Priority: Blocker
> Fix For: 2.0.0.0-incubating
>
>
> ==
> Libhdfs3 is developed by Pivotal and used in HAWQ, which
> is a massive parallel database engine in Pivotal Hadoop
> Distribution.
> ==
> https://github.com/apache/incubator-hawq/blob/bc0904ab02bb3e8c3e3596ce139b3ea6b52e2685/depends/libhdfs3/README.md





[jira] [Resolved] (HAWQ-790) Remove CTranslatorPlStmtToDXL Deadcode

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao resolved HAWQ-790.

Resolution: Duplicate

> Remove CTranslatorPlStmtToDXL Deadcode
> --
>
> Key: HAWQ-790
> URL: https://issues.apache.org/jira/browse/HAWQ-790
> Project: Apache HAWQ
>  Issue Type: Improvement
>Reporter: Ivan Weng
>Assignee: Lei Chang
> Fix For: backlog
>
>






[jira] [Updated] (HAWQ-790) Remove CTranslatorPlStmtToDXL Deadcode

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-790:
---
Fix Version/s: (was: backlog)
   2.0.0.0-incubating

> Remove CTranslatorPlStmtToDXL Deadcode
> --
>
> Key: HAWQ-790
> URL: https://issues.apache.org/jira/browse/HAWQ-790
> Project: Apache HAWQ
>  Issue Type: Improvement
>Reporter: Ivan Weng
>Assignee: Lei Chang
> Fix For: 2.0.0.0-incubating
>
>






[jira] [Updated] (HAWQ-794) Add back snappy to related system tables in the future

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-794:
---
Fix Version/s: backlog

> Add back snappy to related system tables in the future
> --
>
> Key: HAWQ-794
> URL: https://issues.apache.org/jira/browse/HAWQ-794
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Storage
>Reporter: Paul Guo
>Assignee: Lei Chang
> Fix For: backlog
>
>
> See HAWQ-793 for the context.





[jira] [Updated] (HAWQ-823) Amazon S3 External Table Support

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-823:
---
Fix Version/s: backlog

> Amazon S3 External Table Support
> 
>
> Key: HAWQ-823
> URL: https://issues.apache.org/jira/browse/HAWQ-823
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: External Tables
>Reporter: Kyle R Dunn
>Assignee: Lei Chang
> Fix For: backlog
>
>
> As a cloud user, I'd like to be able to create readable external tables with 
> data in Amazon S3.





[jira] [Commented] (HAWQ-788) Explicitly initialize GPOPT and its dependencies

2016-07-14 Thread Goden Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378502#comment-15378502
 ] 

Goden Yao commented on HAWQ-788:


[~ivan_wang] can you add more details in the description?

> Explicitly initialize GPOPT and its dependencies
> 
>
> Key: HAWQ-788
> URL: https://issues.apache.org/jira/browse/HAWQ-788
> Project: Apache HAWQ
>  Issue Type: Improvement
>Reporter: Ivan Weng
>Assignee: Lei Chang
> Fix For: backlog
>
>






[jira] [Updated] (HAWQ-790) Remove CTranslatorPlStmtToDXL Deadcode

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-790:
---
Fix Version/s: backlog

> Remove CTranslatorPlStmtToDXL Deadcode
> --
>
> Key: HAWQ-790
> URL: https://issues.apache.org/jira/browse/HAWQ-790
> Project: Apache HAWQ
>  Issue Type: Improvement
>Reporter: Ivan Weng
>Assignee: Lei Chang
> Fix For: backlog
>
>






[jira] [Updated] (HAWQ-788) Explicitly initialize GPOPT and its dependencies

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-788:
---
Fix Version/s: backlog

> Explicitly initialize GPOPT and its dependencies
> 
>
> Key: HAWQ-788
> URL: https://issues.apache.org/jira/browse/HAWQ-788
> Project: Apache HAWQ
>  Issue Type: Improvement
>Reporter: Ivan Weng
>Assignee: Lei Chang
> Fix For: backlog
>
>






[jira] [Updated] (HAWQ-830) Wrong result in CTE query due to CTE is treated as init plan by planner and evaluated multiple times

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-830:
---
Fix Version/s: backlog

> Wrong result in CTE query due to CTE is treated as init plan by planner and 
> evaluated multiple times
> 
>
> Key: HAWQ-830
> URL: https://issues.apache.org/jira/browse/HAWQ-830
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Optimizer
>Affects Versions: 2.0.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: backlog
>
>
> In a CTE query, if the CTE itself is referenced multiple times, it should be 
> evaluated only once and then used multiple times. However, it is treated as 
> an init plan and evaluated multiple times in HAWQ 1.x and 2.0. This has two 
> issues:
> 1. If the query in the CTE is "volatile" (i.e., selects a volatile function) 
> or has side effects (creates/drops objects in the database), it may generate 
> wrong results.
> 2. The performance of the query is not efficient, since the query in the CTE 
> is evaluated multiple times.
> Here are the steps to reproduce:
> 1) in hawq, CTE is treated as init plan and evaluated 2 times. Thus, the 
> result is incorrect
> {noformat}
> WITH r AS (SELECT random())
> SELECT r1.*, r2.*
> FROM r AS r1, r AS r2;
>   random   |  random
> ---+---
>  0.519145511090755 | 0.751198637764901
> (1 row)
> EXPLAIN
> WITH r AS (SELECT random())
> SELECT r1.*, r2.*
> FROM r AS r1, r AS r2;
>   QUERY PLAN
> --
>  Nested Loop  (cost=0.04..0.77 rows=20 width=16)
>->  Result  (cost=0.01..0.02 rows=1 width=0)
>  InitPlan
>->  Result  (cost=0.00..0.01 rows=1 width=0)
>->  Materialize  (cost=0.03..0.09 rows=6 width=8)
>  ->  Result  (cost=0.01..0.02 rows=1 width=0)
>InitPlan
>  ->  Result  (cost=0.00..0.01 rows=1 width=0)
>  Settings:  default_hash_table_bucket_number=6
>  Optimizer status: legacy query optimizer
> (10 rows)
> {noformat}
> 2) in postgres, the CTE is treated as a CTE scan and evaluated 1 time. Thus, 
> the result is correct
> {noformat}
> WITH r AS (SELECT random())
> SELECT r1.*, r2.*
> FROM r AS r1, r AS r2;
>   random   |  random
> ---+---
>  0.989214501809329 | 0.989214501809329
> (1 row)
> EXPLAIN
> WITH r AS (SELECT random())
> SELECT r1.*, r2.*
> FROM r AS r1, r AS r2;
> QUERY PLAN
> --
>  Nested Loop  (cost=0.01..0.06 rows=1 width=16)
>CTE r
>  ->  Result  (cost=0.00..0.01 rows=1 width=0)
>->  CTE Scan on r r1  (cost=0.00..0.02 rows=1 width=8)
>->  CTE Scan on r r2  (cost=0.00..0.02 rows=1 width=8)
> (5 rows){noformat}





[jira] [Updated] (HAWQ-840) Partition Sort Support

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-840:
---
Fix Version/s: backlog

> Partition Sort Support
> --
>
> Key: HAWQ-840
> URL: https://issues.apache.org/jira/browse/HAWQ-840
> Project: Apache HAWQ
>  Issue Type: Wish
>Reporter: Lei Chang
>Assignee: Lei Chang
> Fix For: backlog
>
>
> Support the partition sort as a new method for sorting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-843) HAWQ 2.0 new error handling mechanism implementation

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-843:
---
Fix Version/s: backlog

> HAWQ 2.0 new error handling mechanism implementation
> 
>
> Key: HAWQ-843
> URL: https://issues.apache.org/jira/browse/HAWQ-843
> Project: Apache HAWQ
>  Issue Type: Wish
>Reporter: Lili Ma
>Assignee: Lei Chang
> Fix For: backlog
>
>
> As a HAWQ user, I want other QEs of the same query still keep alive when one 
> QE fails, so that I can reuse the alive QEs to execute the following queries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HAWQ-844) Please remove your private branch from apache hawq project on github

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao resolved HAWQ-844.

Resolution: Fixed

Resolving this one, as cleaning up private branches is an ongoing effort and 
should be discussed on the mailing list.

> Please remove your private branch from apache hawq project on github
> 
>
> Key: HAWQ-844
> URL: https://issues.apache.org/jira/browse/HAWQ-844
> Project: Apache HAWQ
>  Issue Type: Task
>Reporter: Ming LI
>Assignee: Lei Chang
> Fix For: backlog
>
>
> We are planning the new release of Apache HAWQ, so it is better to keep the 
> git branch list clean. However, there are a lot of private branches on 
> https://github.com/apache/incubator-hawq/branches, which is confusing for 
> users and developers.
> Could you please remove those private branches from the public hawq repo?  I 
> don't know who has the privilege to remove them; perhaps the branch owner does.
> FYI, for all developers who want to check in code changes, the better workflow 
> is as below: 
> 1) Log in to your github account and clone the hawq repo to your private repo. 
> 2) Create a git branch locally and commit your code changes in this new branch.
> 3) Push this new branch to your private repo on github.
> 4) Go to https://github.com/apache/incubator-hawq to create a pull request.
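The local part of the workflow above (create a topic branch, commit on it) can be sketched as follows. This is a throwaway demo under `/tmp` with placeholder names (`HAWQ-844-demo-fix`, the commit messages), not the actual project repo; the clone/push/PR steps need your own fork on github.

```shell
set -e
# Throwaway local repo standing in for your clone of the hawq repo.
rm -rf /tmp/hawq_branch_demo
git init -q /tmp/hawq_branch_demo
cd /tmp/hawq_branch_demo
git config user.email dev@example.com
git config user.name dev
echo base > README && git add README && git commit -qm "initial commit"
# Step 2) create a topic branch locally and commit the change on it,
# rather than committing on the default branch.
git checkout -qb HAWQ-844-demo-fix
echo change >> README && git commit -qam "HAWQ-844. Demo change on topic branch"
git rev-parse --abbrev-ref HEAD   # prints the topic branch name
```

From here, `git push origin HAWQ-844-demo-fix` against your fork is what step 3) describes, and the pull request in step 4) is opened from that branch.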



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-844) Please remove your private branch from apache hawq project on github

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-844:
---
Fix Version/s: backlog

> Please remove your private branch from apache hawq project on github
> 
>
> Key: HAWQ-844
> URL: https://issues.apache.org/jira/browse/HAWQ-844
> Project: Apache HAWQ
>  Issue Type: Task
>Reporter: Ming LI
>Assignee: Lei Chang
> Fix For: backlog
>
>
> We are planning the new release on apache hawq, it is better to keep the git 
> branch list clean, however there are a lot private branch on 
> https://github.com/apache/incubator-hawq/branches, which make user/developer 
> confusing. 
> Could you please remove those private branch from the public hawq repo?  I 
> don't know who have privilege to remove it, maybe the branch owner can have.
> FYI, All developer who want to checkin your code change, the better way is as 
> below: 
> 1) Login your github account, and clone hawq repo to your private repo. 
> 2) Create a git branch locally, commit your code changes in this new branch
> 3) Push this new branch to your private repo on github
> 4) Go to  https://github.com/apache/incubator-hawq to create pull request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-845) Parameter kerberos principal name for HAWQ

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-845:
---
Description: 
Currently HAWQ only accepts the principal 'postgres' for kerberos settings.
This is because it is hardcoded in gpcheckhdfs; we should ensure that it can be 
parameterized.

Also, it's better to change the default principal name from postgres to 
gpadmin. This avoids the need to change the hdfs directory to postgres while 
securing the cluster, and avoids the need to maintain a postgres user. 

  was:
Currently HAWQ only accept the principle 'postgres' for kerberos settings.
This is because there its hardcoded in gpcheckhdfs, we should ensure that it 
can be parameterized.

Also, its better to change the default principal name postgres to gpadmin. It 
will avoid the need of changing the the hdfs directory during securing the 
cluster to postgres and will avoid the need of maintaing postgres user. 


> Parameter kerberos principal name for HAWQ
> --
>
> Key: HAWQ-845
> URL: https://issues.apache.org/jira/browse/HAWQ-845
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: bhuvnesh chaudhary
>Assignee: Lei Chang
> Fix For: backlog
>
>
> Currently HAWQ only accepts the principal 'postgres' for kerberos settings.
> This is because it is hardcoded in gpcheckhdfs; we should ensure that it can 
> be parameterized.
> Also, it's better to change the default principal name from postgres to 
> gpadmin. This avoids the need to change the hdfs directory to postgres while 
> securing the cluster, and avoids the need to maintain a postgres user. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-845) Parameterize kerberos principal name for HAWQ

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-845:
---
Priority: Minor  (was: Major)

> Parameterize kerberos principal name for HAWQ
> -
>
> Key: HAWQ-845
> URL: https://issues.apache.org/jira/browse/HAWQ-845
> Project: Apache HAWQ
>  Issue Type: Improvement
>Reporter: bhuvnesh chaudhary
>Assignee: Lei Chang
>Priority: Minor
> Fix For: backlog
>
>
> Currently HAWQ only accepts the principal 'postgres' for kerberos settings.
> This is because it is hardcoded in gpcheckhdfs; we should ensure that it can 
> be parameterized.
> Also, it's better to change the default principal name from postgres to 
> gpadmin. This avoids the need to change the hdfs directory to postgres while 
> securing the cluster, and avoids the need to maintain a postgres user. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-845) Parameterize kerberos principal name for HAWQ

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-845:
---
Issue Type: Improvement  (was: Bug)

> Parameterize kerberos principal name for HAWQ
> -
>
> Key: HAWQ-845
> URL: https://issues.apache.org/jira/browse/HAWQ-845
> Project: Apache HAWQ
>  Issue Type: Improvement
>Reporter: bhuvnesh chaudhary
>Assignee: Lei Chang
> Fix For: backlog
>
>
> Currently HAWQ only accepts the principal 'postgres' for kerberos settings.
> This is because it is hardcoded in gpcheckhdfs; we should ensure that it can 
> be parameterized.
> Also, it's better to change the default principal name from postgres to 
> gpadmin. This avoids the need to change the hdfs directory to postgres while 
> securing the cluster, and avoids the need to maintain a postgres user. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-845) Parameterize kerberos principal name for HAWQ

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-845:
---
Fix Version/s: backlog

> Parameterize kerberos principal name for HAWQ
> -
>
> Key: HAWQ-845
> URL: https://issues.apache.org/jira/browse/HAWQ-845
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: bhuvnesh chaudhary
>Assignee: Lei Chang
> Fix For: backlog
>
>
> Currently HAWQ only accepts the principal 'postgres' for kerberos settings.
> This is because it is hardcoded in gpcheckhdfs; we should ensure that it can 
> be parameterized.
> Also, it's better to change the default principal name from postgres to 
> gpadmin. This avoids the need to change the hdfs directory to postgres while 
> securing the cluster, and avoids the need to maintain a postgres user. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-848) Writable external table: gpfdist report "Failed initialization (url.c:1671)"

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao closed HAWQ-848.
--
   Resolution: Fixed
Fix Version/s: 2.0.0.0-incubating

> Writable external table: gpfdist report "Failed initialization (url.c:1671)"
> 
>
> Key: HAWQ-848
> URL: https://issues.apache.org/jira/browse/HAWQ-848
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: hongwu
>Assignee: hongwu
> Fix For: 2.0.0.0-incubating
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-858) Fix parser to understand case / when expression in group by

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-858:
---
Fix Version/s: backlog

> Fix parser to understand case / when expression in group by
> ---
>
> Key: HAWQ-858
> URL: https://issues.apache.org/jira/browse/HAWQ-858
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Parser
>Reporter: Venkatesh
>Assignee: Lei Chang
> Fix For: backlog
>
>
> [~lei_chang] please port this parser changes into HAWQ
> https://github.com/greenplum-db/gpdb4/commit/30b33a10f4b0a4468a9ed80cf3779fd12f176abf



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-866) Need a switch for the orc support

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-866:
---
Fix Version/s: backlog

> Need a switch for the orc support
> --
>
> Key: HAWQ-866
> URL: https://issues.apache.org/jira/browse/HAWQ-866
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Storage
>Reporter: Paul Guo
>Assignee: Lei Chang
> Fix For: backlog
>
>
> It appears that it would be better to add a configure switch (with-orc?) so 
> that users could determine whether to build it or not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-798) Add orc library compiling inside HAWQ build

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-798:
---
Fix Version/s: backlog

> Add orc library compiling inside HAWQ build
> ---
>
> Key: HAWQ-798
> URL: https://issues.apache.org/jira/browse/HAWQ-798
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Build, Storage
>Reporter: hongwu
>Assignee: hongwu
> Fix For: backlog
>
>
> Compile the orc library as part of the HAWQ build, for use by the fdw code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-864) Support ORC as a native file format

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-864:
---
Fix Version/s: backlog

> Support ORC as a native file format
> ---
>
> Key: HAWQ-864
> URL: https://issues.apache.org/jira/browse/HAWQ-864
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Storage
>Reporter: Lei Chang
>Assignee: Lei Chang
> Fix For: backlog
>
>
> ORC (Optimized Row Columnar) is a very popular open source format adopted by 
> some major components in the Hadoop ecosystem, and it is also used by a lot of 
> users. The advantages of supporting ORC storage in HAWQ are twofold: first, it 
> makes HAWQ more Hadoop-native and lets it interact with other components more 
> easily; second, ORC stores meta info useful for query optimization, so it 
> might potentially outperform the two existing native formats (i.e., AO, 
> Parquet) where it is available.
> The implementation can be based on the framework proposed in HAWQ-786.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-870) Allocate target's tuple table slot in PortalHeapMemory during split partition

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-870:
---
Fix Version/s: backlog

> Allocate target's tuple table slot in PortalHeapMemory during split partition
> -
>
> Key: HAWQ-870
> URL: https://issues.apache.org/jira/browse/HAWQ-870
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Reporter: Venkatesh
>Assignee: Lei Chang
> Fix For: backlog
>
>
> This is a nice fix from the QP team on GPDB. Please port this fix into HAWQ.
> The GPDB commit: 
> https://github.com/greenplum-db/gpdb/commit/c0e1f00c2532d1e2ef8d3b409dc8fee901a7cfe2
> PR: https://github.com/greenplum-db/gpdb/pull/866



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-873) Improve checking time for travis CI

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao closed HAWQ-873.
--
   Resolution: Fixed
Fix Version/s: 2.0.0.0-incubating

> Improve checking time for travis CI
> ---
>
> Key: HAWQ-873
> URL: https://issues.apache.org/jira/browse/HAWQ-873
> Project: Apache HAWQ
>  Issue Type: Improvement
>Reporter: hongwu
>Assignee: hongwu
> Fix For: 2.0.0.0-incubating
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-874) Should modify document about compression method

2016-07-14 Thread Goden Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378484#comment-15378484
 ] 

Goden Yao commented on HAWQ-874:


[~dyozie] can you check if this is completed?

> Should modify document about compression method
> ---
>
> Key: HAWQ-874
> URL: https://issues.apache.org/jira/browse/HAWQ-874
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Documentation
>Reporter: Lili Ma
>Assignee: David Yozie
> Fix For: backlog
>
>
> We have removed quicklz support for row-oriented tables and added snappy 
> support for row-oriented tables.
> We should make the corresponding documentation changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-874) Should modify document about compression method

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-874:
---
Assignee: David Yozie  (was: Lei Chang)

> Should modify document about compression method
> ---
>
> Key: HAWQ-874
> URL: https://issues.apache.org/jira/browse/HAWQ-874
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Documentation
>Reporter: Lili Ma
>Assignee: David Yozie
> Fix For: backlog
>
>
> We have removed quicklz support for row-oriented tables and added snappy 
> support for row-oriented tables.
> We should make the corresponding documentation changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-874) Should modify document about compression method

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-874:
---
Fix Version/s: backlog

> Should modify document about compression method
> ---
>
> Key: HAWQ-874
> URL: https://issues.apache.org/jira/browse/HAWQ-874
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Documentation
>Reporter: Lili Ma
>Assignee: David Yozie
> Fix For: backlog
>
>
> We have removed quicklz support for row-oriented tables and added snappy 
> support for row-oriented tables.
> We should make the corresponding documentation changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-879) Verify the options specified when creating table

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-879:
---
Fix Version/s: 2.0.1.0-incubating

> Verify the options specified when creating table
> 
>
> Key: HAWQ-879
> URL: https://issues.apache.org/jira/browse/HAWQ-879
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Parser, Tests
>Reporter: Lili Ma
>Assignee: Jiali Yao
> Fix For: 2.0.1.0-incubating
>
>
> When creating a table, many options can be specified, including appendonly, 
> orientation, compresstype, compresslevel, pagesize, rowgroupsize, blocksize, 
> etc.  We need to verify all the combinations of the different options and 
> check whether the resulting output is valid.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-880) Output of 'hawq stop --reload' is not correct

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao closed HAWQ-880.
--
   Resolution: Fixed
Fix Version/s: 2.0.0.0-incubating

> Output of 'hawq stop --reload' is not correct
> -
>
> Key: HAWQ-880
> URL: https://issues.apache.org/jira/browse/HAWQ-880
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Radar Lei
>Assignee: Lei Chang
> Fix For: 2.0.0.0-incubating
>
>
> hawq stop has an option to reload configuration changes. However, when you 
> reload, the log information displayed suggests that the cluster is stopped.
> We should log 'reload' instead of 'stop' while running 'hawq stop 
> -u/--reload'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-910) "hawq register": before registration, need to check the consistency between the file and HAWQ table

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-910:
---
Fix Version/s: backlog

> "hawq register": before registration, need to check the consistency between 
> the file and HAWQ table
> 
>
> Key: HAWQ-910
> URL: https://issues.apache.org/jira/browse/HAWQ-910
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Storage
>Reporter: Lili Ma
>Assignee: Lei Chang
> Fix For: backlog
>
>
> As a user,
> I want to be notified during registration when the file being uploaded is not 
> consistent with the table I want to register it to,
> so that I can make the corresponding modifications as early as possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-884) Subquery scan return no tuple in query with CTE

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-884:
---
Fix Version/s: backlog

> Subquery scan return no tuple in query with CTE
> ---
>
> Key: HAWQ-884
> URL: https://issues.apache.org/jira/browse/HAWQ-884
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Affects Versions: 2.0.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: backlog
>
>
> Here is a CTE query that returns no tuples, while it should actually return 1 
> tuple.
> {noformat}
> WITH t1 AS ( SELECT 1 c1, 2 c2 UNION ALL SELECT 3 c1, 4 c2 ), 
>   t2 AS ( SELECT 3 c3 )
> SELECT * FROM t1
> WHERE EXISTS (SELECT * FROM t2 WHERE c3=c1);  
>   
> 
> -- Actual
> c1 | c2
> +
> (0 rows)
> -- Expected
>  c1 | c2
> +
>   3 |  4
> (1 row)
> {noformat}
> The root cause is that during query planning, t2 in the CTE (common table 
> expression) clause is correctly treated as a subquery scan and then 
> materialized.
> However, during query execution, it generates no tuples when t2 is evaluated. 
> Thus, the join of t1 with t2 generates no tuples.
> We can see this in the query execution statistics by running the query with 
> explain analyze and optimizer = off.
> {noformat}
>->  Materialize  (cost=0.00..0.01 rows=1 width=0)
>  Rows out:  0 rows with 0.167 ms to end of 3 scans, start offset by 
> 0.265 ms.
>  ->  Limit  (cost=0.00..0.00 rows=1 width=0)
>Rows out:  0 rows with 0.003 ms to end, start offset by 0.257 
> ms.
>->  Subquery Scan t2  (cost=0.00..0.01 rows=1 width=0)
>  Rows out:  0 rows with 0.002 ms to end, start offset by 
> 0.258 ms.
>  ->  Result  (cost=0.00..0.01 rows=1 width=0)
>One-Time Filter: 3 = $0
>Rows out:  0 rows with 0.001 ms to end, start 
> offset by 0.258 ms.
> {noformat}
> Here are the details:
> 1) hawq 2.0 with optimizer off (planner): the subquery scan generates no tuples
> {noformat}
> show optimizer;
>  optimizer
> ---
>  off
> (1 row)
> WITH t1 AS ( SELECT 1 c1, 2 c2 UNION ALL SELECT 3 c1, 4 c2 ),
>  t2 AS ( SELECT 3 c3 )
> SELECT * FROM t1
> WHERE EXISTS (SELECT * FROM t2 WHERE c3=c1);
>  c1 | c2
> +
> (0 rows)
> EXPLAIN ANALYZE
> WITH t1 AS ( SELECT 1 c1, 2 c2 UNION ALL SELECT 3 c1, 4 c2 ),
>  t2 AS ( SELECT 3 c3 )
> SELECT * FROM t1
> WHERE EXISTS (SELECT * FROM t2 WHERE c3=c1);
> 
> QUERY PLAN
> ---
>  Nested Loop  (cost=0.05..0.29 rows=72 width=8)
>Rows out:  Avg 0.0 rows x 0 workers.  Max/Last(/) 0/0 rows with 
> 0.237/0.237 ms to end.
>->  Limit  (cost=0.00..0.00 rows=1 width=0)
>  Rows out:  Avg 0.0 rows x 0 workers.  Max/Last(/) 0/0 rows with 
> 0.003/0.003 ms to end.
>  ->  Subquery Scan t2  (cost=0.00..0.01 rows=6 width=0)
>Rows out:  Avg 0.0 rows x 0 workers.  Max/Last(/) 0/0 rows 
> with 0.002/0.002 ms to end.
>->  Result  (cost=0.00..0.01 rows=1 width=0)
>  One-Time Filter: 3 = $0
>  Rows out:  Avg 0.0 rows x 0 workers.  Max/Last(/) 0/0 
> rows with 0/0 ms to end.
>->  Materialize  (cost=0.05..0.17 rows=12 width=8)
>  Rows out:  Avg 1.0 rows x 1 workers.  Max/Last(/) 1/1 rows with 
> 0.129/0.129 ms to end, start offset by 0.135/0.135 ms.
>  ->  Append  (cost=0.00..0.04 rows=2 width=0)
>Rows out:  Avg 2.0 rows x 1 workers.  Max/Last(/) 2/2 rows 
> with 0.002/0.002 ms to first row, 0.004/0.004 ms to end, start offset by 
> 0.255/0.255 ms.
>->  Result  (cost=0.00..0.01 rows=1 width=0)
>  Rows out:  Avg 1.0 rows x 1 workers.  Max/Last(/) 1/1 
> rows with 0.002/0.002 ms to end, start offset by 0.255/0.255 ms.
>->  Result  (cost=0.00..0.01 rows=1 width=0)
>  Rows out:  Avg 1.0 rows x 1 workers.  Max/Last(/) 1/1 
> rows with 0/0 ms to end, start offset by 0.261/0.261 ms.
>  Slice statistics:
>(slice0)Executor memory: 61K bytes.
>  Statement statistics:
>Memory used: 128000K bytes
>  Settings:  default_hash_table_bucket_number=6; optimizer=off
>  Optimizer status: legacy query optimizer
>  Data locality statistics:
>no data locality information in this query
>  Total runtime: 0.372 ms
> (26 

[jira] [Commented] (HAWQ-890) .gitignore files generated by python build

2016-07-14 Thread Goden Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378475#comment-15378475
 ] 

Goden Yao commented on HAWQ-890:


Can you clarify what the work in this JIRA is (title and description)? Do you 
want to clean up the generated files, or something else?

> .gitignore files generated by python build
> --
>
> Key: HAWQ-890
> URL: https://issues.apache.org/jira/browse/HAWQ-890
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Lei Chang
> Fix For: backlog
>
>
>   ../../tools/bin/pythonSrc/PSI-0.3b2_gp/build/
>   ../../tools/bin/pythonSrc/PSI-0.3b2_gp/psi/_version.pyc
>   ../../tools/bin/pythonSrc/lockfile-0.9.1/build/
>   ../../tools/bin/pythonSrc/pychecker-0.8.18/build/
>   ../../tools/bin/pythonSrc/pycrypto-2.0.1/build/
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/build/
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/__init__.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/case.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/collector.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/compatibility.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/loader.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/main.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/result.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/runner.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/signals.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/suite.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/util.pyc
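The generated files listed above fall into two patterns, so a few wildcard rules cover them all instead of enumerating each path. The following is a hypothetical sketch of such `.gitignore` entries (written to a temp file here only so it is self-contained; the exact paths in the real repo may differ).

```shell
# Hypothetical .gitignore rules covering the generated files listed above:
# build/ directories from "python setup.py build" plus byte-compiled *.pyc.
cat > /tmp/hawq_gitignore_demo <<'EOF'
# directories produced by the python build
tools/bin/pythonSrc/*/build/
# byte-compiled python modules
*.pyc
EOF
cat /tmp/hawq_gitignore_demo
```

With rules like these in place, `git status` no longer reports the build output as untracked files.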



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-890) .gitignore files generated by python build

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-890:
---
Fix Version/s: backlog

> .gitignore files generated by python build
> --
>
> Key: HAWQ-890
> URL: https://issues.apache.org/jira/browse/HAWQ-890
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Lei Chang
> Fix For: backlog
>
>
>   ../../tools/bin/pythonSrc/PSI-0.3b2_gp/build/
>   ../../tools/bin/pythonSrc/PSI-0.3b2_gp/psi/_version.pyc
>   ../../tools/bin/pythonSrc/lockfile-0.9.1/build/
>   ../../tools/bin/pythonSrc/pychecker-0.8.18/build/
>   ../../tools/bin/pythonSrc/pycrypto-2.0.1/build/
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/build/
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/__init__.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/case.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/collector.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/compatibility.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/loader.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/main.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/result.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/runner.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/signals.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/suite.pyc
>   ../../tools/bin/pythonSrc/unittest2-0.5.1/unittest2/util.pyc



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-883) hawq check "hawq_re_memory_overcommit_max" error

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-883:
---
Fix Version/s: backlog

> hawq check "hawq_re_memory_overcommit_max" error
> 
>
> Key: HAWQ-883
> URL: https://issues.apache.org/jira/browse/HAWQ-883
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: liuguo
>Assignee: Lei Chang
> Fix For: backlog
>
>
> [ERROR]:-host(kmaster): HAWQ master host memory size '3824' is less than the 
> 'hawq_re_memory_overcommit_max' size '8192'
> When I set 'hawq_re_memory_overcommit_max=3000', then I get an error:
> [ERROR]:-host(kmaster): HAWQ master's hawq_re_memory_overcommit_max GUC value 
> is 3000, expected 8192



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-895) Investigate migration to 3-digit Semantic versioning

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-895:
---
Fix Version/s: backlog

> Investigate migration to 3-digit Semantic versioning
> 
>
> Key: HAWQ-895
> URL: https://issues.apache.org/jira/browse/HAWQ-895
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Core
>Reporter: Vineet Goel
>Assignee: Lei Chang
> Fix For: backlog
>
>
> The current HAWQ code is tied to 4-digit versioning, which is related to 
> library compatibility and inherited from old Postgres. We should investigate 
> the impact of switching to 3-digit Semantic Versioning (http://semver.org).
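One practical property of 3-digit versions is that they order cleanly as integer tuples. The snippet below is illustrative only (not HAWQ code); `parse_version` is a hypothetical helper, not an existing HAWQ function.

```python
# Illustrative: 3-digit MAJOR.MINOR.PATCH versions compare as integer tuples.
def parse_version(v):
    """Split 'MAJOR.MINOR.PATCH' into a tuple of ints for ordering."""
    return tuple(int(part) for part in v.split("."))

assert parse_version("2.0.1") > parse_version("2.0.0")
# Numeric comparison, not lexicographic string comparison:
assert parse_version("2.10.0") > parse_version("2.9.9")
```

The same tuple comparison works for 4-digit versions, so any migration impact is in the code and packaging that assume exactly four components, not in the ordering itself.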



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-897) Add feature test for create table distribution with new test framework

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-897:
---
Fix Version/s: 2.0.1.0-incubating

> Add feature test for create table distribution with new test framework
> --
>
> Key: HAWQ-897
> URL: https://issues.apache.org/jira/browse/HAWQ-897
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Lin Wen
>Assignee: Lin Wen
> Fix For: 2.0.1.0-incubating
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-894) Add feature test for polymorphism with new test framework

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-894:
---
Fix Version/s: 2.0.1.0-incubating

> Add feature test for polymorphism with new test framework
> -
>
> Key: HAWQ-894
> URL: https://issues.apache.org/jira/browse/HAWQ-894
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Lin Wen
>Assignee: Yi Jin
> Fix For: 2.0.1.0-incubating
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-896) Add feature test for create table with new test framework

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-896:
---
Fix Version/s: 2.0.1.0-incubating

> Add feature test for create table with new test framework
> -
>
> Key: HAWQ-896
> URL: https://issues.apache.org/jira/browse/HAWQ-896
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Lin Wen
>Assignee: Lin Wen
> Fix For: 2.0.1.0-incubating
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-899) Add feature test for nested null case with new test framework

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-899:
---
Fix Version/s: 2.0.1.0-incubating

> Add feature test for nested null case with new test framework
> -
>
> Key: HAWQ-899
> URL: https://issues.apache.org/jira/browse/HAWQ-899
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Lin Wen
>Assignee: Lin Wen
> Fix For: 2.0.1.0-incubating
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-898) Add feature test for COPY with new test framework

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-898:
---
Fix Version/s: 2.0.1.0-incubating

> Add feature test for COPY with new test framework 
> --
>
> Key: HAWQ-898
> URL: https://issues.apache.org/jira/browse/HAWQ-898
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Lin Wen
>Assignee: Lin Wen
> Fix For: 2.0.1.0-incubating
>
>






[jira] [Updated] (HAWQ-900) Add dependency in PL/R rpm build spec file plr.spec

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-900:
---
Fix Version/s: 2.0.1.0-incubating

> Add dependency in PL/R rpm build spec file plr.spec
> ---
>
> Key: HAWQ-900
> URL: https://issues.apache.org/jira/browse/HAWQ-900
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Lei Chang
> Fix For: 2.0.1.0-incubating
>
>
> Building PL/R depends on R-devel, while using PL/R depends on R. In theory 
> they also depend on HAWQ, but since a HAWQ rpm does not seem to be mandatory 
> for a HAWQ installation, the dependencies could be limited to the R packages.
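The description above maps to two rpm dependency tags. A minimal sketch of what plr.spec could declare, assuming the package names (R-devel, R) from the description; the fragment is illustrative, not the actual spec file:

```
# Illustrative fragment only -- the real plr.spec may differ.
BuildRequires: R-devel    # building PL/R needs the R headers and libraries
Requires:      R          # running PL/R needs only the R runtime
```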





[jira] [Updated] (HAWQ-908) Add feature test for goh_toast with new test framework

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-908:
---
Fix Version/s: 2.0.1.0-incubating

> Add feature test for goh_toast with new test framework
> --
>
> Key: HAWQ-908
> URL: https://issues.apache.org/jira/browse/HAWQ-908
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Ivan Weng
>Assignee: Ivan Weng
> Fix For: 2.0.1.0-incubating
>
>






[jira] [Updated] (HAWQ-904) CLI help output for hawq config is different depending on which help option is used

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-904:
---
Fix Version/s: backlog

> CLI help output for hawq config is different depending on which help option 
> is used
> ---
>
> Key: HAWQ-904
> URL: https://issues.apache.org/jira/browse/HAWQ-904
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Severine Tymon
>Assignee: Radar Lei
>Priority: Minor
> Fix For: backlog
>
>
> hawq config and hawq config --help output the following:
> [gpadmin@centos7-namenode hawq]$ hawq --version
> HAWQ version 2.0.1.0 build dev
> [gpadmin@centos7-namenode hawq]$ hawq config
> usage: hawq config [--options]
> The "options" are:
>-c --change Changes a configuration parameter setting.
>-s --show   Shows the value for a specified configuration 
> parameter.
>-l --list   Lists all configuration parameters.
>-q --quiet  Run in quiet mode.
>-v --verboseDisplays detailed status.
>-r --remove HAWQ GUC name to be removed.
>--skipvalidationSkip the system validation checks.
>--ignore-bad-hosts  Skips copying configuration files on host on which SSH 
> fails
> See 'hawq --help' for more information on other commands.
> [gpadmin@centos7-namenode hawq]$ hawq config --help
> usage: hawq config [--options]
> The "options" are:
>-c --change Changes a configuration parameter setting.
>-s --show   Shows the value for a specified configuration 
> parameter.
>-l --list   Lists all configuration parameters.
>-q --quiet  Run in quiet mode.
>-v --verboseDisplays detailed status.
>-r --remove HAWQ GUC name to be removed.
>--skipvalidationSkip the system validation checks.
>--ignore-bad-hosts  Skips copying configuration files on host on which SSH 
> fails
> See 'hawq --help' for more information on other commands.
> **while hawq config -h outputs the following:
> [gpadmin@centos7-namenode hawq]$ hawq config -h
> Usage: HAWQ config options.
> Options:
>   -h, --helpshow this help message and exit
>   -c CHANGE, --change=CHANGE
> Change HAWQ Property.
>   -r REMOVE, --remove=REMOVE
> Remove HAWQ Property.
>   -s SHOW, --show=SHOW  Change HAWQ Property name.
>   -l, --listList all HAWQ Properties.
>   --skipvalidation  
>   --ignore-bad-hostsSkips copying configuration files on host on which SSH
> fails
>   -q, --quiet   
>   -v PROPERTY_VALUE, --value=PROPERTY_VALUE
> Set HAWQ Property value.
>   -d HAWQ_HOME  HAWQ home directory.
> The latter (hawq config -h) seems more up-to-date. In particular, the first 
> output contains errors (-v should be used to supply the value of a changed 
> parameter, not to switch to verbose mode). There are some minor issues in 
> the latter output too, though: the `CHANGE`, `REMOVE`, and `SHOW` 
> placeholders should be replaced with  or HAWQ_PROPERTY
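The mismatch above is typical when usage text is maintained by hand while `-h` is generated by the option parser. A minimal sketch of the idea, assuming the hawq CLI builds its parser with Python's optparse (the option names mirror the `-h` output above; the code itself is illustrative, not HAWQ's):

```python
# Illustrative sketch: define options once and derive help from them, so the
# -v/--value description can never drift into meaning "verbose".
from optparse import OptionParser

parser = OptionParser(usage="HAWQ config options.")
parser.add_option("-c", "--change", dest="change", help="Change HAWQ Property.")
parser.add_option("-r", "--remove", dest="remove", help="Remove HAWQ Property.")
parser.add_option("-l", "--list", action="store_true",
                  help="List all HAWQ Properties.")
parser.add_option("-v", "--value", dest="property_value",
                  help="Set HAWQ Property value.")  # value, not verbose

# `hawq config -h` prints text generated from the definitions above; a
# separate hand-written usage string (the `--help` path) can go stale.
help_text = parser.format_help()
```

Making the `--help` path print this same generated text would remove the inconsistency.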





[jira] [Updated] (HAWQ-906) Add feature test for validator function with new test framework

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-906:
---
Fix Version/s: 2.0.1.0-incubating

> Add feature test for validator function with new test framework
> ---
>
> Key: HAWQ-906
> URL: https://issues.apache.org/jira/browse/HAWQ-906
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Ivan Weng
>Assignee: Ivan Weng
> Fix For: 2.0.1.0-incubating
>
>






[jira] [Updated] (HAWQ-907) Add feature test for caqlinmem with new test framework

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-907:
---
Fix Version/s: 2.0.1.0-incubating

> Add feature test for caqlinmem with new test framework
> --
>
> Key: HAWQ-907
> URL: https://issues.apache.org/jira/browse/HAWQ-907
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Ivan Weng
>Assignee: Ivan Weng
> Fix For: 2.0.1.0-incubating
>
>






[jira] [Updated] (HAWQ-905) Add feature test for temp table with new test framework

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-905:
---
Fix Version/s: 2.0.1.0-incubating

> Add feature test for temp table with new test framework
> ---
>
> Key: HAWQ-905
> URL: https://issues.apache.org/jira/browse/HAWQ-905
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Ivan Weng
>Assignee: Ivan Weng
> Fix For: 2.0.1.0-incubating
>
>






[jira] [Updated] (HAWQ-909) Add feature test for goh_database with new test framework

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-909:
---
Fix Version/s: 2.0.1.0-incubating

> Add feature test for goh_database with new test framework
> -
>
> Key: HAWQ-909
> URL: https://issues.apache.org/jira/browse/HAWQ-909
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Ivan Weng
>Assignee: Ivan Weng
> Fix For: 2.0.1.0-incubating
>
>






[jira] [Closed] (HAWQ-678) Resource manager should close connection with QD when QD is cancelled and try to return resource to clean up all registered resource context

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao closed HAWQ-678.
--
   Resolution: Fixed
Fix Version/s: 2.0.0.0-incubating

> Resource manager should close connection with QD when QD is cancelled and try 
> to return resource to clean up all registered resource context
> 
>
> Key: HAWQ-678
> URL: https://issues.apache.org/jira/browse/HAWQ-678
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Resource Manager
>Reporter: Yi Jin
>Assignee: Lei Chang
> Fix For: 2.0.0.0-incubating
>
>






[jira] [Updated] (HAWQ-831) Re-implementation for some data types of Parquet in HAWQ

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-831:
---
Fix Version/s: backlog

> Re-implementation for some data types of Parquet in HAWQ
> 
>
> Key: HAWQ-831
> URL: https://issues.apache.org/jira/browse/HAWQ-831
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Storage
>Reporter: Lili Ma
>Assignee: Lei Chang
> Fix For: backlog
>
>
> Re-implement some HAWQ data type mapping to Parquet, so that HAWQ can be 
> compatible with Hive.
> 1. Currently HAWQ converts the decimal data type to a byte array; we can 
> change this to a fixed-length byte array so that HAWQ is compatible with 
> Hive.
> 2. HAWQ can convert the char data type to int32 instead of a byte array, so 
> that storage space for char can be saved.
> 3. To be compatible with Hive, HAWQ could change its type mapping for 
> decimal to int96, but this needs discussion.
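For point 1, the Parquet format stores DECIMAL as the unscaled integer value in big-endian two's-complement, and Hive reads the fixed-length form. A sketch of that encoding (illustrative Python, not HAWQ's C code; the precision-to-length rule follows the Parquet spec):

```python
# Encode a decimal as a Parquet FIXED_LEN_BYTE_ARRAY: the unscaled value in
# big-endian two's-complement, with the array length derived from precision.
from decimal import Decimal
import math

def fixed_len_for_precision(precision):
    # Smallest n such that n bytes can hold any signed p-digit unscaled value.
    n = 1
    while math.pow(2.0, 8 * n - 1) < math.pow(10.0, precision):
        n += 1
    return n

def encode_decimal(value, precision, scale):
    unscaled = int(Decimal(value).scaleb(scale))  # "12.34", scale 2 -> 1234
    n = fixed_len_for_precision(precision)
    return unscaled.to_bytes(n, byteorder="big", signed=True)

# decimal(5,2): any 5-digit unscaled value fits in 3 bytes.
encoded = encode_decimal("12.34", precision=5, scale=2)
```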





[jira] [Updated] (HAWQ-917) Refactor feature tests for data type check with new googletest framework

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-917:
---
Fix Version/s: 2.0.1.0-incubating

> Refactor feature tests for data type check with new googletest framework
> 
>
> Key: HAWQ-917
> URL: https://issues.apache.org/jira/browse/HAWQ-917
> Project: Apache HAWQ
>  Issue Type: Sub-task
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.0.1.0-incubating
>
>
> This needs to refactor the following 15 test cases.
> ../../../regress/sql/boolean.sql
> ../../../regress/sql/char.sql
> ../../../regress/sql/date.sql
> ../../../regress/sql/float4.sql
> ../../../regress/sql/float8.sql
> ../../../regress/sql/int2.sql
> ../../../regress/sql/int4.sql
> ../../../regress/sql/int8.sql
> ../../../regress/sql/money.sql
> ../../../regress/sql/name.sql
> ../../../regress/sql/oid.sql
> ../../../regress/sql/text.sql
> ../../../regress/sql/time.sql
> ../../../regress/sql/type_sanity.sql
> ../../../regress/sql/varchar.sql





[jira] [Updated] (HAWQ-916) Replace com.pivotal.hawq package name to org.apache.hawq

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-916:
---
Fix Version/s: 2.0.1.0-incubating

> Replace com.pivotal.hawq package name to org.apache.hawq
> 
>
> Key: HAWQ-916
> URL: https://issues.apache.org/jira/browse/HAWQ-916
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Storage
>Reporter: Lei Chang
>Assignee: Lei Chang
> Fix For: 2.0.1.0-incubating
>
>
> com.pivotal.hawq.mapreduce types are referenced in at least the following 
> apache hawq (incubating) directories, master branch:
> contrib/hawq-hadoop
> contrib/hawq-hadoop/hawq-mapreduce-tool
> contrib/hawq-hadoop/hawq-mapreduce-parquet
> contrib/hawq-hadoop/hawq-mapreduce-common
> contrib/hawq-hadoop/hawq-mapreduce-ao
> contrib/hawq-hadoop/target/apidocs





[jira] [Updated] (HAWQ-924) Refactor feature test for querycontext with new test framework

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-924:
---
Fix Version/s: 2.0.1.0-incubating

> Refactor feature test for querycontext with new test framework
> --
>
> Key: HAWQ-924
> URL: https://issues.apache.org/jira/browse/HAWQ-924
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: zhenglin tao
>Assignee: zhenglin tao
> Fix For: 2.0.1.0-incubating
>
>
> On the code side, QueryContextDispatchingSizeMemoryLimit is disabled, so 
> there is no need to test it anymore.





[jira] [Updated] (HAWQ-762) Hive aggregation queries through PXF sometimes hang

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-762:
---
Fix Version/s: backlog

> Hive aggregation queries through PXF sometimes hang
> ---
>
> Key: HAWQ-762
> URL: https://issues.apache.org/jira/browse/HAWQ-762
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Hcatalog, PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Oleksandr Diachenko
>Assignee: Goden Yao
>  Labels: performance
> Fix For: backlog
>
>
> Reproduce Steps:
> {code}
> select count(*) from hcatalog.default.hivetable;
> {code}
> Sometimes this query hangs, and the PXF logs show that the Hive thrift 
> server cannot be reached from the PXF agent, while users can still access 
> the Hive metastore (through HUE) and execute the same query.
> After a restart of the PXF agent, the query goes through without issues.





[jira] [Updated] (HAWQ-761) Provide brew packaging for HAWQ

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-761:
---
Fix Version/s: backlog

> Provide brew packaging for HAWQ
> ---
>
> Key: HAWQ-761
> URL: https://issues.apache.org/jira/browse/HAWQ-761
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Build
>Reporter: Roman Shaposhnik
>Assignee: Lei Chang
> Fix For: backlog
>
>
> Now that HAWQ is getting really close to rolling its first official Incubator 
> release it would be great to provide brew packaging for it so that more folks 
> can take it for a spin.
> Here's what it takes to add a formula to brew (ask on this JIRA if you have 
> further questions):
>  
> https://github.com/Homebrew/brew/blob/master/share/doc/homebrew/Formula-Cookbook.md
> I propose that for the brew packaging HAWQ is configured to use local 
> filesystem as HDFS (more correctly HCFS) and runs without YARN in a 
> standalone mode.





[jira] [Updated] (HAWQ-762) Hive aggregation queries through PXF sometimes hang

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-762:
---
Affects Version/s: 2.0.0.0-incubating

> Hive aggregation queries through PXF sometimes hang
> ---
>
> Key: HAWQ-762
> URL: https://issues.apache.org/jira/browse/HAWQ-762
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Hcatalog, PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Oleksandr Diachenko
>Assignee: Goden Yao
>  Labels: performance
> Fix For: backlog
>
>
> Reproduce Steps:
> {code}
> select count(*) from hcatalog.default.hivetable;
> {code}
> Sometimes this query hangs, and the PXF logs show that the Hive thrift 
> server cannot be reached from the PXF agent, while users can still access 
> the Hive metastore (through HUE) and execute the same query.
> After a restart of the PXF agent, the query goes through without issues.





[jira] [Closed] (HAWQ-760) Hawq register

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao closed HAWQ-760.
--
   Resolution: Fixed
Fix Version/s: 2.0.0.0-incubating

> Hawq register
> -
>
> Key: HAWQ-760
> URL: https://issues.apache.org/jira/browse/HAWQ-760
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Command Line Tools
>Reporter: Yangcheng Luo
>Assignee: Lili Ma
> Fix For: 2.0.0.0-incubating
>
>
> Users sometimes want to register data files generated by other systems, 
> such as Hive, into HAWQ. We should add a register function to support 
> registering files generated by other systems so that users can integrate 
> their external files into HAWQ conveniently.





[jira] [Updated] (HAWQ-860) Optimizer generates wrong plan when correlated subquery contains set-returning functions

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-860:
---
Assignee: Haisheng Yuan  (was: Amr El-Helw)

> Optimizer generates wrong plan when correlated subquery contains 
> set-returning functions
> 
>
> Key: HAWQ-860
> URL: https://issues.apache.org/jira/browse/HAWQ-860
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Optimizer
>Reporter: Venkatesh
>Assignee: Haisheng Yuan
> Fix For: 2.0.1.0-incubating
>
>
> Bump ORCA to 1.634 [#119042413]





[jira] [Updated] (HAWQ-860) Optimizer generates wrong plan when correlated subquery contains set-returning functions

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-860:
---
Fix Version/s: 2.0.1.0-incubating

> Optimizer generates wrong plan when correlated subquery contains 
> set-returning functions
> 
>
> Key: HAWQ-860
> URL: https://issues.apache.org/jira/browse/HAWQ-860
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Optimizer
>Reporter: Venkatesh
>Assignee: Amr El-Helw
> Fix For: 2.0.1.0-incubating
>
>
> Bump ORCA to 1.634 [#119042413]





[GitHub] incubator-hawq issue #795: HAWQ-860. Fix ORCA wrong plan when correlated sub...

2016-07-14 Thread hsyuan
Github user hsyuan commented on the issue:

https://github.com/apache/incubator-hawq/pull/795
  
@yaoj2 @wengyanqing @zhangh43 Please take a look.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (HAWQ-860) Optimizer generates wrong plan when correlated subquery contains set-returning functions

2016-07-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378443#comment-15378443
 ] 

ASF GitHub Bot commented on HAWQ-860:
-

Github user hsyuan commented on the issue:

https://github.com/apache/incubator-hawq/pull/795
  
@yaoj2 @wengyanqing @zhangh43 Please take a look.


> Optimizer generates wrong plan when correlated subquery contains 
> set-returning functions
> 
>
> Key: HAWQ-860
> URL: https://issues.apache.org/jira/browse/HAWQ-860
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Optimizer
>Reporter: Venkatesh
>Assignee: Amr El-Helw
>
> Bump ORCA to 1.634 [#119042413]





[jira] [Updated] (HAWQ-925) Set default locale, timezone & datastyle before running sql command/file

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-925:
---
Summary: Set default locale, timezone & datastyle before running sql 
command/file  (was: Set default locale, timezone & datastyle befo.re running 
sql command/file)

> Set default locale, timezone & datastyle before running sql command/file
> 
>
> Key: HAWQ-925
> URL: https://issues.apache.org/jira/browse/HAWQ-925
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Lei Chang
> Fix For: 2.0.0.0-incubating
>
>
> So that sql output could be consistent.
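The idea can be sketched as prepending fixed session settings to every test SQL run, so output formatting no longer depends on the environment. The GUC names are standard PostgreSQL/HAWQ settings; the chosen values and the helper name are illustrative assumptions, not the committed patch:

```python
# Illustrative sketch: fix locale-, timezone- and datestyle-sensitive output
# by prefixing every SQL command/file with deterministic session settings.
PREFIX_GUCS = (
    "SET client_min_messages = WARNING;\n"
    "SET datestyle = 'postgres, MDY';\n"
    "SET timezone = 'US/Pacific';\n"
    "SET lc_messages = 'C';\n"
)

def wrap_sql(command):
    """Return the SQL to execute: defaults first, then the test's command."""
    return PREFIX_GUCS + command

wrapped = wrap_sql("SELECT now();")
```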





[jira] [Closed] (HAWQ-925) Set default locale, timezone & datastyle before running sql command/file

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao closed HAWQ-925.
--
   Resolution: Fixed
Fix Version/s: 2.0.0.0-incubating

> Set default locale, timezone & datastyle before running sql command/file
> 
>
> Key: HAWQ-925
> URL: https://issues.apache.org/jira/browse/HAWQ-925
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Lei Chang
> Fix For: 2.0.0.0-incubating
>
>
> So that sql output could be consistent.





[jira] [Updated] (HAWQ-742) support "YARN label based scheduling" in HAWQ

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-742:
---
Fix Version/s: backlog

> support "YARN label based scheduling" in HAWQ
> -
>
> Key: HAWQ-742
> URL: https://issues.apache.org/jira/browse/HAWQ-742
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Resource Manager
>Reporter: Lei Chang
>Assignee: Yi Jin
> Fix For: backlog
>
>
> Some customers want to run HAWQ in a subset of nodes by using YARN to 
> configure the subset.





[jira] [Updated] (HAWQ-747) ignore-bad-hosts options need to be propagated to the module which sync updated value of output.replace-datanode-on-failure during hawq init

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-747:
---
Fix Version/s: backlog

> ignore-bad-hosts options need to be propagated to the module which sync 
> updated value of output.replace-datanode-on-failure during hawq init
> 
>
> Key: HAWQ-747
> URL: https://issues.apache.org/jira/browse/HAWQ-747
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: bhuvnesh chaudhary
>Assignee: bhuvnesh chaudhary
> Fix For: backlog
>
>
> ignore-bad-hosts options need to be propagated to the module which sync 
> updated value of output.replace-datanode-on-failure





[jira] [Updated] (HAWQ-720) Simplify libyarn interface when passing RM/RM scheduler to it

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-720:
---
Fix Version/s: backlog

> Simplify libyarn interface when passing RM/RM scheduler to it
> -
>
> Key: HAWQ-720
> URL: https://issues.apache.org/jira/browse/HAWQ-720
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: libyarn
>Reporter: Lin Wen
>Assignee: Lin Wen
> Fix For: backlog
>
>
> The current libyarnclient constructor is:
> LibYarnClient(string , string , string , string ,
>   string , string , int32_t 
> amPort,
>   string _tracking_url, int heartbeatInterval);
> The RM host/port and scheduler host/port can be read from yarn-site.xml, so 
> there is no need to pass them to this constructor.
> Also, libyarn can get the RM HA information from yarn-site.xml, so there is 
> no need to maintain it in yarn-client.xml.
> After this improvement, the libyarn interface will be much cleaner.
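The proposed lookup can be sketched as follows (Python for brevity; libyarn itself is C++). The property names are the standard YARN ones; the helper function and the inline yarn-site.xml sample are illustrative:

```python
# Resolve RM and scheduler addresses from yarn-site.xml instead of passing
# them into the LibYarnClient constructor.
import xml.etree.ElementTree as ET

def yarn_property(xml_text, name):
    # Return the <value> of the <property> whose <name> matches, else None.
    root = ET.fromstring(xml_text)
    for prop in root.iter("property"):
        if prop.findtext("name") == name:
            return prop.findtext("value")
    return None

yarn_site = """<configuration>
  <property><name>yarn.resourcemanager.address</name>
    <value>rm-host:8032</value></property>
  <property><name>yarn.resourcemanager.scheduler.address</name>
    <value>rm-host:8030</value></property>
</configuration>"""

rm_host, rm_port = yarn_property(
    yarn_site, "yarn.resourcemanager.address").split(":")
```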





[jira] [Updated] (HAWQ-719) core found when running make checkinstall-good in Mac OS

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-719:
---
Fix Version/s: backlog

> core found when running make checkinstall-good in Mac OS
> 
>
> Key: HAWQ-719
> URL: https://issues.apache.org/jira/browse/HAWQ-719
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Yi Jin
>Assignee: Goden Yao
> Fix For: backlog
>
>
> Core file '/cores/core.77792' (x86_64) was loaded.
> (lldb) bt
> * thread #1: tid = 0x, 0x7fff93739f06 
> libsystem_kernel.dylib`__pthread_kill + 10, stop reason = signal SIGSTOP
>   * frame #0: 0x7fff93739f06 libsystem_kernel.dylib`__pthread_kill + 10
> frame #1: 0x7fff984f34ec libsystem_pthread.dylib`pthread_kill + 90
> frame #2: 0x7fff944d26e7 libsystem_c.dylib`abort + 129
> frame #3: 0x7fff944d285e libsystem_c.dylib`abort_report_np + 181
> frame #4: 0x7fff944f8a14 libsystem_c.dylib`__chk_fail + 48
> frame #5: 0x7fff944f89e4 libsystem_c.dylib`__chk_fail_overflow + 16
> frame #6: 0x7fff944f8f00 libsystem_c.dylib`__sprintf_chk + 199
> frame #7: 0x00010c4eeb6e 
> postgres`external_set_env_vars(extvar=0x7fff537a0880, 
> uri="localhost:51200/", csv='\0', escape=0x, 
> quote=0x, header='\0', scancounter=0) + 926 at fileam.c:2596
> frame #8: 0x00010c4fe629 
> postgres`build_http_header(input=0x7fff537a0968) + 329 at pxfheaders.c:64
> frame #9: 0x00010c4fbce0 
> postgres`get_pxf_item_metadata(profile="Hive", pattern="*", dboid=0) + 256 at 
> hd_work_mgr.c:982
> frame #10: 0x00010c988395 
> postgres`pxf_item_fields_enum_start(profile=0x7fe71c051f40, 
> pattern=0x7fe71c051fa8) + 69 at pxf_functions.c:46
> frame #11: 0x00010c987ebc 
> postgres`pxf_get_item_fields(fcinfo=0x7fff537a1410) + 140 at 
> pxf_functions.c:112
> frame #12: 0x00010c6f2c44 
> postgres`ExecMakeTableFunctionResult(funcexpr=0x7fe71c064630, 
> econtext=0x7fe71c061b68, expectedDesc=0x7fe71c062c10, 
> operatorMemKB=32768) + 1012 at execQual.c:1994
> frame #13: 0x00010c719976 
> postgres`FunctionNext(node=0x7fe71c061708) + 134 at nodeFunctionscan.c:89
> frame #14: 0x00010c6ff2c8 postgres`ExecScan(node=0x7fe71c061708, 
> accessMtd=(postgres`FunctionNext at nodeFunctionscan.c:69)) + 72 at 
> execScan.c:128
> frame #15: 0x00010c7198df 
> postgres`ExecFunctionScan(node=0x7fe71c061708) + 31 at 
> nodeFunctionscan.c:161
> frame #16: 0x00010c6f0b20 
> postgres`ExecProcNode(node=0x7fe71c061708) + 640 at execProcnode.c:947
> frame #17: 0x00010c6e6ccd 
> postgres`ExecutePlan(estate=0x7fe71c061230, planstate=0x7fe71c061708, 
> operation=CMD_SELECT, numberTuples=0, direction=ForwardScanDirection, 
> dest=0x7fe71c0511d0) + 637 at execMain.c:3231
> frame #18: 0x00010c6e672e 
> postgres`ExecutorRun(queryDesc=0x7fe71c060f20, 
> direction=ForwardScanDirection, count=0) + 1054 at execMain.c:1213
> frame #19: 0x00010c8da186 
> postgres`PortalRunSelect(portal=0x7fe71c05ee30, forward='\x01', count=0, 
> dest=0x7fe71c0511d0) + 230 at pquery.c:1731
> frame #20: 0x00010c8d9c91 
> postgres`PortalRun(portal=0x7fe71c05ee30, count=9223372036854775807, 
> isTopLevel='\x01', dest=0x7fe71c0511d0, altdest=0x7fe71c0511d0, 
> completionTag="") + 881 at pquery.c:1553
> frame #21: 0x00010c8d04c5 
> postgres`exec_simple_query(query_string="SELECT * FROM 
> pxf_get_item_fields('Hive', '*');", seqServerHost=0x, 
> seqServerPort=-1) + 2133 at postgres.c:1751
> frame #22: 0x00010c8cea5f postgres`PostgresMain(argc=6, 
> argv=0x7fe71b80dd28, username="yijin") + 7535 at postgres.c:4760
> frame #23: 0x00010c878ab5 
> postgres`BackendRun(port=0x7fe71a6014b0) + 981 at postmaster.c:5889
> frame #24: 0x00010c875f05 
> postgres`BackendStartup(port=0x7fe71a6014b0) + 373 at postmaster.c:5484
> frame #25: 0x00010c8733e0 postgres`ServerLoop + 1248 at 
> postmaster.c:2163
> frame #26: 0x00010c871b13 postgres`PostmasterMain(argc=9, 
> argv=0x7fe71a41d2b0) + 4835 at postmaster.c:1454
> frame #27: 0x00010c77d4cc postgres`main(argc=9, 
> argv=0x7fe71a41d2b0) + 940 at main.c:226
> frame #28: 0x7fff9a20c5ad libdyld.dylib`start + 1
> frame #29: 0x7fff9a20c5ad libdyld.dylib`start + 1
> (lldb)





[jira] [Updated] (HAWQ-714) HAWQ can specify ip address used for a node

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-714:
---
Fix Version/s: backlog

> HAWQ can specify ip address used for a node
> ---
>
> Key: HAWQ-714
> URL: https://issues.apache.org/jira/browse/HAWQ-714
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Fault Tolerance
>Reporter: Lei Chang
>Assignee: Lei Chang
> Fix For: backlog
>
>
> No, you cannot. HAWQ will take all the IP addresses from all the nodes; 
> currently it is required that no two nodes share an identical IP address. It 
> is an area that can be improved, though.
> Thanks
> Lei
> On Wed, May 4, 2016 at 6:00 AM, Gagan Brahmi  wrote:
> Thank you Vineet.
> I figured this problem out in the virtual environment. However, in the
> physical nodes this problem seemed to have caused due to an alias for
> loopback interface. The IP address which was causing the problem was
> 127.0.0.2.
> This brings me to another question. Is there any way we can configure
> HAWQ any specific IP address. In this case can we ask HAWQ to skip
> 127.0.0.2.
> Regards,
> Gagan Brahmi
> On Tue, May 3, 2016 at 11:30 AM, Vineet Goel  wrote:
> > Are you using vagrant or VMs, or are these physical machines?
> >
> > Sometimes, the problem can result from the misleading IP address 
> > configuration of network card in virtual machine. Check if two segments 
> > have the same IP address in eth0.
> > You must specify different IP address of eth0 of different VMs.
> >
> > Thanks
> > -Vineet
> >
> >
> >
> > On May 3, 2016, at 8:30 AM, Gagan Brahmi  wrote:
> >
> > Hi All,
> >
> > I was looking to check if anyone has seen this behavior where segments
> > are not able to communicate with the master in HAWQ 2.0.
> >
> > In a single node setup I don't see any problem with the segment and
> > master communication. The problem seems to be visible if there is two
> > or three machine involved in the hawq cluster.
> >
> > The gp_segment_configuration reports the segment. It is reported up
> > for a few seconds and then it turns it down. If you execute any
> > queries the segment is no longer found in the
> > gp_segment_configuration.
> >
> > Nothing can be found in gp_configuration or gp_configuration_history.
> > hawq state reports "failures at master" for the segments.
> >
> > I found a jira which had pretty much similar behavior (except for core
> > dumps which I haven't seen yet. Since I am not able run any queries.).
> > The jira in question is https://issues.apache.org/jira/browse/HAWQ-323
> >
> > This issue was closed stating duplicate IP. I am trying to understand
> > if that can be the case.
> >
> > There is no firewall between the segments. Nothing is blocking port
> > 4, 5432, 5437 or 5438. Segments start up fine. psql to segments on
> > port 4 works fine as well.
> >
> > There is no error in the segments or master logs or startup logs.
> >
> > I tried to integrate hawq with YARN and also using it's own
> > ResourceManager, but found similar behavior. Tried to set the
> > heartbeat interval to 10 seconds (hawq_rm_segment_heartbeat_interval =
> > 10 in hawq-site.xml) but no change in the behavior.
> >
> > Am I missing anything here? Has anyone found similar behavior before?
> >
> >
> > Regards,
> > Gagan Brahmi
> >





[jira] [Updated] (HAWQ-722) Add Doc around how to use HAWQ via JDBC/ODBC/libpq et al.

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-722:
---
Fix Version/s: backlog

> Add Doc around how to use HAWQ via JDBC/ODBC/libpq et al.
> -
>
> Key: HAWQ-722
> URL: https://issues.apache.org/jira/browse/HAWQ-722
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Documentation
>Reporter: Lei Chang
>Assignee: David Yozie
> Fix For: backlog
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-706) Final steps to bring HAWQ in compliance with Bigtop reqs

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-706:
---
Fix Version/s: backlog

> Final steps to bring HAWQ in compliance with Bigtop reqs
> 
>
> Key: HAWQ-706
> URL: https://issues.apache.org/jira/browse/HAWQ-706
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Affects Versions: 2.0.0.0-incubating
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: backlog
>
>
> This ticket is to list and track the required steps to finally enable the 
> integration of HAWQ into Bigtop.
> All relevant resources are linked below, and here's the overview of the 
> remaining steps and the overall status of the integration work.
> *External dependencies*
> - the biggest issue was and remains the use of libthrift, which isn't 
> packaged, provided, or supported by anyone. Right now, the Bigtop-HAWQ 
> integration branch 
> [uses|https://git-wip-us.apache.org/repos/asf?p=bigtop.git;a=blob_plain;f=bigtop_toolchain/manifests/libhdfs.pp;hb=refs/heads/BIGTOP-2320]
>  my own pre-built version of the library, hosted 
> [here|https://bintray.com/artifact/download/wangzw/deb/dists/trusty/contrib/binary-amd64].
>  However, this is clearly insecure and has to be solved either by HAWQ 
> adding this dependency as a source dependency, or by convincing the Bigtop 
> community that hosting the libthrift library is beneficial for the community 
> at large
> *Packaging*
> - overall, the packaging code is complete and is pushed to the Bigtop branch 
> (see link below). Considering that the work was completed about 5 weeks 
> ago and was aimed at the state of trunk back in March, there might be 
> some minor changes that would require additional tweaks
> - the libhdfs library code (if already included in the HAWQ project) might 
> require additional changes to the packaging code, so that the library can be 
> produced and properly placed in the installation phase
> - Bigtop CI has jobs to create CentOS and Ubuntu packages (linked from the 
> BIGTOP-2320 below)
> *Tests*
> - smoke tests need to be created (as per BIGTOP-2322), but that seems to be a 
> minor undertaking once the rest of the work is finished
> - packaging tests are required to be integrated into Bigtop stack BIGTOP-2324
> *Deployment*
> - deployment code is complete. However, it needs to be extended to properly 
> support cluster roles and to be linked to the main {{site.pp}} recipe
> - because real-life deployment cannot rely on in-house Python wrappers that 
> use passwordless SSH, lifecycle management and the initial bootstrap are done 
> directly by calling into the HAWQ scripts that provide this functionality. It 
> is possible that some of these interfaces were updated in the last 6 weeks, 
> so additional testing will be needed.
> - it should be HAWQ's responsibility to provide a concise way of initializing 
> a master, a segment, and so on without the need for passwordless SSH, which 
> is suboptimal and won't be accepted by the Bigtop community as it breaks the 
> deployment model
> *Toolchain*
> - toolchain code is complete in the Bigtop branch. This will allow building 
> HAWQ in the standard Bigtop container available to the CI and 3rd-party users
> - the toolchain code needs to be rebased on top of the current Bigtop master, 
> and possible conflicts will have to be resolved
> - once the integration is finished, the Bigtop slave images will have to be 
> updated to enable automatic CI runs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-686) Changing HAWQ master port configures HAWQ Standby incorrectly

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-686:
---
Fix Version/s: backlog

> Changing HAWQ master port configures HAWQ Standby incorrectly
> -
>
> Key: HAWQ-686
> URL: https://issues.apache.org/jira/browse/HAWQ-686
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Catalog
>Reporter: Matt
>Assignee: Lei Chang
> Fix For: backlog
>
> Attachments: standby-running-on-10432.png, table-data.png
>
>
> Steps to reproduce:
> - Install HAWQ cluster with hawq_master_address_port set to some value (eg. 
> 5432)
> - Change the hawq_master_address_port to another value (eg 10432)
> - Restart all HAWQ components
> This would lead HAWQ into configuring Standby incorrectly (still pointing to 
> the old port 5432). The gp_master_mirroring table reports that the Standby is 
> *Not Configured*
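The symptom described above can be checked from psql; gp_master_mirroring is the catalog table named in this report, while the summary_state column name is an assumption for illustration:

```sql
-- After changing hawq_master_address_port and restarting all components,
-- inspect the standby state (column name assumed for illustration)
SELECT summary_state FROM gp_master_mirroring;
-- Symptom reported in this ticket: the standby shows as "Not Configured"
```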



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-670) Error when changing the table distribution policy from random to hash distribution

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-670:
---
Fix Version/s: backlog

> Error when changing the table distribution policy from random to hash 
> distribution
> --
>
> Key: HAWQ-670
> URL: https://issues.apache.org/jira/browse/HAWQ-670
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Haisheng Yuan
>Assignee: Lei Chang
> Fix For: backlog
>
>
> If the current number of segments is 8 and I run these queries, 
> {code:sql}
> create table t2 (c1 int) with (bucketnum=5);
> create table t2_2 (c2 int) inherits(t2);
> alter table t2 set distributed by (c1);
> {code}
> The alter table clause will show the following error message:
> {color:red}
> ERROR:  bucketnum requires a numeric value
> {color}
> which is not the expected behavior.
> The query should execute without any error message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-644) Failure in security ha environment with certain writable external tables

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao closed HAWQ-644.
--
   Resolution: Fixed
Fix Version/s: 2.0.0.0-incubating

> Failure in security ha environment with certain writable external tables
> 
>
> Key: HAWQ-644
> URL: https://issues.apache.org/jira/browse/HAWQ-644
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF, Security
>Reporter: Goden Yao
>Assignee: Goden Yao
> Fix For: 2.0.0.0-incubating
>
>
> In a Secure HA environment:
> A few tests that exercise writable tables fail due to an empty dfs_address 
> prior to getting the delegation token in the segment.
> On initial investigation, the shared_path seems not to be set by the HAWQ 
> master.
> Log from the specific segment: the HDFS path available in the segment is 
> empty, hence the failure.
> {code}
> 2016-04-08 22:53:11.034661 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","PXF 
> received from configuration HA Namenode-1 having rpc-address 
>  and rest-address 
> ","External table 
> readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"ha_config.c",157,
> 2016-04-08 22:53:11.034699 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","PXF 
> received from configuration HA Namenode-2 having rpc-address 
>  and rest-address 
> ","External table 
> readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"ha_config.c",157,
> 2016-04-08 22:53:11.034785 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","locating 
> token for 0^N\","External table readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"gpbridgeapi.c",521,
> 2016-04-08 22:53:11.034871 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"WARNING","01000","internal 
> error HdfsParsePath: no filesystem protocol found in path 
> ""0^N\^A""","External table readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"fd.c",2501,
> 2016-04-08 22:53:11.035004 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG1","0","Deleted 
> entry for query (sessionid=73, commandcnt=43)",,"INSERT INTO 
> writable_table SELECT * FROM readable_table;",0,,"workfile_queryspace.c",329,
> 2016-04-08 22:53:11.066243 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"ERROR","XX000","fail to 
> parse uri: 0^N\ (cdbfilesystemcredential.c:529)","External table 
> readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"cdbfilesystemcredential.c",529,"Stack trace:
> 10x871f8f postgres  + 0x871f8f
> 20x872679 postgres elog_finish + 0xa9
> 30x99d7b8 postgres find_filesystem_credential_with_uri + 0x78
> 40x7fc935c8f5a2 pxf.so add_delegation_token + 0xa2
> 50x7fc935c8f9b2 pxf.so get_pxf_server + 0xe2
> 60x7fc935c8ff02 pxf.so gpbridge_export_start + 0x42
> 70x7fc935c90170 pxf.so gpbridge_export + 0x50
> 80x507eb8 postgres  + 0x507eb8
> 90x5083af postgres url_fwrite + 0x9f
> 10   0x5042b4 postgres external_insert + 0x184
> 11   0x69dcca postgres ExecInsert + 0x1fa
> 12   0x69d41c postgres ExecDML + 0x1ec
> 13   0x65e185 postgres ExecProcNode + 0x3c5
> 14   0x659f4a postgres  + 0x659f4a
> 15   0x65a8d3 postgres ExecutorRun + 0x4a3
> 16   0x7b550a postgres  + 0x7b550a
> 17   0x7b5baf postgres  + 0x7b5baf
> 18   0x7b6142 postgres PortalRun + 0x342
> 19   0x7b2c21 postgres PostgresMain + 0x3861
> 20   0x763ce3 postgres  + 0x763ce3
> 21   0x76443d postgres  + 0x76443d
> 22   0x76626e postgres PostmasterMain + 0xc7e
> 23   0x6c04ea postgres main + 0x48a
> 24   0x7fc95f276d5d libc.so.6 __libc_start_main + 0xfd
> 25   0x4a1489 postgres  + 0x4a1489
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-659) Memory leak in function dispatcher_bind_executor when executormgr_bind_executor_task returns false

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-659:
---
Fix Version/s: backlog

> Memory leak in function dispatcher_bind_executor when 
> executormgr_bind_executor_task returns false
> --
>
> Key: HAWQ-659
> URL: https://issues.apache.org/jira/browse/HAWQ-659
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Dispatcher
>Reporter: Lili Ma
>Assignee: Lei Chang
> Fix For: backlog
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-622) fix libhdfs3 readme

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-622:
---
Fix Version/s: 2.0.0.0-incubating

> fix libhdfs3 readme
> ---
>
> Key: HAWQ-622
> URL: https://issues.apache.org/jira/browse/HAWQ-622
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: libhdfs
>Reporter: Lei Chang
>Assignee: Lei Chang
> Fix For: 2.0.0.0-incubating
>
>
> ==
> Libhdfs3 is developed by Pivotal and used in HAWQ, which
> is a massive parallel database engine in Pivotal Hadoop
> Distribution.
> ==
> https://github.com/apache/incubator-hawq/blob/bc0904ab02bb3e8c3e3596ce139b3ea6b52e2685/depends/libhdfs3/README.md



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-622) fix libhdfs3 readme

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-622:
---
Description: 
==
Libhdfs3 is developed by Pivotal and used in HAWQ, which
is a massive parallel database engine in Pivotal Hadoop
Distribution.
==
https://github.com/apache/incubator-hawq/blob/bc0904ab02bb3e8c3e3596ce139b3ea6b52e2685/depends/libhdfs3/README.md



  was:
==
Libhdfs3 is developed by Pivotal and used in HAWQ, which
is a massive parallel database engine in Pivotal Hadoop
Distribution.
==





> fix libhdfs3 readme
> ---
>
> Key: HAWQ-622
> URL: https://issues.apache.org/jira/browse/HAWQ-622
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: libhdfs
>Reporter: Lei Chang
>Assignee: Lei Chang
>
> ==
> Libhdfs3 is developed by Pivotal and used in HAWQ, which
> is a massive parallel database engine in Pivotal Hadoop
> Distribution.
> ==
> https://github.com/apache/incubator-hawq/blob/bc0904ab02bb3e8c3e3596ce139b3ea6b52e2685/depends/libhdfs3/README.md



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-606) Change seg_max_connections default value and remove gp_enable_column_oriented_table

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-606:
---
Fix Version/s: backlog

> Change seg_max_connections default value and remove 
> gp_enable_column_oriented_table
> ---
>
> Key: HAWQ-606
> URL: https://issues.apache.org/jira/browse/HAWQ-606
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Catalog
>Reporter: Jiali Yao
>Assignee: Lei Chang
> Fix For: backlog
>
>
> According to the test evaluation results, we need to change the default 
> value of seg_max_connections to 3000.
> gp_enable_column_oriented_table is no longer needed; remove this GUC.
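The two GUCs named in this ticket can be inspected from psql before and after the change (values shown are the ticket's proposal, not confirmed defaults):

```sql
-- Inspect the GUCs discussed in this ticket
SHOW seg_max_connections;               -- proposed new default: 3000
SHOW gp_enable_column_oriented_table;   -- GUC proposed for removal
```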



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-614) Table with Segment Reject Limit fails to flush AO file when all data is rejected

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-614:
---
Fix Version/s: backlog

> Table with Segment Reject Limit fails to flush AO file when all data is 
> rejected
> 
>
> Key: HAWQ-614
> URL: https://issues.apache.org/jira/browse/HAWQ-614
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: External Tables
>Reporter: Kyle R Dunn
>Assignee: hongwu
>Priority: Minor
> Fix For: backlog
>
> Attachments: Hawq_table.sql, Source_Sql_Server.sql, image008.jpg
>
>
> An error message (attached) is received if *all* data gets rejected (for any 
> reason) when using segment reject limit option with an error table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HAWQ-501) Switch from using hcatalog to hive as the reserve keyword

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao resolved HAWQ-501.

Resolution: Won't Fix

> Switch from using hcatalog to hive as the reserve keyword
> -
>
> Key: HAWQ-501
> URL: https://issues.apache.org/jira/browse/HAWQ-501
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: External Tables, Hcatalog, PXF
>Reporter: Shivram Mani
>Assignee: Goden Yao
>Priority: Minor
> Fix For: backlog
>
>
> An alternative means of accessing Hive tables is for the user to use hcatalog 
> as the keyword, e.g. table default.customers is accessed as 
> hcatalog.default.customers.
> The end user should not need to be aware of the underlying catalog store and 
> should instead only be aware of hive/hbase/hdfs.
> The above table can instead be accessed via hive.default.customers.
> This allows us to expand this approach to also work with other data sources 
> once schema auto-discovery is complete (HAWQ-450 and HAWQ-500).
> A file on HBase can be accessed using hbase. 
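A sketch of the proposed rename, using the example table from this ticket (the queries are hypothetical illustrations of the before/after syntax, not tested commands):

```sql
-- Today: Hive tables are referenced through the hcatalog keyword
SELECT * FROM hcatalog.default.customers;

-- Proposed: reference the same table through the source name instead
SELECT * FROM hive.default.customers;
```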



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-501) Switch from using hcatalog to hive as the reserve keyword

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-501:
---
Summary: Switch from using hcatalog to hive as the reserve keyword  (was: 
Switch from using hcatalog as the reserve keyword)

> Switch from using hcatalog to hive as the reserve keyword
> -
>
> Key: HAWQ-501
> URL: https://issues.apache.org/jira/browse/HAWQ-501
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: External Tables, Hcatalog, PXF
>Reporter: Shivram Mani
>Assignee: Goden Yao
>Priority: Minor
> Fix For: backlog
>
>
> An alternative means of accessing Hive tables is for the user to use hcatalog 
> as the keyword, e.g. table default.customers is accessed as 
> hcatalog.default.customers.
> The end user should not need to be aware of the underlying catalog store and 
> should instead only be aware of hive/hbase/hdfs.
> The above table can instead be accessed via hive.default.customers.
> This allows us to expand this approach to also work with other data sources 
> once schema auto-discovery is complete (HAWQ-450 and HAWQ-500).
> A file on HBase can be accessed using hbase. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-860) Optimizer generates wrong plan when correlated subquery contains set-returning functions

2016-07-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378337#comment-15378337
 ] 

ASF GitHub Bot commented on HAWQ-860:
-

GitHub user hsyuan opened a pull request:

https://github.com/apache/incubator-hawq/pull/795

HAWQ-860. Fix ORCA wrong plan when correlated subquery contains 
set-returning functions

ORCA 1.633 returns wrong result for the following query:
```sql
select 0 is distinct from (select count(1) from (select unnest(array[1, 2, 
3])) as foo);
?column?
--
 t
 t
 t
(3 rows)
```
Correct result should be:
```sql
?column?
--
 t
(1 row)
```
This bug was fixed by bumping ORCA version to 1.634.

For detailed information, see the ORCA pull request:
https://github.com/greenplum-db/gporca/pull/49

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hsyuan/incubator-hawq HAWQ-860

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/795.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #795


commit eef9fe3c330119bdeaecc8f5b18382d1b9568e46
Author: Haisheng Yuan and Omer Arap 
Date:   2016-07-14T19:16:40Z

Fix ORCA wrong plan when correlated subquery contains set-returning 
functions




> Optimizer generates wrong plan when correlated subquery contains 
> set-returning functions
> 
>
> Key: HAWQ-860
> URL: https://issues.apache.org/jira/browse/HAWQ-860
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Optimizer
>Reporter: Venkatesh
>Assignee: Amr El-Helw
>
> Bump ORCA to 1.634 [#119042413]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] incubator-hawq pull request #795: HAWQ-860. Fix ORCA wrong plan when correla...

2016-07-14 Thread hsyuan
GitHub user hsyuan opened a pull request:

https://github.com/apache/incubator-hawq/pull/795

HAWQ-860. Fix ORCA wrong plan when correlated subquery contains 
set-returning functions

ORCA 1.633 returns wrong result for the following query:
```sql
select 0 is distinct from (select count(1) from (select unnest(array[1, 2, 
3])) as foo);
?column?
--
 t
 t
 t
(3 rows)
```
Correct result should be:
```sql
?column?
--
 t
(1 row)
```
This bug was fixed by bumping ORCA version to 1.634.

For detailed information, see the ORCA pull request:
https://github.com/greenplum-db/gporca/pull/49

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hsyuan/incubator-hawq HAWQ-860

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/795.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #795


commit eef9fe3c330119bdeaecc8f5b18382d1b9568e46
Author: Haisheng Yuan and Omer Arap 
Date:   2016-07-14T19:16:40Z

Fix ORCA wrong plan when correlated subquery contains set-returning 
functions




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (HAWQ-256) Integrate Security with Apache Ranger

2016-07-14 Thread Don Bosco Durai (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378322#comment-15378322
 ] 

Don Bosco Durai commented on HAWQ-256:
--

1. The "Add New User" in Ranger just adds a user to the Ranger DB. The users 
and groups in Ranger are used to help create policies in Ranger; they are not 
used by the component as the source of truth for users or groups. The main 
reason is that Ranger doesn't do authentication, so you need to rely on 
AD/LDAP or use local user/password.
2. In the Ranger integration, the policies are stored in the Ranger DB. Ranger 
provides a UI and REST APIs to create the policies. In Hive and HBase, a grant 
from their CLI calls our plugin running within their process, which in turn 
calls the Ranger REST API. In the case of HAWQ, the C++ client might make the 
REST API call to the Ranger proxy server to set the policies.
3. The model we suggest is to abstract the authorization layer. The default 
behavior is the component's native implementation, and those working in a 
bigger ecosystem can alternatively use Ranger or anything else implementing 
the component's interface. So for the native implementation, technically 
nothing should change: you will still be saving the ACLs the way you currently 
store and use them. When the user chooses Ranger as the option, the policies 
will be stored in the Ranger DB in Ranger format, and the Ranger 
implementation will pull the policies and enforce them. Any ACLs stored in the 
component's native storage will not be used.
5. Same as #2. In addition to the Ranger UI and REST API, users can also set 
policies via native component CLI commands. This is primarily for backward 
compatibility. However, since Ranger supports additional conditions, it is 
generally not possible to set these conditions via native CLI grant commands.

Looking forward to the design document. Thanks
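As a sketch of point 5 above: a plain privilege granted from the component's native CLI can be intercepted by the plugin and forwarded to Ranger, while Ranger-specific conditions cannot be expressed that way (the table and role names here are hypothetical):

```sql
-- A plain privilege like this can round-trip through the native CLI path,
-- with the plugin forwarding it to the Ranger REST API
GRANT SELECT ON TABLE customers TO analyst;
-- Ranger-only conditions (e.g. IP ranges, time windows) cannot be expressed
-- in a native GRANT and must be set via the Ranger UI or REST API
```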




> Integrate Security with Apache Ranger
> -
>
> Key: HAWQ-256
> URL: https://issues.apache.org/jira/browse/HAWQ-256
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF, Security
>Reporter: Michael Andre Pearce (IG)
>Assignee: Lili Ma
> Fix For: backlog
>
>
> Integrate security with Apache Ranger for a unified Hadoop security solution. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-786) Framework to support pluggable formats and file systems

2016-07-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-786:
---
Fix Version/s: backlog

> Framework to support pluggable formats and file systems
> ---
>
> Key: HAWQ-786
> URL: https://issues.apache.org/jira/browse/HAWQ-786
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Storage
>Reporter: Lei Chang
>Assignee: hongwu
> Fix For: backlog
>
> Attachments: ORCdesign-v0.1-2016-06-17.pdf
>
>
> In current HAWQ, two native formats are supported: AO and Parquet. Now we 
> want to support ORC. A framework for native C/C++ pluggable formats is 
> needed to support ORC more easily, and it can also potentially be used for 
> fast external data access.
> There are also many requests for supporting S3, Ceph and other file 
> systems; this is closely related to pluggable formats, so this JIRA 
> proposes a framework to support both.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


  1   2   >