[GitHub] incubator-hawq pull request #1187: HAWQ-1408. Fixed crash when alloc not eno...

2017-03-24 Thread liming01
Github user liming01 commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1187#discussion_r108029031
  
--- Diff: src/backend/access/appendonly/appendonlywriter.c ---
@@ -923,30 +923,34 @@ usedByConcurrentTransaction(AOSegfileStatus 
*segfilestat)
  */
 int
 addCandidateSegno(AOSegfileStatus **maxSegno4Seg, int segment_num, 
AOSegfileStatus *candidate_status, bool isKeepHash)
-{  
+{
+   int remaining_num  = segment_num;
if(isKeepHash){
-   int remaining_num  = segment_num;
int idx = (candidate_status->segno+segment_num-1) % 
segment_num;// from 1 to segment_num-1, then 0(segment_num)
if (NULL==maxSegno4Seg[idx] || maxSegno4Seg[idx]->segno > 
candidate_status->segno)  //using the min seg no firstly.
maxSegno4Seg[idx] = candidate_status;

	for(int i=0; i<segment_num; i++){
		if(maxSegno4Seg[i] && maxSegno4Seg[i]->segno > 0){
-remaining_num--;
+   remaining_num--;
		}
	}
-   return remaining_num;
}else{
+   int assigned = false; // candidate_status assigned?
--- End diff --

Sorry for my mistake. 
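For reference, the slot-index computation in the diff quoted above can be isolated as a small sketch (the function name is mine; the arithmetic is taken verbatim from the diff):

```c
#include <assert.h>

/* Maps segno 1..segment_num onto array slots 0..segment_num-1:
 * (segno + segment_num - 1) % segment_num equals segno - 1 for
 * segno >= 1, so segno == segment_num lands in the last slot. */
static int candidate_slot(int segno, int segment_num)
{
    return (segno + segment_num - 1) % segment_num;
}
```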


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq pull request #1189: HAWQ-1409. Send AGG-TYPE header to PXF

2017-03-24 Thread hsyuan
Github user hsyuan commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1189#discussion_r108025770
  
--- Diff: src/backend/executor/nodeAgg.c ---
@@ -1950,6 +1950,21 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 * initialize child nodes
 */
outerPlan = outerPlan(node);
+   if (outerPlan->type == T_ExternalScan) {
+   /*
+* Hack to indicate to PXF when there is an external scan
+*/
+   if (aggstate->aggs && list_length(aggstate->aggs) == 1) {
+   foreach(l, aggstate->aggs) {
+   AggrefExprState *aggrefstate = (AggrefExprState 
*) lfirst(l);
--- End diff --

just by changing `lfirst(l)` to `linitial(aggstate->aggs)`, you can remove the 
`foreach` loop.
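The equivalence the comment relies on can be sketched with a minimal stand-in for PostgreSQL's `List` (the real `linitial`/`foreach` are macros in `pg_list.h`; the types and function names below are simplified mocks):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for PostgreSQL's List / ListCell. */
typedef struct ListCell { void *data; struct ListCell *next; } ListCell;
typedef struct List { int length; ListCell *head; } List;

/* linitial-style access: first element, no loop needed. */
static void *sketch_linitial(const List *l)
{
    return l->head->data;
}

/* foreach-style walk; for a one-element list it visits exactly the head. */
static void *sketch_only_element_via_loop(const List *l)
{
    void *result = NULL;
    for (const ListCell *lc = l->head; lc != NULL; lc = lc->next)
        result = lc->data;
    return result;
}
```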




[GitHub] incubator-hawq pull request #1187: HAWQ-1408. Fixed crash when alloc not eno...

2017-03-24 Thread hsyuan
Github user hsyuan commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1187#discussion_r108025472
  
--- Diff: src/backend/access/appendonly/appendonlywriter.c ---
@@ -1136,26 +1140,22 @@ List *SetSegnoForWrite(List *existing_segnos, Oid 
relid, int segment_num,
 /* If the found segfile status are still no enough,
  * we need to get new ones from the status pool.
  */
-if (remaining_num > 0)
-{
-//generate new segment_num to make sure that in keepHash 
mode, all segment node has at least one segfile is writable
-int newAllocatedNum = remaining_num;
-for(int i= 1; i<= newAllocatedNum; i++)
-{
-int new_status = AORelGetSegfileStatus(aoentry);
-if (new_status == NEXT_END_OF_LIST)
-{
-LWLockRelease(AOSegFileLock);
-ereport(ERROR, (errmsg("cannot open more than %d 
append-only table segment files cocurrently",
-MaxAORelSegFileStatus)));
-}
-AOSegfileStatusPool[new_status].segno = 
++aoentry->max_seg_no;
-AOSegfileStatusPool[new_status].next = 
aoentry->head_rel_segfile.next;
-aoentry->head_rel_segfile.next = new_status;
-remaining_num = addCandidateSegno(maxSegno4Segment, 
segment_num, &AOSegfileStatusPool[new_status], keepHash);
-}
-Assert(remaining_num==0);//make sure all segno got a 
candidate
-}
+   while(remaining_num>0)
+   {
+   //generate new segment_num to make sure that in 
keepHash mode, all segment node has at least one segfile is writable
+   int new_status = AORelGetSegfileStatus(aoentry);
+   if (new_status == NEXT_END_OF_LIST)
+   {
+   LWLockRelease(AOSegFileLock);
+   ereport(ERROR, (errmsg("cannot open 
more than %d append-only table segment files cocurrently",
--- End diff --

`cocurrently` typo




[GitHub] incubator-hawq pull request #1187: HAWQ-1408. Fixed crash when alloc not eno...

2017-03-24 Thread hsyuan
Github user hsyuan commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1187#discussion_r108025409
  
--- Diff: src/backend/access/appendonly/appendonlywriter.c ---
@@ -923,30 +923,34 @@ usedByConcurrentTransaction(AOSegfileStatus 
*segfilestat)
  */
 int
 addCandidateSegno(AOSegfileStatus **maxSegno4Seg, int segment_num, 
AOSegfileStatus *candidate_status, bool isKeepHash)
-{  
+{
+   int remaining_num  = segment_num;
--- End diff --

The whitespace in this chunk of code is messy, and the `{}` placement is 
inconsistent, even within a single function.




[GitHub] incubator-hawq pull request #1187: HAWQ-1408. Fixed crash when alloc not eno...

2017-03-24 Thread hsyuan
Github user hsyuan commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1187#discussion_r108025364
  
--- Diff: src/backend/access/appendonly/appendonlywriter.c ---
@@ -923,30 +923,34 @@ usedByConcurrentTransaction(AOSegfileStatus 
*segfilestat)
  */
 int
 addCandidateSegno(AOSegfileStatus **maxSegno4Seg, int segment_num, 
AOSegfileStatus *candidate_status, bool isKeepHash)
-{  
+{
+   int remaining_num  = segment_num;
if(isKeepHash){
-   int remaining_num  = segment_num;
int idx = (candidate_status->segno+segment_num-1) % 
segment_num;// from 1 to segment_num-1, then 0(segment_num)
if (NULL==maxSegno4Seg[idx] || maxSegno4Seg[idx]->segno > 
candidate_status->segno)  //using the min seg no firstly.
maxSegno4Seg[idx] = candidate_status;

	for(int i=0; i<segment_num; i++){
		if(maxSegno4Seg[i] && maxSegno4Seg[i]->segno > 0){
-remaining_num--;
+   remaining_num--;
		}
	}
-   return remaining_num;
}else{
+   int assigned = false; // candidate_status assigned?
--- End diff --

Why not bool?




[GitHub] incubator-hawq pull request #1189: HAWQ-1409. Send AGG-TYPE header to PXF

2017-03-24 Thread hsyuan
Github user hsyuan commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1189#discussion_r108025216
  
--- Diff: src/backend/executor/nodeAgg.c ---
@@ -1950,6 +1950,21 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 * initialize child nodes
 */
outerPlan = outerPlan(node);
+   if (outerPlan->type == T_ExternalScan) {
+   /*
+* Hack to indicate to PXF when there is an external scan
+*/
+   if (aggstate->aggs && list_length(aggstate->aggs) == 1) {
+   foreach(l, aggstate->aggs) {
--- End diff --

If the size of `aggstate->aggs` is already 1, do we need to use `foreach` to 
iterate?




[GitHub] incubator-hawq pull request #1189: HAWQ-1409. Send AGG-TYPE header to PXF

2017-03-24 Thread hsyuan
Github user hsyuan commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1189#discussion_r108025159
  
--- Diff: src/backend/executor/nodeAgg.c ---
@@ -1950,6 +1950,21 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 * initialize child nodes
 */
outerPlan = outerPlan(node);
+   if (outerPlan->type == T_ExternalScan) {
+   /*
+* Hack to indicate to PXF when there is an external scan
+*/
+   if (aggstate->aggs && list_length(aggstate->aggs) == 1) {
--- End diff --

Do we need to check whether `aggstate->aggs` is NULL? If it is NULL, 
`list_length` will return 0.
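The point can be sketched as follows (in PostgreSQL's `pg_list.h`, `NIL` is a NULL `List *` and `list_length(NIL)` is 0; the struct and helpers below are simplified mocks):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for pg_list.h. */
typedef struct List { int length; } List;
#define NIL ((List *) NULL)

/* Mirrors list_length(): NIL (a null list) has length 0. */
static int sketch_list_length(const List *l)
{
    return l ? l->length : 0;
}

/* The condition as written in the patch... */
static int has_single_agg_verbose(const List *aggs)
{
    return aggs != NIL && sketch_list_length(aggs) == 1;
}

/* ...and the shorter equivalent, since list_length(NIL) == 0. */
static int has_single_agg(const List *aggs)
{
    return sketch_list_length(aggs) == 1;
}
```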




[GitHub] incubator-hawq pull request #1189: HAWQ-1409. Send AGG-TYPE header to PXF

2017-03-24 Thread hsyuan
Github user hsyuan commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1189#discussion_r108025134
  
--- Diff: src/backend/executor/nodeAgg.c ---
@@ -1950,6 +1950,21 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 * initialize child nodes
 */
outerPlan = outerPlan(node);
+   if (outerPlan->type == T_ExternalScan) {
--- End diff --

Better to use `IsA(outerPlan, ExternalScan)`
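The `IsA` idiom works roughly like this (the real macro lives in PostgreSQL's `nodes/nodes.h` and reads the `NodeTag` stored as the first field of every node; the types here are trimmed-down stand-ins):

```c
#include <assert.h>

/* Trimmed-down node-tag machinery in the style of nodes/nodes.h. */
typedef enum NodeTag { T_Invalid = 0, T_Agg, T_ExternalScan } NodeTag;
typedef struct Plan { NodeTag type; } Plan;

/* nodeTag() reads the tag; IsA() compares it against T_<Type>. */
#define nodeTag(nodeptr)     (((const Plan *) (nodeptr))->type)
#define IsA(nodeptr, _type_) (nodeTag(nodeptr) == T_##_type_)
```

`IsA(outerPlan, ExternalScan)` then expands to the same tag comparison, but is clearer than spelling out `outerPlan->type == T_ExternalScan` at each call site.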




[GitHub] incubator-hawq pull request #1189: HAWQ-1409. Send AGG-TYPE header to PXF

2017-03-24 Thread hsyuan
Github user hsyuan commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1189#discussion_r108025072
  
--- Diff: src/backend/access/external/fileam.c ---
@@ -459,12 +459,20 @@ external_stopscan(FileScanDesc scan)
  * 
  */
 ExternalSelectDesc
-external_getnext_init(PlanState *state) {
+external_getnext_init(PlanState *state, ExternalScanState *es_state) {
ExternalSelectDesc desc = (ExternalSelectDesc) 
palloc0(sizeof(ExternalSelectDescData));
+   Plan *rootPlan;
 
if (state != NULL)
{
desc->projInfo = state->ps_ProjInfo;
+   /*
+* If we have an agg type then our parent is an Agg node
+*/
+   rootPlan = state->state->es_plannedstmt->planTree;
+   if (rootPlan->type == T_Agg && es_state->parent_agg_type) {
--- End diff --

Recommend using `IsA(rootPlan, Agg)`




[GitHub] incubator-hawq pull request #1189: HAWQ-1409. Send AGG-TYPE header to PXF

2017-03-24 Thread hsyuan
Github user hsyuan commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1189#discussion_r108025017
  
--- Diff: src/backend/executor/nodeAgg.c ---
@@ -1950,6 +1950,21 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
 * initialize child nodes
 */
outerPlan = outerPlan(node);
+   if (outerPlan->type == T_ExternalScan) {
+   /*
+* Hack to indicate to PXF when there is an external scan
+*/
+   if (aggstate->aggs && list_length(aggstate->aggs) == 1) {
+   foreach(l, aggstate->aggs) {
+   AggrefExprState *aggrefstate = (AggrefExprState 
*) lfirst(l);
+   Aggref *aggref = (Aggref *) 
aggrefstate->xprstate.expr;
+   //Only dealing with one agg
+   if (aggref->aggfnoid == 2147 || 
aggref->aggfnoid == 2803) {
--- End diff --

It would be better to use `COUNT_ANY_OID` and `COUNT_STAR_OID`...
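The suggestion amounts to replacing the magic numbers with named constants. A sketch: the values 2147 and 2803 are the ones in the quoted diff (in PostgreSQL's `pg_proc` they correspond to `COUNT(any)` and `COUNT(*)`), the macro names follow the comment, and `is_count_agg` is a hypothetical helper:

```c
#include <assert.h>

#define COUNT_ANY_OID  2147   /* COUNT(any), per the quoted diff */
#define COUNT_STAR_OID 2803   /* COUNT(*), per the quoted diff */

/* Hypothetical helper: is this aggregate one of the count variants? */
static int is_count_agg(unsigned int aggfnoid)
{
    return aggfnoid == COUNT_ANY_OID || aggfnoid == COUNT_STAR_OID;
}
```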




[GitHub] incubator-hawq pull request #1189: HAWQ-1409. Send AGG-TYPE header to PXF

2017-03-24 Thread hsyuan
Github user hsyuan commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1189#discussion_r108024814
  
--- Diff: src/backend/access/external/pxfheaders.c ---
@@ -110,6 +110,20 @@ void build_http_header(PxfInputData *input)
else
churl_headers_append(headers, "X-GP-HAS-FILTER", "0");
 
+   /* Aggregate information */
+   if (input->agg_type) {
+   char agg_groups_str[sizeof(int32)];
--- End diff --

Where is this var used?
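A likely follow-up issue besides the variable being unused: `sizeof(int32)` is 4, but rendering an int32 in decimal needs up to 11 characters plus the terminator (`-2147483648`). A hedged sketch of a safely sized buffer (names are mine, not from the patch):

```c
#include <assert.h>
#include <stdio.h>

/* Worst-case decimal text for a 32-bit int: sign + 10 digits + NUL. */
enum { INT32_STR_MAX = 12 };

/* Formats value into buf; returns the character count written
 * (excluding the NUL), as snprintf does. */
static int format_int32(char buf[INT32_STR_MAX], long value)
{
    return snprintf(buf, INT32_STR_MAX, "%ld", value);
}
```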




[jira] [Comment Edited] (HAWQ-1046) Document migration of LibHDFS3 library to HAWQ

2017-03-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HAWQ-1046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15941010#comment-15941010
 ] 

Hervé Yviquel edited comment on HAWQ-1046 at 3/24/17 8:32 PM:
--

-Any news about this? Is there is still someone working on it? It is kind of 
sad that the Github repository is pretty much abandoned (even more as a user of 
libhdfs3)-
Forget my comment, I just read HDFS-6994 =)


was (Author: elldekaa):
-Any news about this? Is there is still someone working on it? It is kind of 
sad that the Github repository is pretty much abandoned (even more as a user of 
libhdfs3)-
Forget my comment, I just read HDFS-6994 =)

> Document migration of LibHDFS3 library to HAWQ
> --
>
> Key: HAWQ-1046
> URL: https://issues.apache.org/jira/browse/HAWQ-1046
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: libhdfs
>Reporter: Matthew Rocklin
>Assignee: hongwu
> Fix For: backlog
>
>
> Some people used to depend on the libhdfs3 library maintained alongside HAWQ. 
>  This library was merged into the HAWQ codebase, making the situation a bit 
> more confusing.
> Is independent use of libhdfs3 still supported by this community?  If so what 
> is the best way for packagers to reason about versions and releases of this 
> component?  It would be convenient to see documentation on how people can 
> best depend on libhdfs3 separately from HAWQ if this is an intention.
> It looks like people have actually submitted work to the old version
> See: https://github.com/Pivotal-Data-Attic/pivotalrd-libhdfs3/pull/28
> It looks like the warning that the library had moved has been removed:
> https://github.com/Pivotal-Data-Attic/pivotalrd-libhdfs3/commit/ddcb2404a5a67e0f39fe49ed20591545c48ff426
> This removal may lead to some frustration



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HAWQ-1046) Document migration of LibHDFS3 library to HAWQ

2017-03-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HAWQ-1046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15941010#comment-15941010
 ] 

Hervé Yviquel edited comment on HAWQ-1046 at 3/24/17 8:32 PM:
--

-Any news about this? Is there is still someone working on it? It is kind of 
sad that the Github repository is pretty much abandoned (even more as a user of 
libhdfs3)-
Forget my comment, I just read HDFS-6994 =)


was (Author: elldekaa):
Any news about this? Is there is still someone working on it? It is kind of sad 
that the Github repository is pretty much abandoned (even more as a user of 
libhdfs3)

> Document migration of LibHDFS3 library to HAWQ
> --
>
> Key: HAWQ-1046
> URL: https://issues.apache.org/jira/browse/HAWQ-1046
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: libhdfs
>Reporter: Matthew Rocklin
>Assignee: hongwu
> Fix For: backlog
>
>
> Some people used to depend on the libhdfs3 library maintained alongside HAWQ. 
>  This library was merged into the HAWQ codebase, making the situation a bit 
> more confusing.
> Is independent use of libhdfs3 still supported by this community?  If so what 
> is the best way for packagers to reason about versions and releases of this 
> component?  It would be convenient to see documentation on how people can 
> best depend on libhdfs3 separately from HAWQ if this is an intention.
> It looks like people have actually submitted work to the old version
> See: https://github.com/Pivotal-Data-Attic/pivotalrd-libhdfs3/pull/28
> It looks like the warning that the library had moved has been removed:
> https://github.com/Pivotal-Data-Attic/pivotalrd-libhdfs3/commit/ddcb2404a5a67e0f39fe49ed20591545c48ff426
> This removal may lead to some frustration



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HAWQ-1046) Document migration of LibHDFS3 library to HAWQ

2017-03-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HAWQ-1046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15941010#comment-15941010
 ] 

Hervé Yviquel edited comment on HAWQ-1046 at 3/24/17 8:30 PM:
--

Any news about this? Is there is still someone working on it? It is kind of sad 
that the Github repository is pretty much abandoned (even more as a user of 
libhdfs3)


was (Author: elldekaa):
Any news about this? Is there is still someone working? It is kind of sad that 
the github repository is pretty much abandoned (even more as a user of libhdfs3)

> Document migration of LibHDFS3 library to HAWQ
> --
>
> Key: HAWQ-1046
> URL: https://issues.apache.org/jira/browse/HAWQ-1046
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: libhdfs
>Reporter: Matthew Rocklin
>Assignee: hongwu
> Fix For: backlog
>
>
> Some people used to depend on the libhdfs3 library maintained alongside HAWQ. 
>  This library was merged into the HAWQ codebase, making the situation a bit 
> more confusing.
> Is independent use of libhdfs3 still supported by this community?  If so what 
> is the best way for packagers to reason about versions and releases of this 
> component?  It would be convenient to see documentation on how people can 
> best depend on libhdfs3 separately from HAWQ if this is an intention.
> It looks like people have actually submitted work to the old version
> See: https://github.com/Pivotal-Data-Attic/pivotalrd-libhdfs3/pull/28
> It looks like the warning that the library had moved has been removed:
> https://github.com/Pivotal-Data-Attic/pivotalrd-libhdfs3/commit/ddcb2404a5a67e0f39fe49ed20591545c48ff426
> This removal may lead to some frustration



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1046) Document migration of LibHDFS3 library to HAWQ

2017-03-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HAWQ-1046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15941010#comment-15941010
 ] 

Hervé Yviquel commented on HAWQ-1046:
-

Any news about this? Is there is still someone working? It is kind of sad that 
the github repository is pretty much abandoned (even more as a user of libhdfs3)

> Document migration of LibHDFS3 library to HAWQ
> --
>
> Key: HAWQ-1046
> URL: https://issues.apache.org/jira/browse/HAWQ-1046
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: libhdfs
>Reporter: Matthew Rocklin
>Assignee: hongwu
> Fix For: backlog
>
>
> Some people used to depend on the libhdfs3 library maintained alongside HAWQ. 
>  This library was merged into the HAWQ codebase, making the situation a bit 
> more confusing.
> Is independent use of libhdfs3 still supported by this community?  If so what 
> is the best way for packagers to reason about versions and releases of this 
> component?  It would be convenient to see documentation on how people can 
> best depend on libhdfs3 separately from HAWQ if this is an intention.
> It looks like people have actually submitted work to the old version
> See: https://github.com/Pivotal-Data-Attic/pivotalrd-libhdfs3/pull/28
> It looks like the warning that the library had moved has been removed:
> https://github.com/Pivotal-Data-Attic/pivotalrd-libhdfs3/commit/ddcb2404a5a67e0f39fe49ed20591545c48ff426
> This removal may lead to some frustration



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] incubator-hawq issue #1188: HAWQ-1406. Update HAWQ product version strings f...

2017-03-24 Thread radarwave
Github user radarwave commented on the issue:

https://github.com/apache/incubator-hawq/pull/1188
  
May need to add one more file:
tools/bin/gppylib/data/2.2.json (can copy from 
tools/bin/gppylib/data/2.1.json)
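A sketch of the suggested change, run from the repository root (the placeholder line only makes the sketch runnable outside the repo; in the real tree, 2.1.json already exists and the `cp` line is all that is needed):

```shell
mkdir -p tools/bin/gppylib/data
# Placeholder so the sketch runs anywhere; not needed in the real repo.
[ -f tools/bin/gppylib/data/2.1.json ] || \
    echo '{}' > tools/bin/gppylib/data/2.1.json
# The actual suggestion: seed 2.2.json from the existing 2.1.json.
cp tools/bin/gppylib/data/2.1.json tools/bin/gppylib/data/2.2.json
```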






[GitHub] incubator-hawq pull request #1189: HAWQ-1409. Send AGG-TYPE header to PXF

2017-03-24 Thread kavinderd
GitHub user kavinderd opened a pull request:

https://github.com/apache/incubator-hawq/pull/1189

HAWQ-1409. Send AGG-TYPE header to PXF

This change is meant to be a proof of concept that pushing down
aggregate function information from HAWQ to the underlying external
storage layer does indeed improve performance

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kavinderd/incubator-hawq HAWQ-1409

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/1189.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1189


commit 7f14691d427a4d4669da8c9127d8f1d0800dcdd8
Author: Kavinder Dhaliwal 
Date:   2017-03-03T23:27:05Z

HAWQ-1409. Send AGG-TYPE header to PXF

This change is meant to be a proof of concept that pushing down
aggregate function information from HAWQ to the underlying external
storage layer does indeed improve performance






[jira] [Commented] (HAWQ-1409) HAWQ send additional header to PXF to indicate aggregate function type

2017-03-24 Thread Kavinder Dhaliwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940520#comment-15940520
 ] 

Kavinder Dhaliwal commented on HAWQ-1409:
-

Currently the design of this implementation only supports count, so a new 
header AGG-TYPE will be sent from HAWQ to PXF with the following possible values:

"count"
"unknown"

This simplifies the initial implementation
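Under that design, the header-value selection reduces to a two-way mapping. A sketch (the function name is mine; "count"/"unknown" are the values listed above, and the OIDs 2147 and 2803 are the count-aggregate OIDs used in the PR diff):

```c
#include <assert.h>
#include <string.h>

/* Maps a pg_proc aggregate OID to the AGG-TYPE header value described
 * above: "count" for the count variants, "unknown" for everything else. */
static const char *agg_type_header_value(unsigned int aggfnoid)
{
    return (aggfnoid == 2147 || aggfnoid == 2803) ? "count" : "unknown";
}
```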

> HAWQ send additional header to PXF to indicate aggregate function type
> --
>
> Key: HAWQ-1409
> URL: https://issues.apache.org/jira/browse/HAWQ-1409
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Ed Espino
> Fix For: 2.3.0.0-incubating
>
>
> PXF can take advantage of some file formats such as ORC and leverage the 
> stats in the metadata. This means that for some simple aggregate functions 
> like count, min, max without any complex joins or filters PXF can simply read 
> the metadata and avoid reading tuples. In order for PXF to know that a query 
> can be completed via ORC metadata HAWQ must indicate to PXF that the query is 
> an aggregate query and the type of function





[jira] [Assigned] (HAWQ-1409) HAWQ send additional header to PXF to indicate aggregate function type

2017-03-24 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal reassigned HAWQ-1409:
---

Assignee: Kavinder Dhaliwal  (was: Ed Espino)

> HAWQ send additional header to PXF to indicate aggregate function type
> --
>
> Key: HAWQ-1409
> URL: https://issues.apache.org/jira/browse/HAWQ-1409
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
> Fix For: 2.3.0.0-incubating
>
>
> PXF can take advantage of some file formats such as ORC and leverage the 
> stats in the metadata. This means that for some simple aggregate functions 
> like count, min, max without any complex joins or filters PXF can simply read 
> the metadata and avoid reading tuples. In order for PXF to know that a query 
> can be completed via ORC metadata HAWQ must indicate to PXF that the query is 
> an aggregate query and the type of function





[jira] [Created] (HAWQ-1409) HAWQ send additional header to PXF to indicate aggregate function type

2017-03-24 Thread Kavinder Dhaliwal (JIRA)
Kavinder Dhaliwal created HAWQ-1409:
---

 Summary: HAWQ send additional header to PXF to indicate aggregate 
function type
 Key: HAWQ-1409
 URL: https://issues.apache.org/jira/browse/HAWQ-1409
 Project: Apache HAWQ
  Issue Type: Improvement
  Components: PXF
Reporter: Kavinder Dhaliwal
Assignee: Ed Espino
 Fix For: 2.3.0.0-incubating


PXF can take advantage of some file formats such as ORC and leverage the stats 
in the metadata. This means that for some simple aggregate functions like 
count, min, max without any complex joins or filters PXF can simply read the 
metadata and avoid reading tuples. In order for PXF to know that a query can be 
completed via ORC metadata HAWQ must indicate to PXF that the query is an 
aggregate query and the type of function






[jira] [Closed] (HAWQ-1407) Update HAWQ DB product version (2.2.0.0-incubating)

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo closed HAWQ-1407.
-

> Update HAWQ DB product version (2.2.0.0-incubating)
> ---
>
> Key: HAWQ-1407
> URL: https://issues.apache.org/jira/browse/HAWQ-1407
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Affects Versions: 2.1.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.2.0.0-incubating
>
>
> For the Apache HAWQ 2.2.0.0-incubating release, update the HAWQ DB version 
> displayed with "SELECT VERSION()" (use PSQL command line utility).  This is 
> controlled by the "GP_VERSION=2.2.0.0-incubating" variable in the 
> "getversion" file in the root source directory.  This will be applied to the 
> release branch.
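The mechanism the issue describes can be sketched in shell (the `GP_VERSION` variable and `getversion` file names come from the issue text; the file written here is a placeholder so the sketch runs anywhere):

```shell
# Placeholder standing in for the root-level "getversion" file.
printf 'GP_VERSION=2.2.0.0-incubating\n' > getversion

# Source it and report the version string, which is what ultimately
# surfaces in the output of "SELECT VERSION()".
. ./getversion
echo "$GP_VERSION"
```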





[jira] [Resolved] (HAWQ-1407) Update HAWQ DB product version (2.2.0.0-incubating)

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo resolved HAWQ-1407.
---
Resolution: Fixed

> Update HAWQ DB product version (2.2.0.0-incubating)
> ---
>
> Key: HAWQ-1407
> URL: https://issues.apache.org/jira/browse/HAWQ-1407
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Affects Versions: 2.1.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.2.0.0-incubating
>
>
> For the Apache HAWQ 2.2.0.0-incubating release, update the HAWQ DB version 
> displayed with "SELECT VERSION()" (use PSQL command line utility).  This is 
> controlled by the "GP_VERSION=2.2.0.0-incubating" variable in the 
> "getversion" file in the root source directory.  This will be applied to the 
> release branch.





[jira] [Commented] (HAWQ-1407) Update HAWQ DB product version (2.2.0.0-incubating)

2017-03-24 Thread Ruilong Huo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940495#comment-15940495
 ] 

Ruilong Huo commented on HAWQ-1407:
---

Close this issue as it is addressed by 
[HAWQ-1406|https://issues.apache.org/jira/browse/HAWQ-1406].

> Update HAWQ DB product version (2.2.0.0-incubating)
> ---
>
> Key: HAWQ-1407
> URL: https://issues.apache.org/jira/browse/HAWQ-1407
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Affects Versions: 2.1.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.2.0.0-incubating
>
>
> For the Apache HAWQ 2.2.0.0-incubating release, update the HAWQ DB version 
> displayed with "SELECT VERSION()" (use PSQL command line utility).  This is 
> controlled by the "GP_VERSION=2.2.0.0-incubating" variable in the 
> "getversion" file in the root source directory.  This will be applied to the 
> release branch.





[jira] [Updated] (HAWQ-1406) Update HAWQ product version strings to 2.2.0.0 (HAWQ/HAWQ Ambari Plugin) & 3.2.0.0 (PXF)

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1406:
--
Summary: Update HAWQ product version strings to 2.2.0.0 (HAWQ/HAWQ Ambari 
Plugin) & 3.2.0.0 (PXF)  (was: Update HAWQ product version strings to 2.2.0.0 
(HAWQ/HAWQ Ambari Plugin) & 3.1.0.0 (PXF))

> Update HAWQ product version strings to 2.2.0.0 (HAWQ/HAWQ Ambari Plugin) & 
> 3.2.0.0 (PXF)
> 
>
> Key: HAWQ-1406
> URL: https://issues.apache.org/jira/browse/HAWQ-1406
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Affects Versions: 2.1.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.2.0.0-incubating
>
>
> Need to update the HAWQ (2.2.0.0), HAWQ Ambari Plugin (2.2.0.0) and PXF 
> (3.1.0.0) versions so that we can clearly identify Apache HAWQ 
> 2.2.0.0-incubating artifacts.





[jira] [Updated] (HAWQ-1406) Update HAWQ product version strings to 2.2.0.0 (HAWQ/HAWQ Ambari Plugin) & 3.2.0.0 (PXF)

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1406:
--
Description: Need to update the HAWQ (2.2.0.0), HAWQ Ambari Plugin 
(2.2.0.0) and PXF (3.2.0.0) versions so that we can clearly identify Apache 
HAWQ 2.2.0.0-incubating artifacts.  (was: Need to update the HAWQ (2.2.0.0), 
HAWQ Ambari Plugin (2.2.0.0) and PXF (3.1.0.0) versions so that we can clearly 
identify Apache HAWQ 2.2.0.0-incubating artifacts.)

> Update HAWQ product version strings to 2.2.0.0 (HAWQ/HAWQ Ambari Plugin) & 
> 3.2.0.0 (PXF)
> 
>
> Key: HAWQ-1406
> URL: https://issues.apache.org/jira/browse/HAWQ-1406
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Affects Versions: 2.1.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.2.0.0-incubating
>
>
> Need to update the HAWQ (2.2.0.0), HAWQ Ambari Plugin (2.2.0.0) and PXF 
> (3.2.0.0) versions so that we can clearly identify Apache HAWQ 
> 2.2.0.0-incubating artifacts.





[GitHub] incubator-hawq issue #1188: HAWQ-1406. Update HAWQ product version strings f...

2017-03-24 Thread huor
Github user huor commented on the issue:

https://github.com/apache/incubator-hawq/pull/1188
  
@edespino, @shivzone, @denalex, @radarwave, @paul-guo-, please review. 
Thanks.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq pull request #1188: HAWQ-1406. Update HAWQ product version st...

2017-03-24 Thread huor
GitHub user huor opened a pull request:

https://github.com/apache/incubator-hawq/pull/1188

HAWQ-1406. Update HAWQ product version strings for 2.2.0.0-incubating 
release

Update HAWQ and HAWQ Ambari Plugin version strings for 2.2.0.0-incubating 
release.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/huor/incubator-hawq huor_version

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/1188.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1188


commit f89006864f8f9014d10372c34d217df7845cfa86
Author: Ruilong Huo 
Date:   2017-03-24T14:50:09Z

HAWQ-1406. Update HAWQ product version strings for 2.2.0.0-incubating 
release






[jira] [Created] (HAWQ-1408) PANICs during COPY ... FROM STDIN

2017-03-24 Thread Ming LI (JIRA)
Ming LI created HAWQ-1408:
-

 Summary: PANICs during COPY ... FROM STDIN
 Key: HAWQ-1408
 URL: https://issues.apache.org/jira/browse/HAWQ-1408
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Core
Reporter: Ming LI
Assignee: Ed Espino
 Fix For: 2.1.0.0-incubating


We found PANICs (and the corresponding core dumps). From the initial analysis of the 
logs and core dump, the query causing this PANIC is a "COPY ... FROM STDIN". 
This query does not always panic.
These queries are executed from Java/Scala code (by one of the IG Spark 
Jobs). Connections to the DB are managed by a connection pool (commons-dbcp2) and 
validated on borrow by a “select 1” validation query. IG is using 
postgresql-9.4-1206-jdbc41 as the Java driver to create those connections. I 
believe they should be using the driver from DataDirect, available on PivNet; 
however, I haven't found hard evidence pointing to the driver as the root cause.

Below is my initial analysis of the packcore for the master PANIC; not sure if this 
helps or makes sense.

This is the backtrace of the packcore for process 466858:

{code}
(gdb) bt
#0  0x7fd875f906ab in raise () from 
/data/logs/52280/packcore-core.postgres.466858/lib64/libpthread.so.0
#1  0x008c0b19 in SafeHandlerForSegvBusIll (postgres_signal_arg=11, 
processName=) at elog.c:4519
#2  
#3  0x0053b9c3 in SetSegnoForWrite (existing_segnos=0x4c46ff0, 
existing_segnos@entry=0x0, relid=relid@entry=1195061, 
segment_num=segment_num@entry=6, forNewRel=forNewRel@entry=0 '\000', 
keepHash=keepHash@entry=1 '\001') at appendonlywriter.c:1166
#4  0x0053c08f in assignPerRelSegno 
(all_relids=all_relids@entry=0x2b96d68, segment_num=6) at 
appendonlywriter.c:1212
#5  0x005f79e8 in DoCopy (stmt=stmt@entry=0x2b2a3d8, 
queryString=) at copy.c:1591
#6  0x007ef737 in ProcessUtility (parsetree=parsetree@entry=0x2b2a3d8, 
queryString=0x2c2f550 "COPY 
mis_data_ig_client_derived_attributes.client_derived_attributes_src (id, 
tracking_id, name, value_string, value_timestamp, value_number, value_boolean, 
environment, account, channel, device, feat"...,
params=0x0, isTopLevel=isTopLevel@entry=1 '\001', 
dest=dest@entry=0x2b2a7c8, completionTag=completionTag@entry=0x7ffcb5e318e0 "") 
at utility.c:1076
#7  0x007ea95e in PortalRunUtility (portal=portal@entry=0x2b8eab0, 
utilityStmt=utilityStmt@entry=0x2b2a3d8, isTopLevel=isTopLevel@entry=1 '\001', 
dest=dest@entry=0x2b2a7c8, completionTag=completionTag@entry=0x7ffcb5e318e0 "") 
at pquery.c:1969
#8  0x007ec13e in PortalRunMulti (portal=portal@entry=0x2b8eab0, 
isTopLevel=isTopLevel@entry=1 '\001', dest=dest@entry=0x2b2a7c8, 
altdest=altdest@entry=0x2b2a7c8, 
completionTag=completionTag@entry=0x7ffcb5e318e0 "") at pquery.c:2079
#9  0x007ede95 in PortalRun (portal=portal@entry=0x2b8eab0, 
count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=1 '\001', 
dest=dest@entry=0x2b2a7c8, altdest=altdest@entry=0x2b2a7c8, 
completionTag=completionTag@entry=0x7ffcb5e318e0 "") at pquery.c:1596
#10 0x007e5ad9 in exec_simple_query 
(query_string=query_string@entry=0x2b29100 "COPY 
mis_data_ig_client_derived_attributes.client_derived_attributes_src (id, 
tracking_id, name, value_string, value_timestamp, value_number, value_boolean, 
environment, account, channel, device, feat"...,
seqServerHost=seqServerHost@entry=0x0, 
seqServerPort=seqServerPort@entry=-1) at postgres.c:1816
#11 0x007e6cb2 in PostgresMain (argc=, argv=, argv@entry=0x29d7820, username=0x29d75d0 "mis_ig") at postgres.c:4840
#12 0x00799540 in BackendRun (port=0x29afc50) at postmaster.c:5915
#13 BackendStartup (port=0x29afc50) at postmaster.c:5484
#14 ServerLoop () at postmaster.c:2163
#15 0x0079c309 in PostmasterMain (argc=, argv=) at postmaster.c:1454
#16 0x004a4209 in main (argc=9, argv=0x29af010) at main.c:226
{code}

Jumping into frame 3 and running info locals, we found something odd about the 
"status" variable:

{code}
(gdb) f 3
#3  0x0053b9c3 in SetSegnoForWrite (existing_segnos=0x4c46ff0, 
existing_segnos@entry=0x0, relid=relid@entry=1195061, 
segment_num=segment_num@entry=6, forNewRel=forNewRel@entry=0 '\000', 
keepHash=keepHash@entry=1 '\001') at appendonlywriter.c:1166
1166appendonlywriter.c: No such file or directory.
(gdb) info locals
status = 0x0
[...]
{code}

This panic comes from this piece of code in "appendonlywriter.c":

{code}
for (int i = 0; i < segment_num; i++)
{
AOSegfileStatus *status = maxSegno4Segment[i];
status->inuse = true;
status->xid = CurrentXid;
existing_segnos = lappend_int(existing_segnos,  status->segno);
}
{code}

So, we are pulling a 0x0 (null ?!) entry from _maxSegno4Segment_... That's 
strange, because earlier in this function we populate this array, and we 
should not reach this section unless this 

[jira] [Updated] (HAWQ-1408) PANICs during COPY ... FROM STDIN

2017-03-24 Thread Ming LI (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming LI updated HAWQ-1408:
--
Affects Version/s: backlog

> PANICs during COPY ... FROM STDIN
> -
>
> Key: HAWQ-1408
> URL: https://issues.apache.org/jira/browse/HAWQ-1408
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Affects Versions: backlog
>Reporter: Ming LI
>Assignee: Ming LI
> Fix For: 2.1.0.0-incubating
>
>

[jira] [Commented] (HAWQ-1408) PANICs during COPY ... FROM STDIN

2017-03-24 Thread Ming LI (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940431#comment-15940431
 ] 

Ming LI commented on HAWQ-1408:
---

Since there are no steps to reproduce, I went through the code to find possible 
root causes, and I now have one candidate.

The possible cause is related to https://issues.apache.org/jira/browse/HAWQ-642.
(1) When keepHash, we cannot guarantee that generating remaining_num new seg 
files is enough to allocate a seg file for every one of the segment_num slots, 
because a newly generated seg file number may map to the same hash key id as an 
existing one.
(2) When !keepHash, the remaining_num currently returned from addCandidateSegno() 
is not precise, so we need to fix it to meet the needs of HAWQ-642.
(3) We should keep monitoring remaining_num at the final call to 
addCandidateSegno(), instead of only at the beginning, because of reason (1). That 
way, even if a new seg file is not actually used by this query (e.g. due to a 
hash key conflict), we can continue to allocate enough seg files for the query.

@lilima1 @hubertzhang , please correct me if I am wrong. Thanks.

> PANICs during COPY ... FROM STDIN
> -
>
> Key: HAWQ-1408
> URL: https://issues.apache.org/jira/browse/HAWQ-1408
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Affects Versions: backlog
>Reporter: Ming LI
>Assignee: Ming LI
> Fix For: 2.1.0.0-incubating
>
>

[jira] [Assigned] (HAWQ-1408) PANICs during COPY ... FROM STDIN

2017-03-24 Thread Ming LI (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming LI reassigned HAWQ-1408:
-

Assignee: Ming LI  (was: Ed Espino)

> PANICs during COPY ... FROM STDIN
> -
>
> Key: HAWQ-1408
> URL: https://issues.apache.org/jira/browse/HAWQ-1408
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Ming LI
>Assignee: Ming LI
> Fix For: 2.1.0.0-incubating
>
>

[jira] [Created] (HAWQ-1407) Update HAWQ DB product version (2.2.0.0-incubating)

2017-03-24 Thread Ruilong Huo (JIRA)
Ruilong Huo created HAWQ-1407:
-

 Summary: Update HAWQ DB product version (2.2.0.0-incubating)
 Key: HAWQ-1407
 URL: https://issues.apache.org/jira/browse/HAWQ-1407
 Project: Apache HAWQ
  Issue Type: Task
  Components: Build
Reporter: Ruilong Huo
Assignee: Ed Espino
 Fix For: 2.2.0.0-incubating


For the Apache HAWQ 2.2.0.0-incubating release, update the HAWQ DB version 
displayed with "SELECT VERSION()" (use PSQL command line utility).  This is 
controlled by the "GP_VERSION=2.2.0.0-incubating" variable in the "getversion" 
file in the root source directory.  This will be applied to the release branch.





[jira] [Updated] (HAWQ-1407) Update HAWQ DB product version (2.2.0.0-incubating)

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1407:
--
Affects Version/s: 2.1.0.0-incubating

> Update HAWQ DB product version (2.2.0.0-incubating)
> ---
>
> Key: HAWQ-1407
> URL: https://issues.apache.org/jira/browse/HAWQ-1407
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Affects Versions: 2.1.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ed Espino
> Fix For: 2.2.0.0-incubating
>
>
> For the Apache HAWQ 2.2.0.0-incubating release, update the HAWQ DB version 
> displayed with "SELECT VERSION()" (use PSQL command line utility).  This is 
> controlled by the "GP_VERSION=2.2.0.0-incubating" variable in the 
> "getversion" file in the root source directory.  This will be applied to the 
> release branch.





[jira] [Assigned] (HAWQ-1407) Update HAWQ DB product version (2.2.0.0-incubating)

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo reassigned HAWQ-1407:
-

Assignee: Ruilong Huo  (was: Ed Espino)

> Update HAWQ DB product version (2.2.0.0-incubating)
> ---
>
> Key: HAWQ-1407
> URL: https://issues.apache.org/jira/browse/HAWQ-1407
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Affects Versions: 2.1.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.2.0.0-incubating
>
>
> For the Apache HAWQ 2.2.0.0-incubating release, update the HAWQ DB version 
> displayed with "SELECT VERSION()" (use PSQL command line utility).  This is 
> controlled by the "GP_VERSION=2.2.0.0-incubating" variable in the 
> "getversion" file in the root source directory.  This will be applied to the 
> release branch.





[jira] [Updated] (HAWQ-1406) Update HAWQ product version strings to 2.2.0.0 (HAWQ/HAWQ Ambari Plugin) & 3.1.0.0 (PXF)

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1406:
--
Affects Version/s: 2.1.0.0-incubating

> Update HAWQ product version strings to 2.2.0.0 (HAWQ/HAWQ Ambari Plugin) & 
> 3.1.0.0 (PXF)
> 
>
> Key: HAWQ-1406
> URL: https://issues.apache.org/jira/browse/HAWQ-1406
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Affects Versions: 2.1.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.2.0.0-incubating
>
>
> Need to update the HAWQ (2.2.0.0), HAWQ Ambari Plugin (2.2.0.0) and PXF 
> (3.1.0.0) versions so that we can clearly identify Apache HAWQ 
> 2.2.0.0-incubating artifacts.





[jira] [Commented] (HAWQ-1406) Update HAWQ product version strings to 2.2.0.0 (HAWQ/HAWQ Ambari Plugin) & 3.1.0.0 (PXF)

2017-03-24 Thread Ruilong Huo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940405#comment-15940405
 ] 

Ruilong Huo commented on HAWQ-1406:
---

[~vVineet], please let me know if there is version change in PXF. Thanks.

> Update HAWQ product version strings to 2.2.0.0 (HAWQ/HAWQ Ambari Plugin) & 
> 3.1.0.0 (PXF)
> 
>
> Key: HAWQ-1406
> URL: https://issues.apache.org/jira/browse/HAWQ-1406
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.2.0.0-incubating
>
>
> Need to update the HAWQ (2.2.0.0), HAWQ Ambari Plugin (2.2.0.0) and PXF 
> (3.1.0.0) versions so that we can clearly identify Apache HAWQ 
> 2.2.0.0-incubating artifacts.





[jira] [Assigned] (HAWQ-1406) Update HAWQ product version strings to 2.2.0.0 (HAWQ/HAWQ Ambari Plugin) & 3.1.0.0 (PXF)

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo reassigned HAWQ-1406:
-

Assignee: Ruilong Huo  (was: Ed Espino)

> Update HAWQ product version strings to 2.2.0.0 (HAWQ/HAWQ Ambari Plugin) & 
> 3.1.0.0 (PXF)
> 
>
> Key: HAWQ-1406
> URL: https://issues.apache.org/jira/browse/HAWQ-1406
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.2.0.0-incubating
>
>
> Need to update the HAWQ (2.2.0.0), HAWQ Ambari Plugin (2.2.0.0) and PXF 
> (3.1.0.0) versions so that we can clearly identify Apache HAWQ 
> 2.2.0.0-incubating artifacts.





[jira] [Created] (HAWQ-1406) Update HAWQ product version strings to 2.2.0.0 (HAWQ/HAWQ Ambari Plugin) & 3.1.0.0 (PXF)

2017-03-24 Thread Ruilong Huo (JIRA)
Ruilong Huo created HAWQ-1406:
-

 Summary: Update HAWQ product version strings to 2.2.0.0 (HAWQ/HAWQ 
Ambari Plugin) & 3.1.0.0 (PXF)
 Key: HAWQ-1406
 URL: https://issues.apache.org/jira/browse/HAWQ-1406
 Project: Apache HAWQ
  Issue Type: Task
  Components: Build
Reporter: Ruilong Huo
Assignee: Ed Espino
 Fix For: 2.2.0.0-incubating


Need to update the HAWQ (2.2.0.0), HAWQ Ambari Plugin (2.2.0.0) and PXF 
(3.1.0.0) versions so that we can clearly identify Apache HAWQ 
2.2.0.0-incubating artifacts.





[jira] [Resolved] (HAWQ-1401) DESTDIR option not functioning well in 'make install'

2017-03-24 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei resolved HAWQ-1401.
-
Resolution: Fixed

Now we can run make install with the following steps:
./configure --prefix=/usr/local/apache-hawq
make
make install DESTDIR=/tmp/fake-root

HAWQ will then be installed to the /tmp/fake-root/usr/local/apache-hawq directory.

This helps us build an rpm package from the source tarball.

> DESTDIR option not functioning well in 'make install'
> -
>
> Key: HAWQ-1401
> URL: https://issues.apache.org/jira/browse/HAWQ-1401
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.2.0.0-incubating
>Reporter: Radar Lei
>Assignee: Radar Lei
> Fix For: 2.2.0.0-incubating
>
>
> Currently 'make install DESTDIR=/destdir/usr/local/apache-hawq' is not 
> working for all the files. 
> For example: our configuration files under 'etc' folder. Build rpm from 
> source tarball relies on this 'DESTDIR' option.
> We should make sure DESTDIR is working for all the files in the install list.





[jira] [Assigned] (HAWQ-331) Fix HAWQ Jenkins pullrequest build reporting

2017-03-24 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei reassigned HAWQ-331:
--

Assignee: Ed Espino  (was: Radar Lei)

> Fix HAWQ Jenkins pullrequest build reporting
> 
>
> Key: HAWQ-331
> URL: https://issues.apache.org/jira/browse/HAWQ-331
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Reporter: Goden Yao
>Assignee: Ed Espino
> Fix For: 2.2.0.0-incubating
>
>
> https://builds.apache.org/job/HAWQ-build-pullrequest/83/console
> It has been recently discovered that Jenkins reports SUCCESS even when a 
> build was actually failed.
> *Acceptance Criteria*
> 1. No false SUCCESS when it was a failure
> 2. Include Installcheck good, unit tests in the build process





[jira] [Commented] (HAWQ-331) Fix HAWQ Jenkins pullrequest build reporting

2017-03-24 Thread Radar Lei (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940290#comment-15940290
 ] 

Radar Lei commented on HAWQ-331:


I believe this was already fixed by Ed; re-assigning to Ed.

> Fix HAWQ Jenkins pullrequest build reporting
> 
>
> Key: HAWQ-331
> URL: https://issues.apache.org/jira/browse/HAWQ-331
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Reporter: Goden Yao
>Assignee: Ed Espino
> Fix For: 2.2.0.0-incubating
>
>
> https://builds.apache.org/job/HAWQ-build-pullrequest/83/console
> It has been recently discovered that Jenkins reports SUCCESS even when a 
> build was actually failed.
> *Acceptance Criteria*
> 1. No false SUCCESS when it was a failure
> 2. Include Installcheck good, unit tests in the build process





[GitHub] incubator-hawq pull request #1180: HAWQ-1396. Fix the bug when query hcatalo...

2017-03-24 Thread linwen
Github user linwen closed the pull request at:

https://github.com/apache/incubator-hawq/pull/1180




[jira] [Resolved] (HAWQ-920) HAWQ upgrade from 2.0.0.0 to 2.0.1.0

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo resolved HAWQ-920.
--
Resolution: Fixed

> HAWQ upgrade from 2.0.0.0 to 2.0.1.0
> 
>
> Key: HAWQ-920
> URL: https://issues.apache.org/jira/browse/HAWQ-920
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Upgrade
>Affects Versions: 2.0.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.1.0.0-incubating
>
>
> This is an umbrella task to cover HAWQ upgrade from 2.0.0.0 to 2.0.1.0 
> related tasks. We need to address the items below:
> 1. Investigate and identify instructions for hawq upgrade, including hawq 
> itself, procedural language packages, etc. This also needs to be added to the 
> hawq 2.0.1.0 documentation.
> 2. Add basic data verification for hawq upgrade, including procedural 
> languages, user-defined functions, etc.





[jira] [Closed] (HAWQ-920) HAWQ upgrade from 2.0.0.0 to 2.0.1.0

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo closed HAWQ-920.


> HAWQ upgrade from 2.0.0.0 to 2.0.1.0
> 
>
> Key: HAWQ-920
> URL: https://issues.apache.org/jira/browse/HAWQ-920
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Upgrade
>Affects Versions: 2.0.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.1.0.0-incubating
>
>
> This is an umbrella task to cover HAWQ upgrade from 2.0.0.0 to 2.0.1.0 
> related tasks; we need to address the items below:
> 1. Investigate and identify instructions for hawq upgrade, including hawq 
> itself, procedural language packages, etc. This also needs to be added to the 
> hawq 2.0.1.0 documentation.
> 2. Add basic data verification for hawq upgrade, including procedural 
> languages, user-defined functions, etc.





[jira] [Updated] (HAWQ-921) Instructions and documentation for HAWQ upgrade from 2.0.0.0 to 2.0.1.0

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-921:
-
Fix Version/s: (was: 2.2.0.0-incubating)
   2.1.0.0-incubating

> Instructions and documentation for HAWQ upgrade from 2.0.0.0 to 2.0.1.0
> ---
>
> Key: HAWQ-921
> URL: https://issues.apache.org/jira/browse/HAWQ-921
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Upgrade
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.1.0.0-incubating
>
>
> For HAWQ upgrade from 2.0.0.0 to 2.0.1.0, we need to investigate and identify 
> instructions for hawq upgrade, including hawq itself, procedural language 
> packages, etc. This also needs to be added to the hawq 2.0.1.0 documentation.





[jira] [Closed] (HAWQ-921) Instructions and documentation for HAWQ upgrade from 2.0.0.0 to 2.0.1.0

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo closed HAWQ-921.


> Instructions and documentation for HAWQ upgrade from 2.0.0.0 to 2.0.1.0
> ---
>
> Key: HAWQ-921
> URL: https://issues.apache.org/jira/browse/HAWQ-921
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Upgrade
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.1.0.0-incubating
>
>
> For HAWQ upgrade from 2.0.0.0 to 2.0.1.0, we need to investigate and identify 
> instructions for hawq upgrade, including hawq itself, procedural language 
> packages, etc. This also needs to be added to the hawq 2.0.1.0 documentation.





[jira] [Updated] (HAWQ-920) HAWQ upgrade from 2.0.0.0 to 2.0.1.0

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-920:
-
Fix Version/s: (was: 2.2.0.0-incubating)
   2.1.0.0-incubating

> HAWQ upgrade from 2.0.0.0 to 2.0.1.0
> 
>
> Key: HAWQ-920
> URL: https://issues.apache.org/jira/browse/HAWQ-920
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Upgrade
>Affects Versions: 2.0.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.1.0.0-incubating
>
>
> This is an umbrella task to cover HAWQ upgrade from 2.0.0.0 to 2.0.1.0 
> related tasks; we need to address the items below:
> 1. Investigate and identify instructions for hawq upgrade, including hawq 
> itself, procedural language packages, etc. This also needs to be added to the 
> hawq 2.0.1.0 documentation.
> 2. Add basic data verification for hawq upgrade, including procedural 
> languages, user-defined functions, etc.





[jira] [Resolved] (HAWQ-921) Instructions and documentation for HAWQ upgrade from 2.0.0.0 to 2.0.1.0

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo resolved HAWQ-921.
--
Resolution: Fixed

> Instructions and documentation for HAWQ upgrade from 2.0.0.0 to 2.0.1.0
> ---
>
> Key: HAWQ-921
> URL: https://issues.apache.org/jira/browse/HAWQ-921
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Upgrade
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.1.0.0-incubating
>
>
> For HAWQ upgrade from 2.0.0.0 to 2.0.1.0, we need to investigate and identify 
> instructions for hawq upgrade, including hawq itself, procedural language 
> packages, etc. This also needs to be added to the hawq 2.0.1.0 documentation.





[jira] [Assigned] (HAWQ-1401) DESTDIR option not functioning well in 'make install'

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo reassigned HAWQ-1401:
-

Assignee: Radar Lei  (was: Ruilong Huo)

> DESTDIR option not functioning well in 'make install'
> -
>
> Key: HAWQ-1401
> URL: https://issues.apache.org/jira/browse/HAWQ-1401
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.2.0.0-incubating
>Reporter: Radar Lei
>Assignee: Radar Lei
> Fix For: 2.2.0.0-incubating
>
>
> Currently 'make install DESTDIR=/destdir/usr/local/apache-hawq' is not 
> working for all the files. 
> For example: our configuration files under the 'etc' folder. Building an rpm 
> from the source tarball relies on this 'DESTDIR' option.
> We should make sure DESTDIR works for all the files in the install list.





[jira] [Closed] (HAWQ-98) Moving HAWQ docker file into code base

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-98?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo closed HAWQ-98.
---

> Moving HAWQ docker file into code base
> --
>
> Key: HAWQ-98
> URL: https://issues.apache.org/jira/browse/HAWQ-98
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Build
>Reporter: Goden Yao
>Assignee: Richard Guo
> Fix For: 2.2.0.0-incubating
>
>
> We have a pre-built docker image (check [HAWQ build & 
> install|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=61320026])
>  sitting outside the codebase.
> It should be incorporated in the Apache git and maintained by the community.
> Proposed location is to create a  folder under root





[jira] [Commented] (HAWQ-98) Moving HAWQ docker file into code base

2017-03-24 Thread Ruilong Huo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-98?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940014#comment-15940014
 ] 

Ruilong Huo commented on HAWQ-98:
-

Resolving the issue, as the docker file is available in the hawq code base.

> Moving HAWQ docker file into code base
> --
>
> Key: HAWQ-98
> URL: https://issues.apache.org/jira/browse/HAWQ-98
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Build
>Reporter: Goden Yao
>Assignee: Richard Guo
> Fix For: 2.2.0.0-incubating
>
>
> We have a pre-built docker image (check [HAWQ build & 
> install|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=61320026])
>  sitting outside the codebase.
> It should be incorporated in the Apache git and maintained by the community.
> Proposed location is to create a  folder under root





[jira] [Resolved] (HAWQ-98) Moving HAWQ docker file into code base

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-98?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo resolved HAWQ-98.
-
Resolution: Fixed
  Assignee: Richard Guo  (was: Roman Shaposhnik)

> Moving HAWQ docker file into code base
> --
>
> Key: HAWQ-98
> URL: https://issues.apache.org/jira/browse/HAWQ-98
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Build
>Reporter: Goden Yao
>Assignee: Richard Guo
> Fix For: 2.2.0.0-incubating
>
>
> We have a pre-built docker image (check [HAWQ build & 
> install|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=61320026])
>  sitting outside the codebase.
> It should be incorporated in the Apache git and maintained by the community.
> Proposed location is to create a  folder under root





[jira] [Updated] (HAWQ-98) Moving HAWQ docker file into code base

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-98?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-98:

Component/s: Build

> Moving HAWQ docker file into code base
> --
>
> Key: HAWQ-98
> URL: https://issues.apache.org/jira/browse/HAWQ-98
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Build
>Reporter: Goden Yao
>Assignee: Roman Shaposhnik
> Fix For: 2.2.0.0-incubating
>
>
> We have a pre-built docker image (check [HAWQ build & 
> install|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=61320026])
>  sitting outside the codebase.
> It should be incorporated in the Apache git and maintained by the community.
> Proposed location is to create a  folder under root





[jira] [Updated] (HAWQ-1224) It would be useful to have a gradle task that runs a PXF instance

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1224:
--
Fix Version/s: (was: 2.2.0.0-incubating)
   2.3.0.0-incubating

> It would be useful to have a gradle task that runs a PXF instance
> -
>
> Key: HAWQ-1224
> URL: https://issues.apache.org/jira/browse/HAWQ-1224
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: PXF
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 2.3.0.0-incubating
>
> Attachments: HAWQ-1224.patch.txt
>
>
> For testing and tinkering it is very useful to be able to just say
>   $ gradle appRun
> and have a working instance of PXF running.





[jira] [Commented] (HAWQ-124) Create Project Maturity Model summary file

2017-03-24 Thread Ruilong Huo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940008#comment-15940008
 ] 

Ruilong Huo commented on HAWQ-124:
--

[~espino], it seems you have created a page to address this:
https://cwiki.apache.org/confluence/display/HAWQ/Apache+Project+Maturity+Model

> Create Project Maturity Model summary file
> --
>
> Key: HAWQ-124
> URL: https://issues.apache.org/jira/browse/HAWQ-124
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Core
>Reporter: Caleb Welton
>Assignee: Ed Espino
> Fix For: 2.2.0.0-incubating
>
>
> Graduating from an Apache Incubator project requires showing the Apache 
> Incubator IPMC that we have reached a level of maturity as an incubator 
> project.  One tool that can be used to assess our maturity is the [Apache 
> Project Maturity Model 
> Document|https://community.apache.org/apache-way/apache-project-maturity-model.html].
>   
> I propose we do something similar to what Groovy did and include a Project 
> Maturity Self-assessment in our source code and evaluate ourselves with 
> respect to project maturity with each of our reports.  
> To do:
> 1. Create a MATURITY.adoc file in our root project directory containing our 
> self assessment.
> See 
> https://github.com/apache/groovy/blob/67b87a3592f13a6281f5b20081c37a66c80079b9/MATURITY.adoc
>  as an example document in the Groovy project.





[jira] [Updated] (HAWQ-311) Data Transfer tool

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-311:
-
Fix Version/s: (was: 2.2.0.0-incubating)
   2.3.0.0-incubating

> Data Transfer tool
> --
>
> Key: HAWQ-311
> URL: https://issues.apache.org/jira/browse/HAWQ-311
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Storage
>Reporter: Lei Chang
>Assignee: NILESH MANKAR
> Fix For: 2.3.0.0-incubating
>
>
> Some users have asked for a tool to transfer data between HAWQ clusters. It 
> would be quite useful for data migration.





[jira] [Assigned] (HAWQ-124) Create Project Maturity Model summary file

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo reassigned HAWQ-124:


Assignee: Ed Espino  (was: Lei Chang)

> Create Project Maturity Model summary file
> --
>
> Key: HAWQ-124
> URL: https://issues.apache.org/jira/browse/HAWQ-124
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Core
>Reporter: Caleb Welton
>Assignee: Ed Espino
> Fix For: 2.2.0.0-incubating
>
>
> Graduating from an Apache Incubator project requires showing the Apache 
> Incubator IPMC that we have reached a level of maturity as an incubator 
> project.  One tool that can be used to assess our maturity is the [Apache 
> Project Maturity Model 
> Document|https://community.apache.org/apache-way/apache-project-maturity-model.html].
>   
> I propose we do something similar to what Groovy did and include a Project 
> Maturity Self-assessment in our source code and evaluate ourselves with 
> respect to project maturity with each of our reports.  
> To do:
> 1. Create a MATURITY.adoc file in our root project directory containing our 
> self assessment.
> See 
> https://github.com/apache/groovy/blob/67b87a3592f13a6281f5b20081c37a66c80079b9/MATURITY.adoc
>  as an example document in the Groovy project.





[jira] [Assigned] (HAWQ-309) Support Centos/RHEL 7

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo reassigned HAWQ-309:


Assignee: Ed Espino  (was: Lei Chang)

> Support Centos/RHEL 7 
> --
>
> Key: HAWQ-309
> URL: https://issues.apache.org/jira/browse/HAWQ-309
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Build
>Reporter: Lei Chang
>Assignee: Ed Espino
> Fix For: 2.2.0.0-incubating
>
>






[jira] [Updated] (HAWQ-916) Replace com.pivotal.hawq package name to org.apache.hawq

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-916:
-
Fix Version/s: (was: 2.2.0.0-incubating)
   2.3.0.0-incubating

> Replace com.pivotal.hawq package name to org.apache.hawq
> 
>
> Key: HAWQ-916
> URL: https://issues.apache.org/jira/browse/HAWQ-916
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Storage
>Reporter: Lei Chang
>Assignee: Lei Chang
> Fix For: 2.3.0.0-incubating
>
> Attachments: pivotal.txt
>
>
> com.pivotal.hawq.mapreduce types are referenced in at least the following 
> apache hawq (incubating) directories, master branch:
> contrib/hawq-hadoop
> contrib/hawq-hadoop/hawq-mapreduce-tool
> contrib/hawq-hadoop/hawq-mapreduce-parquet
> contrib/hawq-hadoop/hawq-mapreduce-common
> contrib/hawq-hadoop/hawq-mapreduce-ao
> contrib/hawq-hadoop/target/apidocs





[jira] [Updated] (HAWQ-783) Remove quicklz in medadata

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-783:
-
Fix Version/s: (was: 2.2.0.0-incubating)
   2.3.0.0-incubating

> Remove quicklz in medadata
> --
>
> Key: HAWQ-783
> URL: https://issues.apache.org/jira/browse/HAWQ-783
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Storage
>Reporter: Paul Guo
>Assignee: Lei Chang
> Fix For: 2.3.0.0-incubating
>
>
> This is the rest work of complete quicklz removal, beside HAWQ-780 (Remove 
> quicklz compression related code but keep related meta data in short term).





[jira] [Updated] (HAWQ-947) set work_mem cannot work

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-947:
-
Fix Version/s: (was: 2.2.0.0-incubating)
   2.3.0.0-incubating

> set work_mem cannot work
> 
>
> Key: HAWQ-947
> URL: https://issues.apache.org/jira/browse/HAWQ-947
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 2.1.0.0-incubating
>Reporter: Biao Wu
>Assignee: Lei Chang
> Fix For: 2.3.0.0-incubating
>
>
> HAWQ version is 2.0.1.0 build dev.
> EXPLAIN ANALYZE:
> Work_mem: 9554K bytes max, 63834K bytes wanted.
> Then work_mem was set to '512MB', but it did not work:
> {code:sql}
> test=# EXPLAIN ANALYZE SELECT count(DISTINCT item_sku_id)
> test-# FROM gdm_m03_item_sku_da
> test-# WHERE item_origin ='中国大陆';
>   
>   
>QUERY PLAN
> 
> 
>  Aggregate  (cost=54177150.69..54177150.70 rows=1 width=8)
>Rows out:  Avg 1.0 rows x 1 workers.  
> Max/Last(seg-1:BJHC-HEBE-9014.hadoop.jd.local/seg-1:BJHC-HEBE-9014.hadoop.jd.local)
>  1/1 rows with 532498/532498 ms to end, start offset by 201/201 ms.
>->  Gather Motion 306:1  (slice2; segments: 306)  
> (cost=54177147.60..54177150.68 rows=1 width=8)
>  Rows out:  Avg 306.0 rows x 1 workers at destination.  
> Max/Last(seg-1:BJHC-HEBE-9014.hadoop.jd.local/seg-1:BJHC-HEBE-9014.hadoop.jd.local)
>  306/306 rows with 529394/529394 ms to first row, 532498/532498 ms to end, 
> start offset b
> y 201/201 ms.
>  ->  Aggregate  (cost=54177147.60..54177147.61 rows=1 width=8)
>Rows out:  Avg 1.0 rows x 306 workers.  
> Max/Last(seg305:BJHC-HEBE-9031.hadoop.jd.local/seg258:BJHC-HEBE-9029.hadoop.jd.local)
>  1/1 rows with 530367/532274 ms to end, start offset by 396/246 ms.
>Executor memory:  9554K bytes avg, 9554K bytes max 
> (seg305:BJHC-HEBE-9031.hadoop.jd.local).
>Work_mem used:  9554K bytes avg, 9554K bytes max 
> (seg305:BJHC-HEBE-9031.hadoop.jd.local).
>Work_mem wanted: 63695K bytes avg, 63834K bytes max 
> (seg296:BJHC-HEBE-9031.hadoop.jd.local) to lessen workfile I/O affecting 306 
> workers.
>->  Redistribute Motion 306:306  (slice1; segments: 306)  
> (cost=0.00..53550018.97 rows=819776 width=11)
>  Hash Key: gdm_m03_item_sku_da.item_sku_id
>  Rows out:  Avg 820083.0 rows x 306 workers at 
> destination.  
> Max/Last(seg296:BJHC-HEBE-9031.hadoop.jd.local/seg20:BJHC-HEBE-9016.hadoop.jd.local)
>  821880/818660 rows with 769/771 ms to first row, 524681/525063 ms to e
> nd, start offset by 352/307 ms.
>  ->  Append-only Scan on gdm_m03_item_sku_da  
> (cost=0.00..48532990.00 rows=819776 width=11)
>Filter: item_origin::text = '中国大陆'::text
>Rows out:  Avg 820083.0 rows x 306 workers.  
> Max/Last(seg46:BJHC-HEBE-9017.hadoop.jd.local/seg5:BJHC-HEBE-9015.hadoop.jd.local)
>  893390/810582 rows with 28/127 ms to first row, 73062/526318 ms to end, 
> start off
> set by 354/458 ms.
>  Slice statistics:
>(slice0)Executor memory: 1670K bytes.
>(slice1)Executor memory: 3578K bytes avg x 306 workers, 4711K bytes 
> max (seg172:BJHC-HEBE-9024.hadoop.jd.local).
>(slice2)  * Executor memory: 10056K bytes avg x 306 workers, 10056K bytes 
> max (seg305:BJHC-HEBE-9031.hadoop.jd.local).  Work_mem: 9554K bytes max, 
> 63834K bytes wanted.
>  Statement statistics:
>Memory used: 262144K bytes
>Memory wanted: 64233K bytes
>  Settings:  default_hash_table_bucket_number=6
>  Dispatcher statistics:
>executors used(total/cached/new connection): (612/0/612); dispatcher 
> time(total/connection/dispatch data): (489.036 ms/192.741 ms/293.357 ms).
>dispatch data time(max/min/avg): (37.798 ms/0.011 ms/3.504 ms); consume 
> executor data time(max/min/avg): (0.016 ms/0.002 ms/0.005 ms); free executor 
> time(max/min/avg): (0.000 ms/0.000 ms/0.000 ms).
>  Data locality statistics:
>data locality ratio: 0.864; virtual segment number: 306; different host 
> number: 17; virtual segment number per host(avg/min/max): (18/18/18); segment 
> size(avg/min/max): 

[jira] [Updated] (HAWQ-1058) Create a separated tarball for libhdfs3

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1058:
--
Fix Version/s: (was: 2.2.0.0-incubating)
   2.3.0.0-incubating

> Create a separated tarball for libhdfs3
> ---
>
> Key: HAWQ-1058
> URL: https://issues.apache.org/jira/browse/HAWQ-1058
> Project: Apache HAWQ
>  Issue Type: Test
>  Components: libhdfs
>Affects Versions: 2.0.0.0-incubating
>Reporter: Zhanwei Wang
>Assignee: Lei Chang
> Fix For: 2.3.0.0-incubating
>
>
> As discussed in the dev mail list. Proposed by Ramon that create a separated 
> tarball for libhdfs3 at HAWQ release.





[jira] [Updated] (HAWQ-996) gpfdist online help instructs user to download HAWQ Loader package from incorrect site

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-996:
-
Fix Version/s: (was: 2.2.0.0-incubating)
   backlog

> gpfdist online help instructs user to download HAWQ Loader package from 
> incorrect site
> --
>
> Key: HAWQ-996
> URL: https://issues.apache.org/jira/browse/HAWQ-996
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Lisa Owen
>Assignee: Lei Chang
>Priority: Minor
> Fix For: backlog
>
>
> running "gpfdist --help" displays the following incorrect output:
> *
> RUNNING GPFDIST AS A WINDOWS SERVICE
> *
> HAWQ Loaders allow gpfdist to run as a Windows Service.
> Follow the instructions below to download, register and
> activate gpfdist as a service:
> 1. Update your HAWQ Loader package to the latest
>version. This package is available from the
>EMC Download Center (https://emc.subscribenet.com)





[jira] [Updated] (HAWQ-1111) Support for IN() operator in PXF

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-:
--
Fix Version/s: (was: 2.2.0.0-incubating)
   2.3.0.0-incubating

> Support for IN() operator in PXF
> 
>
> Key: HAWQ-
> URL: https://issues.apache.org/jira/browse/HAWQ-
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Vineet Goel
>Assignee: Vineet Goel
> Fix For: 2.3.0.0-incubating
>
>
> HAWQ PXF external tables should be optimized for the IN() operator so that 
> users get the benefit of predicate pushdown. To achieve this, the HAWQ bridge 
> must send the serialized expression for the IN() operator to PXF. 
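For illustration only, a minimal sketch of how an IN() predicate might be flattened into a pushed-down filter string. The `InFilterEncoder` class, the `a<columnIndex>` column encoding, and the overall format are hypothetical, not the actual HAWQ/PXF wire protocol:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: encode an IN() predicate as a single filter string
// before sending it to the external system. Format is illustrative only.
public class InFilterEncoder {
    public static String encode(int columnIndex, List<String> values) {
        // e.g. column 2 with values (1,5,9) -> "a2 IN (1,5,9)"
        return "a" + columnIndex + " IN (" + String.join(",", values) + ")";
    }

    public static void main(String[] args) {
        System.out.println(encode(2, Arrays.asList("1", "5", "9")));
    }
}
```

The point is simply that the whole value list travels as one serialized expression, so the remote side can apply the predicate before returning rows.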





[jira] [Assigned] (HAWQ-1111) Support for IN() operator in PXF

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo reassigned HAWQ-:
-

Assignee: Vineet Goel  (was: Lei Chang)

> Support for IN() operator in PXF
> 
>
> Key: HAWQ-
> URL: https://issues.apache.org/jira/browse/HAWQ-
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Vineet Goel
>Assignee: Vineet Goel
> Fix For: 2.2.0.0-incubating
>
>
> HAWQ PXF external tables should be optimized for the IN() operator so that 
> users get the benefit of predicate pushdown. To achieve this, the HAWQ bridge 
> must send the serialized expression for the IN() operator to PXF. 





[jira] [Updated] (HAWQ-879) Verify the options specified when creating table

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-879:
-
Fix Version/s: (was: 2.2.0.0-incubating)
   backlog

> Verify the options specified when creating table
> 
>
> Key: HAWQ-879
> URL: https://issues.apache.org/jira/browse/HAWQ-879
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Parser, Tests
>Reporter: Lili Ma
>Assignee: Jiali Yao
> Fix For: backlog
>
>
> When creating a table, there are a lot of options that can be specified, 
> including appendonly, orientation, compresstype, compresslevel, pagesize, 
> rowgroupsize, blocksize, etc. We need to verify all the combinations of 
> different options and check whether the result output is valid.





[jira] [Updated] (HAWQ-950) PXF support for Float filters encoded in header data

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-950:
-
Fix Version/s: (was: 2.2.0.0-incubating)
   2.3.0.0-incubating

> PXF support for Float filters encoded in header data
> 
>
> Key: HAWQ-950
> URL: https://issues.apache.org/jira/browse/HAWQ-950
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: External Tables, PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Kavinder Dhaliwal
>Assignee: Goden Yao
> Fix For: 2.3.0.0-incubating
>
>
> HAWQ-779 contributed by [~jiadx] introduced the ability for hawq to serialize 
> filters on float columns and send the data to PXF. However, PXF is not 
> currently capable of parsing float values in the string filter.
> We need to:
> 1. add support for the float type on the Java side.
> 2. add a unit test for this change.
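As a rough sketch of what item 1 involves: a filter operand may carry a float value rather than an integer, so the parser needs a numeric fallback. The class and method below are illustrative only; the real PXF filter parser and its token format are more involved:

```java
// Simplified sketch: fall back from long to double when a filter operand
// carries a float value. Only illustrates the float-vs-integer distinction;
// not the actual PXF parsing code.
public class FloatOperandSketch {
    public static Number parseConstant(String token) {
        try {
            return Long.parseLong(token);     // integer-typed operand
        } catch (NumberFormatException e) {
            return Double.parseDouble(token); // float-typed operand
        }
    }

    public static void main(String[] args) {
        System.out.println(parseConstant("42").getClass().getSimpleName());
        System.out.println(parseConstant("3.14").getClass().getSimpleName());
    }
}
```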





[jira] [Updated] (HAWQ-29) Refactor HAWQ InputFormat to support Spark/Scala

2017-03-24 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-29?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-29:

Fix Version/s: (was: 2.2.0.0-incubating)
   backlog

> Refactor HAWQ InputFormat to support Spark/Scala
> 
>
> Key: HAWQ-29
> URL: https://issues.apache.org/jira/browse/HAWQ-29
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Storage
>Reporter: Lirong Jian
>Assignee: Lirong Jian
>Priority: Minor
>  Labels: features
> Fix For: backlog
>
>
> Currently the implementation of HAWQ InputFormat doesn't support Spark/Scala 
> very well. We need to refactor the code to support that feature. More 
> specifically, we need to implement the Serializable interface for some classes.
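A minimal sketch of the kind of change involved: marking a record-style class as java.io.Serializable so that Spark can ship instances between workers. The class and fields here (`RecordSketch`, `id`, `name`) are hypothetical, not the actual HAWQ InputFormat types:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Hypothetical record class: implementing Serializable (with a fixed
// serialVersionUID) is the sort of change needed before Spark can move
// such objects across the cluster.
public class RecordSketch implements Serializable {
    private static final long serialVersionUID = 1L;
    private final int id;
    private final String name;

    public RecordSketch(int id, String name) {
        this.id = id;
        this.name = name;
    }

    public int getId() { return id; }
    public String getName() { return name; }

    // Round-trip through Java serialization to show the class survives it.
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(new RecordSketch(7, "sku"));
        oos.flush();
        RecordSketch copy = (RecordSketch) new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray())).readObject();
        System.out.println(copy.getId() + " " + copy.getName());
    }
}
```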


