[jira] [Resolved] (HAWQ-665) Dump memory usage information during runaway query termination

2016-04-13 Thread Foyzur Rahman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Foyzur Rahman resolved HAWQ-665.

   Resolution: Fixed
Fix Version/s: 2.0.0-beta-incubating

> Dump memory usage information during runaway query termination
> --
>
> Key: HAWQ-665
> URL: https://issues.apache.org/jira/browse/HAWQ-665
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Query Execution
>Reporter: Foyzur Rahman
>Assignee: Foyzur Rahman
> Fix For: 2.0.0-beta-incubating
>
>
> Currently, when we run out of memory, we log the memory usage of all the 
> running queries on the segment where the OOM happened. However, if runaway 
> termination successfully terminates the biggest offender before we hit OOM, 
> we do not have any logging mechanism. This leaves us with insufficient 
> information for root cause analysis of memory leaks.
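
For context, a purely illustrative sketch of the kind of per-query dump this 
feature asks for follows; SessionState, the sessions array, and 
vmem_reserved_mb are hypothetical stand-ins for the example, not the HAWQ 
memory-accounting API.

{code}
/* Hypothetical sketch only: log each session's tracked memory on the segment
 * right before the runaway victim is terminated, so a post-mortem has the
 * same information an OOM dump would have provided. */
#include <stdio.h>

typedef struct SessionState
{
    int  session_id;
    long vmem_reserved_mb;
} SessionState;

static void
dump_memory_usage(const SessionState *sessions, int num_sessions, int victim_id)
{
    for (int i = 0; i < num_sessions; i++)
        fprintf(stderr, "runaway dump: session %d holds %ld MB%s\n",
                sessions[i].session_id,
                sessions[i].vmem_reserved_mb,
                sessions[i].session_id == victim_id ? " (terminating)" : "");
}
{code}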



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-540) Change output.replace-datanode-on-failure to true by default

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240626#comment-15240626
 ] 

ASF GitHub Bot commented on HAWQ-540:
-

Github user ictmalili commented on the pull request:

https://github.com/apache/incubator-hawq/pull/607#issuecomment-209769414
  
+1


> Change output.replace-datanode-on-failure to true by default
> 
>
> Key: HAWQ-540
> URL: https://issues.apache.org/jira/browse/HAWQ-540
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: libhdfs
>Reporter: zhenglin tao
>Assignee: Lei Chang
> Fix For: 2.0.0
>
>
> By default, output.replace-datanode-on-failure should be true for large 
> clusters.
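
For reference, a minimal sketch of how such a client-side setting can be 
supplied through the libhdfs-style builder API; the builder calls are the 
standard libhdfs/libhdfs3 C API, but whether HAWQ wires the key this way 
(rather than via hdfs-client.xml) is an assumption of the example, while the 
key name itself comes from this issue.

{code}
#include <hdfs/hdfs.h>   /* header path differs between libhdfs and libhdfs3 */

/* Sketch only: set the write-pipeline recovery policy before connecting. */
hdfsFS connect_with_replace_policy(const char *namenode, tPort port)
{
    struct hdfsBuilder *bld = hdfsNewBuilder();

    hdfsBuilderSetNameNode(bld, namenode);
    hdfsBuilderSetNameNodePort(bld, port);
    /* The key below is the setting discussed in HAWQ-540. */
    hdfsBuilderConfSetStr(bld, "output.replace-datanode-on-failure", "true");

    return hdfsBuilderConnect(bld);   /* consumes the builder */
}
{code}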



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-540) Change output.replace-datanode-on-failure to true by default

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240625#comment-15240625
 ] 

ASF GitHub Bot commented on HAWQ-540:
-

GitHub user ztao1987 opened a pull request:

https://github.com/apache/incubator-hawq/pull/607

HAWQ-540. Change output.replace-datanode-on-failure back to false.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ztao1987/incubator-hawq HAWQ-540

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/607.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #607


commit d583350e2d52ca5035770936bb9a73fa53e4de8d
Author: zhenglin tao 
Date:   2016-04-14T05:22:34Z

HAWQ-540. Change output.replace-datanode-on-failure back to false.




> Change output.replace-datanode-on-failure to true by default
> 
>
> Key: HAWQ-540
> URL: https://issues.apache.org/jira/browse/HAWQ-540
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: libhdfs
>Reporter: zhenglin tao
>Assignee: Lei Chang
> Fix For: 2.0.0
>
>
> By default, output.replace-datanode-on-failure should be true for large 
> clusters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-665) Dump memory usage information during runaway query termination

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240620#comment-15240620
 ] 

ASF GitHub Bot commented on HAWQ-665:
-

Github user foyzur closed the pull request at:

https://github.com/apache/incubator-hawq/pull/599


> Dump memory usage information during runaway query termination
> --
>
> Key: HAWQ-665
> URL: https://issues.apache.org/jira/browse/HAWQ-665
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Query Execution
>Reporter: Foyzur Rahman
>Assignee: Foyzur Rahman
>
> Currently, when we run out of memory, we log the memory usage of all the 
> running queries on the segment where the OOM happened. However, if runaway 
> termination successfully terminates the biggest offender before we hit OOM, 
> we do not have any logging mechanism. This leaves us with insufficient 
> information for root cause analysis of memory leaks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-462) Querying Hcatalog in HA Secure Environment Fails

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240587#comment-15240587
 ] 

ASF GitHub Bot commented on HAWQ-462:
-

Github user ictmalili commented on the pull request:

https://github.com/apache/incubator-hawq/pull/503#issuecomment-209759590
  
Adding @ztao1987's comments, LGTM.


> Querying Hcatalog in HA Secure Environment Fails
> 
>
> Key: HAWQ-462
> URL: https://issues.apache.org/jira/browse/HAWQ-462
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: External Tables, Hcatalog, PXF
>Affects Versions: 2.0.0-beta-incubating
>Reporter: Kavinder Dhaliwal
>Assignee: Shivram Mani
> Fix For: 2.0.0
>
>
> On an HA Secure Cluster querying a hive external table works:
> {code}
> create external table pxf_hive(s1 text, n1 int, d1 float, bg bigint, b 
> boolean) location 
> ('pxf://ip-10-32-38-119.ore1.vpc.pivotal.io:51200/hive_table?profile=Hive') 
> format 'custom' (formatter='pxfwritable_import');
> select * from pxf_hive;
> {code}
> but querying the same table via hcatalog does not
> {code}
> SELECT * FROM hcatalog.default.hive_table;
> ERROR:  Failed to acquire a delegation token for uri hdfs://localhost:8020/ 
> (hd_work_mgr.c:930)
> {code}
> This should be fixed by the PR for 
> https://issues.apache.org/jira/browse/HAWQ-317



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-644) Failure in security ha environment with certain writable external tables

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240585#comment-15240585
 ] 

ASF GitHub Bot commented on HAWQ-644:
-

Github user ictmalili commented on the pull request:

https://github.com/apache/incubator-hawq/pull/604#issuecomment-209759488
  
LGTM. +1


> Failure in security ha environment with certain writable external tables
> 
>
> Key: HAWQ-644
> URL: https://issues.apache.org/jira/browse/HAWQ-644
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF, Security
>Reporter: Goden Yao
>Assignee: Goden Yao
>
> In a Secure HA environment:
> A few tests that test writable tables fail due to an empty dfs_address prior 
> to getting the delegation token in the segment.
> On initial investigation, the shared_path does not seem to be set by the 
> hawq master.
> Log from the affected segment. The hdfs path available in the segment is 
> empty, hence the failure.
> {code}
> 2016-04-08 22:53:11.034661 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","PXF 
> received from configuration HA Namenode-1 having rpc-address 
>  and rest-address 
> ","External table 
> readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"ha_config.c",157,
> 2016-04-08 22:53:11.034699 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","PXF 
> received from configuration HA Namenode-2 having rpc-address 
>  and rest-address 
> ","External table 
> readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"ha_config.c",157,
> 2016-04-08 22:53:11.034785 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","locating 
> token for 0^N\","External table readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"gpbridgeapi.c",521,
> 2016-04-08 22:53:11.034871 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"WARNING","01000","internal 
> error HdfsParsePath: no filesystem protocol found in path 
> ""0^N\^A""","External table readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"fd.c",2501,
> 2016-04-08 22:53:11.035004 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG1","0","Deleted 
> entry for query (sessionid=73, commandcnt=43)",,"INSERT INTO 
> writable_table SELECT * FROM readable_table;",0,,"workfile_queryspace.c",329,
> 2016-04-08 22:53:11.066243 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"ERROR","XX000","fail to 
> parse uri: 0^N\ (cdbfilesystemcredential.c:529)","External table 
> readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"cdbfilesystemcredential.c",529,"Stack trace:
> 10x871f8f postgres  + 0x871f8f
> 20x872679 postgres elog_finish + 0xa9
> 30x99d7b8 postgres find_filesystem_credential_with_uri + 0x78
> 40x7fc935c8f5a2 pxf.so add_delegation_token + 0xa2
> 50x7fc935c8f9b2 pxf.so get_pxf_server + 0xe2
> 60x7fc935c8ff02 pxf.so gpbridge_export_start + 0x42
> 70x7fc935c90170 pxf.so gpbridge_export + 0x50
> 80x507eb8 postgres  + 0x507eb8
> 90x5083af postgres url_fwrite + 0x9f
> 10   0x5042b4 postgres external_insert + 0x184
> 11   0x69dcca postgres ExecInsert + 0x1fa
> 12   0x69d41c postgres ExecDML + 0x1ec
> 13   0x65e185 postgres ExecProcNode + 0x3c5
> 14   0x659f4a postgres  + 0x659f4a
> 15   0x65a8d3 postgres ExecutorRun + 0x4a3
> 16   0x7b550a postgres  + 0x7b550a
> 17   0x7b5baf postgres  + 0x7b5baf
> 18   0x7b6142 postgres PortalRun + 0x342
> 19   0x7b2c21 postgres PostgresMain + 0x3861
> 20   0x763ce3 postgres  + 0x763ce3
> 21   0x76443d postgres  + 0x76443d
> 22   0x76626e postgres PostmasterMain + 0xc7e
> 23   0x6c04ea postgres main + 0x48a
> 24   0x7fc95f276d5d libc.so.6 __libc_start_main + 0xfd
> 25   0x4a1489 

[jira] [Commented] (HAWQ-665) Dump memory usage information during runaway query termination

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240547#comment-15240547
 ] 

ASF GitHub Bot commented on HAWQ-665:
-

Github user liming01 commented on the pull request:

https://github.com/apache/incubator-hawq/pull/599#issuecomment-209749629
  
@foyzur, code merged, please close this PR.


> Dump memory usage information during runaway query termination
> --
>
> Key: HAWQ-665
> URL: https://issues.apache.org/jira/browse/HAWQ-665
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Query Execution
>Reporter: Foyzur Rahman
>Assignee: Foyzur Rahman
>
> Currently, when we run out of memory, we log the memory usage of all the 
> running queries on the segment where the OOM happened. However, if runaway 
> termination successfully terminates the biggest offender before we hit OOM, 
> we do not have any logging mechanism. This leaves us with insufficient 
> information for root cause analysis of memory leaks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-462) Querying Hcatalog in HA Secure Environment Fails

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240545#comment-15240545
 ] 

ASF GitHub Bot commented on HAWQ-462:
-

Github user ztao1987 commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/503#discussion_r59660545
  
--- Diff: src/bin/gpfusion/gpbridgeapi.c ---
@@ -536,3 +536,8 @@ void free_token_resources(PxfInputData *inputData)
 
pfree(inputData->token);
 }
+
+void free_dfs_address()
+{
+   free(dfs_address);
+}
--- End diff --

Yes, just checking the NULL condition is enough.
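
For reference, a minimal sketch of the NULL-guarded cleanup the reviewers 
converge on; dfs_address and free_dfs_address() are the names from the diff 
above, while the guard itself reflects the suggestion in this thread, not 
necessarily the code as merged.

{code}
#include <stdlib.h>

extern char *dfs_address;   /* allocated only when dispatched in secure mode */

void free_dfs_address(void)
{
    if (dfs_address != NULL)
    {
        free(dfs_address);
        dfs_address = NULL;   /* avoid a dangling pointer on repeated calls */
    }
}
{code}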


> Querying Hcatalog in HA Secure Environment Fails
> 
>
> Key: HAWQ-462
> URL: https://issues.apache.org/jira/browse/HAWQ-462
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: External Tables, Hcatalog, PXF
>Affects Versions: 2.0.0-beta-incubating
>Reporter: Kavinder Dhaliwal
>Assignee: Shivram Mani
> Fix For: 2.0.0
>
>
> On an HA Secure Cluster querying a hive external table works:
> {code}
> create external table pxf_hive(s1 text, n1 int, d1 float, bg bigint, b 
> boolean) location 
> ('pxf://ip-10-32-38-119.ore1.vpc.pivotal.io:51200/hive_table?profile=Hive') 
> format 'custom' (formatter='pxfwritable_import');
> select * from pxf_hive;
> {code}
> but querying the same table via hcatalog does not
> {code}
> SELECT * FROM hcatalog.default.hive_table;
> ERROR:  Failed to acquire a delegation token for uri hdfs://localhost:8020/ 
> (hd_work_mgr.c:930)
> {code}
> This should be fixed by the PR for 
> https://issues.apache.org/jira/browse/HAWQ-317



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-646) Bump up optimizer version number

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240546#comment-15240546
 ] 

ASF GitHub Bot commented on HAWQ-646:
-

Github user liming01 commented on the pull request:

https://github.com/apache/incubator-hawq/pull/603#issuecomment-209749511
  
@hsyuan, code merged, please close this PR.


> Bump up optimizer version number
> 
>
> Key: HAWQ-646
> URL: https://issues.apache.org/jira/browse/HAWQ-646
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Optimizer
>Reporter: Haisheng Yuan
>Assignee: Amr El-Helw
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-462) Querying Hcatalog in HA Secure Environment Fails

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240541#comment-15240541
 ] 

ASF GitHub Bot commented on HAWQ-462:
-

Github user kavinderd commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/503#discussion_r59660368
  
--- Diff: src/bin/gpfusion/gpbridgeapi.c ---
@@ -536,3 +536,8 @@ void free_token_resources(PxfInputData *inputData)
 
pfree(inputData->token);
 }
+
+void free_dfs_address()
+{
+   free(dfs_address);
+}
--- End diff --

Oh right, good catch. I will add a check in `free_dfs_address` for secure 
filesystem. Sound good?


> Querying Hcatalog in HA Secure Environment Fails
> 
>
> Key: HAWQ-462
> URL: https://issues.apache.org/jira/browse/HAWQ-462
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: External Tables, Hcatalog, PXF
>Affects Versions: 2.0.0-beta-incubating
>Reporter: Kavinder Dhaliwal
>Assignee: Shivram Mani
> Fix For: 2.0.0
>
>
> On an HA Secure Cluster querying a hive external table works:
> {code}
> create external table pxf_hive(s1 text, n1 int, d1 float, bg bigint, b 
> boolean) location 
> ('pxf://ip-10-32-38-119.ore1.vpc.pivotal.io:51200/hive_table?profile=Hive') 
> format 'custom' (formatter='pxfwritable_import');
> select * from pxf_hive;
> {code}
> but querying the same table via hcatalog does not
> {code}
> SELECT * FROM hcatalog.default.hive_table;
> ERROR:  Failed to acquire a delegation token for uri hdfs://localhost:8020/ 
> (hd_work_mgr.c:930)
> {code}
> This should be fixed by the PR for 
> https://issues.apache.org/jira/browse/HAWQ-317



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-462) Querying Hcatalog in HA Secure Environment Fails

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240538#comment-15240538
 ] 

ASF GitHub Bot commented on HAWQ-462:
-

Github user ztao1987 commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/503#discussion_r59660246
  
--- Diff: src/bin/gpfusion/gpbridgeapi.c ---
@@ -536,3 +536,8 @@ void free_token_resources(PxfInputData *inputData)
 
pfree(inputData->token);
 }
+
+void free_dfs_address()
+{
+   free(dfs_address);
+}
--- End diff --

I checked that free_dfs_address is called by gpbridge_export_start and 
gpbridge_import_start. Do you mean these two functions are only called in 
secure mode? If not, you are freeing a NULL pointer.


> Querying Hcatalog in HA Secure Environment Fails
> 
>
> Key: HAWQ-462
> URL: https://issues.apache.org/jira/browse/HAWQ-462
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: External Tables, Hcatalog, PXF
>Affects Versions: 2.0.0-beta-incubating
>Reporter: Kavinder Dhaliwal
>Assignee: Shivram Mani
> Fix For: 2.0.0
>
>
> On an HA Secure Cluster querying a hive external table works:
> {code}
> create external table pxf_hive(s1 text, n1 int, d1 float, bg bigint, b 
> boolean) location 
> ('pxf://ip-10-32-38-119.ore1.vpc.pivotal.io:51200/hive_table?profile=Hive') 
> format 'custom' (formatter='pxfwritable_import');
> select * from pxf_hive;
> {code}
> but querying the same table via hcatalog does not
> {code}
> SELECT * FROM hcatalog.default.hive_table;
> ERROR:  Failed to acquire a delegation token for uri hdfs://localhost:8020/ 
> (hd_work_mgr.c:930)
> {code}
> This should be fixed by the PR for 
> https://issues.apache.org/jira/browse/HAWQ-317



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-672) Add python module pygresql back into hawq workspace

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240536#comment-15240536
 ] 

ASF GitHub Bot commented on HAWQ-672:
-

Github user radarwave commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/606#discussion_r59660199
  
--- Diff: tools/bin/ext/__init__.py ---
@@ -0,0 +1,306 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+# 
+#   http://www.apache.org/licenses/LICENSE-2.0
+# 
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+from error import *
+
+from tokens import *
+from events import *
+from nodes import *
+
+from loader import *
+from dumper import *
+
+try:
+    from cyaml import *
--- End diff --

We had better remove the code not related to pygresql from this 
'__init__.py' file.


> Add python module pygresql back into hawq workspace
> ---
>
> Key: HAWQ-672
> URL: https://issues.apache.org/jira/browse/HAWQ-672
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Lei Chang
>
> HAWQ-271 (Remove external python modules) removed a number of external python 
> modules, including pygresql. Now, to install pygresql, we have to hack a bit 
> as https://cwiki.apache.org/confluence/display/HAWQ/Build+and+Install 
> describes, i.e. install the postgresql package first and then remove it after 
> installing pygresql; otherwise pip reports an error like "pg_config: command 
> not found". Since installing postgresql is a bit painful and the workaround 
> seems ugly, we could revert the pygresql part of HAWQ-271. That is to say, 
> users would not need to install pygresql themselves. Note that we made some 
> hacks on top of upstream pygresql.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-672) Add python module pygresql back into hawq workspace

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240535#comment-15240535
 ] 

ASF GitHub Bot commented on HAWQ-672:
-

Github user radarwave commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/606#discussion_r59660154
  
--- Diff: tools/bin/generate-greenplum-path.sh ---
@@ -90,16 +90,9 @@ EOF
 fi
 
 #setup PYTHONPATH
-# OSX does NOT need pygresql/ path
-if [ "${PLAT}" = "Darwin" ] ; then
-cat <

> Add python module pygresql back into hawq workspace
> ---
>
> Key: HAWQ-672
> URL: https://issues.apache.org/jira/browse/HAWQ-672
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Lei Chang
>
> HAWQ-271 (Remove external python modules) removed a number of external python 
> modules, including pygresql. Now, to install pygresql, we have to hack a bit 
> as https://cwiki.apache.org/confluence/display/HAWQ/Build+and+Install 
> describes, i.e. install the postgresql package first and then remove it after 
> installing pygresql; otherwise pip reports an error like "pg_config: command 
> not found". Since installing postgresql is a bit painful and the workaround 
> seems ugly, we could revert the pygresql part of HAWQ-271. That is to say, 
> users would not need to install pygresql themselves. Note that we made some 
> hacks on top of upstream pygresql.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-462) Querying Hcatalog in HA Secure Environment Fails

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240532#comment-15240532
 ] 

ASF GitHub Bot commented on HAWQ-462:
-

Github user kavinderd commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/503#discussion_r59659935
  
--- Diff: src/bin/gpfusion/gpbridgeapi.c ---
@@ -536,3 +536,8 @@ void free_token_resources(PxfInputData *inputData)
 
pfree(inputData->token);
 }
+
+void free_dfs_address()
+{
+   free(dfs_address);
+}
--- End diff --

Yes, only in secure mode. However, it is only dispatched from the master in 
secure mode as well, so it is never allocated in any other case.


> Querying Hcatalog in HA Secure Environment Fails
> 
>
> Key: HAWQ-462
> URL: https://issues.apache.org/jira/browse/HAWQ-462
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: External Tables, Hcatalog, PXF
>Affects Versions: 2.0.0-beta-incubating
>Reporter: Kavinder Dhaliwal
>Assignee: Shivram Mani
> Fix For: 2.0.0
>
>
> On an HA Secure Cluster querying a hive external table works:
> {code}
> create external table pxf_hive(s1 text, n1 int, d1 float, bg bigint, b 
> boolean) location 
> ('pxf://ip-10-32-38-119.ore1.vpc.pivotal.io:51200/hive_table?profile=Hive') 
> format 'custom' (formatter='pxfwritable_import');
> select * from pxf_hive;
> {code}
> but querying the same table via hcatalog does not
> {code}
> SELECT * FROM hcatalog.default.hive_table;
> ERROR:  Failed to acquire a delegation token for uri hdfs://localhost:8020/ 
> (hd_work_mgr.c:930)
> {code}
> This should be fixed by the PR for 
> https://issues.apache.org/jira/browse/HAWQ-317



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-462) Querying Hcatalog in HA Secure Environment Fails

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240527#comment-15240527
 ] 

ASF GitHub Bot commented on HAWQ-462:
-

Github user ztao1987 commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/503#discussion_r59659812
  
--- Diff: src/bin/gpfusion/gpbridgeapi.c ---
@@ -536,3 +536,8 @@ void free_token_resources(PxfInputData *inputData)
 
pfree(inputData->token);
 }
+
+void free_dfs_address()
+{
+   free(dfs_address);
+}
--- End diff --

Will free_dfs_address only be called in secure mode? Otherwise it will 
core dump.


> Querying Hcatalog in HA Secure Environment Fails
> 
>
> Key: HAWQ-462
> URL: https://issues.apache.org/jira/browse/HAWQ-462
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: External Tables, Hcatalog, PXF
>Affects Versions: 2.0.0-beta-incubating
>Reporter: Kavinder Dhaliwal
>Assignee: Shivram Mani
> Fix For: 2.0.0
>
>
> On an HA Secure Cluster querying a hive external table works:
> {code}
> create external table pxf_hive(s1 text, n1 int, d1 float, bg bigint, b 
> boolean) location 
> ('pxf://ip-10-32-38-119.ore1.vpc.pivotal.io:51200/hive_table?profile=Hive') 
> format 'custom' (formatter='pxfwritable_import');
> select * from pxf_hive;
> {code}
> but querying the same table via hcatalog does not
> {code}
> SELECT * FROM hcatalog.default.hive_table;
> ERROR:  Failed to acquire a delegation token for uri hdfs://localhost:8020/ 
> (hd_work_mgr.c:930)
> {code}
> This should be fixed by the PR for 
> https://issues.apache.org/jira/browse/HAWQ-317



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-462) Querying Hcatalog in HA Secure Environment Fails

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240521#comment-15240521
 ] 

ASF GitHub Bot commented on HAWQ-462:
-

Github user ztao1987 commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/503#discussion_r59659407
  
--- Diff: src/backend/cdb/cdbquerycontextdispatching.c ---
@@ -770,6 +770,30 @@ RebuildTupleForRelation(QueryContextInfo *cxt)
 }
 
 /*
+ * Deserialize the Namespace data
+ */
+static void
+RebuildNamespace(QueryContextInfo *cxt)
+{
+
+   int len;
+   char buffer[4], *binary;
+   ReadData(cxt, buffer, sizeof(buffer), TRUE);
+
+   len = (int) ntohl(*(uint32 *) buffer);
+   binary = palloc(len);
+   if(ReadData(cxt, binary, len, TRUE))
+   {
+   StringInfoData buffer;
+   initStringInfoOfString(&buffer, binary, len);
+   dfs_address = strdup(buffer.data);
--- End diff --

Why not just strdup(binary) to dfs_address? 
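
For reference, strdup() stops at the first '\0', so copying the raw payload 
directly is only safe if the sender serialized the terminator as well (compare 
the strlen(namespace) + 1 change discussed under HAWQ-644 elsewhere in this 
digest). A bounded copy such as the hypothetical helper below works either way.

{code}
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper, not HAWQ code: copy exactly len bytes and append a
 * terminator, so the result is a valid C string whether or not the payload
 * already ended in '\0'. */
static char *
copy_payload(const char *binary, int len)
{
    char *result = malloc((size_t) len + 1);

    if (result == NULL)
        return NULL;
    memcpy(result, binary, (size_t) len);
    result[len] = '\0';
    return result;
}
{code}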


> Querying Hcatalog in HA Secure Environment Fails
> 
>
> Key: HAWQ-462
> URL: https://issues.apache.org/jira/browse/HAWQ-462
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: External Tables, Hcatalog, PXF
>Affects Versions: 2.0.0-beta-incubating
>Reporter: Kavinder Dhaliwal
>Assignee: Shivram Mani
> Fix For: 2.0.0
>
>
> On an HA Secure Cluster querying a hive external table works:
> {code}
> create external table pxf_hive(s1 text, n1 int, d1 float, bg bigint, b 
> boolean) location 
> ('pxf://ip-10-32-38-119.ore1.vpc.pivotal.io:51200/hive_table?profile=Hive') 
> format 'custom' (formatter='pxfwritable_import');
> select * from pxf_hive;
> {code}
> but querying the same table via hcatalog does not
> {code}
> SELECT * FROM hcatalog.default.hive_table;
> ERROR:  Failed to acquire a delegation token for uri hdfs://localhost:8020/ 
> (hd_work_mgr.c:930)
> {code}
> This should be fixed by the PR for 
> https://issues.apache.org/jira/browse/HAWQ-317



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-672) Add python module pygresql back into hawq workspace

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240515#comment-15240515
 ] 

ASF GitHub Bot commented on HAWQ-672:
-

GitHub user paul-guo- opened a pull request:

https://github.com/apache/incubator-hawq/pull/606

HAWQ-672. Add python module pygresql back into hawq workspace

In this patch, all of the newly added files come from the workspace after 
reverting HAWQ-271. The other changes mostly follow the original patch for 
HAWQ-271.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/paul-guo-/incubator-hawq master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/606.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #606


commit 90816ddb5535bda8c0f24464dc86e2e5dd79389b
Author: Paul Guo 
Date:   2016-04-14T03:02:54Z

HAWQ-672. Add python module pygresql back into hawq workspace




> Add python module pygresql back into hawq workspace
> ---
>
> Key: HAWQ-672
> URL: https://issues.apache.org/jira/browse/HAWQ-672
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Lei Chang
>
> HAWQ-271 (Remove external python modules) removed a number of external python 
> modules, including pygresql. Now, to install pygresql, we have to hack a bit 
> as https://cwiki.apache.org/confluence/display/HAWQ/Build+and+Install 
> describes, i.e. install the postgresql package first and then remove it after 
> installing pygresql; otherwise pip reports an error like "pg_config: command 
> not found". Since installing postgresql is a bit painful and the workaround 
> seems ugly, we could revert the pygresql part of HAWQ-271. That is to say, 
> users would not need to install pygresql themselves. Note that we made some 
> hacks on top of upstream pygresql.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-644) Failure in security ha environment with certain writable external tables

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240491#comment-15240491
 ] 

ASF GitHub Bot commented on HAWQ-644:
-

Github user ztao1987 commented on the pull request:

https://github.com/apache/incubator-hawq/pull/604#issuecomment-209733741
  
+1


> Failure in security ha environment with certain writable external tables
> 
>
> Key: HAWQ-644
> URL: https://issues.apache.org/jira/browse/HAWQ-644
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF, Security
>Reporter: Goden Yao
>Assignee: Goden Yao
>
> In a Secure HA environment:
> A few tests that test writable tables fail due to an empty dfs_address prior 
> to getting the delegation token in the segment.
> On initial investigation, the shared_path does not seem to be set by the 
> hawq master.
> Log from the affected segment. The hdfs path available in the segment is 
> empty, hence the failure.
> {code}
> 2016-04-08 22:53:11.034661 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","PXF 
> received from configuration HA Namenode-1 having rpc-address 
>  and rest-address 
> ","External table 
> readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"ha_config.c",157,
> 2016-04-08 22:53:11.034699 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","PXF 
> received from configuration HA Namenode-2 having rpc-address 
>  and rest-address 
> ","External table 
> readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"ha_config.c",157,
> 2016-04-08 22:53:11.034785 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","locating 
> token for 0^N\","External table readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"gpbridgeapi.c",521,
> 2016-04-08 22:53:11.034871 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"WARNING","01000","internal 
> error HdfsParsePath: no filesystem protocol found in path 
> ""0^N\^A""","External table readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"fd.c",2501,
> 2016-04-08 22:53:11.035004 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG1","0","Deleted 
> entry for query (sessionid=73, commandcnt=43)",,"INSERT INTO 
> writable_table SELECT * FROM readable_table;",0,,"workfile_queryspace.c",329,
> 2016-04-08 22:53:11.066243 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"ERROR","XX000","fail to 
> parse uri: 0^N\ (cdbfilesystemcredential.c:529)","External table 
> readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"cdbfilesystemcredential.c",529,"Stack trace:
> 10x871f8f postgres  + 0x871f8f
> 20x872679 postgres elog_finish + 0xa9
> 30x99d7b8 postgres find_filesystem_credential_with_uri + 0x78
> 40x7fc935c8f5a2 pxf.so add_delegation_token + 0xa2
> 50x7fc935c8f9b2 pxf.so get_pxf_server + 0xe2
> 60x7fc935c8ff02 pxf.so gpbridge_export_start + 0x42
> 70x7fc935c90170 pxf.so gpbridge_export + 0x50
> 80x507eb8 postgres  + 0x507eb8
> 90x5083af postgres url_fwrite + 0x9f
> 10   0x5042b4 postgres external_insert + 0x184
> 11   0x69dcca postgres ExecInsert + 0x1fa
> 12   0x69d41c postgres ExecDML + 0x1ec
> 13   0x65e185 postgres ExecProcNode + 0x3c5
> 14   0x659f4a postgres  + 0x659f4a
> 15   0x65a8d3 postgres ExecutorRun + 0x4a3
> 16   0x7b550a postgres  + 0x7b550a
> 17   0x7b5baf postgres  + 0x7b5baf
> 18   0x7b6142 postgres PortalRun + 0x342
> 19   0x7b2c21 postgres PostgresMain + 0x3861
> 20   0x763ce3 postgres  + 0x763ce3
> 21   0x76443d postgres  + 0x76443d
> 22   0x76626e postgres PostmasterMain + 0xc7e
> 23   0x6c04ea postgres main + 0x48a
> 24   0x7fc95f276d5d libc.so.6 __libc_start_main + 0xfd
> 25   0x4a1489 postgres  + 

[jira] [Commented] (HAWQ-673) Unify out put of explain analyze.

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240478#comment-15240478
 ] 

ASF GitHub Bot commented on HAWQ-673:
-

GitHub user zhangh43 opened a pull request:

https://github.com/apache/incubator-hawq/pull/605

HAWQ-673. Unify out put of explain analyze.

The output of explain analyze is different between one QE and multiple QEs; 
we should unify the output format.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhangh43/incubator-hawq explain

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/605.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #605


commit 71cf812960a407909ae75024acb9a202d74d4759
Author: hzhang2 
Date:   2016-04-14T01:36:51Z

fff




> Unify out put of explain analyze.
> -
>
> Key: HAWQ-673
> URL: https://issues.apache.org/jira/browse/HAWQ-673
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Hubert Zhang
>Assignee: Hubert Zhang
>
> The output of explain analyze is different between one QE and multiple QEs; 
> we should unify the output format.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HAWQ-673) Unify out put of explain analyze.

2016-04-13 Thread Hubert Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hubert Zhang reassigned HAWQ-673:
-

Assignee: Hubert Zhang  (was: Lei Chang)

> Unify out put of explain analyze.
> -
>
> Key: HAWQ-673
> URL: https://issues.apache.org/jira/browse/HAWQ-673
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Hubert Zhang
>Assignee: Hubert Zhang
>
> The output of explain analyze is different between one QE and multiple QEs; 
> we should unify the output format.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HAWQ-673) Unify out put of explain analyze.

2016-04-13 Thread Hubert Zhang (JIRA)
Hubert Zhang created HAWQ-673:
-

 Summary: Unify out put of explain analyze.
 Key: HAWQ-673
 URL: https://issues.apache.org/jira/browse/HAWQ-673
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Core
Reporter: Hubert Zhang
Assignee: Lei Chang


The output of explain analyze is different between one QE and multiple QEs; 
we should unify the output format.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HAWQ-672) Add python module pygresql back into hawq workspace

2016-04-13 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-672:
-

 Summary: Add python module pygresql back into hawq workspace
 Key: HAWQ-672
 URL: https://issues.apache.org/jira/browse/HAWQ-672
 Project: Apache HAWQ
  Issue Type: Bug
Reporter: Paul Guo
Assignee: Lei Chang


HAWQ-271 (Remove external python modules) removed a number of external python 
modules, including pygresql. Now, to install pygresql, we have to hack a bit as 
https://cwiki.apache.org/confluence/display/HAWQ/Build+and+Install describes, 
i.e. install the postgresql package first and then remove it after installing 
pygresql; otherwise pip reports an error like "pg_config: command not found". 
Since installing postgresql is a bit painful and the workaround seems ugly, we 
could revert the pygresql part of HAWQ-271. That is to say, users would not 
need to install pygresql themselves. Note that we made some hacks on top of 
upstream pygresql.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HAWQ-669) Interconnect guc gp_interconnect_transmit_timeout type error

2016-04-13 Thread Ivan Weng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Weng resolved HAWQ-669.

   Resolution: Fixed
Fix Version/s: 2.0.0-beta-incubating

> Interconnect guc gp_interconnect_transmit_timeout type error
> 
>
> Key: HAWQ-669
> URL: https://issues.apache.org/jira/browse/HAWQ-669
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Interconnect
>Reporter: Ivan Weng
>Assignee: Lei Chang
> Fix For: 2.0.0-beta-incubating
>
>
> The interconnect GUC gp_interconnect_transmit_timeout is typed as int; when 
> it is computed in microseconds (us), the integer overflows.
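
For illustration, a minimal sketch of the overflow mode; the 3600-second value 
and the conversion factor are assumptions for the example, not taken from HAWQ 
source.

{code}
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int timeout_sec = 3600;   /* assumed GUC value, in seconds */

    /* 3600 * 1000000 = 3.6e9 exceeds INT_MAX, so the 32-bit multiplication
     * overflows (undefined behaviour in C, typically observed as a wrap). */
    int     bad  = timeout_sec * 1000000;
    int64_t good = (int64_t) timeout_sec * 1000000;   /* widen first */

    printf("int result: %d, int64 result: %" PRId64 "\n", bad, good);
    return 0;
}
{code}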



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-644) Failure in security ha environment with certain writable external tables

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240165#comment-15240165
 ] 

ASF GitHub Bot commented on HAWQ-644:
-

Github user kavinderd commented on the pull request:

https://github.com/apache/incubator-hawq/pull/604#issuecomment-209679799
  
@sansanichfb We should add some, but they need to run in a secure environment.


> Failure in security ha environment with certain writable external tables
> 
>
> Key: HAWQ-644
> URL: https://issues.apache.org/jira/browse/HAWQ-644
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF, Security
>Reporter: Goden Yao
>Assignee: Goden Yao
>
> In a Secure HA environment:
> A few tests that test writable tables fail due to an empty dfs_address prior 
> to getting the delegation token in the segment.
> On initial investigation, the shared_path does not seem to be set by the 
> hawq master.
> Log from the affected segment. The hdfs path available in the segment is 
> empty, hence the failure.
> {code}
> 2016-04-08 22:53:11.034661 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","PXF 
> received from configuration HA Namenode-1 having rpc-address 
>  and rest-address 
> ","External table 
> readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"ha_config.c",157,
> 2016-04-08 22:53:11.034699 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","PXF 
> received from configuration HA Namenode-2 having rpc-address 
>  and rest-address 
> ","External table 
> readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"ha_config.c",157,
> 2016-04-08 22:53:11.034785 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","locating 
> token for 0^N\","External table readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"gpbridgeapi.c",521,
> 2016-04-08 22:53:11.034871 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"WARNING","01000","internal 
> error HdfsParsePath: no filesystem protocol found in path 
> ""0^N\^A""","External table readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"fd.c",2501,
> 2016-04-08 22:53:11.035004 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG1","0","Deleted 
> entry for query (sessionid=73, commandcnt=43)",,"INSERT INTO 
> writable_table SELECT * FROM readable_table;",0,,"workfile_queryspace.c",329,
> 2016-04-08 22:53:11.066243 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"ERROR","XX000","fail to 
> parse uri: 0^N\ (cdbfilesystemcredential.c:529)","External table 
> readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"cdbfilesystemcredential.c",529,"Stack trace:
> 10x871f8f postgres  + 0x871f8f
> 20x872679 postgres elog_finish + 0xa9
> 30x99d7b8 postgres find_filesystem_credential_with_uri + 0x78
> 40x7fc935c8f5a2 pxf.so add_delegation_token + 0xa2
> 50x7fc935c8f9b2 pxf.so get_pxf_server + 0xe2
> 60x7fc935c8ff02 pxf.so gpbridge_export_start + 0x42
> 70x7fc935c90170 pxf.so gpbridge_export + 0x50
> 80x507eb8 postgres  + 0x507eb8
> 90x5083af postgres url_fwrite + 0x9f
> 10   0x5042b4 postgres external_insert + 0x184
> 11   0x69dcca postgres ExecInsert + 0x1fa
> 12   0x69d41c postgres ExecDML + 0x1ec
> 13   0x65e185 postgres ExecProcNode + 0x3c5
> 14   0x659f4a postgres  + 0x659f4a
> 15   0x65a8d3 postgres ExecutorRun + 0x4a3
> 16   0x7b550a postgres  + 0x7b550a
> 17   0x7b5baf postgres  + 0x7b5baf
> 18   0x7b6142 postgres PortalRun + 0x342
> 19   0x7b2c21 postgres PostgresMain + 0x3861
> 20   0x763ce3 postgres  + 0x763ce3
> 21   0x76443d postgres  + 0x76443d
> 22   0x76626e postgres PostmasterMain + 0xc7e
> 23   0x6c04ea postgres main + 0x48a
> 24   

[jira] [Commented] (HAWQ-644) Failure in security ha environment with certain writable external tables

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240163#comment-15240163
 ] 

ASF GitHub Bot commented on HAWQ-644:
-

Github user sansanichfb commented on the pull request:

https://github.com/apache/incubator-hawq/pull/604#issuecomment-209679145
  
Are any regression tests needed to cover this edge case?


> Failure in security ha environment with certain writable external tables
> 
>
> Key: HAWQ-644
> URL: https://issues.apache.org/jira/browse/HAWQ-644
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF, Security
>Reporter: Goden Yao
>Assignee: Goden Yao
>
> In a Secure HA environment:
> A few tests that test writable tables fail due to an empty dfs_address prior 
> to getting the delegation token in the segment.
> On initial investigation, the shared_path does not seem to be set by the 
> hawq master.
> Log from the affected segment. The hdfs path available in the segment is 
> empty, hence the failure.
> {code}
> 2016-04-08 22:53:11.034661 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","PXF 
> received from configuration HA Namenode-1 having rpc-address 
>  and rest-address 
> ","External table 
> readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"ha_config.c",157,
> 2016-04-08 22:53:11.034699 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","PXF 
> received from configuration HA Namenode-2 having rpc-address 
>  and rest-address 
> ","External table 
> readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"ha_config.c",157,
> 2016-04-08 22:53:11.034785 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","locating 
> token for 0^N\","External table readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"gpbridgeapi.c",521,
> 2016-04-08 22:53:11.034871 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"WARNING","01000","internal 
> error HdfsParsePath: no filesystem protocol found in path 
> ""0^N\^A""","External table readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"fd.c",2501,
> 2016-04-08 22:53:11.035004 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG1","0","Deleted 
> entry for query (sessionid=73, commandcnt=43)",,"INSERT INTO 
> writable_table SELECT * FROM readable_table;",0,,"workfile_queryspace.c",329,
> 2016-04-08 22:53:11.066243 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"ERROR","XX000","fail to 
> parse uri: 0^N\ (cdbfilesystemcredential.c:529)","External table 
> readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"cdbfilesystemcredential.c",529,"Stack trace:
> 10x871f8f postgres  + 0x871f8f
> 20x872679 postgres elog_finish + 0xa9
> 30x99d7b8 postgres find_filesystem_credential_with_uri + 0x78
> 40x7fc935c8f5a2 pxf.so add_delegation_token + 0xa2
> 50x7fc935c8f9b2 pxf.so get_pxf_server + 0xe2
> 60x7fc935c8ff02 pxf.so gpbridge_export_start + 0x42
> 70x7fc935c90170 pxf.so gpbridge_export + 0x50
> 80x507eb8 postgres  + 0x507eb8
> 90x5083af postgres url_fwrite + 0x9f
> 10   0x5042b4 postgres external_insert + 0x184
> 11   0x69dcca postgres ExecInsert + 0x1fa
> 12   0x69d41c postgres ExecDML + 0x1ec
> 13   0x65e185 postgres ExecProcNode + 0x3c5
> 14   0x659f4a postgres  + 0x659f4a
> 15   0x65a8d3 postgres ExecutorRun + 0x4a3
> 16   0x7b550a postgres  + 0x7b550a
> 17   0x7b5baf postgres  + 0x7b5baf
> 18   0x7b6142 postgres PortalRun + 0x342
> 19   0x7b2c21 postgres PostgresMain + 0x3861
> 20   0x763ce3 postgres  + 0x763ce3
> 21   0x76443d postgres  + 0x76443d
> 22   0x76626e postgres PostmasterMain + 0xc7e
> 23   0x6c04ea postgres main + 0x48a
> 24   0x7fc95f276d5d libc.so.6 

[jira] [Commented] (HAWQ-644) Failure in security ha environment with certain writable external tables

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240161#comment-15240161
 ] 

ASF GitHub Bot commented on HAWQ-644:
-

Github user shivzone commented on the pull request:

https://github.com/apache/incubator-hawq/pull/604#issuecomment-209678894
  
+1


> Failure in security ha environment with certain writable external tables
> 
>
> Key: HAWQ-644
> URL: https://issues.apache.org/jira/browse/HAWQ-644
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF, Security
>Reporter: Goden Yao
>Assignee: Goden Yao
>
> In a Secure HA environment:
> A few tests that test writable tables fail due to an empty dfs_address prior 
> to getting the delegation token in the segment.
> On initial investigation, the shared_path does not seem to be set by the 
> hawq master.
> Log from the affected segment. The hdfs path available in the segment is 
> empty, hence the failure.
> {code}
> 2016-04-08 22:53:11.034661 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","PXF 
> received from configuration HA Namenode-1 having rpc-address 
>  and rest-address 
> ","External table 
> readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"ha_config.c",157,
> 2016-04-08 22:53:11.034699 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","PXF 
> received from configuration HA Namenode-2 having rpc-address 
>  and rest-address 
> ","External table 
> readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"ha_config.c",157,
> 2016-04-08 22:53:11.034785 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","locating 
> token for 0^N\","External table readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"gpbridgeapi.c",521,
> 2016-04-08 22:53:11.034871 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"WARNING","01000","internal 
> error HdfsParsePath: no filesystem protocol found in path 
> ""0^N\^A""","External table readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"fd.c",2501,
> 2016-04-08 22:53:11.035004 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG1","0","Deleted 
> entry for query (sessionid=73, commandcnt=43)",,"INSERT INTO 
> writable_table SELECT * FROM readable_table;",0,,"workfile_queryspace.c",329,
> 2016-04-08 22:53:11.066243 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"ERROR","XX000","fail to 
> parse uri: 0^N\ (cdbfilesystemcredential.c:529)","External table 
> readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"cdbfilesystemcredential.c",529,"Stack trace:
> 10x871f8f postgres  + 0x871f8f
> 20x872679 postgres elog_finish + 0xa9
> 30x99d7b8 postgres find_filesystem_credential_with_uri + 0x78
> 40x7fc935c8f5a2 pxf.so add_delegation_token + 0xa2
> 50x7fc935c8f9b2 pxf.so get_pxf_server + 0xe2
> 60x7fc935c8ff02 pxf.so gpbridge_export_start + 0x42
> 70x7fc935c90170 pxf.so gpbridge_export + 0x50
> 80x507eb8 postgres  + 0x507eb8
> 90x5083af postgres url_fwrite + 0x9f
> 10   0x5042b4 postgres external_insert + 0x184
> 11   0x69dcca postgres ExecInsert + 0x1fa
> 12   0x69d41c postgres ExecDML + 0x1ec
> 13   0x65e185 postgres ExecProcNode + 0x3c5
> 14   0x659f4a postgres  + 0x659f4a
> 15   0x65a8d3 postgres ExecutorRun + 0x4a3
> 16   0x7b550a postgres  + 0x7b550a
> 17   0x7b5baf postgres  + 0x7b5baf
> 18   0x7b6142 postgres PortalRun + 0x342
> 19   0x7b2c21 postgres PostgresMain + 0x3861
> 20   0x763ce3 postgres  + 0x763ce3
> 21   0x76443d postgres  + 0x76443d
> 22   0x76626e postgres PostmasterMain + 0xc7e
> 23   0x6c04ea postgres main + 0x48a
> 24   0x7fc95f276d5d libc.so.6 __libc_start_main + 0xfd
> 25   0x4a1489 postgres  + 

[jira] [Commented] (HAWQ-644) Failure in security ha environment with certain writable external tables

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240159#comment-15240159
 ] 

ASF GitHub Bot commented on HAWQ-644:
-

Github user shivzone commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/604#discussion_r59638357
  
--- Diff: src/backend/cdb/cdbquerycontextdispatching.c ---
@@ -3031,7 +3031,7 @@ prepareDfsAddressForDispatch(QueryContextInfo* cxt)
if (!enable_secure_filesystem)
return;
const char *namespace = cxt->sharedPath;
-   int size = strlen(namespace);
+   int size = strlen(namespace) + 1;
--- End diff --

please add a comment mentioning the null terminator character
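
To illustrate the point, a sketch of why the + 1 matters when the path is 
dispatched as a length-prefixed payload; the function name and layout are 
assumptions for the example, not the actual HAWQ serializer.

{code}
#include <arpa/inet.h>   /* htonl */
#include <stdint.h>
#include <string.h>

/* Send the path together with its '\0' so the receiving QE can treat the
 * payload as a C string without re-terminating it; size includes the null
 * terminator, matching the strlen(namespace) + 1 change above. */
static size_t
serialize_path(char *out, const char *path)
{
    uint32_t size  = (uint32_t) strlen(path) + 1;   /* + 1 for the '\0' */
    uint32_t nsize = htonl(size);                   /* network byte order */

    memcpy(out, &nsize, sizeof(nsize));
    memcpy(out + sizeof(nsize), path, size);        /* copies the terminator */
    return sizeof(nsize) + size;
}
{code}

On the receiving side this pairs with the ntohl() read shown in the 
RebuildNamespace hunk quoted earlier in this digest.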


> Failure in security ha environment with certain writable external tables
> 
>
> Key: HAWQ-644
> URL: https://issues.apache.org/jira/browse/HAWQ-644
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF, Security
>Reporter: Goden Yao
>Assignee: Goden Yao
>
> In a Secure HA environment:
> A few tests that test writable tables fail due to an empty dfs_address prior 
> to getting the delegation token in the segment.
> On initial investigation, the shared_path does not seem to be set by the 
> hawq master.
> Log from the affected segment. The hdfs path available in the segment is 
> empty, hence the failure.
> {code}
> 2016-04-08 22:53:11.034661 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","PXF 
> received from configuration HA Namenode-1 having rpc-address 
>  and rest-address 
> ","External table 
> readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"ha_config.c",157,
> 2016-04-08 22:53:11.034699 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","PXF 
> received from configuration HA Namenode-2 having rpc-address 
>  and rest-address 
> ","External table 
> readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"ha_config.c",157,
> 2016-04-08 22:53:11.034785 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","locating 
> token for 0^N\","External table readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"gpbridgeapi.c",521,
> 2016-04-08 22:53:11.034871 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"WARNING","01000","internal 
> error HdfsParsePath: no filesystem protocol found in path 
> ""0^N\^A""","External table readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"fd.c",2501,
> 2016-04-08 22:53:11.035004 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG1","0","Deleted 
> entry for query (sessionid=73, commandcnt=43)",,"INSERT INTO 
> writable_table SELECT * FROM readable_table;",0,,"workfile_queryspace.c",329,
> 2016-04-08 22:53:11.066243 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"ERROR","XX000","fail to 
> parse uri: 0^N\ (cdbfilesystemcredential.c:529)","External table 
> readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"cdbfilesystemcredential.c",529,"Stack trace:
> 1    0x871f8f postgres  + 0x871f8f
> 2    0x872679 postgres elog_finish + 0xa9
> 3    0x99d7b8 postgres find_filesystem_credential_with_uri + 0x78
> 4    0x7fc935c8f5a2 pxf.so add_delegation_token + 0xa2
> 5    0x7fc935c8f9b2 pxf.so get_pxf_server + 0xe2
> 6    0x7fc935c8ff02 pxf.so gpbridge_export_start + 0x42
> 7    0x7fc935c90170 pxf.so gpbridge_export + 0x50
> 8    0x507eb8 postgres  + 0x507eb8
> 9    0x5083af postgres url_fwrite + 0x9f
> 10   0x5042b4 postgres external_insert + 0x184
> 11   0x69dcca postgres ExecInsert + 0x1fa
> 12   0x69d41c postgres ExecDML + 0x1ec
> 13   0x65e185 postgres ExecProcNode + 0x3c5
> 14   0x659f4a postgres  + 0x659f4a
> 15   0x65a8d3 postgres 

[jira] [Commented] (HAWQ-644) Failure in security ha environment with certain writable external tables

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240147#comment-15240147
 ] 

ASF GitHub Bot commented on HAWQ-644:
-

GitHub user kavinderd opened a pull request:

https://github.com/apache/incubator-hawq/pull/604

HAWQ-644. Account for '\0' when dispatching namespace length



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kavinderd/incubator-hawq HAWQ-644

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/604.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #604


commit 6538dce957cd59079db524b2466d0f27d4b5c287
Author: Kavinder Dhaliwal 
Date:   2016-04-13T22:32:19Z

HAWQ-644. Account for '\0' when dispatching namespace length




> Failure in security ha environment with certain writable external tables
> 
>
> Key: HAWQ-644
> URL: https://issues.apache.org/jira/browse/HAWQ-644
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF, Security
>Reporter: Goden Yao
>Assignee: Goden Yao
>
> In a Secure HA environment:
> A few tests that exercise writable external tables fail because dfs_address is 
> empty before the delegation token is obtained in the segment.
> On initial investigation, the shared_path does not appear to be set by the hawq 
> master.
> Log from the affected segment; the hdfs path available in the segment is empty, 
> hence the failure.
> {code}
> 2016-04-08 22:53:11.034661 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","PXF 
> received from configuration HA Namenode-1 having rpc-address 
>  and rest-address 
> ","External table 
> readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"ha_config.c",157,
> 2016-04-08 22:53:11.034699 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","PXF 
> received from configuration HA Namenode-2 having rpc-address 
>  and rest-address 
> ","External table 
> readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"ha_config.c",157,
> 2016-04-08 22:53:11.034785 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","locating 
> token for 0^N\","External table readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"gpbridgeapi.c",521,
> 2016-04-08 22:53:11.034871 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"WARNING","01000","internal 
> error HdfsParsePath: no filesystem protocol found in path 
> ""0^N\^A""","External table readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"fd.c",2501,
> 2016-04-08 22:53:11.035004 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG1","0","Deleted 
> entry for query (sessionid=73, commandcnt=43)",,"INSERT INTO 
> writable_table SELECT * FROM readable_table;",0,,"workfile_queryspace.c",329,
> 2016-04-08 22:53:11.066243 
> UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08
>  22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"ERROR","XX000","fail to 
> parse uri: 0^N\ (cdbfilesystemcredential.c:529)","External table 
> readable_table, line 1 of 
> pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: 
> ","INSERT INTO writable_table SELECT * FROM 
> readable_table;",0,,"cdbfilesystemcredential.c",529,"Stack trace:
> 1    0x871f8f postgres  + 0x871f8f
> 2    0x872679 postgres elog_finish + 0xa9
> 3    0x99d7b8 postgres find_filesystem_credential_with_uri + 0x78
> 4    0x7fc935c8f5a2 pxf.so add_delegation_token + 0xa2
> 5    0x7fc935c8f9b2 pxf.so get_pxf_server + 0xe2
> 6    0x7fc935c8ff02 pxf.so gpbridge_export_start + 0x42
> 7    0x7fc935c90170 pxf.so gpbridge_export + 0x50
> 8    0x507eb8 postgres  + 0x507eb8
> 9    0x5083af postgres url_fwrite + 0x9f
> 10   0x5042b4 

[jira] [Assigned] (HAWQ-671) Validation raises error when adding accepted containers into a down node and fix function typo

2016-04-13 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin reassigned HAWQ-671:
---

Assignee: Yi Jin  (was: Lei Chang)

> Validation raises error when adding accepted containers into a down node and 
> fix function typo
> --
>
> Key: HAWQ-671
> URL: https://issues.apache.org/jira/browse/HAWQ-671
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Resource Manager
>Reporter: Yi Jin
>Assignee: Yi Jin
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HAWQ-671) Validation raises error when adding accepted containers into a down node and fix function typo

2016-04-13 Thread Yi Jin (JIRA)
Yi Jin created HAWQ-671:
---

 Summary: Validation raises error when adding accepted containers 
into a down node and fix function typo
 Key: HAWQ-671
 URL: https://issues.apache.org/jira/browse/HAWQ-671
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Resource Manager
Reporter: Yi Jin
Assignee: Lei Chang






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-646) Bump up optimizer version number

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15239774#comment-15239774
 ] 

ASF GitHub Bot commented on HAWQ-646:
-

Github user hsyuan commented on the pull request:

https://github.com/apache/incubator-hawq/pull/603#issuecomment-209581991
  
@d @oarap please take a look.


> Bump up optimizer version number
> 
>
> Key: HAWQ-646
> URL: https://issues.apache.org/jira/browse/HAWQ-646
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Optimizer
>Reporter: Haisheng Yuan
>Assignee: Amr El-Helw
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-543) src backend makefile fails when you try to run unittest's target

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15239704#comment-15239704
 ] 

ASF GitHub Bot commented on HAWQ-543:
-

Github user hsyuan commented on the pull request:

https://github.com/apache/incubator-hawq/pull/490#issuecomment-209569945
  
+1


> src backend makefile fails when you try to run unittest's target
> 
>
> Key: HAWQ-543
> URL: https://issues.apache.org/jira/browse/HAWQ-543
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: C.J. Jameson
>Assignee: Lei Chang
>
> When we try to package the software, we see a failure



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-670) Error when changing the table distribution policy from random to hash distribution

2016-04-13 Thread Haisheng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haisheng Yuan updated HAWQ-670:
---
Description: 
If the current segments number is 8 and I run these queries, 
{code:sql}
create table t2 (c1 int) with (bucketnum=5);
create table t2_2 (c2 int) inherits(t2);
alter table t2 set distributed by (c1);
{code}
The alter table clause will show the following error message:
{color:red}
ERROR:  bucketnum requires a numeric value
{color}

which is not expected behavior.

The query should be able to be executed without error messages.

  was:
If the current segments number is 8 and I run these queries, 
{code:sql}
create table t2 (c1 int) with (bucketnum=5);
create table t2_2 (c2 int) inherits(t2);
alter table t2 set distributed by (c1);
{code}
The alter table clause will show the following error message:
{color:red}
ERROR:  bucketnum requires a numeric value
{color}
which is not expected behavior.

The query should be able to be executed without error messages.


> Error when changing the table distribution policy from random to hash 
> distribution
> --
>
> Key: HAWQ-670
> URL: https://issues.apache.org/jira/browse/HAWQ-670
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Haisheng Yuan
>Assignee: Lei Chang
>
> If the current segments number is 8 and I run these queries, 
> {code:sql}
> create table t2 (c1 int) with (bucketnum=5);
> create table t2_2 (c2 int) inherits(t2);
> alter table t2 set distributed by (c1);
> {code}
> The alter table clause will show the following error message:
> {color:red}
> ERROR:  bucketnum requires a numeric value
> {color}
> which is not expected behavior.
> The query should be able to be executed without error messages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HAWQ-404) Add sort during INSERT of append only row oriented partition tables

2016-04-13 Thread Haisheng Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haisheng Yuan resolved HAWQ-404.

Resolution: Fixed

> Add sort during INSERT of append only row oriented partition tables
> ---
>
> Key: HAWQ-404
> URL: https://issues.apache.org/jira/browse/HAWQ-404
> Project: Apache HAWQ
>  Issue Type: Improvement
>Reporter: Haisheng Yuan
>Assignee: Lei Chang
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-669) Interconnect guc gp_interconnect_transmit_timeout type error

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15239033#comment-15239033
 ] 

ASF GitHub Bot commented on HAWQ-669:
-

Github user asfgit closed the pull request at:

https://github.com/apache/incubator-hawq/pull/602


> Interconnect guc gp_interconnect_transmit_timeout type error
> 
>
> Key: HAWQ-669
> URL: https://issues.apache.org/jira/browse/HAWQ-669
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Interconnect
>Reporter: Ivan Weng
>Assignee: Lei Chang
>
> The interconnect GUC gp_interconnect_transmit_timeout is declared as int; when 
> it is converted to microseconds, the 32-bit value overflows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-669) Interconnect guc gp_interconnect_transmit_timeout type error

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15239031#comment-15239031
 ] 

ASF GitHub Bot commented on HAWQ-669:
-

Github user zhangh43 commented on the pull request:

https://github.com/apache/incubator-hawq/pull/602#issuecomment-209358509
  
+1


> Interconnect guc gp_interconnect_transmit_timeout type error
> 
>
> Key: HAWQ-669
> URL: https://issues.apache.org/jira/browse/HAWQ-669
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Interconnect
>Reporter: Ivan Weng
>Assignee: Lei Chang
>
> The interconnect GUC gp_interconnect_transmit_timeout is declared as int; when 
> it is converted to microseconds, the 32-bit value overflows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-669) Interconnect guc gp_interconnect_transmit_timeout type error

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15239027#comment-15239027
 ] 

ASF GitHub Bot commented on HAWQ-669:
-

GitHub user wengyanqing opened a pull request:

https://github.com/apache/incubator-hawq/pull/602

HAWQ-669. Fix interconnect guc gp_interconnect_transmit_timeout type …

The interconnect GUC gp_interconnect_transmit_timeout is declared as int; when it 
is converted to microseconds, the 32-bit value overflows.
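
As a rough illustration of the overflow (a sketch, not the actual interconnect
code; the variable names and the 3600-second example value are assumptions):

{code}
/* Converting a seconds value held in a 32-bit int to microseconds overflows
 * once the value exceeds INT_MAX / 1000000 (about 2147 seconds). */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int timeout_sec = 3600;                 /* e.g. a one-hour transmit timeout */

    /* 32-bit multiplication: 3.6e9 exceeds INT_MAX (signed overflow is
     * undefined behaviour; in practice it wraps to a negative value). */
    int bad_us = timeout_sec * 1000000;

    /* Widening one operand first keeps the arithmetic in 64 bits. */
    int64_t good_us = (int64_t) timeout_sec * 1000000;

    printf("32-bit: %d  64-bit: %lld\n", bad_us, (long long) good_us);
    return 0;
}
{code}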


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wengyanqing/incubator-hawq HAWQ-669

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/602.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #602


commit 5ccab40c2fb8e56b1e7379dbeb05bd6b94fc3c43
Author: ivan 
Date:   2016-04-13T10:24:55Z

HAWQ-669. Fix interconnect guc gp_interconnect_transmit_timeout type issue




> Interconnect guc gp_interconnect_transmit_timeout type error
> 
>
> Key: HAWQ-669
> URL: https://issues.apache.org/jira/browse/HAWQ-669
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Interconnect
>Reporter: Ivan Weng
>Assignee: Lei Chang
>
> The interconnect GUC gp_interconnect_transmit_timeout is declared as int; when 
> it is converted to microseconds, the 32-bit value overflows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HAWQ-669) Interconnect guc gp_interconnect_transmit_timeout type error

2016-04-13 Thread Ivan Weng (JIRA)
Ivan Weng created HAWQ-669:
--

 Summary: Interconnect guc gp_interconnect_transmit_timeout type 
error
 Key: HAWQ-669
 URL: https://issues.apache.org/jira/browse/HAWQ-669
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Interconnect
Reporter: Ivan Weng
Assignee: Lei Chang


The interconnect GUC gp_interconnect_transmit_timeout is declared as int; when it 
is converted to microseconds, the 32-bit value overflows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-667) QD resource heart-beat sleeps too long time.

2016-04-13 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang closed HAWQ-667.
--
   Resolution: Fixed
Fix Version/s: 2.0.0

> QD resource heart-beat sleeps too long time.
> 
>
> Key: HAWQ-667
> URL: https://issues.apache.org/jira/browse/HAWQ-667
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Resource Manager
>Reporter: Yi Jin
>Assignee: Yi Jin
> Fix For: 2.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-667) QD resource heart-beat sleeps too long time.

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15239007#comment-15239007
 ] 

ASF GitHub Bot commented on HAWQ-667:
-

Github user changleicn closed the pull request at:

https://github.com/apache/incubator-hawq/pull/601


> QD resource heart-beat sleeps too long time.
> 
>
> Key: HAWQ-667
> URL: https://issues.apache.org/jira/browse/HAWQ-667
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Resource Manager
>Reporter: Yi Jin
>Assignee: Yi Jin
> Fix For: 2.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-667) QD resource heart-beat sleeps too long time.

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238941#comment-15238941
 ] 

ASF GitHub Bot commented on HAWQ-667:
-

Github user zhangh43 commented on the pull request:

https://github.com/apache/incubator-hawq/pull/601#issuecomment-209332293
  
+1


> QD resource heart-beat sleeps too long time.
> 
>
> Key: HAWQ-667
> URL: https://issues.apache.org/jira/browse/HAWQ-667
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Resource Manager
>Reporter: Yi Jin
>Assignee: Yi Jin
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-667) QD resource heart-beat sleeps too long time.

2016-04-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15238931#comment-15238931
 ] 

ASF GitHub Bot commented on HAWQ-667:
-

GitHub user changleicn opened a pull request:

https://github.com/apache/incubator-hawq/pull/601

HAWQ-667. QD resource heart-beat sleeps too long time



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/changleicn/incubator-hawq HAWQ-667

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/601.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #601


commit e22a8d94745cec087a7e6cfcf517e5ee36251f94
Author: Lei Chang 
Date:   2016-04-13T09:21:21Z

HAWQ-667. QD resource heart-beat sleeps too long time




> QD resource heart-beat sleeps too long time.
> 
>
> Key: HAWQ-667
> URL: https://issues.apache.org/jira/browse/HAWQ-667
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Resource Manager
>Reporter: Yi Jin
>Assignee: Yi Jin
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HAWQ-668) hawq check should able to check yarn settings

2016-04-13 Thread Radar Lei (JIRA)
Radar Lei created HAWQ-668:
--

 Summary: hawq check should able to check yarn settings
 Key: HAWQ-668
 URL: https://issues.apache.org/jira/browse/HAWQ-668
 Project: Apache HAWQ
  Issue Type: Improvement
  Components: Command Line Tools
Reporter: Radar Lei
Assignee: Lei Chang


Since hawq now uses yarn as its resource manager, 'hawq check' should also 
validate the yarn configuration.

In addition, the expected OS, hdfs, and hawq-site.xml values should be updated 
to reflect the latest best practices.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HAWQ-668) hawq check should able to check yarn settings

2016-04-13 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei reassigned HAWQ-668:
--

Assignee: Radar Lei  (was: Lei Chang)

> hawq check should able to check yarn settings
> -
>
> Key: HAWQ-668
> URL: https://issues.apache.org/jira/browse/HAWQ-668
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Command Line Tools
>Reporter: Radar Lei
>Assignee: Radar Lei
>
> Since hawq now uses yarn as its resource manager, 'hawq check' should also 
> validate the yarn configuration.
> In addition, the expected OS, hdfs, and hawq-site.xml values should be updated 
> to reflect the latest best practices.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HAWQ-667) QD resource heart-beat sleeps too long time.

2016-04-13 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin reassigned HAWQ-667:
---

Assignee: Yi Jin  (was: Lei Chang)

> QD resource heart-beat sleeps too long time.
> 
>
> Key: HAWQ-667
> URL: https://issues.apache.org/jira/browse/HAWQ-667
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Resource Manager
>Reporter: Yi Jin
>Assignee: Yi Jin
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HAWQ-667) QD resource heart-beat sleeps too long time.

2016-04-13 Thread Yi Jin (JIRA)
Yi Jin created HAWQ-667:
---

 Summary: QD resource heart-beat sleeps too long time.
 Key: HAWQ-667
 URL: https://issues.apache.org/jira/browse/HAWQ-667
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Resource Manager
Reporter: Yi Jin
Assignee: Lei Chang






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HAWQ-656) gppkg can't remove installed module

2016-04-13 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei resolved HAWQ-656.

Resolution: Fixed

> gppkg can't remove installed module
> ---
>
> Key: HAWQ-656
> URL: https://issues.apache.org/jira/browse/HAWQ-656
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Radar Lei
>Assignee: Lei Chang
>
> gppkg can't remove an installed module whose name-version string contains 
> more than one '-'.
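
The underlying pitfall is generic: splitting a name-version string on the
first '-' truncates any package name that itself contains dashes. A minimal
C sketch of the difference (gppkg itself is a Python utility and the package
name below is made up, so this only illustrates the parsing issue):

{code}
/* Illustration only; not gppkg code. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *pkg = "some-tool-1.0";      /* hypothetical name-version string */

    const char *first = strchr(pkg, '-');   /* splits name as "some"      */
    const char *last  = strrchr(pkg, '-');  /* splits name as "some-tool" */

    printf("split on first '-': version read as \"%s\"\n", first + 1); /* tool-1.0 */
    printf("split on last  '-': version read as \"%s\"\n", last + 1);  /* 1.0      */
    return 0;
}
{code}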



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HAWQ-661) Remove legacy gpextract command

2016-04-13 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei resolved HAWQ-661.

Resolution: Fixed

> Remove legacy gpextract command
> ---
>
> Key: HAWQ-661
> URL: https://issues.apache.org/jira/browse/HAWQ-661
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Command Line Tools
>Reporter: Radar Lei
>Assignee: Lei Chang
>
> Now we have hawq extract, should remove gpextract command from code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)