[jira] [Commented] (HAWQ-1212) Upgrade libhdfs3 with upstream hadoop/hdp

2016-12-12 Thread hongwu (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743616#comment-15743616
 ] 

hongwu commented on HAWQ-1212:
--

Got it, thanks.

> Upgrade libhdfs3 with upstream hadoop/hdp
> -
>
> Key: HAWQ-1212
> URL: https://issues.apache.org/jira/browse/HAWQ-1212
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: libhdfs
>Reporter: hongwu
>Assignee: Lei Chang
>
> Since the current libhdfs3 implementation is based on an old version of HDFS, 
> it is necessary to upgrade libhdfs3 to the latest HDFS, which brings many new 
> features and configuration parameters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-1212) Upgrade libhdfs3 with upstream hadoop/hdp

2016-12-12 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743612#comment-15743612
 ] 

Roman Shaposhnik commented on HAWQ-1212:


My plan is to help you with the mentorship required to merge it with the Apache 
Hadoop codebase -- somebody in HAWQ will still have to commit to doing all the 
actual work.

> Upgrade libhdfs3 with upstream hadoop/hdp
> -
>
> Key: HAWQ-1212
> URL: https://issues.apache.org/jira/browse/HAWQ-1212
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: libhdfs
>Reporter: hongwu
>Assignee: Lei Chang
>
> Since current libhdfs3 implementation is based on old version of HDFS, it is 
> necessary to upgrade libhdfs3 with latest HDFS of lots of new features and 
> configuration parameters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-1202) docs - add PXF server configuration parameters to reference

2016-12-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743475#comment-15743475
 ] 

ASF GitHub Bot commented on HAWQ-1202:
--

Github user dyozie commented on a diff in the pull request:

https://github.com/apache/incubator-hawq-docs/pull/70#discussion_r92063656
  
--- Diff: reference/guc/parameter_definitions.html.md.erb ---
@@ -2740,9 +2754,26 @@ Determines whether `ANALYZE` collects statistics for readable PXF tables. If tru
 |-|-|-|
 | Boolean | true | master, session, reload |
 
+## pxf\_remote\_service\_login
+
+Temporary configuration parameter identifying the login credential to forward to remote services requiring authentication.
+
+| Value Range | Default | Set Classifications |
+|-|-|-|
+| string | | master, session, reload |
+
+## pxf\_remote\_service\_secret
+
+Temporary configuration parameter identifying the password credential to forward to remote services requiring authentication.
+
+| Value Range | Default | Set Classifications |
+|-|-|-|
+| string | | master, session, reload |
+
--- End diff --

What does "temporary" mean for the above two parameters?


> docs - add PXF server configuration parameters to reference
> ---
>
> Key: HAWQ-1202
> URL: https://issues.apache.org/jira/browse/HAWQ-1202
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>Priority: Minor
>
> Certain PXF server configuration parameters are referenced in the 
> documentation, but most of them are not included in the server configuration 
> parameter reference. Add them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-1215) PXF HiveORC profile doesn't handle complex types correctly

2016-12-12 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani updated HAWQ-1215:
---
Affects Version/s: 2.0.0.0-incubating

> PXF HiveORC profile doesn't handle complex types correctly
> --
>
> Key: HAWQ-1215
> URL: https://issues.apache.org/jira/browse/HAWQ-1215
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>
> The new HiveORC profile has an issue handling complex Hive types (array, map, 
> struct, union, etc.). The object inspector in use marks all of these complex 
> types as string, so at resolution time PXF treats them as primitive data 
> types and fails.
> We get the following exception
> {code}
> 2016-12-12 10:13:37.0579 DEBUG tomcat-http--13 
> org.apache.hawq.pxf.service.rest.BridgeResource - Starting streaming fragment 
> 0 of resource /hive/warehouse/hive_collections_table_orc/00_0
> 2016-12-12 10:13:37.0580 ERROR tomcat-http--13 
> org.apache.hawq.pxf.service.rest.BridgeResource - Exception thrown when 
> streaming
> java.lang.ClassCastException: java.util.ArrayList cannot be cast to 
> org.apache.hadoop.io.Text
> at 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableStringObjectInspector.getPrimitiveJavaObject(WritableStringObjectInspector.java:46)
> at 
> org.apache.hawq.pxf.plugins.hive.HiveResolver.resolvePrimitive(HiveResolver.java:563)
> at 
> org.apache.hawq.pxf.plugins.hive.HiveResolver.traverseTuple(HiveResolver.java:368)
> at 
> org.apache.hawq.pxf.plugins.hive.HiveResolver.traverseStruct(HiveResolver.java:470)
> at 
> org.apache.hawq.pxf.plugins.hive.HiveORCSerdeResolver.getFields(HiveORCSerdeResolver.java:81)
> at org.apache.hawq.pxf.service.ReadBridge.getNext(ReadBridge.java:104)
> at 
> org.apache.hawq.pxf.service.rest.BridgeResource$1.write(BridgeResource.java:140)
> {code}
> The HiveORC profile uses the column types from the schema definition in HAWQ. 
> Complex fields are defined as text in HAWQ and hence are treated as strings, 
> which results in this error. The profile should be modified to use the schema 
> definition from the Fragmenter metadata instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-1215) PXF HiveORC profile doesn't handle complex types correctly

2016-12-12 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani updated HAWQ-1215:
---
Description: 
The new HiveORC profile has an issue handling complex Hive types (array, map, 
struct, union, etc.). The object inspector in use marks all of these complex 
types as string, so at resolution time PXF treats them as primitive data types 
and fails.

We get the following exception
{code}
2016-12-12 10:13:37.0579 DEBUG tomcat-http--13 
org.apache.hawq.pxf.service.rest.BridgeResource - Starting streaming fragment 0 
of resource /hive/warehouse/hive_collections_table_orc/00_0
2016-12-12 10:13:37.0580 ERROR tomcat-http--13 
org.apache.hawq.pxf.service.rest.BridgeResource - Exception thrown when 
streaming
java.lang.ClassCastException: java.util.ArrayList cannot be cast to 
org.apache.hadoop.io.Text
at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableStringObjectInspector.getPrimitiveJavaObject(WritableStringObjectInspector.java:46)
at 
org.apache.hawq.pxf.plugins.hive.HiveResolver.resolvePrimitive(HiveResolver.java:563)
at 
org.apache.hawq.pxf.plugins.hive.HiveResolver.traverseTuple(HiveResolver.java:368)
at 
org.apache.hawq.pxf.plugins.hive.HiveResolver.traverseStruct(HiveResolver.java:470)
at 
org.apache.hawq.pxf.plugins.hive.HiveORCSerdeResolver.getFields(HiveORCSerdeResolver.java:81)
at org.apache.hawq.pxf.service.ReadBridge.getNext(ReadBridge.java:104)
at 
org.apache.hawq.pxf.service.rest.BridgeResource$1.write(BridgeResource.java:140)
{code}

The HiveORC profile uses the column types from the schema definition in HAWQ. 
Complex fields are defined as text in HAWQ and hence are treated as strings, 
which results in this error. The profile should be modified to use the schema 
definition from the Fragmenter metadata instead.

  was:
The new HiveORC profile has an issue handling complex Hive types (array, map, 
struct, union, etc.). The object inspector in use marks all of these complex 
types as string, so at resolution time PXF treats them as primitive data types 
and fails.
We get the following exception
{code}
2016-12-12 10:13:37.0579 DEBUG tomcat-http--13 
org.apache.hawq.pxf.service.rest.BridgeResource - Starting streaming fragment 0 
of resource /hive/warehouse/hive_collections_table_orc/00_0
2016-12-12 10:13:37.0580 ERROR tomcat-http--13 
org.apache.hawq.pxf.service.rest.BridgeResource - Exception thrown when 
streaming
java.lang.ClassCastException: java.util.ArrayList cannot be cast to 
org.apache.hadoop.io.Text
at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableStringObjectInspector.getPrimitiveJavaObject(WritableStringObjectInspector.java:46)
at 
org.apache.hawq.pxf.plugins.hive.HiveResolver.resolvePrimitive(HiveResolver.java:563)
at 
org.apache.hawq.pxf.plugins.hive.HiveResolver.traverseTuple(HiveResolver.java:368)
at 
org.apache.hawq.pxf.plugins.hive.HiveResolver.traverseStruct(HiveResolver.java:470)
at 
org.apache.hawq.pxf.plugins.hive.HiveORCSerdeResolver.getFields(HiveORCSerdeResolver.java:81)
at org.apache.hawq.pxf.service.ReadBridge.getNext(ReadBridge.java:104)
at 
org.apache.hawq.pxf.service.rest.BridgeResource$1.write(BridgeResource.java:140)
{code}


> PXF HiveORC profile doesn't handle complex types correctly
> --
>
> Key: HAWQ-1215
> URL: https://issues.apache.org/jira/browse/HAWQ-1215
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Lei Chang
>
> The new HiveORC profile has an issue handling complex Hive types (array, map, 
> struct, union, etc.). The object inspector in use marks all of these complex 
> types as string, so at resolution time PXF treats them as primitive data 
> types and fails.
> We get the following exception
> {code}
> 2016-12-12 10:13:37.0579 DEBUG tomcat-http--13 
> org.apache.hawq.pxf.service.rest.BridgeResource - Starting streaming fragment 
> 0 of resource /hive/warehouse/hive_collections_table_orc/00_0
> 2016-12-12 10:13:37.0580 ERROR tomcat-http--13 
> org.apache.hawq.pxf.service.rest.BridgeResource - Exception thrown when 
> streaming
> java.lang.ClassCastException: java.util.ArrayList cannot be cast to 
> org.apache.hadoop.io.Text
> at 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableStringObjectInspector.getPrimitiveJavaObject(WritableStringObjectInspector.java:46)
> at 
> org.apache.hawq.pxf.plugins.hive.HiveResolver.resolvePrimitive(HiveResolver.java:563)
> at 
> org.apache.hawq.pxf.plugins.hive.HiveResolver.traverseTuple(HiveResolver.java:368)
> at 
> org.apache.hawq.pxf.plugins.hive.HiveResolver.traverseStruct(HiveResolver.java:470)
> at 
> org.apache.hawq.pxf.plugins.hive.HiveORCSerdeResolver.getFields(HiveORCSerdeResolver.java:81)
> at org.apache.hawq.pxf.service.ReadBridge.getNext(ReadBridge.java:104)
> at 
> org.apache.hawq.pxf.service.rest.BridgeResource$1.write(BridgeResource.java:140)
> {code}
> The HiveORC profile uses the column types from the schema definition in HAWQ. 
> Complex fields are defined as text in HAWQ and hence are treated as strings, 
> which results in this error. The profile should be modified to use the schema 
> definition from the Fragmenter metadata instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HAWQ-1215) PXF HiveORC profile doesn't handle complex types correctly

2016-12-12 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani reassigned HAWQ-1215:
--

Assignee: Shivram Mani  (was: Lei Chang)

> PXF HiveORC profile doesn't handle complex types correctly
> --
>
> Key: HAWQ-1215
> URL: https://issues.apache.org/jira/browse/HAWQ-1215
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>
> The new HiveORC profile has an issue handling complex Hive types (array, map, 
> struct, union, etc.). The object inspector in use marks all of these complex 
> types as string, so at resolution time PXF treats them as primitive data 
> types and fails.
> We get the following exception
> {code}
> 2016-12-12 10:13:37.0579 DEBUG tomcat-http--13 
> org.apache.hawq.pxf.service.rest.BridgeResource - Starting streaming fragment 
> 0 of resource /hive/warehouse/hive_collections_table_orc/00_0
> 2016-12-12 10:13:37.0580 ERROR tomcat-http--13 
> org.apache.hawq.pxf.service.rest.BridgeResource - Exception thrown when 
> streaming
> java.lang.ClassCastException: java.util.ArrayList cannot be cast to 
> org.apache.hadoop.io.Text
> at 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableStringObjectInspector.getPrimitiveJavaObject(WritableStringObjectInspector.java:46)
> at 
> org.apache.hawq.pxf.plugins.hive.HiveResolver.resolvePrimitive(HiveResolver.java:563)
> at 
> org.apache.hawq.pxf.plugins.hive.HiveResolver.traverseTuple(HiveResolver.java:368)
> at 
> org.apache.hawq.pxf.plugins.hive.HiveResolver.traverseStruct(HiveResolver.java:470)
> at 
> org.apache.hawq.pxf.plugins.hive.HiveORCSerdeResolver.getFields(HiveORCSerdeResolver.java:81)
> at org.apache.hawq.pxf.service.ReadBridge.getNext(ReadBridge.java:104)
> at 
> org.apache.hawq.pxf.service.rest.BridgeResource$1.write(BridgeResource.java:140)
> {code}
> The HiveORC profile uses the column types from the schema definition in HAWQ. 
> Complex fields are defined as text in HAWQ and hence are treated as strings, 
> which results in this error. The profile should be modified to use the schema 
> definition from the Fragmenter metadata instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HAWQ-1215) PXF HiveORC profile doesn't handle complex types correctly

2016-12-12 Thread Shivram Mani (JIRA)
Shivram Mani created HAWQ-1215:
--

 Summary: PXF HiveORC profile doesn't handle complex types correctly
 Key: HAWQ-1215
 URL: https://issues.apache.org/jira/browse/HAWQ-1215
 Project: Apache HAWQ
  Issue Type: Bug
  Components: PXF
Reporter: Shivram Mani
Assignee: Lei Chang


The new HiveORC profile has an issue handling complex Hive types (array, map, 
struct, union, etc.). The object inspector in use marks all of these complex 
types as string, so at resolution time PXF treats them as primitive data types 
and fails.
We get the following exception
{code}
2016-12-12 10:13:37.0579 DEBUG tomcat-http--13 
org.apache.hawq.pxf.service.rest.BridgeResource - Starting streaming fragment 0 
of resource /hive/warehouse/hive_collections_table_orc/00_0
2016-12-12 10:13:37.0580 ERROR tomcat-http--13 
org.apache.hawq.pxf.service.rest.BridgeResource - Exception thrown when 
streaming
java.lang.ClassCastException: java.util.ArrayList cannot be cast to 
org.apache.hadoop.io.Text
at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableStringObjectInspector.getPrimitiveJavaObject(WritableStringObjectInspector.java:46)
at 
org.apache.hawq.pxf.plugins.hive.HiveResolver.resolvePrimitive(HiveResolver.java:563)
at 
org.apache.hawq.pxf.plugins.hive.HiveResolver.traverseTuple(HiveResolver.java:368)
at 
org.apache.hawq.pxf.plugins.hive.HiveResolver.traverseStruct(HiveResolver.java:470)
at 
org.apache.hawq.pxf.plugins.hive.HiveORCSerdeResolver.getFields(HiveORCSerdeResolver.java:81)
at org.apache.hawq.pxf.service.ReadBridge.getNext(ReadBridge.java:104)
at 
org.apache.hawq.pxf.service.rest.BridgeResource$1.write(BridgeResource.java:140)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-1205) Change hawq start script once finding enable_ranger GUC is on.

2016-12-12 Thread Alexander Denissov (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15742675#comment-15742675
 ] 

Alexander Denissov commented on HAWQ-1205:
--

I think there are 2 points here:

1. HAWQ init() should run only if the Ranger GUC is turned off. To this end, if 
init() finds the GUC is set to ON, it should stop and error out with a proper 
message.
2. HAWQ start() should check the GUC and start RPS if the GUC is ON, which is 
what this story is about. Why is it closed?

> Change hawq start script once finding enable_ranger GUC is on.
> --
>
> Key: HAWQ-1205
> URL: https://issues.apache.org/jira/browse/HAWQ-1205
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: PXF, Security
>Reporter: Lili Ma
>Assignee: Lili Ma
> Fix For: backlog
>
>
> If hawq start finds the enable_ranger GUC is on, it needs to start the RPS service.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-1210) Documentation regarding usage of libhdfs3 in concurrent environment

2016-12-12 Thread William Forson (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15742603#comment-15742603
 ] 

William Forson commented on HAWQ-1210:
--

Hi Zhanwei Wang,

Unfortunately, I don't think I will have the bandwidth to debug this further 
for at least a few weeks. So far, I've been using libhdfs3 as a black-box 
component (i.e. I've really only looked at {{hdfs.h}} and build logic), so I 
will have to get myself up to speed on the basic organization of the codebase, 
etc.

However, since there is a decent chance I will be using libhdfs3 as a 
production dependency, in a multi-threaded environment, I would definitely like 
to understand what is going on here. So I will try to look into this as soon as 
I have the time.

Thanks!

> Documentation regarding usage of libhdfs3 in concurrent environment
> ---
>
> Key: HAWQ-1210
> URL: https://issues.apache.org/jira/browse/HAWQ-1210
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: libhdfs
>Reporter: William Forson
>Assignee: Lei Chang
> Attachments: hdfs_fs_concurrent_test.cpp
>
>
> Hi,
> I've been using libhdfs3 in a single-threaded environment for several months 
> now, without any problems. However, as soon as I tried using the library 
> concurrently from multiple threads: hello, segfaults.
> Although the source of these segfaults is annoyingly subtle, I've managed to 
> isolate it to a relatively small block of my code that does nothing 
> interesting aside from using libhdfs3 to download a single hdfs file.
> To be clear: I assume that the mistake here is mine -- that is, that I am 
> using your library incorrectly. However, I have been unable to find any 
> documentation as to how the libhdfs3 API _should_ be used in a multi-threaded 
> environment. I initially interpreted this to mean, "go to town, it's all more 
> or less thread-safe", but I am now questioning that interpretation.
> So, I have a question, and a request.
> Question: Are there any known, non-obvious concurrency gotchas regarding the 
> usage of libhdfs3 (or whatever it's currently called)?
> Request: Could you please add some documentation, to the README and/or 
> hdfs.h, regarding usage in a concurrent environment? (ideally, such notes 
> would annotate individual components of the API in hdfs.h, but if the answer 
> to my question above is, "No", then this could perhaps be a single sentence 
> in the README which affirmatively states that the library is generally safe 
> for concurrent usage without additional/explicit synchronization -- anything 
> would be better than nothing :))
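
For concreteness, the usage pattern in question looks roughly like the sketch 
below, written against the standard hdfs.h C API that libhdfs3 mirrors. The 
paths and the per-thread-connection structure are illustrative assumptions, and 
whether any of this is safe without extra synchronization is exactly the open 
question in this issue:

{code}
#include <fcntl.h>
#include <pthread.h>
#include "hdfs.h"   /* libhdfs3's C API header */

/* Each thread makes its own connection and reads its own file. Whether
 * hdfsFS/hdfsFile handles may instead be shared across threads is the
 * undocumented question raised above. Error handling omitted for brevity. */
static void *read_one_file(void *arg)
{
    const char *path = (const char *) arg;
    hdfsFS fs = hdfsConnect("default", 0);     /* per-thread connection */
    hdfsFile f = hdfsOpenFile(fs, path, O_RDONLY, 0, 0, 0);
    char buf[4096];
    while (hdfsRead(fs, f, buf, sizeof(buf)) > 0)
        ;                                      /* consume the data */
    hdfsCloseFile(fs, f);
    hdfsDisconnect(fs);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, read_one_file, (void *) "/tmp/a.txt");
    pthread_create(&t2, NULL, read_one_file, (void *) "/tmp/b.txt");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
{code}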



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] incubator-hawq issue #1040: HAWQ-1195. Fixed error "Two or more external tab...

2016-12-12 Thread liming01
Github user liming01 commented on the issue:

https://github.com/apache/incubator-hawq/pull/1040
  
When we find the same error table used in the same query, we should 
assign a new set of segment files for each INSERT INTO ERROR TABLE, which means 
different relations will insert into different sets of segfiles.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (HAWQ-1214) Remove resource_parameters

2016-12-12 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo updated HAWQ-1214:
---
Summary: Remove resource_parameters  (was: Remove resource_parameters in 
calculate_planner_segment_num())

> Remove resource_parameters
> --
>
> Key: HAWQ-1214
> URL: https://issues.apache.org/jira/browse/HAWQ-1214
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
>
> The reasons are:
> 1) It is not used anymore.
> 2) We have seen a crash caused by it. The scenario is as follows:
>  
> The datalocality memory context on which resource_parameters is palloc-ed is 
> reset before calculate_planner_segment_num() finishes, so later access to 
> resource_parameters can lead to a crash or wrong results. One code path that 
> can lead to a segfault is:
> ProcessUtility()->PerformCursorOpen()->_copyPlannedStmt()->_copyQueryResourceParameters()



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HAWQ-1214) Remove resource_parameters in calculate_planner_segment_num()

2016-12-12 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1214:
--

 Summary: Remove resource_parameters in 
calculate_planner_segment_num()
 Key: HAWQ-1214
 URL: https://issues.apache.org/jira/browse/HAWQ-1214
 Project: Apache HAWQ
  Issue Type: Bug
Reporter: Paul Guo
Assignee: Lei Chang


The reasons are:

1) It is not used anymore.

2) We have seen a crash caused by it. The scenario is as follows:
 
The datalocality memory context on which resource_parameters is palloc-ed is 
reset before calculate_planner_segment_num() finishes, so later access to 
resource_parameters can lead to a crash or wrong results. One code path that 
can lead to a segfault is:

ProcessUtility()->PerformCursorOpen()->_copyPlannedStmt()->_copyQueryResourceParameters()
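
To illustrate the hazard, here is a minimal sketch using the PostgreSQL 
memory-context API (the context name, variable, and logging call are 
hypothetical stand-ins, not the actual HAWQ code):

{code}
#include "postgres.h"
#include "utils/memutils.h"

/* Sketch of the use-after-reset pattern described above (hypothetical names). */
static void
lifetime_hazard_example(void)
{
	/* A short-lived context, standing in for the datalocality context. */
	MemoryContext dl_ctx = AllocSetContextCreate(CurrentMemoryContext,
	                                             "DataLocalityContext",
	                                             ALLOCSET_DEFAULT_MINSIZE,
	                                             ALLOCSET_DEFAULT_INITSIZE,
	                                             ALLOCSET_DEFAULT_MAXSIZE);
	MemoryContext oldctx = MemoryContextSwitchTo(dl_ctx);
	int *params = (int *) palloc(sizeof(int));  /* like resource_parameters */
	*params = 42;
	MemoryContextSwitchTo(oldctx);

	MemoryContextReset(dl_ctx);     /* frees everything palloc-ed in dl_ctx */

	/* 'params' now dangles; dereferencing it gives exactly the crash or
	 * wrong results described above. */
	elog(LOG, "dangling read: %d", *params);
}
{code}

Removing resource_parameters entirely, as this issue proposes, sidesteps the 
lifetime problem rather than extending the context's life.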




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HAWQ-1214) Remove resource_parameters in calculate_planner_segment_num()

2016-12-12 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo reassigned HAWQ-1214:
--

Assignee: Paul Guo  (was: Lei Chang)

> Remove resource_parameters in calculate_planner_segment_num()
> -
>
> Key: HAWQ-1214
> URL: https://issues.apache.org/jira/browse/HAWQ-1214
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
>
> The reasons are:
> 1) It is not used anymore.
> 2) We have seen a crash caused by it. The scenario is as follows:
>  
> The datalocality memory context on which resource_parameters is palloc-ed is 
> reset before calculate_planner_segment_num() finishes, so later access to 
> resource_parameters can lead to a crash or wrong results. One code path that 
> can lead to a segfault is:
> ProcessUtility()->PerformCursorOpen()->_copyPlannedStmt()->_copyQueryResourceParameters()



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-1212) Upgrade libhdfs3 with upstream hadoop/hdp

2016-12-12 Thread hongwu (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15741377#comment-15741377
 ] 

hongwu commented on HAWQ-1212:
--

[~rvs], yes. More investigation is needed to list the details of the new 
features. For libhdfs3, do you plan to merge its code into the Apache Hadoop 
codebase?

> Upgrade libhdfs3 with upstream hadoop/hdp
> -
>
> Key: HAWQ-1212
> URL: https://issues.apache.org/jira/browse/HAWQ-1212
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: libhdfs
>Reporter: hongwu
>Assignee: Lei Chang
>
> Since the current libhdfs3 implementation is based on an old version of HDFS, 
> it is necessary to upgrade libhdfs3 to the latest HDFS, which brings many new 
> features and configuration parameters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] incubator-hawq pull request #1048: HAWQ-1140. Parallelize test cases for haw...

2016-12-12 Thread wcl14
GitHub user wcl14 opened a pull request:

https://github.com/apache/incubator-hawq/pull/1048

HAWQ-1140. Parallelize test cases for hawqregister usage2.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wcl14/incubator-hawq HAWQ-1140

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/1048.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1048


commit 8da85b198f70df5edbecb1cf1e4a780f65f4b357
Author: Chunling Wang 
Date:   2016-12-12T08:56:58Z

HAWQ-1140. Parallelize test cases for hawqregister usage2.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq issue #1047: HAWQ-1003. Implement batched ACL check through R...

2016-12-12 Thread paul-guo-
Github user paul-guo- commented on the issue:

https://github.com/apache/incubator-hawq/pull/1047
  
Seems there are file conflicts with 
https://github.com/apache/incubator-hawq/pull/1046

At least rangerrest.h


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq issue #1040: HAWQ-1195. Fixed error "Two or more external tab...

2016-12-12 Thread liming01
Github user liming01 commented on the issue:

https://github.com/apache/incubator-hawq/pull/1040
  
Reverting this commit because running installcheck-good failed.
It is my fault; I had thought that we had moved all test cases in 
installcheck-good to featuretest, so I only ran the feature tests.

Need more time to investigate why it conflicts on the same relation file 
number.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq pull request #1040: HAWQ-1195. Fixed error "Two or more exter...

2016-12-12 Thread liming01
GitHub user liming01 reopened a pull request:

https://github.com/apache/incubator-hawq/pull/1040

HAWQ-1195. Fixed error "Two or more external tables use the same erro…

…r table "

The error table should work the same as a normal user table on HDFS, which 
supports multiple INSERTs.

Hi @wangzw, could you help me review it? Thanks.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/liming01/incubator-hawq mli/errtab_chk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/1040.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1040


commit 2a8eaa7ef6ac025f65d03510a2428825cd383341
Author: Ming LI 
Date:   2016-12-06T09:17:50Z

HAWQ-1195. Fixed error "Two or more external tables use the same error 
table "

The error table should work the same as a normal user table on HDFS, which 
supports multiple INSERTs.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Reopened] (HAWQ-1195) Synchrony:Union not working on external tables ERROR:"Two or more external tables use the same error table ""xxxxxxx"" in a statement (execMain.c:274)"

2016-12-12 Thread Ming LI (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming LI reopened HAWQ-1195:
---

Reverting this commit because running installcheck-good failed.
It is my fault; I had thought that we had moved all test cases in 
installcheck-good to featuretest, so I only ran the feature tests.

Need more time to investigate why it conflicts on the same relation file number.

> Synchrony:Union not working on external tables ERROR:"Two or more external 
> tables use the same error table ""xxx"" in a statement (execMain.c:274)"
> ---
>
> Key: HAWQ-1195
> URL: https://issues.apache.org/jira/browse/HAWQ-1195
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: External Tables
>Reporter: Ming LI
>Assignee: Ming LI
> Fix For: backlog
>
>
> Hello,
> A user creates an external table and defines an error table. He then runs a 
> union over the same external table with different WHERE conditions, which 
> returns the error: ERROR:  Two or more external tables use the same error 
> table "err_ext_pdr_cdci_pivotal_request_43448" in a statement (execMain.c:274)
> Below is the master log from when I reproduced the issue (the whole log file 
> is in the attachment):
> {code}
> 2016-11-29 22:49:51.976864 
> PST,"gpadmin","postgres",p769199,th-2123704032,"[local]",,2016-11-29 22:46:14 
> PST,1260,con72,cmd10,seg-1,,,x1260,sx1,"ERROR","XX000","Two or more external 
> tables use the same error table ""err_ext_pdr_cdci_pivotal_request_43448"" in 
> a statement (execMain.c:274)",,"select current_account_nbr,yearmonthint, 
> bank_name, first_date_open, max_cr_limit, care_credit_flag, cc1_flag, 
> partition_value, 'US' as loc from pdr_cdci_pivotal_request_43448 where 
> care_credit_flag<1
> union
> select current_account_nbr,yearmonthint, bank_name, first_date_open, 
> max_cr_limit, care_credit_flag, cc1_flag, partition_value, 'Non-US' as loc 
> from pdr_cdci_pivotal_request_43448 where 
> care_credit_flag=1;",0,,"execMain.c",274,"Stack trace:
> 1    0x8c5858 postgres errstart (??:0)
> 2    0x8c75db postgres elog_finish (??:0)
> 3    0x65f669 postgres  (??:0)
> 4    0x77d06a postgres walk_plan_node_fields (??:0)
> 5    0x77e3ee postgres plan_tree_walker (??:0)
> 6    0x77c70a postgres expression_tree_walker (??:0)
> 7    0x77e35d postgres plan_tree_walker (??:0)
> 8    0x77d06a postgres walk_plan_node_fields (??:0)
> 9    0x77dfe6 postgres plan_tree_walker (??:0)
> 10   0x77d06a postgres walk_plan_node_fields (??:0)
> 11   0x77e1e5 postgres plan_tree_walker (??:0)
> 12   0x77d06a postgres walk_plan_node_fields (??:0)
> 13   0x77dfe6 postgres plan_tree_walker (??:0)
> 14   0x77d06a postgres walk_plan_node_fields (??:0)
> 15   0x77e1e5 postgres plan_tree_walker (??:0)
> 16   0x66079b postgres ExecutorStart (??:0)
> 17   0x7ebf1d postgres PortalStart (??:0)
> 18   0x7e4288 postgres  (??:0)
> 19   0x7e54c2 postgres PostgresMain (??:0)
> 20   0x797d50 postgres  (??:0)
> 21   0x79ab19 postgres PostmasterMain (??:0)
> 22   0x4a4069 postgres main (??:0)
> 23   0x7fd97d486d5d libc.so.6 __libc_start_main (??:0)
> 24   0x4a40e9 postgres  (??:0)
> "
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] incubator-hawq pull request #1047: HAWQ-1003. Implement batched ACL check th...

2016-12-12 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1047#discussion_r91898169
  
--- Diff: src/backend/utils/misc/guc.c ---
@@ -8154,6 +8174,15 @@ static struct config_string ConfigureNamesString[] =
},
 
{
+{"hawq_rps_address_host", PGC_POSTMASTER, PRESET_OPTIONS,
+  gettext_noop("rps server address hostname"),
+  NULL
+},
+&rps_addr_host,
+"localhost", NULL, NULL
+  },
+
+   {
--- End diff --

Same as before.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq pull request #1047: HAWQ-1003. Implement batched ACL check th...

2016-12-12 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1047#discussion_r91897627
  
--- Diff: src/backend/parser/parse_relation.c ---
@@ -2713,15 +2712,90 @@ warnAutoRange(ParseState *pstate, RangeVar 
*relation, int location)
 void
 ExecCheckRTPerms(List *rangeTable)
 {
+  if (enable_ranger)
+  {
+if(rangeTable!=NULL)
+  ExecCheckRTPermsWithRanger(rangeTable);
+return;
+  }
ListCell   *l;
-
foreach(l, rangeTable)
{
ExecCheckRTEPerms((RangeTblEntry *) lfirst(l));
}
 }
 
 /*
+ * ExecCheckRTPerms
+ *   Batch implementation: Check access permissions for all relations 
listed in a range table with enable_ranger is true.
+ */
+void
+ExecCheckRTPermsWithRanger(List *rangeTable)
+{
+  List *ranger_check_args = NIL;
+  ListCell *l;
+  foreach(l, rangeTable)
+  {
+
+AclMode requiredPerms;
+Oid relOid;
+Oid userid;
+RangeTblEntry *rte = (RangeTblEntry *) lfirst(l);
+
+if (rte->rtekind != RTE_RELATION)
+  return;
+requiredPerms = rte->requiredPerms;
+if (requiredPerms == 0)
+  return;
+
+relOid = rte->relid;
+userid = rte->checkAsUser ? rte->checkAsUser : GetUserId();
+
+RangerPrivilegeArgs *ranger_check_arg = (RangerPrivilegeArgs *) 
palloc(sizeof(RangerPrivilegeArgs));
+ranger_check_arg->objkind = ACL_KIND_CLASS;
+ranger_check_arg->object_oid = relOid;
+ranger_check_arg->roleid = userid;
+ranger_check_arg->mask = requiredPerms;
+ranger_check_arg->how = ACLMASK_ALL;
+ranger_check_args = lappend(ranger_check_args, ranger_check_arg);
+
+  } // foreach
+
+  // ranger ACL check with package Oids
+  List *aclresults = NIL;
+  aclresults = pg_rangercheck_batch(ranger_check_args);
+  if (aclresults == NIL)
+  {
+elog(ERROR, "ERROR\n");
--- End diff --

I would think this log does not help debugging much. Could we provide more 
info?
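
For example, something along these lines would give more context (a sketch 
only; the message wording is illustrative, and it assumes ranger_check_args is 
non-empty at this point):

{code}
/* Report how many relations were in the batch and identify the first one. */
RangerPrivilegeArgs *first = (RangerPrivilegeArgs *) linitial(ranger_check_args);
elog(ERROR, "ranger batch ACL check returned no results for %d relation(s); "
     "first relation oid: %u, role id: %u",
     list_length(ranger_check_args), first->object_oid, first->roleid);
{code}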


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq pull request #1047: HAWQ-1003. Implement batched ACL check th...

2016-12-12 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1047#discussion_r91898078
  
--- Diff: src/include/utils/rangerrest.h ---
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*-
+ *
+ * rangerrest.h
+ * routines to interact with Ranger REST API
--- End diff --

REST API for Ranger interaction?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq pull request #1047: HAWQ-1003. Implement batched ACL check th...

2016-12-12 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1047#discussion_r91898147
  
--- Diff: src/backend/utils/misc/guc.c ---
@@ -6250,6 +6261,15 @@ static struct config_int ConfigureNamesInt[] =
},
 
{
+{"hawq_rps_address_port", PGC_POSTMASTER, PRESET_OPTIONS,
+  gettext_noop("rps server address port number"),
+  NULL
+},
+&rps_addr_port,
+1, 1, 65535, NULL, NULL
+  },
+
+   {
--- End diff --

What is rps? If this is Ranger specific, please specify in the descriptions.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---