[jira] [Updated] (HAWQ-1161) Refactor PXF to use new Hadoop MapReduce APIs

2016-11-16 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1161:

Assignee: Shivram Mani  (was: Lei Chang)

> Refactor PXF to use new Hadoop MapReduce APIs
> -
>
> Key: HAWQ-1161
> URL: https://issues.apache.org/jira/browse/HAWQ-1161
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Kyle R Dunn
>Assignee: Shivram Mani
> Fix For: backlog
>
>
> Several classes in PXF use the older `org.apache.hadoop.mapred` API 
> rather than the new `org.apache.hadoop.mapreduce` one. For plugin developers, 
> this has been a significant source of headaches. Other HAWQ libraries, 
> such as hawq-hadoop, use the newer `org.apache.hadoop.mapreduce` API, creating 
> unnecessary friction between the two. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-1134) Add Bigtop layout specific pxf-private classpath

2016-11-04 Thread Goden Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15638179#comment-15638179
 ] 

Goden Yao commented on HAWQ-1134:
-

Sure

> Add Bigtop layout specific pxf-private classpath
> 
>
> Key: HAWQ-1134
> URL: https://issues.apache.org/jira/browse/HAWQ-1134
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
>
> Currently PXF ships with HDP- and PHD-specific classpath files. It would be 
> great to have a Bigtop-specific one.





[jira] [Commented] (HAWQ-1134) Add Bigtop layout specific pxf-private classpath

2016-11-01 Thread Goden Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15627138#comment-15627138
 ] 

Goden Yao commented on HAWQ-1134:
-

I think the plan is to remove the PHD/HDP-specific classpath files, since we 
only have one Hadoop distribution to work with at the moment. The Bigtop one 
you added could become the standard one for all. 

> Add Bigtop layout specific pxf-private classpath
> 
>
> Key: HAWQ-1134
> URL: https://issues.apache.org/jira/browse/HAWQ-1134
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
>
> Currently PXF ships with HDP- and PHD-specific classpath files. It would be 
> great to have a Bigtop-specific one.





[jira] [Commented] (HAWQ-1130) Make HCatalog integration work with non-superusers

2016-10-31 Thread Goden Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15624380#comment-15624380
 ] 

Goden Yao commented on HAWQ-1130:
-

[~nhorn] [~jimmida] may know the history and rationale behind that.

> Make HCatalog integration work with non-superusers
> --
>
> Key: HAWQ-1130
> URL: https://issues.apache.org/jira/browse/HAWQ-1130
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
> Fix For: 2.0.1.0-incubating
>
>
> According to the current implementation, a user of the HCatalog integration 
> feature must have SELECT privileges on the pg_authid and pg_user_mapping 
> tables.
> That is fine for superusers, but these tables should not be exposed to 
> non-superusers because they store hashed user passwords.
> Basically, the problem is how to determine the max oid among all oid-bearing 
> tables.
> Possible solutions:
> * Create a view returning the max oid and grant SELECT privilege to public.
> ** Cons:
> *** Requires a catalog upgrade.
> * Read the current oid from shared memory.
> ** Pros:
> *** No catalog upgrade needed.
> ** Cons:
> *** Additional exclusive locks needed.





[jira] [Updated] (HAWQ-1130) Make HCatalog integration work with non-superusers

2016-10-31 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1130:

Assignee: Oleksandr Diachenko  (was: Lei Chang)

> Make HCatalog integration work with non-superusers
> --
>
> Key: HAWQ-1130
> URL: https://issues.apache.org/jira/browse/HAWQ-1130
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
> Fix For: 2.0.1.0-incubating
>
>
> According to the current implementation, a user of the HCatalog integration 
> feature must have SELECT privileges on the pg_authid and pg_user_mapping 
> tables.
> That is fine for superusers, but these tables should not be exposed to 
> non-superusers because they store hashed user passwords.
> Basically, the problem is how to determine the max oid among all oid-bearing 
> tables.
> Possible solutions:
> * Create a view returning the max oid and grant SELECT privilege to public.
> ** Cons:
> *** Requires a catalog upgrade.
> * Read the current oid from shared memory.
> ** Pros:
> *** No catalog upgrade needed.
> ** Cons:
> *** Additional exclusive locks needed.





[jira] [Commented] (HAWQ-1108) Add JDBC PXF Plugin

2016-10-25 Thread Goden Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605498#comment-15605498
 ] 

Goden Yao commented on HAWQ-1108:
-

Using this one is fine; I've just assigned the JIRA to you, Devin. I 
appreciate your contribution.

> Add JDBC PXF Plugin
> ---
>
> Key: HAWQ-1108
> URL: https://issues.apache.org/jira/browse/HAWQ-1108
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Michael Andre Pearce (IG)
>Assignee: Devin Jia
>
> On the back of the work in:
> https://issues.apache.org/jira/browse/HAWQ-779
> we would like to add a JDBC implementation to the HAWQ plugins.
> There are currently two noted implementations openly available on GitHub:
> 1) https://github.com/kojec/pxf-field/tree/master/jdbc-pxf-ext
> 2) https://github.com/inspur-insight/pxf-plugin/tree/master/pxf-jdbc
> The latter (2) is an improved version of the former (1) and is also what the 
> HAWQ-779 changes were meant to support.
> [~jiadx], would you be happy to contribute the source as Apache 2.0 licensed 
> open source?





[jira] [Updated] (HAWQ-1108) Add JDBC PXF Plugin

2016-10-25 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1108:

Assignee: Devin Jia  (was: Lei Chang)

> Add JDBC PXF Plugin
> ---
>
> Key: HAWQ-1108
> URL: https://issues.apache.org/jira/browse/HAWQ-1108
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Michael Andre Pearce (IG)
>Assignee: Devin Jia
>
> On the back of the work in:
> https://issues.apache.org/jira/browse/HAWQ-779
> we would like to add a JDBC implementation to the HAWQ plugins.
> There are currently two noted implementations openly available on GitHub:
> 1) https://github.com/kojec/pxf-field/tree/master/jdbc-pxf-ext
> 2) https://github.com/inspur-insight/pxf-plugin/tree/master/pxf-jdbc
> The latter (2) is an improved version of the former (1) and is also what the 
> HAWQ-779 changes were meant to support.
> [~jiadx], would you be happy to contribute the source as Apache 2.0 licensed 
> open source?





[jira] [Updated] (HAWQ-1110) Optimize LIKE operator on storage layer

2016-10-17 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1110:

Assignee: Oleksandr Diachenko  (was: Lei Chang)

> Optimize LIKE operator on storage layer
> ---
>
> Key: HAWQ-1110
> URL: https://issues.apache.org/jira/browse/HAWQ-1110
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
> Fix For: backlog
>
>
> As of now, the HiveORC profile doesn't use any storage-layer optimizations 
> for the LIKE operator.
> The following optimization could be applied:
> 1) Parse the first token of the LIKE clause before the first occurrence of 
> the "%" symbol.
> 2) Apply an ORC filter: >= AND <=
> Where NEXT_TOKEN = 
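The prefix-based pushdown described above can be sketched language-agnostically (an illustrative sketch only; the real PXF implementation would be Java, and the exact bound derivation is left unspecified in the ticket):

```python
def like_prefix_bounds(pattern):
    """Derive range-scan bounds from the literal prefix of a LIKE pattern.

    Returns (lower, upper) such that every string matching the pattern
    satisfies lower <= s < upper, or None when the pattern starts with a
    wildcard and no pushdown is possible.
    """
    # Take the literal token before the first wildcard character.
    for i, ch in enumerate(pattern):
        if ch in ('%', '_'):
            prefix = pattern[:i]
            break
    else:
        prefix = pattern  # no wildcard at all: the whole pattern is literal
    if not prefix:
        return None
    # Upper bound: bump the last character of the prefix by one code point.
    upper = prefix[:-1] + chr(ord(prefix[-1]) + 1)
    return (prefix, upper)
```

For example, `t1 LIKE 'row%'` yields the bounds `t1 >= 'row' AND t1 < 'rox'`, which an ORC reader could evaluate against stripe statistics.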





[jira] [Updated] (HAWQ-1110) Optimize LIKE operator on storage layer

2016-10-17 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1110:

Fix Version/s: backlog

> Optimize LIKE operator on storage layer
> ---
>
> Key: HAWQ-1110
> URL: https://issues.apache.org/jira/browse/HAWQ-1110
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Lei Chang
> Fix For: backlog
>
>
> As of now, the HiveORC profile doesn't use any storage-layer optimizations 
> for the LIKE operator.
> The following optimization could be applied:
> 1) Parse the first token of the LIKE clause before the first occurrence of 
> the "%" symbol.
> 2) Apply an ORC filter: >= AND <=





[jira] [Created] (HAWQ-1087) Need clarification on Wiki page for Build Instructions

2016-10-07 Thread Goden Yao (JIRA)
Goden Yao created HAWQ-1087:
---

 Summary: Need clarification on Wiki page for Build Instructions
 Key: HAWQ-1087
 URL: https://issues.apache.org/jira/browse/HAWQ-1087
 Project: Apache HAWQ
  Issue Type: Task
  Components: Documentation
Reporter: Goden Yao
Assignee: David Yozie
 Fix For: backlog


From [~jmclean], this came up during a VOTE review. 
{quote}
Also currently looks like openssl may be a little broken and info on the wiki 
may need updating:
{code}
brew link --force openssl
Warning: Refusing to link: openssl
{code}
Linking keg-only openssl means you may end up linking against the insecure,
deprecated system OpenSSL while using the headers from Homebrew's openssl.
Instead, pass the full include/library paths to your compiler e.g.:
 {code} -I/usr/local/opt/openssl/include -L/usr/local/opt/openssl/lib{code}

I ended up doing this to make it compile:
make CFLAGS="-I/usr/local/opt/openssl/include -L/usr/local/opt/openssl/lib"
{quote}





[jira] [Updated] (HAWQ-1087) Need clarification on Wiki page for Build Instructions

2016-10-07 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1087:

Priority: Minor  (was: Major)

> Need clarification on Wiki page for Build Instructions
> --
>
> Key: HAWQ-1087
> URL: https://issues.apache.org/jira/browse/HAWQ-1087
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Documentation
>Reporter: Goden Yao
>Assignee: David Yozie
>Priority: Minor
> Fix For: backlog
>
>
> From [~jmclean], this came up during a VOTE review. 
> {quote}
> Also currently looks like openssl may be a little broken and info on the wiki 
> may need updating:
> {code}
> brew link --force openssl
> Warning: Refusing to link: openssl
> {code}
> Linking keg-only openssl means you may end up linking against the insecure,
> deprecated system OpenSSL while using the headers from Homebrew's openssl.
> Instead, pass the full include/library paths to your compiler e.g.:
>  {code} -I/usr/local/opt/openssl/include -L/usr/local/opt/openssl/lib{code}
> I ended up doing this to make it compile:
> make CFLAGS="-I/usr/local/opt/openssl/include -L/usr/local/opt/openssl/lib"
> {quote}





[jira] [Updated] (HAWQ-1078) Implement hawqsync-falcon DR utility.

2016-10-05 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1078:

Issue Type: New Feature  (was: Improvement)

> Implement hawqsync-falcon DR utility.
> -
>
> Key: HAWQ-1078
> URL: https://issues.apache.org/jira/browse/HAWQ-1078
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Command Line Tools
>Reporter: Kyle R Dunn
>Assignee: Lei Chang
> Fix For: backlog
>
> Attachments: hawq-dr-design.pdf
>
>
> HAWQ currently offers no DR functionality. This JIRA tracks the design and 
> development of a hawqsync-falcon utility, which combines Falcon-based HDFS 
> replication with custom Python automation to replicate both the HAWQ master 
> catalog and the corresponding HDFS data to a remote cluster for DR.





[jira] [Updated] (HAWQ-1083) Do not use CURLOPT_RESOLVE when call curl

2016-10-05 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1083:

Assignee: Oleksandr Diachenko  (was: Lei Chang)

> Do not use CURLOPT_RESOLVE when call curl
> -
>
> Key: HAWQ-1083
> URL: https://issues.apache.org/jira/browse/HAWQ-1083
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
> Fix For: 2.0.1.0-incubating
>
>
> The CURLOPT_RESOLVE option has been available only since the curl 7.21.3 
> release, so it won't work for users running a lower curl version.
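The version gate involved can be sketched as follows (illustrative Python; the actual fix lives in HAWQ's C code, and 7.21.3 is the release that introduced CURLOPT_RESOLVE per the curl changelog):

```python
CURLOPT_RESOLVE_MIN = (7, 21, 3)  # curl release that introduced CURLOPT_RESOLVE

def supports_curlopt_resolve(version_string):
    """Compare a dotted curl version string against the minimum release.

    Feature-gating on the runtime version avoids passing an option that
    the linked libcurl does not understand.
    """
    parts = tuple(int(p) for p in version_string.split('.'))
    return parts >= CURLOPT_RESOLVE_MIN
```

Tuple comparison handles the dotted components positionally, so `7.19.7` correctly sorts below `7.21.3`.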





[jira] [Updated] (HAWQ-1080) make unittest-check fails on local dev environment

2016-09-27 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1080:

Assignee: Oleksandr Diachenko  (was: Lei Chang)

> make unittest-check fails on local dev environment
> --
>
> Key: HAWQ-1080
> URL: https://issues.apache.org/jira/browse/HAWQ-1080
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
> Fix For: 2.0.1.0-incubating
>
>
> {code}
> $ make unittest-check
> {code}
> ...
> {code}
> emanager  -c -o pxfheaders_test.o pxfheaders_test.c
> pxfheaders_test.c:124:1: error: conflicting types for 'expect_churl_headers'
> expect_churl_headers(const char *key, const char *value)
> ^
> pxfheaders_test.c:40:2: note: previous implicit declaration is here
> expect_churl_headers("X-GP-SEGMENT-ID", mock_extvar->GP_SEGMENT_ID);
> ^
> pxfheaders_test.c:139:1: error: conflicting types for 
> 'expect_churl_headers_alignment'
> expect_churl_headers_alignment()
> ^
> pxfheaders_test.c:43:2: note: previous implicit declaration is here
> expect_churl_headers_alignment();
> ^
> pxfheaders_test.c:159:1: error: conflicting types for 'store_gucs'
> store_gucs()
> ^
> pxfheaders_test.c:152:2: note: previous implicit declaration is here
> store_gucs();
> ^
> pxfheaders_test.c:166:1: error: conflicting types for 'setup_gphd_uri'
> setup_gphd_uri()
> ^
> pxfheaders_test.c:153:2: note: previous implicit declaration is here
> setup_gphd_uri();
> ^
> pxfheaders_test.c:176:1: error: conflicting types for 'setup_input_data'
> setup_input_data()
> ^
> pxfheaders_test.c:154:2: note: previous implicit declaration is here
> setup_input_data();
> ^
> pxfheaders_test.c:184:1: error: conflicting types for 'setup_external_vars'
> setup_external_vars()
> ^
> pxfheaders_test.c:155:2: note: previous implicit declaration is here
> setup_external_vars();
> ^
> pxfheaders_test.c:193:6: error: conflicting types for 'expect_external_vars'
> void expect_external_vars()
>  ^
> pxfheaders_test.c:38:2: note: previous implicit declaration is here
> expect_external_vars();
> ^
> pxfheaders_test.c:220:6: error: conflicting types for 'restore_gucs'
> void restore_gucs()
>  ^
> pxfheaders_test.c:217:2: note: previous implicit declaration is here
> restore_gucs();
> ^
> 8 errors generated.
> make[3]: *** [pxfheaders_test.o] Error 1
> make[2]: *** [unittest-check] Error 2
> make[1]: *** [unittest-check] Error 2
> make: *** [unittest-check] Error 2
> {code}
> ...





[jira] [Updated] (HAWQ-1080) make unittest-check fails on local dev environment

2016-09-27 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1080:

Fix Version/s: 2.0.1.0-incubating

> make unittest-check fails on local dev environment
> --
>
> Key: HAWQ-1080
> URL: https://issues.apache.org/jira/browse/HAWQ-1080
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Lei Chang
> Fix For: 2.0.1.0-incubating
>
>
> {code}
> $ make unittest-check
> {code}
> ...
> {code}
> emanager  -c -o pxfheaders_test.o pxfheaders_test.c
> pxfheaders_test.c:124:1: error: conflicting types for 'expect_churl_headers'
> expect_churl_headers(const char *key, const char *value)
> ^
> pxfheaders_test.c:40:2: note: previous implicit declaration is here
> expect_churl_headers("X-GP-SEGMENT-ID", mock_extvar->GP_SEGMENT_ID);
> ^
> pxfheaders_test.c:139:1: error: conflicting types for 
> 'expect_churl_headers_alignment'
> expect_churl_headers_alignment()
> ^
> pxfheaders_test.c:43:2: note: previous implicit declaration is here
> expect_churl_headers_alignment();
> ^
> pxfheaders_test.c:159:1: error: conflicting types for 'store_gucs'
> store_gucs()
> ^
> pxfheaders_test.c:152:2: note: previous implicit declaration is here
> store_gucs();
> ^
> pxfheaders_test.c:166:1: error: conflicting types for 'setup_gphd_uri'
> setup_gphd_uri()
> ^
> pxfheaders_test.c:153:2: note: previous implicit declaration is here
> setup_gphd_uri();
> ^
> pxfheaders_test.c:176:1: error: conflicting types for 'setup_input_data'
> setup_input_data()
> ^
> pxfheaders_test.c:154:2: note: previous implicit declaration is here
> setup_input_data();
> ^
> pxfheaders_test.c:184:1: error: conflicting types for 'setup_external_vars'
> setup_external_vars()
> ^
> pxfheaders_test.c:155:2: note: previous implicit declaration is here
> setup_external_vars();
> ^
> pxfheaders_test.c:193:6: error: conflicting types for 'expect_external_vars'
> void expect_external_vars()
>  ^
> pxfheaders_test.c:38:2: note: previous implicit declaration is here
> expect_external_vars();
> ^
> pxfheaders_test.c:220:6: error: conflicting types for 'restore_gucs'
> void restore_gucs()
>  ^
> pxfheaders_test.c:217:2: note: previous implicit declaration is here
> restore_gucs();
> ^
> 8 errors generated.
> make[3]: *** [pxfheaders_test.o] Error 1
> make[2]: *** [unittest-check] Error 2
> make[1]: *** [unittest-check] Error 2
> make: *** [unittest-check] Error 2
> {code}
> ...





[jira] [Commented] (HAWQ-1054) Real/float4 rounding issues for HiveORC profile

2016-09-27 Thread Goden Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15527236#comment-15527236
 ] 

Goden Yao commented on HAWQ-1054:
-

This is a little counterintuitive.
What's the table definition of the PXF external table vs. the Hive table? Any 
difference?

> Real/float4 rounding issues for HiveORC profile
> ---
>
> Key: HAWQ-1054
> URL: https://issues.apache.org/jira/browse/HAWQ-1054
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
>Priority: Critical
> Fix For: 2.0.1.0-incubating
>
>
> Looks like real values are being incorrectly rounded:
> {code}
>  SELECT t1, r FROM pxf_hive_orc_types WHERE r = 7.7;
>  t1 | r 
> +---
> (0 rows)
> SELECT t1, r FROM pxf_hive_orc_types WHERE r > 7.6;
>   t1  |  r   
> --+--
>  row1 |  7.7
>  row2 |  8.7
>  row3 |  9.7
>  row4 | 10.7
>  row5 | 11.7
>  row6 | 12.7
>  row7 |  7.7
>  row8 |  7.7
>  row9 |  7.7
>  row10|  7.7
>  row11|  7.7
>  row12_text_null  |  7.7
>  row13_int_null   |  7.7
>  row14_double_null|  7.7
>  row15_decimal_null   |  7.7
>  row16_timestamp_null |  7.7
>  row18_bigint_null|  7.7
>  row19_bool_null  |  7.7
>  row20_tinyint_null   |  7.7
>  row21_smallint_null  |  7.7
>  row22_date_null  |  7.7
>  row23_varchar_null   |  7.7
>  row24_char_null  |  7.7
>  row25_binary_null|  7.7
> (24 rows)
> {code}
> The same query works fine in Hive:
> {code}
> hive> select f from hive_orc_all_types where f = 7.7;
> OK
> 7.7
> 7.7
> 7.7
> 7.7
> 7.7
> 7.7
> 7.7
> 7.7
> 7.7
> 7.7
> 7.7
> 7.7
> 7.7
> 7.7
> 7.7
> 7.7
> 7.7
> 7.7
> 7.7
> Time taken: 0.032 seconds, Fetched: 19 row(s)
> {code}
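One plausible mechanism for the symptom above (an assumption for illustration, not a diagnosis confirmed by the ticket) is that the stored float4 value is widened to double before being compared with the double literal 7.7, and the two representations are not bitwise equal:

```python
import struct

def widen_float32(x):
    """Round a Python float (double) to float4 precision, then widen back.

    struct's 'f' format is IEEE 754 single precision, mimicking a float4
    column value being read into a double for comparison.
    """
    return struct.unpack('f', struct.pack('f', x))[0]

# The float4 representation of 7.7, widened to double, differs from 7.7.
stored = widen_float32(7.7)
```

Under this assumption, `stored == 7.7` is False even though both print as 7.7, while comparing at float4 precision (widening both sides) succeeds.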





[jira] [Updated] (HAWQ-1077) Table insert hangs due to stack overwrite which is caused by a bug in ao snappy code

2016-09-26 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1077:

Fix Version/s: 2.0.1.0-incubating

> Table insert hangs due to stack overwrite which is caused by a bug in ao 
> snappy code
> 
>
> Key: HAWQ-1077
> URL: https://issues.apache.org/jira/browse/HAWQ-1077
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.0.1.0-incubating
>
>
> Found this issue during testing. The test tries to insert some large data 
> into a table with AO compression, but it seems to never finish. After a 
> quick check, we found that the QE process spins; further gdb debugging shows 
> that this is caused by a bug in the AO snappy code that leads to a stack 
> overwrite.
> The stack is similar to this:
> #0  0x008607c4 in gp_trycompress_new (sourceData=0x2eec5b0 
> "\340\352\356\002", sourceLen=0,
> compressedBuffer=0xa2b3b4 "\311\303UH\211\345H\211}\370H\213E\370\307@<", 
> compressedBufferWithOverrrunLen=38253,
> maxCompressedLen=32767, compressedLen=0xf8c800a29636, compressLevel=0, 
> compressor=0x5d9cdf ,
> compressionState=0x2eee690) at gp_compress.c:56
> #1  0x00a2a1c6 in AppendOnlyStorageWrite_CompressAppend 
> (storageWrite=0x2eec558,
> sourceData=0x33efa68 "ata value for text data typelarge data value for 
> text data typelarge data value for text data typelarge data value for text 
> data typelarge data value for text data typelarge data value for text data 
> t"..., sourceLen=0,
> executorBlockKind=2, itemCount=0, compressedLen=0x7fffa7804f74, 
> bufferLen=0x7fffa7804f70) at cdbappendonlystoragewrite.c:1255
> #2  0x00a2ae1b in AppendOnlyStorageWrite_Content 
> (storageWrite=0x2eec558, content=0x33e7a70 "\242\241:\200\351\003",
> contentLen=3842464, executorBlockKind=2, rowCount=1) at 
> cdbappendonlystoragewrite.c:1868
> #3  0x0056b805 in appendonly_insert (aoInsertDesc=0x2eec460, 
> instup=0x33e7a70, tupleOid=0x7fffa7805094,
> aoTupleId=0x7fffa7805080) at appendonlyam.c:2030





[jira] [Updated] (HAWQ-1075) Restore default behavior of client side(PXF) checksum validation when reading blocks from HDFS

2016-09-26 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1075:

Fix Version/s: 2.0.1.0-incubating

> Restore default behavior of client side(PXF) checksum validation when reading 
> blocks from HDFS
> --
>
> Key: HAWQ-1075
> URL: https://issues.apache.org/jira/browse/HAWQ-1075
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
> Fix For: 2.0.1.0-incubating
>
>
> Currently the HdfsTextSimple profile, which is the optimized PXF profile for 
> reading Text/CSV, uses ChunkRecordReader to read chunks of records (as 
> opposed to individual records). Here dfs.client.read.shortcircuit.skip.checksum 
> is explicitly set to true to avoid incurring any delays from checksum checks 
> while opening/reading the file/block.
> Background information:
> PXF uses a two-stage process to access HDFS data.
> Stage 1: it fetches all the target blocks for the given file (along with 
> replica information).
> Stage 2 (after HAWQ prepares an optimized access plan based on locality): 
> PXF agents read the blocks in parallel.
> In almost all scenarios Hadoop internally catches block corruption issues, 
> and such blocks are never returned to any client requesting block locations 
> (Stage 1). In certain scenarios, such as a block corruption without a change 
> in size, Stage 1 can still return the location of the corrupted block, and 
> hence Stage 2 will need to perform an additional checksum check.
> With client-side checksum checks on read (the default behavior), we are 
> resilient to such checksum errors on read as well.





[jira] [Commented] (HAWQ-1075) Make checksum verification configurable in PXF HdfsTextSimple profile

2016-09-25 Thread Goden Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15521064#comment-15521064
 ] 

Goden Yao commented on HAWQ-1075:
-

1) Is the configuration exposed through Ambari, or a PXF config file?
2) How much performance impact are we talking about?

> Make checksum verification configurable in PXF HdfsTextSimple profile
> -
>
> Key: HAWQ-1075
> URL: https://issues.apache.org/jira/browse/HAWQ-1075
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Goden Yao
>
> Currently the HdfsTextSimple profile, which is the optimized profile for 
> reading Text/CSV, uses ChunkRecordReader to read chunks of records (as 
> opposed to individual records). Here dfs.client.read.shortcircuit.skip.checksum 
> is explicitly set to true to avoid incurring any delays from checksum checks 
> while opening/reading the file/block.
> This configuration needs to be exposed as an option, and by default the 
> client-side checksum check must occur in order to be resilient to any data 
> corruption issues that aren't caught internally by the datanode block 
> reporting mechanism (even fsck doesn't catch certain block corruption 
> issues).
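The proposed option handling could look roughly like this (a sketch with a hypothetical option name, `SKIP_CHECKSUM`; the real change would translate the setting into `dfs.client.read.shortcircuit.skip.checksum` on the Hadoop configuration):

```python
def skip_checksum(profile_options):
    """Decide whether client-side checksum verification may be skipped.

    The safe default is False (verify checksums), matching the ticket's
    recommendation: users must explicitly opt in to skipping the check.
    """
    raw = profile_options.get('SKIP_CHECKSUM', 'false')
    return str(raw).strip().lower() in ('true', '1', 'yes')
```

Defaulting to verification keeps reads resilient to corrupted blocks that slip past the datanode's block reporting; skipping becomes a deliberate performance trade-off.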





[jira] [Commented] (HAWQ-1074) General LICENSE cleanup and synchronization with pom.xml

2016-09-24 Thread Goden Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15519055#comment-15519055
 ] 

Goden Yao commented on HAWQ-1074:
-

awesome! 

> General LICENSE cleanup and synchronization with pom.xml
> 
>
> Key: HAWQ-1074
> URL: https://issues.apache.org/jira/browse/HAWQ-1074
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Ed Espino
>Assignee: Ed Espino
> Fix For: 2.0.0.0-incubating, 2.0.1.0-incubating
>
> Attachments: rat.txt
>
>
> During the Apache HAWQ 2.0.0.0-incubating review (guided by Apache project 
> mentor Roman Shaposhnik), we identified inconsistencies in the LICENSE file:
> * Move sections covered by the PostgreSQL License to the appropriate section
> * Add the simplejson license
> * Add the PyYAML license
> * Add the sha2 license
> * Remove unneeded license files covered by the PostgreSQL License
> * Synchronize the component order in LICENSE and pom.xml; this helps with 
> the IP review.





[jira] [Updated] (HAWQ-1058) Create a separated tarball for libhdfs3

2016-09-21 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1058:

Fix Version/s: 2.0.1.0-incubating

> Create a separated tarball for libhdfs3
> ---
>
> Key: HAWQ-1058
> URL: https://issues.apache.org/jira/browse/HAWQ-1058
> Project: Apache HAWQ
>  Issue Type: Test
>  Components: libhdfs
>Affects Versions: 2.0.0.0-incubating
>Reporter: Zhanwei Wang
>Assignee: Lei Chang
> Fix For: 2.0.1.0-incubating
>
>
> As discussed on the dev mailing list, Ramon proposed creating a separate 
> tarball for libhdfs3 at HAWQ release time.





[jira] [Updated] (HAWQ-1063) HAWQ Python library missing import

2016-09-21 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1063:

Fix Version/s: 2.0.1.0-incubating

> HAWQ Python library missing import
> --
>
> Key: HAWQ-1063
> URL: https://issues.apache.org/jira/browse/HAWQ-1063
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Kyle R Dunn
>Assignee: Lei Chang
> Fix For: 2.0.1.0-incubating
>
>
> The file: `tools/bin/hawqpylib/hawqlib.py` is missing a required import for 
> catching a DatabaseError exception. This exception is raised when HAWQ is 
> stopped and a tool like `gppkg` is used.
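The failure mode is easy to reproduce in isolation. In the sketch below, `DatabaseError` stands in for the driver's exception class (hypothetical names, not the hawqlib.py code itself): if the name were referenced in the `except` clause without being imported, Python would raise NameError at the exact moment the database error occurs, masking the real problem.

```python
class DatabaseError(Exception):
    """Stand-in for the database driver's DatabaseError class."""

def run_query(execute):
    """Run a query callable, returning None if the database is unreachable.

    Without the import that defines DatabaseError, evaluating the except
    clause itself raises NameError instead of handling the failure.
    """
    try:
        return execute()
    except DatabaseError:
        return None  # e.g. HAWQ is stopped; fail gracefully
```

With the name defined (imported), the handler fires as intended and tools like `gppkg` can report a clean error.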





[jira] [Updated] (HAWQ-1066) Improper handling of install name for shared library on OS X

2016-09-21 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1066:

Fix Version/s: backlog

> Improper handling of install name for shared library on OS X
> 
>
> Key: HAWQ-1066
> URL: https://issues.apache.org/jira/browse/HAWQ-1066
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: libhdfs
>Reporter: Kyle R Dunn
>Assignee: Lei Chang
>Priority: Minor
> Fix For: backlog
>
>
> Created as a carryover of [libhdfs3 GitHub 
> #40|https://github.com/Pivotal-Data-Attic/pivotalrd-libhdfs3/issues/46] on 
> behalf of [elfprince13|https://github.com/elfprince13]:
> I am working on a project that has libhdfs3 as a submodule in our git repo. 
> Since we want to keep the build process contained in a single (user-owned) 
> directory tree, we configure with {{cmake 
> -DCMAKE_INSTALL_PREFIX:PATH=$(pwd)/usr}}. However, after running {{make && 
> make install}}, I then find the following incorrect behavior when I run 
> {{otool}}.
> {code}
> [thomas@Mithlond] libhdfs3-cmake $ otool -D usr/lib/libhdfs3.dylib
> usr/lib/libhdfs3.dylib:
> libhdfs3.1.dylib
> {code}
> Note that since the install name is incorrectly set, linking against this 
> copy of the library, even by absolute path, will produce a binary that can't 
> find libhdfs3.dylib without manually altering LD_LIBRARY_PATH.





[jira] [Commented] (HAWQ-1057) LIKE operator is broken for HiveOrc profile

2016-09-15 Thread Goden Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15494942#comment-15494942
 ] 

Goden Yao commented on HAWQ-1057:
-

Yep. HAWQ-779 introduced the "LIKE" operator. For the plugins in the code 
base, we should pass this operator through but do nothing, since none of the 
built-in plugins support it (only custom plugins might need it).
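The pass-through behavior can be sketched as follows (hypothetical predicate-tree representation and helper name, not PXF's actual filter-builder API). Note that dropping an unsupported leaf must also drop a compound node left with no children, which is exactly the "expression (and) with no children" failure reported below:

```python
def prune_unsupported(node, supported_ops):
    """Recursively drop predicate nodes whose operator is unsupported.

    Nodes are (op, children) for compounds or (op, column, value) leaves.
    A compound 'and'/'or' node whose children are all pruned is dropped
    entirely, rather than being emitted as an empty expression.
    """
    op = node[0]
    if op in ('and', 'or'):
        kept = []
        for child in node[1]:
            pruned = prune_unsupported(child, supported_ops)
            if pruned is not None:
                kept.append(pruned)
        if not kept:
            return None          # nothing survived: no filter at all
        if len(kept) == 1:
            return kept[0]       # collapse single-child compound
        return (op, kept)
    return node if op in supported_ops else None
```

For a predicate like `t1 LIKE '%ro%1' AND num1 = 5`, pruning the unsupported LIKE leaf leaves only the equality filter instead of an empty AND node.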

> LIKE operator is broken for HiveOrc profile
> ---
>
> Key: HAWQ-1057
> URL: https://issues.apache.org/jira/browse/HAWQ-1057
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Goden Yao
>Priority: Critical
> Fix For: 2.0.1.0-incubating
>
>
> {code}
> # \d pxf_hive_small_data
> External table "public.pxf_hive_small_data"
>  Column |   Type   | Modifiers 
> +--+---
>  t1 | text | 
>  t2 | text | 
>  num1   | integer  | 
>  dub1   | double precision | 
> Type: readable
> Encoding: UTF8
> Format type: custom
> Format options: formatter 'pxfwritable_import' 
> External location: 
> pxf://localhost:51200/hive_orc_table?PROFILE=HiveORC=
> {code}
> {code}
> SELECT * FROM pxf_hive_small_data WHERE t1 LIKE '%ro%1';
> ERROR:  remote component error (500) from '192.168.98.232:51200':  type  
> Exception report   message   Can't create expression (and) with no children.  
>   description   The server encountered an internal error that prevented it 
> from fulfilling this request.exception   java.io.IOException: Can't 
> create expression (and) with no children. (libchurl.c:884)  (seg2 
> localhost:4 pid=72391)
> DETAIL:  External table pxf_hive_small_data
> {code}





[jira] [Updated] (HAWQ-1057) LIKE operator is broken for HiveOrc profile

2016-09-15 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1057:

Assignee: Oleksandr Diachenko  (was: Goden Yao)

> LIKE operator is broken for HiveOrc profile
> ---
>
> Key: HAWQ-1057
> URL: https://issues.apache.org/jira/browse/HAWQ-1057
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
>Priority: Critical
> Fix For: 2.0.1.0-incubating
>
>
> {code}
> # \d pxf_hive_small_data
> External table "public.pxf_hive_small_data"
>  Column |   Type   | Modifiers 
> +--+---
>  t1 | text | 
>  t2 | text | 
>  num1   | integer  | 
>  dub1   | double precision | 
> Type: readable
> Encoding: UTF8
> Format type: custom
> Format options: formatter 'pxfwritable_import' 
> External location: 
> pxf://localhost:51200/hive_orc_table?PROFILE=HiveORC=
> {code}
> {code}
> SELECT * FROM pxf_hive_small_data WHERE t1 LIKE '%ro%1';
> ERROR:  remote component error (500) from '192.168.98.232:51200':  type  
> Exception report   message   Can't create expression (and) with no children.  
>   description   The server encountered an internal error that prevented it 
> from fulfilling this request.exception   java.io.IOException: Can't 
> create expression (and) with no children. (libchurl.c:884)  (seg2 
> localhost:4 pid=72391)
> DETAIL:  External table pxf_hive_small_data
> {code}





[jira] [Updated] (HAWQ-1046) Document migration of LibHDFS3 library to HAWQ

2016-09-15 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1046:

Fix Version/s: backlog

> Document migration of LibHDFS3 library to HAWQ
> --
>
> Key: HAWQ-1046
> URL: https://issues.apache.org/jira/browse/HAWQ-1046
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: libhdfs
>Reporter: Matthew Rocklin
>Assignee: hongwu
> Fix For: backlog
>
>
> Some people used to depend on the libhdfs3 library maintained alongside HAWQ. 
>  This library was merged into the HAWQ codebase, making the situation a bit 
> more confusing.
> Is independent use of libhdfs3 still supported by this community?  If so what 
> is the best way for packagers to reason about versions and releases of this 
> component?  It would be convenient to see documentation on how people can 
> best depend on libhdfs3 separately from HAWQ if this is an intention.
> It looks like people have actually submitted work to the old version
> See: https://github.com/Pivotal-Data-Attic/pivotalrd-libhdfs3/pull/28
> It looks like the warning that the library had moved has been removed:
> https://github.com/Pivotal-Data-Attic/pivotalrd-libhdfs3/commit/ddcb2404a5a67e0f39fe49ed20591545c48ff426
> This removal may lead to some frustration





[jira] [Closed] (HAWQ-883) hawq check "hawq_re_memory_overcommit_max" error

2016-09-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao closed HAWQ-883.
--

> hawq check "hawq_re_memory_overcommit_max" error
> 
>
> Key: HAWQ-883
> URL: https://issues.apache.org/jira/browse/HAWQ-883
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: liuguo
>Assignee: Lei Chang
> Fix For: backlog
>
>
> [ERROR]:-host(kmaster): HAWQ master host memory size '3824' is less than the 
> 'hawq_re_memory_overcommit_max' size '8192'
> When I set 'hawq_re_memory_overcommit_max=3000', I then get an error:
> [ERROR]:-host(kmaster): HAWQ master's hawq_re_memory_overcommit_max GUC value 
> is 3000, expected 8192





[jira] [Resolved] (HAWQ-883) hawq check "hawq_re_memory_overcommit_max" error

2016-09-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao resolved HAWQ-883.

Resolution: Invalid

> hawq check "hawq_re_memory_overcommit_max" error
> 
>
> Key: HAWQ-883
> URL: https://issues.apache.org/jira/browse/HAWQ-883
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: liuguo
>Assignee: Lei Chang
> Fix For: backlog
>
>
> [ERROR]:-host(kmaster): HAWQ master host memory size '3824' is less than the 
> 'hawq_re_memory_overcommit_max' size '8192'
> When I set 'hawq_re_memory_overcommit_max=3000', I then get an error:
> [ERROR]:-host(kmaster): HAWQ master's hawq_re_memory_overcommit_max GUC value 
> is 3000, expected 8192





[jira] [Updated] (HAWQ-886) Investigation of HAWQ/PXF support for ORC

2016-09-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-886:
---
Fix Version/s: (was: backlog)
   2.0.1.0-incubating

> Investigation of HAWQ/PXF support for ORC
> -
>
> Key: HAWQ-886
> URL: https://issues.apache.org/jira/browse/HAWQ-886
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
> Fix For: 2.0.1.0-incubating
>
>
> Currently, when reading ORC files via PXF (using the default Hive profile), 
> HAWQ doesn't push any of the filter information down to the underlying ORC 
> reader. The only filtering possible right now is at the partition level, and 
> it is done generically for all Hive tables.
> ORC internally contains file-level, stripe-level and row-level statistics, 
> including information such as min/max values. For more information refer 
> to https://orc.apache.org/docs/indexes.html
> The proposal here is to introduce a new PXF profile optimized for ORC files 
> which leverages these stats to improve the performance of HAWQ queries with 
> predicates. We will also use the vectorized approach (VectorizedRowBatch) 
> for reading, along with SearchArgument to build the filter, as opposed to the 
> existing, more expensive row-based reader.
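The stripe-skipping idea described above can be sketched with a toy model (the names here are illustrative only, not the actual ORC or SearchArgument API):

```java
public class StripeSkipDemo {
    // Toy stand-in for an ORC stripe's per-column statistics.
    static final class Stats {
        final int min, max;
        Stats(int min, int max) { this.min = min; this.max = max; }
    }

    // For a predicate "col > value", a stripe whose recorded maximum cannot
    // satisfy the predicate is skipped without reading any of its rows; this
    // is the saving the proposed HiveORC profile gets from stripe statistics.
    static boolean mayContainGreaterThan(Stats s, int value) {
        return s.max > value;
    }

    public static void main(String[] args) {
        Stats stripe1 = new Stats(0, 9);
        Stats stripe2 = new Stats(10, 99);
        assert !mayContainGreaterThan(stripe1, 50); // stripe skipped entirely
        assert mayContainGreaterThan(stripe2, 50);  // stripe must be read
        System.out.println("ok");
    }
}
```

The same min/max pruning applies at file and row-group level, which is why pushing the filter down to the reader beats filtering rows after transfer.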





[jira] [Reopened] (HAWQ-883) hawq check "hawq_re_memory_overcommit_max" error

2016-09-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao reopened HAWQ-883:


> hawq check "hawq_re_memory_overcommit_max" error
> 
>
> Key: HAWQ-883
> URL: https://issues.apache.org/jira/browse/HAWQ-883
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: liuguo
>Assignee: Lei Chang
> Fix For: 2.0.1.0-incubating
>
>
> [ERROR]:-host(kmaster): HAWQ master host memory size '3824' is less than the 
> 'hawq_re_memory_overcommit_max' size '8192'
> When I set 'hawq_re_memory_overcommit_max=3000', I then get an error:
> [ERROR]:-host(kmaster): HAWQ master's hawq_re_memory_overcommit_max GUC value 
> is 3000, expected 8192





[jira] [Updated] (HAWQ-967) Extend Projection info to include filter attributes

2016-09-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-967:
---
Fix Version/s: (was: backlog)
   2.0.1.0-incubating

> Extend Projection info to include filter attributes
> ---
>
> Key: HAWQ-967
> URL: https://issues.apache.org/jira/browse/HAWQ-967
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Oleksandr Diachenko
>Priority: Blocker
> Fix For: 2.0.1.0-incubating
>
>
> HAWQ-927 includes query projection columns as part of the projection info 
> sent from HAWQ to PXF.
> For queries where filter attributes are different from projection attributes, 
> PXF would return data with NULL values in the filter attributes.
> e.g. a table "test" has 2 columns say: c1 int, c2 int
> {code}
> select c1 from test where c2 > 0;
> {code}
> In the case above, since c2 is not in the column projection, PXF will return 
> records like (1, NULL), (2, NULL), ... as part of the implementation in 
> HAWQ-927.
> Due to this, HAWQ wouldn't have the necessary data to apply the filter once 
> it receives data back from the underlying external dataset via PXF, and a 
> wrong result would be returned to users.
> The projection information must be a union of the internal HAWQ projection 
> info and the attributes in the filters.
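The proposed fix can be sketched as follows (a minimal illustration; `columnsToRequest` is a hypothetical helper, not an actual PXF method):

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class ProjectionUnion {
    // Sketch of the fix: the column list sent to PXF is the union of the
    // projected columns and the columns referenced by filters, so filter
    // attributes (c2 here) are no longer returned as NULL.
    static Set<String> columnsToRequest(List<String> projected, List<String> filtered) {
        Set<String> union = new LinkedHashSet<>(projected);
        union.addAll(filtered);
        return union;
    }

    public static void main(String[] args) {
        // select c1 from test where c2 > 0;
        Set<String> cols = columnsToRequest(List.of("c1"), List.of("c2"));
        assert cols.contains("c1") && cols.contains("c2");
        System.out.println(cols); // prints [c1, c2]
    }
}
```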





[jira] [Updated] (HAWQ-992) PXF Hive data type check in Fragmenter too restrictive

2016-09-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-992:
---
Fix Version/s: (was: backlog)
   2.0.1.0-incubating

> PXF Hive data type check in Fragmenter too restrictive
> --
>
> Key: HAWQ-992
> URL: https://issues.apache.org/jira/browse/HAWQ-992
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Oleksandr Diachenko
> Fix For: 2.0.1.0-incubating
>
>
> The HiveDataFragmenter used by both the HiveText and HiveRC profiles has a 
> very strict type check.
> The HAWQ type numeric(10,10) is compatible with Hive's decimal(10,10),
> but the HAWQ type numeric is not compatible with Hive's decimal(10,10).
> Similar issues exist with other data types that take optional arguments. The 
> type check should be modified to allow a HAWQ type that is compatible but 
> declared without the optional precision/length arguments to work with the 
> corresponding Hive type.
> Support the following additional Hive data types: date, varchar, char
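A relaxed check of this kind might look like the following sketch (hypothetical helper names; the real fragmenter keeps a fuller HAWQ-to-Hive type table):

```java
public class TypeCompat {
    // Strips optional precision/length arguments such as "(10,10)" so that
    // e.g. HAWQ "numeric" can match Hive "decimal(10,10)".
    static String baseType(String declared) {
        int paren = declared.indexOf('(');
        return (paren < 0 ? declared : declared.substring(0, paren)).trim();
    }

    static boolean compatible(String hawqType, String hiveType) {
        String hawq = baseType(hawqType);
        String hive = baseType(hiveType);
        // Minimal mapping for illustration only.
        if (hawq.equals("numeric")) return hive.equals("decimal");
        return hawq.equals(hive);
    }

    public static void main(String[] args) {
        assert compatible("numeric(10,10)", "decimal(10,10)");
        assert compatible("numeric", "decimal(10,10)"); // rejected by the old check
        assert !compatible("int4", "decimal(10,10)");
        System.out.println("ok");
    }
}
```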





[jira] [Updated] (HAWQ-904) CLI help output for hawq config is different depending on which help option is used

2016-09-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-904:
---
Fix Version/s: (was: backlog)
   2.0.1.0-incubating

> CLI help output for hawq config is different depending on which help option 
> is used
> ---
>
> Key: HAWQ-904
> URL: https://issues.apache.org/jira/browse/HAWQ-904
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Severine Tymon
>Assignee: Radar Lei
>Priority: Minor
> Fix For: 2.0.1.0-incubating
>
>
> hawq config and hawq config --help output the following:
> [gpadmin@centos7-namenode hawq]$ hawq --version
> HAWQ version 2.0.1.0 build dev
> [gpadmin@centos7-namenode hawq]$ hawq config
> usage: hawq config [--options]
> The "options" are:
>-c --change Changes a configuration parameter setting.
>-s --show   Shows the value for a specified configuration 
> parameter.
>-l --list   Lists all configuration parameters.
>-q --quiet  Run in quiet mode.
>-v --verboseDisplays detailed status.
>-r --remove HAWQ GUC name to be removed.
>--skipvalidationSkip the system validation checks.
>--ignore-bad-hosts  Skips copying configuration files on host on which SSH 
> fails
> See 'hawq --help' for more information on other commands.
> [gpadmin@centos7-namenode hawq]$ hawq config --help
> usage: hawq config [--options]
> The "options" are:
>-c --change Changes a configuration parameter setting.
>-s --show   Shows the value for a specified configuration 
> parameter.
>-l --list   Lists all configuration parameters.
>-q --quiet  Run in quiet mode.
>-v --verboseDisplays detailed status.
>-r --remove HAWQ GUC name to be removed.
>--skipvalidationSkip the system validation checks.
>--ignore-bad-hosts  Skips copying configuration files on host on which SSH 
> fails
> See 'hawq --help' for more information on other commands.
> While hawq config -h outputs the following:
> [gpadmin@centos7-namenode hawq]$ hawq config -h
> Usage: HAWQ config options.
> Options:
>   -h, --helpshow this help message and exit
>   -c CHANGE, --change=CHANGE
> Change HAWQ Property.
>   -r REMOVE, --remove=REMOVE
> Remove HAWQ Property.
>   -s SHOW, --show=SHOW  Change HAWQ Property name.
>   -l, --listList all HAWQ Properties.
>   --skipvalidation  
>   --ignore-bad-hostsSkips copying configuration files on host on which SSH
> fails
>   -q, --quiet   
>   -v PROPERTY_VALUE, --value=PROPERTY_VALUE
> Set HAWQ Property value.
>   -d HAWQ_HOME  HAWQ home directory.
> The latter (hawq config -h) seems more up-to-date. In particular, the first 
> output contains errors (-v should be used to supply the value of a changed 
> parameter, not to switch to verbose mode). There are some minor issues in the 
> latter output too, though: the `CHANGE`, `REMOVE`, and `SHOW` placeholders 
> should be replaced with  or HAWQ_PROPERTY





[jira] [Updated] (HAWQ-997) HAWQ doesn't send PXF data type with precision

2016-09-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-997:
---
Fix Version/s: (was: backlog)
   2.0.1.0-incubating

> HAWQ doesn't send PXF data type with precision 
> ---
>
> Key: HAWQ-997
> URL: https://issues.apache.org/jira/browse/HAWQ-997
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Oleksandr Diachenko
> Fix For: 2.0.1.0-incubating
>
>
> Via the REST API, HAWQ sends PXF information about attributes and their 
> types using x-gp-attr-typename. Attributes such as varchar(3) and char(3) 
> are sent as plain varchar and char. This causes HAWQ-992.





[jira] [Updated] (HAWQ-1013) Move HAWQ Ambari plugin to Apache HAWQ

2016-09-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1013:

Fix Version/s: (was: backlog)
   2.0.1.0-incubating

> Move HAWQ Ambari plugin to Apache HAWQ
> --
>
> Key: HAWQ-1013
> URL: https://issues.apache.org/jira/browse/HAWQ-1013
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Ambari
>Reporter: Matt
>Assignee: Alexander Denissov
>  Labels: UX
> Fix For: 2.0.1.0-incubating
>
>
> To add HAWQ and PXF to Ambari, users have to follow certain manual steps. 
> These manual steps include:
> - Adding HAWQ and PXF metainfo.xml files (containing metadata about the 
> service) under the stack to be installed.
> - Adding repositories, where HAWQ and PXF rpms reside so that Ambari can use 
> it during installation. This requires updating repoinfo.xml under the stack 
> HAWQ and PXF is being added to.
> - Adding repositories to an existing stack managed by Ambari requires adding 
> the repositories using the Ambari REST endpoint.
> The HAWQ Ambari plugin automates the above steps using a script.





[jira] [Updated] (HAWQ-1023) Incorrect usage of java.lang.String.replaceAll

2016-09-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1023:

Fix Version/s: (was: backlog)
   2.0.1.0-incubating

> Incorrect usage of java.lang.String.replaceAll
> --
>
> Key: HAWQ-1023
> URL: https://issues.apache.org/jira/browse/HAWQ-1023
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: hongwu
>Assignee: hongwu
>Priority: Minor
> Fix For: 2.0.1.0-incubating
>
>
> Incorrect usage of java.lang.String.replaceAll generates useless calls:
> https://github.com/apache/incubator-hawq/blob/master/contrib/hawq-hadoop/hawq-mapreduce-common/src/main/java/com/pivotal/hawq/mapreduce/datatype/HAWQPath.java#L51
> https://github.com/apache/incubator-hawq/blob/master/contrib/hawq-hadoop/hawq-mapreduce-common/src/main/java/com/pivotal/hawq/mapreduce/datatype/HAWQPoint.java#L45
> https://github.com/apache/incubator-hawq/blob/master/contrib/hawq-hadoop/hawq-mapreduce-common/src/main/java/com/pivotal/hawq/mapreduce/datatype/HAWQPolygon.java#L46
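For context, the pitfall with java.lang.String.replaceAll is that it compiles its first argument as a regex and, since strings are immutable, returns a new string that must be used. A generic Java illustration (not the HAWQ code itself):

```java
public class ReplaceAllDemo {
    public static void main(String[] args) {
        String wkt = "(1,2)";
        // replaceAll treats its first argument as a regex, so literal
        // parentheses must be escaped; String.replace does plain-text
        // substitution and compiles no Pattern, so it is simpler and cheaper.
        String viaRegex = wkt.replaceAll("\\(", "").replaceAll("\\)", "");
        String viaLiteral = wkt.replace("(", "").replace(")", "");
        assert viaRegex.equals(viaLiteral);
        // Strings are immutable: calling replaceAll and ignoring the return
        // value is a no-op, which is the kind of useless call flagged above.
        System.out.println(viaLiteral); // prints 1,2
    }
}
```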





[jira] [Updated] (HAWQ-1042) PXF throws NPE on quering HBase table

2016-09-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1042:

Fix Version/s: (was: backlog)
   2.0.1.0-incubating

> PXF throws NPE on quering HBase table
> -
>
> Key: HAWQ-1042
> URL: https://issues.apache.org/jira/browse/HAWQ-1042
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
> Fix For: 2.0.1.0-incubating
>
>
> Steps to reproduce:
> 1) Have an external table based on an HBase table:
> {code}# \d pxf_smoke_small_data;
> External table "public.pxf_smoke_small_data"
>  Column  |   Type   | Modifiers 
> -+--+---
>  name| text | 
>  num | integer  | 
>  dub | double precision | 
>  longnum | bigint   | 
>  bool| boolean  | 
> Type: readable
> Encoding: UTF8
> Format type: custom
> Format options: formatter 'pxfwritable_import' 
> External location: pxf://localhost:51200/hbase_table?PROFILE=HBase
> {code}
> 2) Run simple query:
> {code}
> select * from public.pxf_smoke_small_data;
> ERROR:  remote component error (500) from '192.168.99.69:51200':  type  
> Exception report   message   java.lang.Exception: 
> java.lang.NullPointerExceptiondescription   The server encountered an 
> internal error that prevented it from fulfilling this request.exception   
> javax.servlet.ServletException: java.lang.Exception: 
> java.lang.NullPointerException (libchurl.c:884)  (seg3 localhost:4 
> pid=28671)
> DETAIL:  External table pxf_smoke_small_data
> {code}





[jira] [Updated] (HAWQ-1052) SELECT from PXF/ORC table fails for boolean and varchar datatypes

2016-09-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1052:

Fix Version/s: (was: backlog)
   2.0.1.0-incubating

> SELECT from PXF/ORC table fails for boolean and varchar datatypes
> -
>
> Key: HAWQ-1052
> URL: https://issues.apache.org/jira/browse/HAWQ-1052
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Goden Yao
>Priority: Critical
> Fix For: 2.0.1.0-incubating
>
>
> {code}
> \d pxf_hive_orc_types
> External table "public.pxf_hive_orc_types"
>  Column |Type | Modifiers 
> +-+---
>  t1 | text| 
>  t2 | text| 
>  num1   | integer | 
>  dub1   | double precision| 
>  dec1   | numeric | 
>  tm | timestamp without time zone | 
>  r  | real| 
>  bg | bigint  | 
>  b  | boolean | 
>  tn | smallint| 
>  sml| smallint| 
>  dt | date| 
>  vc1| character varying(5)| 
>  c1 | character(3)| 
>  bin| bytea   | 
> Type: readable
> Encoding: UTF8
> Format type: custom
> Format options: formatter 'pxfwritable_import' 
> External location: 
> pxf://localhost:51200/hive_orc_all_types?PROFILE=HiveORC=^A
> {code}





[jira] [Updated] (HAWQ-1052) SELECT from PXF/ORC table fails for boolean and varchar datatypes

2016-09-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1052:

Priority: Critical  (was: Major)

> SELECT from PXF/ORC table fails for boolean and varchar datatypes
> -
>
> Key: HAWQ-1052
> URL: https://issues.apache.org/jira/browse/HAWQ-1052
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Goden Yao
>Priority: Critical
> Fix For: backlog
>
>






[jira] [Updated] (HAWQ-963) Enhance PXF to support additional operators

2016-09-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-963:
---
Assignee: Shivram Mani  (was: Goden Yao)

> Enhance PXF to support additional operators
> ---
>
> Key: HAWQ-963
> URL: https://issues.apache.org/jira/browse/HAWQ-963
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
> Fix For: backlog
>
>
> The comparison operators currently supported in PXF only include
> <, >, <=, >=, =, !=.
> We will need to add support for more operators in the PXF framework:
> between(), in(), isNull().
> Add logical operator codes in PXF_LOGICAL_OPERATOR_CODE and handle these 
> operators when the FilterString is parsed on the PXF service side.
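One way to picture the gap (operator names here are illustrative; the real codes live in PXF's filter-string parsing):

```java
public class FilterOps {
    // Hypothetical operator set for illustration. The actual codes must stay
    // in sync between the HAWQ side that serializes the filter string and the
    // PXF service side that parses it.
    enum Op {
        LT, GT, LE, GE, EQ, NE,   // comparison operators supported today
        BETWEEN, IN, IS_NULL      // proposed additions
    }

    static boolean supported(Op op) {
        switch (op) {
            case LT: case GT: case LE: case GE: case EQ: case NE:
                return true;
            default:
                return false; // new operators need parsing + plugin support
        }
    }

    public static void main(String[] args) {
        assert supported(Op.EQ);
        assert !supported(Op.IN);
        System.out.println("ok");
    }
}
```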





[jira] [Updated] (HAWQ-1045) Have RPM installation path contain version number and virtual RPM changes

2016-09-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1045:

Summary: Have RPM installation path contain version number and virtual RPM 
changes  (was: Have RPM installation path contain version number)

> Have RPM installation path contain version number and virtual RPM changes
> -
>
> Key: HAWQ-1045
> URL: https://issues.apache.org/jira/browse/HAWQ-1045
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build, PXF
>Reporter: Goden Yao
>Assignee: Goden Yao
> Fix For: 2.0.1.0-incubating
>
>
> This is a requirement from Ambari integration. They'll need side by side 
> installation scenario to do upgrade and verification.
> The following rpm names and packaging strategy was agreed upon during the 
> meeting with Ambari team members and Roman:
> o pxf-3.0.1.0-1088.el6.noarch.rpm (vrpm, creates symlink, dependency: 
> pxf-service_3_0_1_0-3.0.1.0-1088)
> o pxf-service_3_0_1_0-3.0.1.0-1088.el6.noarch.rpm (dependency: 
> apache-tomcat-7.0.62, pxf-hdfs_3_0_1_0-3.0.1.0-1088, 
> pxf-json_3_0_1_0-3.0.1.0-1088)
> o pxf-hdfs_3_0_1_0-3.0.1.0-1088.el6.noarch.rpm
> o pxf-json_3_0_1_0-3.0.1.0-1088.el6.noarch.rpm
> o pxf-hbase-3.0.1.0-1088.el6.noarch.rpm (vrpm, dependency: 
> pxf-hbase_3_0_1_0-3.0.1.0-1088)
> o pxf-hbase_3_0_1_0-3.0.1.0-1088.el6.noarch.rpm
> o pxf-hive-3.0.1.0-1088.el6.noarch.rpm (vrpm, dependency: 
> pxf-hive_3_0_1_0-3.0.1.0-1088)
> o pxf-hive_3_0_1_0-3.0.1.0-1088.el6.noarch.rpm
> Create and update the necessary rpm spec files to produce the above mentioned 
> rpms.





[jira] [Updated] (HAWQ-1045) Have RPM installation path contain version number

2016-09-14 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1045:

Description: 
This is a requirement from Ambari integration. They'll need side by side 
installation scenario to do upgrade and verification.
The following rpm names and packaging strategy was agreed upon during the 
meeting with Ambari team members and Roman:

o pxf-3.0.1.0-1088.el6.noarch.rpm (vrpm, creates symlink, dependency: 
pxf-service_3_0_1_0-3.0.1.0-1088)
o pxf-service_3_0_1_0-3.0.1.0-1088.el6.noarch.rpm (dependency: 
apache-tomcat-7.0.62, pxf-hdfs_3_0_1_0-3.0.1.0-1088, 
pxf-json_3_0_1_0-3.0.1.0-1088)
o pxf-hdfs_3_0_1_0-3.0.1.0-1088.el6.noarch.rpm
o pxf-json_3_0_1_0-3.0.1.0-1088.el6.noarch.rpm

o pxf-hbase-3.0.1.0-1088.el6.noarch.rpm (vrpm, dependency: 
pxf-hbase_3_0_1_0-3.0.1.0-1088)
o pxf-hbase_3_0_1_0-3.0.1.0-1088.el6.noarch.rpm

o pxf-hive-3.0.1.0-1088.el6.noarch.rpm (vrpm, dependency: 
pxf-hive_3_0_1_0-3.0.1.0-1088)
o pxf-hive_3_0_1_0-3.0.1.0-1088.el6.noarch.rpm

Create and update the necessary rpm spec files to produce the above mentioned 
rpms.

  was:
This is a requirement from Ambari integration. They'll need side by side 
installation scenario to do upgrade and verification.



> Have RPM installation path contain version number
> -
>
> Key: HAWQ-1045
> URL: https://issues.apache.org/jira/browse/HAWQ-1045
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build, PXF
>Reporter: Goden Yao
>Assignee: Goden Yao
> Fix For: 2.0.1.0-incubating
>
>
> This is a requirement from Ambari integration. They'll need side by side 
> installation scenario to do upgrade and verification.
> The following rpm names and packaging strategy was agreed upon during the 
> meeting with Ambari team members and Roman:
> o pxf-3.0.1.0-1088.el6.noarch.rpm (vrpm, creates symlink, dependency: 
> pxf-service_3_0_1_0-3.0.1.0-1088)
> o pxf-service_3_0_1_0-3.0.1.0-1088.el6.noarch.rpm (dependency: 
> apache-tomcat-7.0.62, pxf-hdfs_3_0_1_0-3.0.1.0-1088, 
> pxf-json_3_0_1_0-3.0.1.0-1088)
> o pxf-hdfs_3_0_1_0-3.0.1.0-1088.el6.noarch.rpm
> o pxf-json_3_0_1_0-3.0.1.0-1088.el6.noarch.rpm
> o pxf-hbase-3.0.1.0-1088.el6.noarch.rpm (vrpm, dependency: 
> pxf-hbase_3_0_1_0-3.0.1.0-1088)
> o pxf-hbase_3_0_1_0-3.0.1.0-1088.el6.noarch.rpm
> o pxf-hive-3.0.1.0-1088.el6.noarch.rpm (vrpm, dependency: 
> pxf-hive_3_0_1_0-3.0.1.0-1088)
> o pxf-hive_3_0_1_0-3.0.1.0-1088.el6.noarch.rpm
> Create and update the necessary rpm spec files to produce the above mentioned 
> rpms.





[jira] [Updated] (HAWQ-1049) Enhance PXF Service to support AND,OR,NOT logical operators in Predicate Pushdown

2016-09-13 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1049:

Summary: Enhance PXF Service to support AND,OR,NOT logical operators in 
Predicate Pushdown  (was: Enhance PXF Service to support AND,OR,NOT logical 
operators in Predicate Push)

> Enhance PXF Service to support AND,OR,NOT logical operators in Predicate 
> Pushdown
> -
>
> Key: HAWQ-1049
> URL: https://issues.apache.org/jira/browse/HAWQ-1049
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Kavinder Dhaliwal
> Fix For: backlog
>
>
> Support the additional logical operators OR and NOT along with the currently 
> supported AND.
> Update the PXF ORC Accessor to support these operators as well.





[jira] [Created] (HAWQ-1047) Push limit clause to PXF

2016-09-13 Thread Goden Yao (JIRA)
Goden Yao created HAWQ-1047:
---

 Summary: Push limit clause to PXF
 Key: HAWQ-1047
 URL: https://issues.apache.org/jira/browse/HAWQ-1047
 Project: Apache HAWQ
  Issue Type: Improvement
  Components: PXF
Reporter: Goden Yao
Assignee: Goden Yao
 Fix For: backlog


When a user launches an external table query with a "limit" clause, HAWQ 
explicitly closes the remote connection once it has retrieved enough tuples, 
which raises an exception on the Tomcat end.

In such queries HAWQ doesn't push the limit clause down to PXF, so it's up 
to HAWQ to know when it has enough tuples and to end the request.

**expected behavior**
1. HAWQ should push down the limit clause so PXF doesn't need to return more 
than the limit number of records.
2. PXF needs to handle HAWQ closing the connection gracefully.





[jira] [Updated] (HAWQ-1045) Have RPM installation path contain version number

2016-09-12 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1045:

Description: 
This is a requirement from Ambari integration. They'll need side by side 
installation scenario to do upgrade and verification.


  was:
This is a requirement from Amabrai integration. They'll need side by side 
installation scenario to do upgrade and verification.



> Have RPM installation path contain version number
> -
>
> Key: HAWQ-1045
> URL: https://issues.apache.org/jira/browse/HAWQ-1045
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build, PXF
>Reporter: Goden Yao
>Assignee: Goden Yao
> Fix For: 2.0.1.0-incubating
>
>
> This is a requirement from Ambari integration. They'll need side by side 
> installation scenario to do upgrade and verification.





[jira] [Updated] (HAWQ-1045) Have RPM installation path contain version number

2016-09-12 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1045:

Summary: Have RPM installation path contain version number  (was: Have RPM 
installation path contain versions)

> Have RPM installation path contain version number
> -
>
> Key: HAWQ-1045
> URL: https://issues.apache.org/jira/browse/HAWQ-1045
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build, PXF
>Reporter: Goden Yao
>Assignee: Goden Yao
> Fix For: 2.0.1.0-incubating
>
>
> This is a requirement from Amabrai integration. They'll need side by side 
> installation scenario to do upgrade and verification.





[jira] [Created] (HAWQ-1045) Have RPM installation path contain versions

2016-09-12 Thread Goden Yao (JIRA)
Goden Yao created HAWQ-1045:
---

 Summary: Have RPM installation path contain versions
 Key: HAWQ-1045
 URL: https://issues.apache.org/jira/browse/HAWQ-1045
 Project: Apache HAWQ
  Issue Type: Task
  Components: Build, PXF
Reporter: Goden Yao
Assignee: Goden Yao
 Fix For: 2.0.1.0-incubating


This is a requirement from Amabrai integration. They'll need side by side 
installation scenario to do upgrade and verification.






[jira] [Updated] (HAWQ-1036) Support user impersonation in PXF for external tables

2016-09-06 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1036:

Description: 
Currently HAWQ executes all queries as the user running the HAWQ process or the 
user running the PXF process, not as the user who issued the query via 
ODBC/JDBC/... This restricts the options available for integrating with 
existing security defined in HDFS, Hive, etc.

Impersonation provides an alternative Ranger integration (as discussed in 
HAWQ-256 ) for consistent security across HAWQ, HDFS, Hive...

Implementation high-level steps:
1) HAWQ needs to integrate with existing authentication components to identify 
the user who invokes the query
2) HAWQ needs to pass the user id down to PXF after authorization passes
3) PXF needs to "run as" that user id when executing the APIs that access 
Hive/HDFS

  was:
Currently HAWQ executes all queries as the user running the HAWQ process or the 
user running the PXF process, not as the user who issued the query via 
ODBC/JDBC/... This restricts the options available for integrating with 
existing security defined in HDFS, Hive, etc.

Impersonation provides an alternative Ranger integration (as discussed in 
HAWQ-256 ) for consistent security across HAWQ, HDFS, Hive...


> Support user impersonation in PXF for external tables
> -
>
> Key: HAWQ-1036
> URL: https://issues.apache.org/jira/browse/HAWQ-1036
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF, Security
>Reporter: Alastair "Bell" Turner
>Assignee: Goden Yao
>Priority: Critical
> Fix For: backlog
>
> Attachments: HAWQ_Impersonation_rationale.txt
>
>
> Currently HAWQ executes all queries as the user running the HAWQ process or 
> the user running the PXF process, not as the user who issued the query via 
> ODBC/JDBC/... This restricts the options available for integrating with 
> existing security defined in HDFS, Hive, etc.
> Impersonation provides an alternative Ranger integration (as discussed in 
> HAWQ-256 ) for consistent security across HAWQ, HDFS, Hive...
> Implementation High Level steps:
> 1) HAWQ needs to integrate with existing authentication components for the 
> user who invokes the query
> 2) HAWQ needs to pass down the user id to PXF after authorization has passed 
> 3) PXF needs to "run as" that user id when executing APIs to access 
> Hive/HDFS 





[jira] [Comment Edited] (HAWQ-1036) Support user impersonation in PXF for external tables

2016-09-06 Thread Goden Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468123#comment-15468123
 ] 

Goden Yao edited comment on HAWQ-1036 at 9/6/16 6:54 PM:
-

This request has nothing to do with database object privilege management or 
Ranger integration.
Alastair's attached text file elaborates well on the implementation for this 
JIRA.

I'll make sure I describe that in the JIRA description as well.


was (Author: godenyao):
this request has nothing to do with database objects privilege management or 
ranger integration.
I'll post a detailed discussion with Alastair before.

> Support user impersonation in PXF for external tables
> -
>
> Key: HAWQ-1036
> URL: https://issues.apache.org/jira/browse/HAWQ-1036
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF, Security
>Reporter: Alastair "Bell" Turner
>Assignee: Goden Yao
>Priority: Critical
> Fix For: backlog
>
> Attachments: HAWQ_Impersonation_rationale.txt
>
>
> Currently HAWQ executes all queries as the user running the HAWQ process or 
> the user running the PXF process, not as the user who issued the query via 
> ODBC/JDBC/... This restricts the options available for integrating with 
> existing security defined in HDFS, Hive, etc.
> Impersonation provides an alternative Ranger integration (as discussed in 
> HAWQ-256 ) for consistent security across HAWQ, HDFS, Hive...





[jira] [Commented] (HAWQ-1036) Support user impersonation in PXF for external tables

2016-09-06 Thread Goden Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468138#comment-15468138
 ] 

Goden Yao commented on HAWQ-1036:
-

a) - yes
b) Not sure if that's a statement or a question. User impersonation should only 
be exercised if/when HAWQ chooses to integrate with Hadoop user identification. 
So there will be two modes: 1> default - HAWQ manages users separately, as a 
DBMS, so no behavior changes; 2> integrated with Hadoop - all DB users should 
be in Kerberos or LDAP, through Ranger or another centralized user 
authentication system.
c) No matter which mode Hive chooses, at the HDFS layer you still have ACLs 
specific to HDFS (OS/Hadoop) users.

With impersonation it is not our side that does authentication; we just need to 
trust the Hive APIs, pass down the real user ID of whoever invokes the query, 
and run as that user to make sure there is no illegal access during the 
process.

> Support user impersonation in PXF for external tables
> -
>
> Key: HAWQ-1036
> URL: https://issues.apache.org/jira/browse/HAWQ-1036
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF, Security
>Reporter: Alastair "Bell" Turner
>Assignee: Goden Yao
>Priority: Critical
> Fix For: backlog
>
> Attachments: HAWQ_Impersonation_rationale.txt
>
>
> Currently HAWQ executes all queries as the user running the HAWQ process or 
> the user running the PXF process, not as the user who issued the query via 
> ODBC/JDBC/... This restricts the options available for integrating with 
> existing security defined in HDFS, Hive, etc.
> Impersonation provides an alternative Ranger integration (as discussed in 
> HAWQ-256 ) for consistent security across HAWQ, HDFS, Hive...





[jira] [Commented] (HAWQ-1036) Support user impersonation in PXF for external tables

2016-09-06 Thread Goden Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468123#comment-15468123
 ] 

Goden Yao commented on HAWQ-1036:
-

This request has nothing to do with database object privilege management or 
Ranger integration.
I'll post details from a discussion I had with Alastair earlier.

> Support user impersonation in PXF for external tables
> -
>
> Key: HAWQ-1036
> URL: https://issues.apache.org/jira/browse/HAWQ-1036
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF, Security
>Reporter: Alastair "Bell" Turner
>Assignee: Goden Yao
>Priority: Critical
> Fix For: backlog
>
> Attachments: HAWQ_Impersonation_rationale.txt
>
>
> Currently HAWQ executes all queries as the user running the HAWQ process or 
> the user running the PXF process, not as the user who issued the query via 
> ODBC/JDBC/... This restricts the options available for integrating with 
> existing security defined in HDFS, Hive, etc.
> Impersonation provides an alternative Ranger integration (as discussed in 
> HAWQ-256 ) for consistent security across HAWQ, HDFS, Hive...





[jira] [Updated] (HAWQ-1038) Missing BPCHAR in Data Type

2016-08-31 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1038:

Fix Version/s: backlog

> Missing BPCHAR in Data Type
> ---
>
> Key: HAWQ-1038
> URL: https://issues.apache.org/jira/browse/HAWQ-1038
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Goden Yao
>Assignee: David Yozie
> Fix For: backlog
>
>
> referring to 3rd party site:
> http://hdb.docs.pivotal.io/20/reference/catalog/pg_type.html 
> and 
> http://hdb.docs.pivotal.io/20/reference/HAWQDataTypes.html
> It's quite out of date if you check the source code:
> https://github.com/apache/incubator-hawq/blob/master/src/interfaces/ecpg/ecpglib/pg_type.h
> {code}
> ...
> #define BPCHAROID 1042
> ...
> {code}
> We are at least missing BPCHAR in the type table, maybe more.





[jira] [Updated] (HAWQ-1038) Missing BPCHAR in Data Type

2016-08-31 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1038:

Summary: Missing BPCHAR in Data Type  (was: Missing bpchar in Data Type)

> Missing BPCHAR in Data Type
> ---
>
> Key: HAWQ-1038
> URL: https://issues.apache.org/jira/browse/HAWQ-1038
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Goden Yao
>Assignee: David Yozie
> Fix For: backlog
>
>
> referring to 3rd party site:
> http://hdb.docs.pivotal.io/20/reference/catalog/pg_type.html 
> and 
> http://hdb.docs.pivotal.io/20/reference/HAWQDataTypes.html
> It's quite out of date if you check the source code:
> https://github.com/apache/incubator-hawq/blob/master/src/interfaces/ecpg/ecpglib/pg_type.h
> {code}
> ...
> #define BPCHAROID 1042
> ...
> {code}
> We are at least missing BPCHAR in the type table, maybe more.





[jira] [Created] (HAWQ-1038) Missing bpchar in Data Type

2016-08-31 Thread Goden Yao (JIRA)
Goden Yao created HAWQ-1038:
---

 Summary: Missing bpchar in Data Type
 Key: HAWQ-1038
 URL: https://issues.apache.org/jira/browse/HAWQ-1038
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Documentation
Reporter: Goden Yao
Assignee: David Yozie


referring to 3rd party site:
http://hdb.docs.pivotal.io/20/reference/catalog/pg_type.html 
and 
http://hdb.docs.pivotal.io/20/reference/HAWQDataTypes.html

It's quite out of date if you check the source code:
https://github.com/apache/incubator-hawq/blob/master/src/interfaces/ecpg/ecpglib/pg_type.h
{code}
...
#define BPCHAROID   1042
...
{code}

We are at least missing BPCHAR in the type table, maybe more.
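For illustration, a partial mapping of the OIDs such a type table should cover might look like this (BPCHAROID = 1042 is from the pg_type.h linked above; the other OIDs are standard PostgreSQL values included here as an assumption of scope, not a complete catalog):

```python
# Partial OID-to-type-name mapping; 1042 (bpchar) comes from
# src/interfaces/ecpg/ecpglib/pg_type.h. Illustrative only -- the
# real pg_type catalog has many more entries.
PG_TYPE_NAMES = {
    16: "bool",
    23: "int4",
    25: "text",
    1042: "bpchar",   # blank-padded char, i.e. CHAR(n) -- missing from the docs
    1043: "varchar",
}

def type_name(oid):
    # Fall back to a marker for OIDs the table does not cover yet.
    return PG_TYPE_NAMES.get(oid, "unknown")
```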







[jira] [Updated] (HAWQ-1032) Bucket number of newly added partition is not consistent with parent table.

2016-08-31 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1032:

Summary: Bucket number of newly added partition is not consistent with 
parent table.  (was: Bucket number of new added partition is not consistent 
with parent table.)

> Bucket number of newly added partition is not consistent with parent table.
> ---
>
> Key: HAWQ-1032
> URL: https://issues.apache.org/jira/browse/HAWQ-1032
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Hubert Zhang
>Assignee: Hubert Zhang
> Fix For: 2.0.1.0-incubating
>
>
> Failure Case
> {code}
> set default_hash_table_bucket_number = 12;
> CREATE TABLE sales3 (id int, date date, amt decimal(10,2)) 
> DISTRIBUTED BY (id)   
> PARTITION BY RANGE (date) 
> ( START (date '2008-01-01') INCLUSIVE 
>END (date '2009-01-01') EXCLUSIVE  
>EVERY (INTERVAL '1 day') );
> set default_hash_table_bucket_number = 16;
> ALTER TABLE sales3 ADD PARTITION   START 
> (date '2009-03-01') INCLUSIVE   END 
> (date '2009-04-01') EXCLUSIVE;
> {code}
> The newly added partition with bucket number 16 is not consistent with the 
> parent partition.





[jira] [Updated] (HAWQ-1036) Support user impersonation in PXF for external tables

2016-08-31 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1036:

Summary: Support user impersonation in PXF for external tables  (was: 
Support user impersonation in HAWQ)

> Support user impersonation in PXF for external tables
> -
>
> Key: HAWQ-1036
> URL: https://issues.apache.org/jira/browse/HAWQ-1036
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF, Security
>Reporter: Alastair "Bell" Turner
>Assignee: Goden Yao
>Priority: Critical
> Fix For: backlog
>
> Attachments: HAWQ_Impersonation_rationale.txt
>
>
> Currently HAWQ executes all queries as the user running the HAWQ process or 
> the user running the PXF process, not as the user who issued the query via 
> ODBC/JDBC/... This restricts the options available for integrating with 
> existing security defined in HDFS, Hive, etc.
> Impersonation provides an alternative Ranger integration (as discussed in 
> HAWQ-256 ) for consistent security across HAWQ, HDFS, Hive...





[jira] [Updated] (HAWQ-1036) Support user impersonation in HAWQ

2016-08-31 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1036:

Priority: Critical  (was: Major)

> Support user impersonation in HAWQ
> --
>
> Key: HAWQ-1036
> URL: https://issues.apache.org/jira/browse/HAWQ-1036
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF, Security
>Reporter: Alastair "Bell" Turner
>Assignee: Goden Yao
>Priority: Critical
> Fix For: backlog
>
> Attachments: HAWQ_Impersonation_rationale.txt
>
>
> Currently HAWQ executes all queries as the user running the HAWQ process or 
> the user running the PXF process, not as the user who issued the query via 
> ODBC/JDBC/... This restricts the options available for integrating with 
> existing security defined in HDFS, Hive, etc.
> Impersonation provides an alternative Ranger integration (as discussed in 
> HAWQ-256 ) for consistent security across HAWQ, HDFS, Hive...





[jira] [Commented] (HAWQ-1032) Bucket number of newly added partition is not consistent with parent table.

2016-08-31 Thread Goden Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452924#comment-15452924
 ] 

Goden Yao commented on HAWQ-1032:
-

What's the bucket number of the newly added partition in this case? Or do you 
see any errors?

> Bucket number of newly added partition is not consistent with parent table.
> ---
>
> Key: HAWQ-1032
> URL: https://issues.apache.org/jira/browse/HAWQ-1032
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Hubert Zhang
>Assignee: Hubert Zhang
> Fix For: 2.0.1.0-incubating
>
>
> Failure Case
> {code}
> set default_hash_table_bucket_number = 12;
> CREATE TABLE sales3 (id int, date date, amt decimal(10,2)) 
> DISTRIBUTED BY (id)   
> PARTITION BY RANGE (date) 
> ( START (date '2008-01-01') INCLUSIVE 
>END (date '2009-01-01') EXCLUSIVE  
>EVERY (INTERVAL '1 day') );
> set default_hash_table_bucket_number = 16;
> ALTER TABLE sales3 ADD PARTITION   START 
> (date '2009-03-01') INCLUSIVE   END 
> (date '2009-04-01') EXCLUSIVE;
> {code}
> The newly added partition with bucket number 16 is not consistent with the 
> parent partition.





[jira] [Updated] (HAWQ-1032) Bucket number of new added partition is not consistent with parent table.

2016-08-31 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1032:

Description: 
Failure Case
{code}
set default_hash_table_bucket_number = 12;
CREATE TABLE sales3 (id int, date date, amt decimal(10,2))
DISTRIBUTED BY (id)
PARTITION BY RANGE (date)
( START (date '2008-01-01') INCLUSIVE
  END (date '2009-01-01') EXCLUSIVE
  EVERY (INTERVAL '1 day') );

set default_hash_table_bucket_number = 16;
ALTER TABLE sales3 ADD PARTITION
  START (date '2009-03-01') INCLUSIVE
  END (date '2009-04-01') EXCLUSIVE;
{code}

The newly added partition with bucket number 16 is not consistent with the 
parent partition.

  was:
Failure Case
set deafult_hash_table_bucket_number = 12;
CREATE TABLE sales3 (id int, date date, amt decimal(10,2)) DISTRIBUTED 
BY (id)   PARTITION BY 
RANGE (date) ( START (date 
'2008-01-01') INCLUSIVEEND (date 
'2009-01-01') EXCLUSIVE EVERY 
(INTERVAL '1 day') );

set deafult_hash_table_bucket_number = 16;
ALTER TABLE sales3 ADD PARTITION   START (date 
'2009-03-01') INCLUSIVE   END (date 
'2009-04-01') EXCLUSIVE;

The new added partition with bukcet number 16 which is not consistent with 
parent partition.


> Bucket number of new added partition is not consistent with parent table.
> -
>
> Key: HAWQ-1032
> URL: https://issues.apache.org/jira/browse/HAWQ-1032
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Hubert Zhang
>Assignee: Hubert Zhang
> Fix For: 2.0.1.0-incubating
>
>
> Failure Case
> {code}
> set default_hash_table_bucket_number = 12;
> CREATE TABLE sales3 (id int, date date, amt decimal(10,2)) 
> DISTRIBUTED BY (id)   
> PARTITION BY RANGE (date) 
> ( START (date '2008-01-01') INCLUSIVE 
>END (date '2009-01-01') EXCLUSIVE  
>EVERY (INTERVAL '1 day') );
> set default_hash_table_bucket_number = 16;
> ALTER TABLE sales3 ADD PARTITION   START 
> (date '2009-03-01') INCLUSIVE   END 
> (date '2009-04-01') EXCLUSIVE;
> {code}
> The newly added partition with bucket number 16 is not consistent with the 
> parent partition.





[jira] [Updated] (HAWQ-1032) Bucket number of new added partition is not consistent with parent table.

2016-08-31 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1032:

Fix Version/s: 2.0.1.0-incubating

> Bucket number of new added partition is not consistent with parent table.
> -
>
> Key: HAWQ-1032
> URL: https://issues.apache.org/jira/browse/HAWQ-1032
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Hubert Zhang
>Assignee: Hubert Zhang
> Fix For: 2.0.1.0-incubating
>
>
> Failure Case
> set default_hash_table_bucket_number = 12;
> CREATE TABLE sales3 (id int, date date, amt decimal(10,2)) 
> DISTRIBUTED BY (id)   
> PARTITION BY RANGE (date) 
> ( START (date '2008-01-01') INCLUSIVE 
>END (date '2009-01-01') EXCLUSIVE  
>EVERY (INTERVAL '1 day') );
> set default_hash_table_bucket_number = 16;
> ALTER TABLE sales3 ADD PARTITION   START 
> (date '2009-03-01') INCLUSIVE   END 
> (date '2009-04-01') EXCLUSIVE;
> The newly added partition with bucket number 16 is not consistent with the 
> parent partition.





[jira] [Updated] (HAWQ-1036) Support user impersonation in HAWQ

2016-08-31 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1036:

Assignee: Goden Yao  (was: Lei Chang)

> Support user impersonation in HAWQ
> --
>
> Key: HAWQ-1036
> URL: https://issues.apache.org/jira/browse/HAWQ-1036
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF, Security
>Reporter: Alastair "Bell" Turner
>Assignee: Goden Yao
> Fix For: backlog
>
> Attachments: HAWQ_Impersonation_rationale.txt
>
>
> Currently HAWQ executes all queries as the user running the HAWQ process or 
> the user running the PXF process, not as the user who issued the query via 
> ODBC/JDBC/... This restricts the options available for integrating with 
> existing security defined in HDFS, Hive, etc.
> Impersonation provides an alternative Ranger integration (as discussed in 
> HAWQ-256 ) for consistent security across HAWQ, HDFS, Hive...





[jira] [Updated] (HAWQ-1037) modify way to get HDFS port in TestHawqRegister

2016-08-31 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1037:

Fix Version/s: backlog

> modify way to get HDFS port in TestHawqRegister
> ---
>
> Key: HAWQ-1037
> URL: https://issues.apache.org/jira/browse/HAWQ-1037
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Tests
>Reporter: Chunling Wang
>Assignee: Chunling Wang
> Fix For: backlog
>
>
> In test TestHawqRegister, the HDFS port is hard-coded. Now we get the HDFS 
> port from HdfsConfig.





[jira] [Resolved] (HAWQ-1006) Fix RPM compliance in Redhat Satellite

2016-08-29 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao resolved HAWQ-1006.
-
Resolution: Fixed

> Fix RPM compliance in Redhat Satellite
> --
>
> Key: HAWQ-1006
> URL: https://issues.apache.org/jira/browse/HAWQ-1006
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build, PXF
>Reporter: Goden Yao
>Assignee: Oleksandr Diachenko
> Fix For: 2.0.1.0-incubating
>
>
> Current Package name: apache-tomcat-7.0.62-.noarch
>  
> Installed Package Info:
> {code}
> [root@gbthadoop1x ~]# rpm -qi apache-tomcat.noarch
> Name: apache-tomcatRelocations: (not relocatable)
> Version : 7.0.62Vendor: Apache HAWQ Incubating
> Release :   Build Date: Thu 18 Feb 2016 
> 05:17:05 PM EST
> Install Date: Mon 08 Aug 2016 02:23:58 PM EDT  Build Host: shivram
> Group   : (none)   Source RPM: apache-tomcat-7.0.62--src.rpm
> Size: 13574438 License: ASL 2.0
> Signature   : (none)
> Packager: shivram
> URL :
> Summary : Apache Tomcat RPM
> Description :
> {code}
>  
> This is what an installed package from big-top tomcat.
> {code}
> Package Name: bigtop-tomcat-6.0.41-1.el6.noarch
>  
> Installed Package info:
> [root@gbthadoop1x ~]# rpm -qi bigtop-tomcat.noarch
> Name: bigtop-tomcatRelocations: (not relocatable)
> Version : 6.0.41Vendor: (none)
> Release : 1.el6 Build Date: Tue 31 Mar 2015 
> 05:17:15 PM EDT
> Install Date: Fri 15 Jul 2016 10:25:00 AM EDT  Build Host: 
> ip-10-0-0-90.ec2.internal
> Group   : Development/Libraries Source RPM: 
> bigtop-tomcat-6.0.41-1.el6.src.rpm
> Size: 6398489  License: ASL 2.0
> Signature   : RSA/SHA1, Tue 31 Mar 2015 07:14:29 PM EDT, Key ID 
> b9733a7a07513cad
> URL : http://tomcat.apache.org/
> Summary : Apache Tomcat
> Description :
> Apache Tomcat is an open source software implementation of the
> Java Servlet and JavaServer Pages technologies.
> {code}
> We need to remove packager-specific info and add release info, group info, 
> and a description where missing.
> **Other RPMs to fix**
> * pxf-hbase-3.0.0-22126.noarch.rpm - the current name; .el6 is missing from 
> the RPM name
> * pxf-hdfs-3.0.0-22126.noarch.rpm - the current name; .el6 is missing from 
> the RPM name
> * pxf-hive-3.0.0-22126.noarch.rpm - the current name; .el6 is missing from 
> the RPM name
> * pxf-service-3.0.0-22126.noarch.rpm - the current name; .el6 is missing from 
> the RPM name
> * pxf-json (not released yet)





[jira] [Commented] (HAWQ-762) Hive aggregation queries through PXF sometimes hang

2016-08-29 Thread Goden Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446608#comment-15446608
 ] 

Goden Yao commented on HAWQ-762:


I've got some other users reporting similar issues and will ask for logs to dig 
deeper.

> Hive aggregation queries through PXF sometimes hang
> ---
>
> Key: HAWQ-762
> URL: https://issues.apache.org/jira/browse/HAWQ-762
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Hcatalog, PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Oleksandr Diachenko
>Assignee: Goden Yao
>  Labels: performance
> Fix For: backlog
>
>
> Reproduce Steps:
> {code}
> select count(*) from hcatalog.default.hivetable;
> {code}
> Sometimes this query hangs, and we see from the PXF logs that the Hive Thrift 
> server cannot be reached from the PXF agent. 
> Meanwhile, users can still access the Hive metastore (through HUE) and 
> execute the same query.
> After a restart of PXF agent, this query goes through without issues.
> *Troubleshooting Guide*
> - check catalina.out (tomcat) and pxf-service.log to see if the query request 
> gets to tomcat/pxf webapp, any exceptions happened during the time window
> - enable {code}log_min_messages=DEBUG2{code} to see at which step the query 
> is stuck
> - try:
> {code}
> curl http:///pxf/ProtocolVersion
> {code}
> where the URI is the hostname or IP of the machine where you installed PXF; 
> the port is usually 51200 if you didn't change it.
> The response you’ll get if PXF service is running OK:
> {code}
> {version: v14}
> {code}





[jira] [Updated] (HAWQ-1030) User hang due to poor spin-lock/LWLock performance under high concurrency

2016-08-29 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1030:

Fix Version/s: 2.0.1.0-incubating

> User hang due to poor spin-lock/LWLock performance under high concurrency
> -
>
> Key: HAWQ-1030
> URL: https://issues.apache.org/jira/browse/HAWQ-1030
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Ming LI
>Assignee: Ming LI
> Fix For: 2.0.1.0-incubating
>
>
> Some clients have recently reported apparent hangs with their applications. 
> In all cases the symptoms were the same:
> * All sessions appear to be hung in LWLockAcquire or Release, specifically 
> s_lock
> * there is a high number of concurrent sessions (close to 100)
> * System is not actually hung; processing normally resumes after some period 
> of time, once all sessions have completed their locking work
> The postgresql developer community has found several issues with performance 
> under high concurrency (> 32 sessions) in the spin-lock mechanism we've 
> inherited in HAWQ. This ultimately has been corrected in 9.5 with a 
> replacement to the spin-lock mechanism and appears to provide a significant 
> boost to query performance.
> The actual fix is in commit: ab5194e6f617a9a9e7aadb3dd1cee948a42d0755
> Only 1 line commit to s_lock.c could help address this and would be easy 
> enough to cherry-pick: b03d196be055450c7260749f17347c2d066b4254





[jira] [Updated] (HAWQ-1027) Book configuration directory needs to be outside of content repo

2016-08-29 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1027:

Fix Version/s: backlog

> Book configuration directory needs to be outside of content repo
> 
>
> Key: HAWQ-1027
> URL: https://issues.apache.org/jira/browse/HAWQ-1027
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: David Yozie
>Assignee: David Yozie
> Fix For: backlog
>
>
> The incubator-hawq-docs repo includes a sample book configuration (hawq-docs) 
> for producing HTML.  Unfortunately, this configuration directory causes 
> problems with the latest middleman that will be used in an upcoming 
> bookbinder release, so it needs to be moved.  
> We can probably get away with putting it in a separate branch for now.





[jira] [Updated] (HAWQ-1028) Add '-d' option for hawq state to be compatible with Ambari

2016-08-29 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1028:

Fix Version/s: 2.0.1.0-incubating

> Add '-d' option for hawq state to be compatible with Ambari
> ---
>
> Key: HAWQ-1028
> URL: https://issues.apache.org/jira/browse/HAWQ-1028
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Command Line Tools
>Reporter: Radar Lei
>Assignee: Radar Lei
> Fix For: 2.0.1.0-incubating
>
>
> Previously we removed the legacy option '-d' '--datadir' from the 'hawq 
> state' command. This option specifies the master data directory, but we never 
> used it in our command line tools.
> Now we have found that this option is used by the current version of Ambari, 
> and removing it causes Ambari's HAWQ status check to fail. So, to be 
> compatible with Ambari, we need to add it back until Ambari no longer uses 
> this option.
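For illustration, a minimal sketch of re-accepting such an option (hypothetical code, not the actual hawq tooling): the parser takes '-d/--datadir' so Ambari's invocation succeeds even though the value is ignored.

```python
import argparse

def build_state_parser():
    # Accept '-d/--datadir' again purely for Ambari compatibility;
    # the value is parsed but deliberately ignored by 'hawq state'.
    parser = argparse.ArgumentParser(prog="hawq state")
    parser.add_argument("-d", "--datadir",
                        help="master data directory (accepted for "
                             "compatibility; unused)")
    return parser
```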





[jira] [Updated] (HAWQ-1021) Need to log for some local_ssh function calls.

2016-08-26 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1021:

Fix Version/s: 2.0.1.0-incubating

> Need to log for some local_ssh function calls.
> --
>
> Key: HAWQ-1021
> URL: https://issues.apache.org/jira/browse/HAWQ-1021
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: hongwu
> Fix For: 2.0.1.0-incubating
>
>
> HAWQ management tools make many calls to the local_ssh() function to run 
> external commands. The function is defined in hawqpylib/hawqlib.py. Many 
> callers do not set a logger, so we do not learn any details about the 
> command's execution. This is annoying when users/developers fail to run some 
> related commands and want to know the root cause quickly.
> Besides, there are two definitions of local_ssh(). Although they are not in 
> the same namespace, it is confusing. We need to rename either one or 
> even both.
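A sketch of the kind of logging this JIRA asks for (the function and logger names are assumed, not the actual hawqpylib code): every call logs the command it runs, and failures log the exit code and stderr so the root cause is visible.

```python
import logging
import subprocess

def local_exec(cmd, logger=None):
    # local_ssh-style helper: run a shell command, always logging what
    # ran; on failure, also log the exit code and captured stderr.
    logger = logger or logging.getLogger("hawq.tools")
    logger.info("running: %s", cmd)
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    if result.returncode != 0:
        logger.error("command failed (rc=%d): %s",
                     result.returncode, result.stderr.strip())
    return result.returncode
```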





[jira] [Updated] (HAWQ-762) Hive aggregation queries through PXF sometimes hang

2016-08-24 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-762:
---
Description: 
Reproduce Steps:
{code}
select count(*) from hcatalog.default.hivetable;
{code}

Sometimes this query hangs, and we see from the PXF logs that the Hive Thrift 
server cannot be reached from the PXF agent. 
Meanwhile, users can still access the Hive metastore (through HUE) and execute 
the same query.

After a restart of PXF agent, this query goes through without issues.

*Troubleshooting Guide*
- check catalina.out (tomcat) and pxf-service.log to see if the query request 
gets to tomcat/pxf webapp, any exceptions happened during the time window
- enable {code}log_min_messages=DEBUG2{code} to see at which step the query is 
stuck
- try:
{code}
curl http:///pxf/ProtocolVersion
{code}
where the URI is the hostname or IP of the machine where you installed PXF; the 
port is usually 51200 if you didn't change it.
The response you’ll get if PXF service is running OK:
{code}
{version: v14}
{code}
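The curl check can also be scripted; here is a hedged Python sketch that probes the endpoint and validates the {version: vNN} response shape (the helper names are mine; the URL layout and default port follow the description above):

```python
import re
import urllib.request

def looks_like_pxf_version(body):
    # A healthy PXF response has the shape "{version: v14}".
    return re.fullmatch(r"\{version:\s*v\d+\}", body.strip()) is not None

def check_pxf(host, port=51200, timeout=5):
    # Probe the PXF ProtocolVersion endpoint; True when the service
    # answers with a version string, False on any network/HTTP error.
    url = "http://%s:%d/pxf/ProtocolVersion" % (host, port)
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return looks_like_pxf_version(resp.read().decode("utf-8", "replace"))
    except OSError:
        return False
```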

  was:
Reproduce Steps:
{code}
select count(*) from hcatalog.default.hivetable;
{code}

sometimes, this query will hang and we see from pxf logs that Hive thrift 
server cannot be connected from PXF agent. 
While users can still visit hive metastore (through HUE) and execute the same 
query.

After a restart of PXF agent, this query goes through without issues.

*Troubleshooting Guide*
- check catalina.out (tomcat) logs to see if the query request gets to tomcat
- enable {code}log_min_messages=DEBUG2{code} to see at which step the query is 
stuck
- try:
{code}
curl http:///pxf/ProtocolVersion
{code}
where URI is the hostname or IP of the machine you installed PXF, port is 
usually 51200 if you didn’t change it.
The response you’ll get if PXF service is running OK:
{code}
{version: v14}
{code}


> Hive aggregation queries through PXF sometimes hang
> ---
>
> Key: HAWQ-762
> URL: https://issues.apache.org/jira/browse/HAWQ-762
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Hcatalog, PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Oleksandr Diachenko
>Assignee: Goden Yao
>  Labels: performance
> Fix For: backlog
>
>
> Reproduce Steps:
> {code}
> select count(*) from hcatalog.default.hivetable;
> {code}
> Sometimes this query hangs, and we see from the PXF logs that the Hive Thrift 
> server cannot be reached from the PXF agent. 
> Meanwhile, users can still access the Hive metastore (through HUE) and 
> execute the same query.
> After a restart of PXF agent, this query goes through without issues.
> *Troubleshooting Guide*
> - check catalina.out (tomcat) and pxf-service.log to see if the query request 
> gets to tomcat/pxf webapp, any exceptions happened during the time window
> - enable {code}log_min_messages=DEBUG2{code} to see at which step the query 
> is stuck
> - try:
> {code}
> curl http:///pxf/ProtocolVersion
> {code}
> where the URI is the hostname or IP of the machine where you installed PXF; 
> the port is usually 51200 if you didn't change it.
> The response you’ll get if PXF service is running OK:
> {code}
> {version: v14}
> {code}





[jira] [Updated] (HAWQ-762) Hive aggregation queries through PXF sometimes hang

2016-08-24 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-762:
---
Description: 
Reproduce Steps:
{code}
select count(*) from hcatalog.default.hivetable;
{code}

Sometimes this query hangs, and we see from the PXF logs that the Hive Thrift 
server cannot be reached from the PXF agent. 
Meanwhile, users can still access the Hive metastore (through HUE) and execute 
the same query.

After a restart of PXF agent, this query goes through without issues.

*Troubleshooting Guide*
- check catalina.out (tomcat) logs to see if the query request gets to tomcat
- enable {code}log_min_messages=DEBUG2{code} to see at which step the query is 
stuck
- try:
{code}
curl http://<URI>:<port>/pxf/ProtocolVersion
{code}
where URI is the hostname or IP of the machine you installed PXF, port is 
usually 51200 if you didn’t change it.
The response you’ll get if PXF service is running OK:
{code}
{version: v14}
{code}

  was:
Reproduce Steps:
{code}
select count(*) from hcatalog.default.hivetable;
{code}

sometimes, this query will hang and we see from pxf logs that Hive thrift 
server cannot be connected from PXF agent. 
While users can still visit hive metastore (through HUE) and execute the same 
query.

After a restart of PXF agent, this query goes through without issues.

**troubleshooting guide**
- check catalina.out (tomcat) logs to see if the query request gets to tomcat
- enable {code}log_min_messages=DEBUG2{code} to see at which step the query is 
stuck
- try:
{code}
curl http://<URI>:<port>/pxf/ProtocolVersion
{code}
where URI is the hostname or IP of the machine you installed PXF, port is 
usually 51200 if you didn’t change it.
The response you’ll get if PXF service is running OK:
{code}
{version: v14}
{code}


> Hive aggregation queries through PXF sometimes hang
> ---
>
> Key: HAWQ-762
> URL: https://issues.apache.org/jira/browse/HAWQ-762
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Hcatalog, PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Oleksandr Diachenko
>Assignee: Goden Yao
>  Labels: performance
> Fix For: backlog
>
>
> Reproduce Steps:
> {code}
> select count(*) from hcatalog.default.hivetable;
> {code}
> Sometimes this query hangs, and the PXF logs show that the Hive Thrift 
> server cannot be reached from the PXF agent, 
> while users can still access the Hive metastore (through HUE) and execute the 
> same query.
> After a restart of PXF agent, this query goes through without issues.
> *Troubleshooting Guide*
> - check catalina.out (tomcat) logs to see if the query request gets to tomcat
> - enable {code}log_min_messages=DEBUG2{code} to see at which step the query 
> is stuck
> - try:
> {code}
> curl http://<URI>:<port>/pxf/ProtocolVersion
> {code}
> where URI is the hostname or IP of the machine you installed PXF, port is 
> usually 51200 if you didn’t change it.
> The response you’ll get if PXF service is running OK:
> {code}
> {version: v14}
> {code}





[jira] [Updated] (HAWQ-762) Hive aggregation queries through PXF sometimes hang

2016-08-24 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-762:
---
Description: 
Reproduce Steps:
{code}
select count(*) from hcatalog.default.hivetable;
{code}

Sometimes this query hangs, and the PXF logs show that the Hive Thrift 
server cannot be reached from the PXF agent, 
while users can still access the Hive metastore (through HUE) and execute the 
same query.

After a restart of PXF agent, this query goes through without issues.

**troubleshooting guide**
- check catalina.out (tomcat) logs to see if the query request gets to tomcat
- enable {code}log_min_messages=DEBUG2{code} to see at which step the query is 
stuck
- try:
{code}
curl http://<URI>:<port>/pxf/ProtocolVersion
{code}
where URI is the hostname or IP of the machine you installed PXF, port is 
usually 51200 if you didn’t change it.
The response you’ll get if PXF service is running OK:
{code}
{version: v14}
{code}

  was:
Reproduce Steps:
{code}
select count(*) from hcatalog.default.hivetable;
{code}

sometimes, this query will hang and we see from pxf logs that Hive thrift 
server cannot be connected from PXF agent. 
While users can still visit hive metastore (through HUE) and execute the same 
query.

After a restart of PXF agent, this query goes through without issues.



> Hive aggregation queries through PXF sometimes hang
> ---
>
> Key: HAWQ-762
> URL: https://issues.apache.org/jira/browse/HAWQ-762
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Hcatalog, PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Oleksandr Diachenko
>Assignee: Goden Yao
>  Labels: performance
> Fix For: backlog
>
>
> Reproduce Steps:
> {code}
> select count(*) from hcatalog.default.hivetable;
> {code}
> Sometimes this query hangs, and the PXF logs show that the Hive Thrift 
> server cannot be reached from the PXF agent, 
> while users can still access the Hive metastore (through HUE) and execute the 
> same query.
> After a restart of PXF agent, this query goes through without issues.
> **troubleshooting guide**
> - check catalina.out (tomcat) logs to see if the query request gets to tomcat
> - enable {code}log_min_messages=DEBUG2{code} to see at which step the query 
> is stuck
> - try:
> {code}
> curl http://<URI>:<port>/pxf/ProtocolVersion
> {code}
> where URI is the hostname or IP of the machine you installed PXF, port is 
> usually 51200 if you didn’t change it.
> The response you’ll get if PXF service is running OK:
> {code}
> {version: v14}
> {code}





[jira] [Commented] (HAWQ-1013) Move HAWQ Ambari plugin to Apache HAWQ

2016-08-24 Thread Goden Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15435475#comment-15435475
 ] 

Goden Yao commented on HAWQ-1013:
-

Thanks for the clarification - in that case, I don't think it's necessary to 
package this Python script into a separate RPM (which is the case today, I 
suppose?).
The Python script should be part of the HAWQ utilities and packaged within the 
HAWQ RPM.

> Move HAWQ Ambari plugin to Apache HAWQ
> --
>
> Key: HAWQ-1013
> URL: https://issues.apache.org/jira/browse/HAWQ-1013
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Ambari
>Reporter: Matt
>Assignee: Alexander Denissov
>  Labels: UX
> Fix For: backlog
>
>
> To add HAWQ and PXF to Ambari, users have to follow certain manual steps. 
> These manual steps include:
> - Adding HAWQ and PXF metainfo.xml files (containing metadata about the 
> service) under the stack to be installed.
> - Adding repositories where the HAWQ and PXF RPMs reside, so that Ambari can 
> use them during installation. This requires updating repoinfo.xml under the 
> stack HAWQ and PXF are being added to.
> - Adding repositories to an existing stack managed by Ambari requires adding 
> the repositories using the Ambari REST endpoint.
> The HAWQ Ambari plugin automates the above steps using a script.





[jira] [Comment Edited] (HAWQ-1013) Move HAWQ Ambari plugin to Apache HAWQ

2016-08-23 Thread Goden Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15433804#comment-15433804
 ] 

Goden Yao edited comment on HAWQ-1013 at 8/23/16 10:56 PM:
---

Can you be more specific about whether this is an effort to move the Ambari 
plugin into the HAWQ repo?
Please also list the source code directory where you intend to put it.


was (Author: godenyao):
can you be more specific if this is an effort to move Ambari plugin into HAWQ 
repo?

> Move HAWQ Ambari plugin to Apache HAWQ
> --
>
> Key: HAWQ-1013
> URL: https://issues.apache.org/jira/browse/HAWQ-1013
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Ambari
>Reporter: Matt
>Assignee: Alexander Denissov
>  Labels: UX
> Fix For: backlog
>
>
> To add HAWQ and PXF to Ambari, users have to follow certain manual steps. 
> These manual steps include:
> - Adding HAWQ and PXF metainfo.xml files (containing metadata about the 
> service) under the stack to be installed.
> - Adding repositories where the HAWQ and PXF RPMs reside, so that Ambari can 
> use them during installation. This requires updating repoinfo.xml under the 
> stack HAWQ and PXF are being added to.
> - Adding repositories to an existing stack managed by Ambari requires adding 
> the repositories using the Ambari REST endpoint.
> The HAWQ Ambari plugin automates the above steps using a script.





[jira] [Commented] (HAWQ-1013) Move HAWQ Ambari plugin to Apache HAWQ

2016-08-23 Thread Goden Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15433804#comment-15433804
 ] 

Goden Yao commented on HAWQ-1013:
-

Can you be more specific about whether this is an effort to move the Ambari 
plugin into the HAWQ repo?

> Move HAWQ Ambari plugin to Apache HAWQ
> --
>
> Key: HAWQ-1013
> URL: https://issues.apache.org/jira/browse/HAWQ-1013
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Ambari
>Reporter: Matt
>Assignee: Alexander Denissov
>  Labels: UX
> Fix For: backlog
>
>
> To add HAWQ and PXF to Ambari, users have to follow certain manual steps. 
> These manual steps include:
> - Adding HAWQ and PXF metainfo.xml files (containing metadata about the 
> service) under the stack to be installed.
> - Adding repositories where the HAWQ and PXF RPMs reside, so that Ambari can 
> use them during installation. This requires updating repoinfo.xml under the 
> stack HAWQ and PXF are being added to.
> - Adding repositories to an existing stack managed by Ambari requires adding 
> the repositories using the Ambari REST endpoint.
> The HAWQ Ambari plugin automates the above steps using a script.





[jira] [Updated] (HAWQ-1013) Add a utility to help users add HAWQ to Ambari

2016-08-23 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1013:

Labels: UX  (was: )

> Add a utility to help users add HAWQ to Ambari
> --
>
> Key: HAWQ-1013
> URL: https://issues.apache.org/jira/browse/HAWQ-1013
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Ambari
>Reporter: Matt
>Assignee: Alexander Denissov
>  Labels: UX
> Fix For: backlog
>
>
> To add HAWQ and PXF to Ambari, users have to follow certain manual steps. 
> These manual steps include:
> - Adding HAWQ and PXF metainfo.xml files (containing metadata about the 
> service) under the stack to be installed.
> - Adding repositories where the HAWQ and PXF RPMs reside, so that Ambari can 
> use them during installation. This requires updating repoinfo.xml under the 
> stack HAWQ and PXF are being added to.
> - Adding repositories to an existing stack managed by Ambari requires adding 
> the repositories using the Ambari REST endpoint.
> The above steps can be automated using a script to improve the user 
> experience while installing HAWQ using Ambari.





[jira] [Updated] (HAWQ-1013) Add a utility to help users add HAWQ to Ambari

2016-08-23 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1013:

Fix Version/s: backlog

> Add a utility to help users add HAWQ to Ambari
> --
>
> Key: HAWQ-1013
> URL: https://issues.apache.org/jira/browse/HAWQ-1013
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Ambari
>Reporter: Matt
>Assignee: Alexander Denissov
> Fix For: backlog
>
>
> To add HAWQ and PXF to Ambari, users have to follow certain manual steps. 
> These manual steps include:
> - Adding HAWQ and PXF metainfo.xml files (containing metadata about the 
> service) under the stack to be installed.
> - Adding repositories where the HAWQ and PXF RPMs reside, so that Ambari can 
> use them during installation. This requires updating repoinfo.xml under the 
> stack HAWQ and PXF are being added to.
> - Adding repositories to an existing stack managed by Ambari requires adding 
> the repositories using the Ambari REST endpoint.
> The above steps can be automated using a script to improve the user 
> experience while installing HAWQ using Ambari.





[jira] [Commented] (HAWQ-1007) Add the pgcrypto code into hawq

2016-08-18 Thread Goden Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15426859#comment-15426859
 ] 

Goden Yao commented on HAWQ-1007:
-

I've converted HAWQ-1010 into a sub-task of this JIRA and updated the release 
to 2.0.0.0-incubating.

> Add the pgcrypto code into hawq
> ---
>
> Key: HAWQ-1007
> URL: https://issues.apache.org/jira/browse/HAWQ-1007
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Paul Guo
>Assignee: Lei Chang
> Fix For: 2.0.0.0-incubating
>
>
> We are using pgcrypto via a hack that dynamically git-clones postgresql and 
> patches the code. This is inefficient for development.
> We do it this way only because of the Apache crypto process:
> http://www.apache.org/dev/crypto.html
> Recently the community decided to go through the Apache crypto process, so 
> now is a good chance to add the code into hawq.





[jira] [Updated] (HAWQ-1010) Add the crypto notice in README

2016-08-18 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1010:

Issue Type: Sub-task  (was: Bug)
Parent: HAWQ-1007

> Add the crypto notice in README
> ---
>
> Key: HAWQ-1010
> URL: https://issues.apache.org/jira/browse/HAWQ-1010
> Project: Apache HAWQ
>  Issue Type: Sub-task
>Reporter: Paul Guo
>Assignee: Lei Chang
> Fix For: 2.0.0.0-incubating
>
>
> According to http://www.apache.org/dev/crypto.html
> A crypto notice is needed in README. We need to add it after we pass the 
> Apache cryptography process.





[jira] [Updated] (HAWQ-1010) Add the crypto notice in README

2016-08-18 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1010:

Fix Version/s: (was: 2.0.1.0-incubating)
   2.0.0.0-incubating

> Add the crypto notice in README
> ---
>
> Key: HAWQ-1010
> URL: https://issues.apache.org/jira/browse/HAWQ-1010
> Project: Apache HAWQ
>  Issue Type: Sub-task
>Reporter: Paul Guo
>Assignee: Lei Chang
> Fix For: 2.0.0.0-incubating
>
>
> According to http://www.apache.org/dev/crypto.html
> A crypto notice is needed in README. We need to add it after we pass the 
> Apache cryptography process.





[jira] [Updated] (HAWQ-1009) Remove requirement of environment value 'MASTER_DATA_DIRECTORY'

2016-08-17 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1009:

Fix Version/s: 2.0.1.0-incubating

> Remove requirement of environment value 'MASTER_DATA_DIRECTORY'
> ---
>
> Key: HAWQ-1009
> URL: https://issues.apache.org/jira/browse/HAWQ-1009
> Project: Apache HAWQ
>  Issue Type: Task
>Reporter: Radar Lei
>Assignee: Radar Lei
> Fix For: 2.0.1.0-incubating
>
>
> Some of the old HAWQ command line tools require the 'MASTER_DATA_DIRECTORY' 
> environment variable to be set. 
> Since we now define 'MASTER_DATA_DIRECTORY' in hawq-site.xml, it no longer 
> needs to be set, so we should remove these requirements to avoid user 
> confusion.





[jira] [Commented] (HAWQ-1007) Add the pgcrypto code into hawq

2016-08-17 Thread Goden Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15425024#comment-15425024
 ] 

Goden Yao commented on HAWQ-1007:
-

We may need to add a task:
INFORM USERS BY INCLUDING A CRYPTO NOTICE IN THE DISTRIBUTION'S README

according to the page.

> Add the pgcrypto code into hawq
> ---
>
> Key: HAWQ-1007
> URL: https://issues.apache.org/jira/browse/HAWQ-1007
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Paul Guo
>Assignee: Lei Chang
> Fix For: 2.0.0.0-incubating
>
>
> We are using pgcrypto via a hack that dynamically git-clones postgresql and 
> patches the code. This is inefficient for development.
> We do it this way only because of the Apache crypto process:
> http://www.apache.org/dev/crypto.html
> Recently the community decided to go through the Apache crypto process, so 
> now is a good chance to add the code into hawq.





[jira] [Updated] (HAWQ-1007) Add the pgcrypto code into hawq

2016-08-16 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1007:

Component/s: Build

> Add the pgcrypto code into hawq
> ---
>
> Key: HAWQ-1007
> URL: https://issues.apache.org/jira/browse/HAWQ-1007
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Paul Guo
>Assignee: Lei Chang
> Fix For: 2.0.0.0-incubating
>
>
> We are using pgcrypto via a hack that dynamically git-clones postgresql and 
> patches the code. This is inefficient for development.
> We do it this way only because of the Apache crypto process:
> http://www.apache.org/dev/crypto.html
> Recently the community decided to go through the Apache crypto process, so 
> now is a good chance to add the code into hawq.





[jira] [Updated] (HAWQ-1007) Add the pgcrypto code into hawq

2016-08-16 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1007:

Fix Version/s: 2.0.0.0-incubating

> Add the pgcrypto code into hawq
> ---
>
> Key: HAWQ-1007
> URL: https://issues.apache.org/jira/browse/HAWQ-1007
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Lei Chang
> Fix For: 2.0.0.0-incubating
>
>
> We are using pgcrypto via a hack that dynamically git-clones postgresql and 
> patches the code. This is inefficient for development.
> We do it this way only because of the Apache crypto process:
> http://www.apache.org/dev/crypto.html
> Recently the community decided to go through the Apache crypto process, so 
> now is a good chance to add the code into hawq.





[jira] [Created] (HAWQ-1006) Fix RPM compliance in Redhat Satellite

2016-08-15 Thread Goden Yao (JIRA)
Goden Yao created HAWQ-1006:
---

 Summary: Fix RPM compliance in Redhat Satellite
 Key: HAWQ-1006
 URL: https://issues.apache.org/jira/browse/HAWQ-1006
 Project: Apache HAWQ
  Issue Type: Task
  Components: Build, PXF
Reporter: Goden Yao
Assignee: Goden Yao


Current Package name: apache-tomcat-7.0.62-.noarch
 
Installed Package Info:
{code}
[root@gbthadoop1x ~]# rpm -qi apache-tomcat.noarch
Name        : apache-tomcat        Relocations: (not relocatable)
Version     : 7.0.62               Vendor: Apache HAWQ Incubating
Release :   Build Date: Thu 18 Feb 2016 
05:17:05 PM EST
Install Date: Mon 08 Aug 2016 02:23:58 PM EDT  Build Host: shivram
Group   : (none)   Source RPM: apache-tomcat-7.0.62--src.rpm
Size: 13574438 License: ASL 2.0
Signature   : (none)
Packager: shivram
URL :
Summary : Apache Tomcat RPM
Description :
{code}
 
This is what an installed package from Bigtop Tomcat looks like:
{code}
Package Name: bigtop-tomcat-6.0.41-1.el6.noarch
 
Installed Package info:

[root@gbthadoop1x ~]# rpm -qi bigtop-tomcat.noarch
Name        : bigtop-tomcat        Relocations: (not relocatable)
Version     : 6.0.41               Vendor: (none)
Release : 1.el6 Build Date: Tue 31 Mar 2015 
05:17:15 PM EDT
Install Date: Fri 15 Jul 2016 10:25:00 AM EDT  Build Host: 
ip-10-0-0-90.ec2.internal
Group   : Development/Libraries Source RPM: 
bigtop-tomcat-6.0.41-1.el6.src.rpm
Size: 6398489  License: ASL 2.0
Signature   : RSA/SHA1, Tue 31 Mar 2015 07:14:29 PM EDT, Key ID b9733a7a07513cad
URL : http://tomcat.apache.org/
Summary : Apache Tomcat
Description :
Apache Tomcat is an open source software implementation of the
Java Servlet and JavaServer Pages technologies.
{code}

We need to remove the packager-specific info and add release, group, and 
description info where missing.

**Other RPMs to fix**
* pxf-hbase-3.0.0-22126.noarch.rpm - current name; .el6 is missing from the 
RPM name
* pxf-hdfs-3.0.0-22126.noarch.rpm - current name; .el6 is missing from the 
RPM name
* pxf-hive-3.0.0-22126.noarch.rpm - current name; .el6 is missing from the 
RPM name
* pxf-service-3.0.0-22126.noarch.rpm - current name; .el6 is missing from the 
RPM name
* pxf-json (not released yet)





[jira] [Updated] (HAWQ-1006) Fix RPM compliance in Redhat Satellite

2016-08-15 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1006:

Fix Version/s: 2.0.1.0-incubating

> Fix RPM compliance in Redhat Satellite
> --
>
> Key: HAWQ-1006
> URL: https://issues.apache.org/jira/browse/HAWQ-1006
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build, PXF
>Reporter: Goden Yao
>Assignee: Goden Yao
> Fix For: 2.0.1.0-incubating
>
>
> Current Package name: apache-tomcat-7.0.62-.noarch
>  
> Installed Package Info:
> {code}
> [root@gbthadoop1x ~]# rpm -qi apache-tomcat.noarch
> Name        : apache-tomcat        Relocations: (not relocatable)
> Version     : 7.0.62               Vendor: Apache HAWQ Incubating
> Release :   Build Date: Thu 18 Feb 2016 
> 05:17:05 PM EST
> Install Date: Mon 08 Aug 2016 02:23:58 PM EDT  Build Host: shivram
> Group   : (none)   Source RPM: apache-tomcat-7.0.62--src.rpm
> Size: 13574438 License: ASL 2.0
> Signature   : (none)
> Packager: shivram
> URL :
> Summary : Apache Tomcat RPM
> Description :
> {code}
>  
> This is what an installed package from Bigtop Tomcat looks like:
> {code}
> Package Name: bigtop-tomcat-6.0.41-1.el6.noarch
>  
> Installed Package info:
> [root@gbthadoop1x ~]# rpm -qi bigtop-tomcat.noarch
> Name        : bigtop-tomcat        Relocations: (not relocatable)
> Version     : 6.0.41               Vendor: (none)
> Release : 1.el6 Build Date: Tue 31 Mar 2015 
> 05:17:15 PM EDT
> Install Date: Fri 15 Jul 2016 10:25:00 AM EDT  Build Host: 
> ip-10-0-0-90.ec2.internal
> Group   : Development/Libraries Source RPM: 
> bigtop-tomcat-6.0.41-1.el6.src.rpm
> Size: 6398489  License: ASL 2.0
> Signature   : RSA/SHA1, Tue 31 Mar 2015 07:14:29 PM EDT, Key ID 
> b9733a7a07513cad
> URL : http://tomcat.apache.org/
> Summary : Apache Tomcat
> Description :
> Apache Tomcat is an open source software implementation of the
> Java Servlet and JavaServer Pages technologies.
> {code}
> We need to remove the packager-specific info and add release, group, and 
> description info where missing.
> **Other RPMs to fix**
> * pxf-hbase-3.0.0-22126.noarch.rpm - current name; .el6 is missing from the 
> RPM name
> * pxf-hdfs-3.0.0-22126.noarch.rpm - current name; .el6 is missing from the 
> RPM name
> * pxf-hive-3.0.0-22126.noarch.rpm - current name; .el6 is missing from the 
> RPM name
> * pxf-service-3.0.0-22126.noarch.rpm - current name; .el6 is missing from 
> the RPM name
> * pxf-json (not released yet)





[jira] [Updated] (HAWQ-1006) Fix RPM compliance in Redhat Satellite

2016-08-15 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1006:

Assignee: Oleksandr Diachenko  (was: Goden Yao)

> Fix RPM compliance in Redhat Satellite
> --
>
> Key: HAWQ-1006
> URL: https://issues.apache.org/jira/browse/HAWQ-1006
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build, PXF
>Reporter: Goden Yao
>Assignee: Oleksandr Diachenko
> Fix For: 2.0.1.0-incubating
>
>
> Current Package name: apache-tomcat-7.0.62-.noarch
>  
> Installed Package Info:
> {code}
> [root@gbthadoop1x ~]# rpm -qi apache-tomcat.noarch
> Name        : apache-tomcat        Relocations: (not relocatable)
> Version     : 7.0.62               Vendor: Apache HAWQ Incubating
> Release :   Build Date: Thu 18 Feb 2016 
> 05:17:05 PM EST
> Install Date: Mon 08 Aug 2016 02:23:58 PM EDT  Build Host: shivram
> Group   : (none)   Source RPM: apache-tomcat-7.0.62--src.rpm
> Size: 13574438 License: ASL 2.0
> Signature   : (none)
> Packager: shivram
> URL :
> Summary : Apache Tomcat RPM
> Description :
> {code}
>  
> This is what an installed package from Bigtop Tomcat looks like:
> {code}
> Package Name: bigtop-tomcat-6.0.41-1.el6.noarch
>  
> Installed Package info:
> [root@gbthadoop1x ~]# rpm -qi bigtop-tomcat.noarch
> Name        : bigtop-tomcat        Relocations: (not relocatable)
> Version     : 6.0.41               Vendor: (none)
> Release : 1.el6 Build Date: Tue 31 Mar 2015 
> 05:17:15 PM EDT
> Install Date: Fri 15 Jul 2016 10:25:00 AM EDT  Build Host: 
> ip-10-0-0-90.ec2.internal
> Group   : Development/Libraries Source RPM: 
> bigtop-tomcat-6.0.41-1.el6.src.rpm
> Size: 6398489  License: ASL 2.0
> Signature   : RSA/SHA1, Tue 31 Mar 2015 07:14:29 PM EDT, Key ID 
> b9733a7a07513cad
> URL : http://tomcat.apache.org/
> Summary : Apache Tomcat
> Description :
> Apache Tomcat is an open source software implementation of the
> Java Servlet and JavaServer Pages technologies.
> {code}
> We need to remove the packager-specific info and add release, group, and 
> description info where missing.
> **Other RPMs to fix**
> * pxf-hbase-3.0.0-22126.noarch.rpm - current name; .el6 is missing from the 
> RPM name
> * pxf-hdfs-3.0.0-22126.noarch.rpm - current name; .el6 is missing from the 
> RPM name
> * pxf-hive-3.0.0-22126.noarch.rpm - current name; .el6 is missing from the 
> RPM name
> * pxf-service-3.0.0-22126.noarch.rpm - current name; .el6 is missing from 
> the RPM name
> * pxf-json (not released yet)





[jira] [Commented] (HAWQ-762) Hive aggregation queries through PXF sometimes hang

2016-08-15 Thread Goden Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421493#comment-15421493
 ] 

Goden Yao commented on HAWQ-762:


[~michael.andre.pearce] - I heard the Kerberos issue has been solved. Is this 
issue still present after that? Please let me know.

> Hive aggregation queries through PXF sometimes hang
> ---
>
> Key: HAWQ-762
> URL: https://issues.apache.org/jira/browse/HAWQ-762
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Hcatalog, PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Oleksandr Diachenko
>Assignee: Goden Yao
>  Labels: performance
> Fix For: backlog
>
>
> Reproduce Steps:
> {code}
> select count(*) from hcatalog.default.hivetable;
> {code}
> Sometimes this query hangs, and the PXF logs show that the Hive Thrift 
> server cannot be reached from the PXF agent, 
> while users can still access the Hive metastore (through HUE) and execute the 
> same query.
> After a restart of PXF agent, this query goes through without issues.





[jira] [Resolved] (HAWQ-743) RPM conflict between apache-tomcat and pxf-service

2016-08-12 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao resolved HAWQ-743.

Resolution: Fixed

> RPM conflict between apache-tomcat and pxf-service
> --
>
> Key: HAWQ-743
> URL: https://issues.apache.org/jira/browse/HAWQ-743
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.0.0.0-incubating
>Reporter: Zhanwei Wang
>Assignee: Shivram Mani
> Fix For: 2.0.1.0-incubating
>
>
> {code}
> ==
>  Package 架构   
> 版本源   
> 大小
> ==
> 正在安装:
>  pxf-service noarch   
>   3.0.0-22126 HDB 
> 212 k
> 事务概要
> ==
> 安装  1 软件包
> 总计:212 k
> 安装大小:371 k
> Is this ok [y/d/N]: y
> Downloading packages:
> Running transaction check
> Running transaction test
> Transaction check error:
>   file /opt/pivotal from install of pxf-service-0:3.0.0-22126.noarch 
> conflicts with file from package apache-tomcat-0:7.0.62-.noarch
> {code}





[jira] [Commented] (HAWQ-743) RPM conflict between apache-tomcat and pxf-service

2016-08-12 Thread Goden Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15419410#comment-15419410
 ] 

Goden Yao commented on HAWQ-743:


https://github.com/apache/incubator-hawq/pull/738/

> RPM conflict between apache-tomcat and pxf-service
> --
>
> Key: HAWQ-743
> URL: https://issues.apache.org/jira/browse/HAWQ-743
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.0.0.0-incubating
>Reporter: Zhanwei Wang
>Assignee: Shivram Mani
> Fix For: 2.0.1.0-incubating
>
>
> {code}
> ==
>  Package 架构   
> 版本源   
> 大小
> ==
> 正在安装:
>  pxf-service noarch   
>   3.0.0-22126 HDB 
> 212 k
> 事务概要
> ==
> 安装  1 软件包
> 总计:212 k
> 安装大小:371 k
> Is this ok [y/d/N]: y
> Downloading packages:
> Running transaction check
> Running transaction test
> Transaction check error:
>   file /opt/pivotal from install of pxf-service-0:3.0.0-22126.noarch 
> conflicts with file from package apache-tomcat-0:7.0.62-.noarch
> {code}





[jira] [Updated] (HAWQ-1000) Set dummy workfile pointer to NULL after calling ExecWorkFile_Close()

2016-08-11 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-1000:

Fix Version/s: 2.0.1.0-incubating

> Set dummy workfile pointer to NULL after calling ExecWorkFile_Close()
> -
>
> Key: HAWQ-1000
> URL: https://issues.apache.org/jira/browse/HAWQ-1000
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Ming LI
>Assignee: Ming LI
> Fix For: 2.0.1.0-incubating
>
>
> The workfile parameter of ExecWorkFile_Close() is freed inside that 
> function, but the pointer variable still exists in the caller. It needs to 
> be set to NULL immediately after the call; otherwise freed memory may be 
> dereferenced afterward.





[jira] [Updated] (HAWQ-999) Treat hash table as random when file count is not in proportion to bucket number of table.

2016-08-11 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-999:
---
Fix Version/s: 2.0.1.0-incubating

> Treat hash table as random when file count is not in proportion to bucket 
> number of table.
> --
>
> Key: HAWQ-999
> URL: https://issues.apache.org/jira/browse/HAWQ-999
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Core
>Reporter: Hubert Zhang
>Assignee: Hubert Zhang
> Fix For: 2.0.1.0-incubating
>
>
> By definition, the file count of a hash table should be equal to, or a 
> multiple of, the table's bucket number. If a mismatch happens, we should not 
> treat it as a hash table in the data locality algorithm.





[jira] [Updated] (HAWQ-997) HAWQ doesn't send PXF data type with precision

2016-08-11 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-997:
---
Fix Version/s: backlog

> HAWQ doesn't send PXF data type with precision 
> ---
>
> Key: HAWQ-997
> URL: https://issues.apache.org/jira/browse/HAWQ-997
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Goden Yao
> Fix For: backlog
>
>
> HAWQ/PXF sends attribute type information via the REST API using the 
> x-gp-attr-typename header. Attributes such as varchar(3) and char(3) are 
> sent as plain varchar and char, dropping the precision. This causes HAWQ-992.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-996) gpfdist online help instructs user to download HAWQ Loader package from incorrect site

2016-08-10 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-996:
---
Priority: Minor  (was: Major)

> gpfdist online help instructs user to download HAWQ Loader package from 
> incorrect site
> --
>
> Key: HAWQ-996
> URL: https://issues.apache.org/jira/browse/HAWQ-996
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Lisa Owen
>Assignee: Lei Chang
>Priority: Minor
> Fix For: 2.0.1.0-incubating
>
>
> running "gpfdist --help" displays the following incorrect output:
> *
> RUNNING GPFDIST AS A WINDOWS SERVICE
> *
> HAWQ Loaders allow gpfdist to run as a Windows Service.
> Follow the instructions below to download, register and
> activate gpfdist as a service:
> 1. Update your HAWQ Loader package to the latest
>version. This package is available from the
>EMC Download Center (https://emc.subscribenet.com)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-996) gpfdist online help instructs user to download HAWQ Loader package from incorrect site

2016-08-10 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-996:
---
Fix Version/s: 2.0.1.0-incubating

> gpfdist online help instructs user to download HAWQ Loader package from 
> incorrect site
> --
>
> Key: HAWQ-996
> URL: https://issues.apache.org/jira/browse/HAWQ-996
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Lisa Owen
>Assignee: Lei Chang
> Fix For: 2.0.1.0-incubating
>
>
> running "gpfdist --help" displays the following incorrect output:
> *
> RUNNING GPFDIST AS A WINDOWS SERVICE
> *
> HAWQ Loaders allow gpfdist to run as a Windows Service.
> Follow the instructions below to download, register and
> activate gpfdist as a service:
> 1. Update your HAWQ Loader package to the latest
>version. This package is available from the
>EMC Download Center (https://emc.subscribenet.com)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HAWQ-995) Bump PXF version to 3.0.1

2016-08-10 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao resolved HAWQ-995.

Resolution: Implemented

> Bump PXF version to 3.0.1
> -
>
> Key: HAWQ-995
> URL: https://issues.apache.org/jira/browse/HAWQ-995
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: PXF
>Reporter: Goden Yao
>Assignee: Goden Yao
> Fix For: 2.0.1.0-incubating
>
>
> this is to match HAWQ 2.0.1.0 release



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-995) Bump PXF version to 3.0.1

2016-08-10 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao closed HAWQ-995.
--

> Bump PXF version to 3.0.1
> -
>
> Key: HAWQ-995
> URL: https://issues.apache.org/jira/browse/HAWQ-995
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: PXF
>Reporter: Goden Yao
>Assignee: Goden Yao
> Fix For: 2.0.1.0-incubating
>
>
> this is to match HAWQ 2.0.1.0 release



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HAWQ-995) Bump PXF version to 3.0.1.0

2016-08-10 Thread Goden Yao (JIRA)
Goden Yao created HAWQ-995:
--

 Summary: Bump PXF version to 3.0.1.0
 Key: HAWQ-995
 URL: https://issues.apache.org/jira/browse/HAWQ-995
 Project: Apache HAWQ
  Issue Type: Task
  Components: PXF
Reporter: Goden Yao
Assignee: Goden Yao


this is to match HAWQ 2.0.1.0 release



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-995) Bump PXF version to 3.0.1.0

2016-08-10 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-995:
---
Fix Version/s: 2.0.1.0-incubating

> Bump PXF version to 3.0.1.0
> ---
>
> Key: HAWQ-995
> URL: https://issues.apache.org/jira/browse/HAWQ-995
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: PXF
>Reporter: Goden Yao
>Assignee: Goden Yao
> Fix For: 2.0.1.0-incubating
>
>
> this is to match HAWQ 2.0.1.0 release



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-995) Bump PXF version to 3.0.1

2016-08-10 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-995:
---
Summary: Bump PXF version to 3.0.1  (was: Bump PXF version to 3.0.1.0)

> Bump PXF version to 3.0.1
> -
>
> Key: HAWQ-995
> URL: https://issues.apache.org/jira/browse/HAWQ-995
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: PXF
>Reporter: Goden Yao
>Assignee: Goden Yao
> Fix For: 2.0.1.0-incubating
>
>
> this is to match HAWQ 2.0.1.0 release



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-994) PL/R UDF need to be separated from postgres process for robustness

2016-08-10 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-994:
---
Fix Version/s: backlog

> PL/R UDF need to be separated from postgres process for robustness
> --
>
> Key: HAWQ-994
> URL: https://issues.apache.org/jira/browse/HAWQ-994
> Project: Apache HAWQ
>  Issue Type: New Feature
>Reporter: Ming LI
>Assignee: Lei Chang
> Fix For: backlog
>
>
> Background:
> With a previous single-node DB, users always deployed testing code on a 
> separate testing DB. Now the data maintained in HAWQ has grown enormously, 
> so it is hard to deploy a testing HAWQ with the same test data.
> As a result, users run testing UDFs, or deploy UDFs that have not been 
> tested against the whole data set, directly on HAWQ in the production 
> environment, where they may crash in PL/R or R code. Sometimes a poorly 
> written query leads to a postmaster reset, causing all running jobs to be 
> cancelled and rolled back. Customers often see this as a HAWQ issue even 
> when it is a user-code issue. So we need to separate PL/R execution from 
> the postgres process and change the inter-process communication from 
> shared memory to another mechanism (e.g. pipes or sockets).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

