[jira] [Updated] (HAWQ-1644) Switch PXF to support keytab configuration via pxf-env and make delegation token optional

2018-07-31 Thread Shivram Mani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani updated HAWQ-1644:
---
Description: 
1. For secure Hadoop, configure all security-specific settings in pxf-env 
as opposed to pxf-site.

2. Make the delegation token property optional for PXF when used against secure 
Hadoop. A delegation token is a complementary means of authenticating with 
Hadoop alongside Kerberos. There is not much value in using a delegation 
token with the segment-only PXF architecture, as each PXF JVM needs to 
establish the token anyway; simply authenticating with Kerberos should be 
sufficient in such a deployment mode.

  was:
Make the delegation token property optional for PXF when used against secure Hadoop.

A delegation token is a complementary means of authenticating with Hadoop along 
with Kerberos. There is not much value in using a delegation token with the 
segment-only PXF architecture, as each PXF JVM needs to establish the token 
anyway; simply authenticating with Kerberos should be sufficient in such a 
deployment mode.


> Switch PXF to support keytab configuration via pxf-env and make delegation 
> token optional
> -
>
> Key: HAWQ-1644
> URL: https://issues.apache.org/jira/browse/HAWQ-1644
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>Priority: Major
>
> 1. For secure Hadoop, configure all security-specific settings in pxf-env 
> as opposed to pxf-site.
> 2. Make the delegation token property optional for PXF when used against secure 
> Hadoop. A delegation token is a complementary means of authenticating with 
> Hadoop alongside Kerberos. There is not much value in using a delegation 
> token with the segment-only PXF architecture, as each PXF JVM needs to 
> establish the token anyway; simply authenticating with Kerberos should be 
> sufficient in such a deployment mode.
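A minimal sketch of what such keytab settings in pxf-env.sh could look like; the variable names below (PXF_KEYTAB, PXF_PRINCIPAL, PXF_USE_DELEGATION_TOKEN) are illustrative assumptions, not names confirmed by this change:

{code}
# pxf-env.sh -- hypothetical security-related settings for a Kerberized Hadoop cluster
# (names are assumptions for illustration; check the shipped pxf-env template for the real ones)

# Keytab and principal used by the PXF JVM to authenticate with Kerberos
export PXF_KEYTAB="/etc/security/keytabs/pxf.service.keytab"
export PXF_PRINCIPAL="pxf/_HOST@EXAMPLE.COM"

# Hypothetical toggle: skip requesting an HDFS delegation token and rely on the
# Kerberos login alone, since each segment-local PXF JVM performs its own login
export PXF_USE_DELEGATION_TOKEN="false"
{code}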



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1644) Switch PXF to support keytab configuration via pxf-env and make delegation token optional

2018-07-31 Thread Shivram Mani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani updated HAWQ-1644:
---
Summary: Switch PXF to support keytab configuration via pxf-env and make 
delegation token optional  (was: Make delegation token optional for PXF)

> Switch PXF to support keytab configuration via pxf-env and make delegation 
> token optional
> -
>
> Key: HAWQ-1644
> URL: https://issues.apache.org/jira/browse/HAWQ-1644
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>Priority: Major
>
> Make the delegation token property optional for PXF when used against secure Hadoop.
> A delegation token is a complementary means of authenticating with Hadoop 
> along with Kerberos. There is not much value in using a delegation token 
> with the segment-only PXF architecture, as each PXF JVM needs to establish 
> the token anyway; simply authenticating with Kerberos should be sufficient in 
> such a deployment mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1644) Make delegation token optional for PXF

2018-07-31 Thread Shivram Mani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani updated HAWQ-1644:
---
Summary: Make delegation token optional for PXF  (was: Make delegationtoken 
optional for PXF)

> Make delegation token optional for PXF
> --
>
> Key: HAWQ-1644
> URL: https://issues.apache.org/jira/browse/HAWQ-1644
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Ed Espino
>Priority: Major
>
> Make the delegation token property optional for PXF.
> A delegation token is a complementary means of authenticating with Hadoop 
> along with Kerberos. There is not much value in using a delegation token 
> with the segment-only PXF architecture, as each PXF JVM needs to establish 
> the token anyway; simply authenticating with Kerberos should be sufficient in 
> such a deployment mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HAWQ-1644) Make delegation token optional for PXF

2018-07-31 Thread Shivram Mani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani reassigned HAWQ-1644:
--

Assignee: Shivram Mani  (was: Ed Espino)

> Make delegation token optional for PXF
> --
>
> Key: HAWQ-1644
> URL: https://issues.apache.org/jira/browse/HAWQ-1644
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>Priority: Major
>
> Make the delegation token property optional for PXF when used against secure Hadoop.
> A delegation token is a complementary means of authenticating with Hadoop 
> along with Kerberos. There is not much value in using a delegation token 
> with the segment-only PXF architecture, as each PXF JVM needs to establish 
> the token anyway; simply authenticating with Kerberos should be sufficient in 
> such a deployment mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1644) Make delegation token optional for PXF

2018-07-31 Thread Shivram Mani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani updated HAWQ-1644:
---
Description: 
Make the delegation token property optional for PXF when used against secure Hadoop.

A delegation token is a complementary means of authenticating with Hadoop along 
with Kerberos. There is not much value in using a delegation token with the 
segment-only PXF architecture, as each PXF JVM needs to establish the token 
anyway; simply authenticating with Kerberos should be sufficient in such a 
deployment mode.

  was:
Make the delegation token property optional for PXF.

A delegation token is a complementary means of authenticating with Hadoop along 
with Kerberos. There is not much value in using a delegation token with the 
segment-only PXF architecture, as each PXF JVM needs to establish the token 
anyway; simply authenticating with Kerberos should be sufficient in such a 
deployment mode.


> Make delegation token optional for PXF
> --
>
> Key: HAWQ-1644
> URL: https://issues.apache.org/jira/browse/HAWQ-1644
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Ed Espino
>Priority: Major
>
> Make the delegation token property optional for PXF when used against secure Hadoop.
> A delegation token is a complementary means of authenticating with Hadoop 
> along with Kerberos. There is not much value in using a delegation token 
> with the segment-only PXF architecture, as each PXF JVM needs to establish 
> the token anyway; simply authenticating with Kerberos should be sufficient in 
> such a deployment mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1644) Make delegationtoken optional for PXF

2018-07-31 Thread Shivram Mani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani updated HAWQ-1644:
---
Issue Type: Improvement  (was: Bug)

> Make delegationtoken optional for PXF
> -
>
> Key: HAWQ-1644
> URL: https://issues.apache.org/jira/browse/HAWQ-1644
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Ed Espino
>Priority: Major
>
> Make the delegation token property optional for PXF.
> A delegation token is a complementary means of authenticating with Hadoop 
> along with Kerberos. There is not much value in using a delegation token 
> with the segment-only PXF architecture, as each PXF JVM needs to establish 
> the token anyway; simply authenticating with Kerberos should be sufficient in 
> such a deployment mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (HAWQ-1622) Cache PXF proxy UGI so that cleanup of FileSystem cache doesn't have to be done on each request

2018-07-26 Thread Shivram Mani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani closed HAWQ-1622.
--

> Cache PXF proxy UGI so that cleanup of FileSystem cache doesn't have to be 
> done on each request
> ---
>
> Key: HAWQ-1622
> URL: https://issues.apache.org/jira/browse/HAWQ-1622
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Alexander Denissov
>Assignee: Lav Jain
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> Closing PXF proxy UGIs on each request (implemented in HAWQ-1621) slows down 
> PXF request response time significantly when several threads work 
> concurrently as it locks FileSystem cache and holds the lock while the 
> cleanup of DFSClients is completed.
> This can be avoided by caching the proxy UGI for a given proxy user between 
> requests. Care must be taken to remove the cached entry after some 
> pre-defined TTL if and only if there are no current threads using any 
> FileSystem entries held by the cache. A combination of TTL-based cache with 
> ref-counting might be utilized to achieve this.
>  
> For an example of this, see: 
> https://github.com/apache/oozie/blob/master/core/src/main/java/org/apache/oozie/service/UserGroupInformationService.java
> Caching UGIs might be tricky when Kerberos support is implemented later, see: 
> https://issues.apache.org/jira/browse/HIVE-3098?focusedCommentId=13398979=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-13398979



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HAWQ-1622) Cache PXF proxy UGI so that cleanup of FileSystem cache doesn't have to be done on each request

2018-07-26 Thread Shivram Mani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani resolved HAWQ-1622.

   Resolution: Fixed
Fix Version/s: 2.4.0.0-incubating

The PR has been merged to master.

Based on performance tests, we see a 17x speedup and a 50x reduction in threads.

Closing JIRA...

> Cache PXF proxy UGI so that cleanup of FileSystem cache doesn't have to be 
> done on each request
> ---
>
> Key: HAWQ-1622
> URL: https://issues.apache.org/jira/browse/HAWQ-1622
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Alexander Denissov
>Assignee: Lav Jain
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> Closing PXF proxy UGIs on each request (implemented in HAWQ-1621) slows down 
> PXF request response time significantly when several threads work 
> concurrently as it locks FileSystem cache and holds the lock while the 
> cleanup of DFSClients is completed.
> This can be avoided by caching the proxy UGI for a given proxy user between 
> requests. Care must be taken to remove the cached entry after some 
> pre-defined TTL if and only if there are no current threads using any 
> FileSystem entries held by the cache. A combination of TTL-based cache with 
> ref-counting might be utilized to achieve this.
>  
> For an example of this, see: 
> https://github.com/apache/oozie/blob/master/core/src/main/java/org/apache/oozie/service/UserGroupInformationService.java
> Caching UGIs might be tricky when Kerberos support is implemented later, see: 
> https://issues.apache.org/jira/browse/HIVE-3098?focusedCommentId=13398979=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-13398979



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HAWQ-1614) Make Log error messages more generic

2018-05-16 Thread Shivram Mani (JIRA)
Shivram Mani created HAWQ-1614:
--

 Summary: Make Log error messages more generic
 Key: HAWQ-1614
 URL: https://issues.apache.org/jira/browse/HAWQ-1614
 Project: Apache HAWQ
  Issue Type: Bug
  Components: PXF
Reporter: Shivram Mani
Assignee: Ed Espino


Since the PXF service is used by databases other than HAWQ, make the log 
messages more generic and not specific to HAWQ.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HAWQ-1587) Metadata not being handled correctly with PXF parameter isolation

2018-02-09 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani resolved HAWQ-1587.

Resolution: Fixed

> Metadata not being handled correctly with PXF parameter isolation
> -
>
> Key: HAWQ-1587
> URL: https://issues.apache.org/jira/browse/HAWQ-1587
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> https://issues.apache.org/jira/browse/HAWQ-1581 separated the system params 
> from the user params. installcheck-good reports the following error, which is 
> due to the incorrect handling of the METADATA property in the PXF server:
> {code:java}
>   SELECT * FROM pxf_get_item_fields('Hive', '*');
> ! ERROR:  remote component error (500) from '127.0.0.1:51200':  type  
> Exception report   message   Internal server error. Property 
> METADATA has no value in current requestdescription   The 
> server encountered an internal error that prevented it from fulfilling this 
> request.exception   java.lang.IllegalArgumentException: Internal server 
> error. Property METADATA has no value in current request 
>\d hcatalog.*.*
> + ERROR:  remote component error (500) from '127.0.0.1:51200':  type  
> Exception report   message   Internal server error. Property 
> METADATA has no value in current requestdescription   The 
> server encountered an internal error that prevented it from fulfilling this 
> request.exception   java.lang.IllegalArgumentException: Internal server 
> error. Property METADATA has no value in current request 
>   SELECT * FROM pxf_get_item_fields('Hive', '*abc*abc*');
> ! ERROR:  remote component error (500) from '127.0.0.1:51200':  type  
> Exception report   message   Internal server error. Property 
> METADATA has no value in current requestdescription   The 
> server encountered an internal error that prevented it from fulfilling this 
> request.exception   java.lang.IllegalArgumentException: Internal server 
> error. Property METADATA has no value in current request 
>   \d hcatalog.*abc*.*abc*
> + ERROR:  remote component error (500) from '127.0.0.1:51200':  type  
> Exception report   message   Internal server error. Property 
> METADATA has no value in current requestdescription   The 
> server encountered an internal error that prevented it from fulfilling this 
> request.exception   java.lang.IllegalArgumentException: Internal server 
> error. Property METADATA has no value in current request 
>   \d hcatalog
>   Invalid pattern provided.
>   \d hcatalog.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HAWQ-1587) Metadata not being handled correctly with PXF parameter isolation

2018-02-09 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani reassigned HAWQ-1587:
--

Assignee: Shivram Mani  (was: Ed Espino)

> Metadata not being handled correctly with PXF parameter isolation
> -
>
> Key: HAWQ-1587
> URL: https://issues.apache.org/jira/browse/HAWQ-1587
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> https://issues.apache.org/jira/browse/HAWQ-1581 separated the system params 
> from the user params. installcheck-good reports the following error, which is 
> due to the incorrect handling of the METADATA property in the PXF server:
> {code:java}
>   SELECT * FROM pxf_get_item_fields('Hive', '*');
> ! ERROR:  remote component error (500) from '127.0.0.1:51200':  type  
> Exception report   message   Internal server error. Property 
> METADATA has no value in current requestdescription   The 
> server encountered an internal error that prevented it from fulfilling this 
> request.exception   java.lang.IllegalArgumentException: Internal server 
> error. Property METADATA has no value in current request 
>\d hcatalog.*.*
> + ERROR:  remote component error (500) from '127.0.0.1:51200':  type  
> Exception report   message   Internal server error. Property 
> METADATA has no value in current requestdescription   The 
> server encountered an internal error that prevented it from fulfilling this 
> request.exception   java.lang.IllegalArgumentException: Internal server 
> error. Property METADATA has no value in current request 
>   SELECT * FROM pxf_get_item_fields('Hive', '*abc*abc*');
> ! ERROR:  remote component error (500) from '127.0.0.1:51200':  type  
> Exception report   message   Internal server error. Property 
> METADATA has no value in current requestdescription   The 
> server encountered an internal error that prevented it from fulfilling this 
> request.exception   java.lang.IllegalArgumentException: Internal server 
> error. Property METADATA has no value in current request 
>   \d hcatalog.*abc*.*abc*
> + ERROR:  remote component error (500) from '127.0.0.1:51200':  type  
> Exception report   message   Internal server error. Property 
> METADATA has no value in current requestdescription   The 
> server encountered an internal error that prevented it from fulfilling this 
> request.exception   java.lang.IllegalArgumentException: Internal server 
> error. Property METADATA has no value in current request 
>   \d hcatalog
>   Invalid pattern provided.
>   \d hcatalog.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1587) Metadata not being handled correctly with PXF parameter isolation

2018-02-09 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani updated HAWQ-1587:
---
Description: 
https://issues.apache.org/jira/browse/HAWQ-1581 separated the system params 
from the user params. installcheck-good reports the following error, which is 
due to the incorrect handling of the METADATA property in the PXF server:
{code:java}
  SELECT * FROM pxf_get_item_fields('Hive', '*');
! ERROR:  remote component error (500) from '127.0.0.1:51200':  type  Exception 
report   message   Internal server error. Property METADATA has no 
value in current requestdescription   The server encountered an internal 
error that prevented it from fulfilling this request.exception   
java.lang.IllegalArgumentException: Internal server error. Property 
METADATA has no value in current request 
   \d hcatalog.*.*
+ ERROR:  remote component error (500) from '127.0.0.1:51200':  type  Exception 
report   message   Internal server error. Property METADATA has no 
value in current requestdescription   The server encountered an internal 
error that prevented it from fulfilling this request.exception   
java.lang.IllegalArgumentException: Internal server error. Property 
METADATA has no value in current request 
  SELECT * FROM pxf_get_item_fields('Hive', '*abc*abc*');
! ERROR:  remote component error (500) from '127.0.0.1:51200':  type  Exception 
report   message   Internal server error. Property METADATA has no 
value in current requestdescription   The server encountered an internal 
error that prevented it from fulfilling this request.exception   
java.lang.IllegalArgumentException: Internal server error. Property 
METADATA has no value in current request 
  \d hcatalog.*abc*.*abc*
+ ERROR:  remote component error (500) from '127.0.0.1:51200':  type  Exception 
report   message   Internal server error. Property METADATA has no 
value in current requestdescription   The server encountered an internal 
error that prevented it from fulfilling this request.exception   
java.lang.IllegalArgumentException: Internal server error. Property 
METADATA has no value in current request 
  \d hcatalog
  Invalid pattern provided.
  \d hcatalog.
{code}

  was:
https://issues.apache.org/jira/browse/HAWQ-1581 separated the system params 
from the user params. installcheck-good reports the following error:
{code:java}
  SELECT * FROM pxf_get_item_fields('Hive', '*');
! ERROR:  remote component error (500) from '127.0.0.1:51200':  type  Exception 
report   message   Internal server error. Property METADATA has no 
value in current requestdescription   The server encountered an internal 
error that prevented it from fulfilling this request.exception   
java.lang.IllegalArgumentException: Internal server error. Property 
METADATA has no value in current request 
   \d hcatalog.*.*
+ ERROR:  remote component error (500) from '127.0.0.1:51200':  type  Exception 
report   message   Internal server error. Property METADATA has no 
value in current requestdescription   The server encountered an internal 
error that prevented it from fulfilling this request.exception   
java.lang.IllegalArgumentException: Internal server error. Property 
METADATA has no value in current request 
  SELECT * FROM pxf_get_item_fields('Hive', '*abc*abc*');
! ERROR:  remote component error (500) from '127.0.0.1:51200':  type  Exception 
report   message   Internal server error. Property METADATA has no 
value in current requestdescription   The server encountered an internal 
error that prevented it from fulfilling this request.exception   
java.lang.IllegalArgumentException: Internal server error. Property 
METADATA has no value in current request 
  \d hcatalog.*abc*.*abc*
+ ERROR:  remote component error (500) from '127.0.0.1:51200':  type  Exception 
report   message   Internal server error. Property METADATA has no 
value in current requestdescription   The server encountered an internal 
error that prevented it from fulfilling this request.exception   
java.lang.IllegalArgumentException: Internal server error. Property 
METADATA has no value in current request 
  \d hcatalog
  Invalid pattern provided.
  \d hcatalog.
{code}


> Metadata not being handled correctly with PXF parameter isolation
> -
>
> Key: HAWQ-1587
> URL: https://issues.apache.org/jira/browse/HAWQ-1587
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Ed Espino
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> https://issues.apache.org/jira/browse/HAWQ-1581 separated the system params 
> from the user params. installcheck-good reports the following error which is 
> due to the incorrect handling of the METADATA 

[jira] [Created] (HAWQ-1587) Metadata not being handled correctly with PXF parameter isolation

2018-02-09 Thread Shivram Mani (JIRA)
Shivram Mani created HAWQ-1587:
--

 Summary: Metadata not being handled correctly with PXF parameter 
isolation
 Key: HAWQ-1587
 URL: https://issues.apache.org/jira/browse/HAWQ-1587
 Project: Apache HAWQ
  Issue Type: Bug
  Components: PXF
Reporter: Shivram Mani
Assignee: Ed Espino
 Fix For: 2.3.0.0-incubating


https://issues.apache.org/jira/browse/HAWQ-1581 separated the system params 
from the user params. installcheck-good reports the following error:
{code:java}
  SELECT * FROM pxf_get_item_fields('Hive', '*');
! ERROR:  remote component error (500) from '127.0.0.1:51200':  type  Exception 
report   message   Internal server error. Property METADATA has no 
value in current requestdescription   The server encountered an internal 
error that prevented it from fulfilling this request.exception   
java.lang.IllegalArgumentException: Internal server error. Property 
METADATA has no value in current request 
   \d hcatalog.*.*
+ ERROR:  remote component error (500) from '127.0.0.1:51200':  type  Exception 
report   message   Internal server error. Property METADATA has no 
value in current requestdescription   The server encountered an internal 
error that prevented it from fulfilling this request.exception   
java.lang.IllegalArgumentException: Internal server error. Property 
METADATA has no value in current request 
  SELECT * FROM pxf_get_item_fields('Hive', '*abc*abc*');
! ERROR:  remote component error (500) from '127.0.0.1:51200':  type  Exception 
report   message   Internal server error. Property METADATA has no 
value in current requestdescription   The server encountered an internal 
error that prevented it from fulfilling this request.exception   
java.lang.IllegalArgumentException: Internal server error. Property 
METADATA has no value in current request 
  \d hcatalog.*abc*.*abc*
+ ERROR:  remote component error (500) from '127.0.0.1:51200':  type  Exception 
report   message   Internal server error. Property METADATA has no 
value in current requestdescription   The server encountered an internal 
error that prevented it from fulfilling this request.exception   
java.lang.IllegalArgumentException: Internal server error. Property 
METADATA has no value in current request 
  \d hcatalog
  Invalid pattern provided.
  \d hcatalog.
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HAWQ-1581) Separate PXF system parameters from user configurable visible parameters

2018-02-06 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani resolved HAWQ-1581.

   Resolution: Fixed
Fix Version/s: 2.4.0.0-incubating

> Separate PXF system parameters from user configurable visible parameters
> 
>
> Key: HAWQ-1581
> URL: https://issues.apache.org/jira/browse/HAWQ-1581
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> We need to modify our system such that user-configurable options are kept 
> distinct from the internal parameters. Custom parameters are configured 
> in the {{LOCATION}} section of the external table DDL and are exposed to the PXF 
> server with an {{X-GP-}} prefix.
> {{X-GP-USER}} is an internal parameter used to set the user information. When 
> the DDL has a custom parameter named {{user}}, it ends up updating X-GP-USER 
> to also include the user configured in the DDL {{LOCATION}}. This causes the JDBC 
> connector to fail.
> We will instead use {{X-GP-OPTIONS-}} as the prefix for all user-configurable 
> parameters to keep them isolated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HAWQ-1581) Separate PXF system parameters from user configurable visible parameters

2018-01-23 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani reassigned HAWQ-1581:
--

Assignee: Shivram Mani  (was: Ed Espino)

> Separate PXF system parameters from user configurable visible parameters
> 
>
> Key: HAWQ-1581
> URL: https://issues.apache.org/jira/browse/HAWQ-1581
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>Priority: Major
>
> We need to modify our system such that user-configurable options are kept 
> distinct from the internal parameters. Custom parameters are configured 
> in the {{LOCATION}} section of the external table DDL and are exposed to the PXF 
> server with an {{X-GP-}} prefix.
> {{X-GP-USER}} is an internal parameter used to set the user information. When 
> the DDL has a custom parameter named {{user}}, it ends up updating X-GP-USER 
> to also include the user configured in the DDL {{LOCATION}}. This causes the JDBC 
> connector to fail.
> We will instead use {{X-GP-OPTIONS-}} as the prefix for all user-configurable 
> parameters to keep them isolated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HAWQ-1581) Separate PXF system parameters from user configurable visible parameters

2018-01-23 Thread Shivram Mani (JIRA)
Shivram Mani created HAWQ-1581:
--

 Summary: Separate PXF system parameters from user configurable 
visible parameters
 Key: HAWQ-1581
 URL: https://issues.apache.org/jira/browse/HAWQ-1581
 Project: Apache HAWQ
  Issue Type: Bug
  Components: PXF
Reporter: Shivram Mani
Assignee: Ed Espino


We need to modify our system such that user-configurable options are kept 
distinct from the internal parameters. Custom parameters are configured in 
the {{LOCATION}} section of the external table DDL and are exposed to the PXF 
server with an {{X-GP-}} prefix.

{{X-GP-USER}} is an internal parameter used to set the user information. When 
the DDL has a custom parameter named {{user}}, it ends up updating X-GP-USER to 
also include the user configured in the DDL {{LOCATION}}. This causes the JDBC 
connector to fail.

We will instead use {{X-GP-OPTIONS-}} as the prefix for all user-configurable 
parameters to keep them isolated.
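A rough sketch of the intended header separation, shown as an HTTP request to a local PXF instance. Only the {{X-GP-USER}} and {{X-GP-OPTIONS-}} names and the port 51200 come from this archive; the REST path and the other headers are assumptions for illustration:

{code}
# Hypothetical request illustrating the namespacing (endpoint path is illustrative only).
# The internal identity stays in X-GP-USER, while a DDL option named "user"
# travels under the X-GP-OPTIONS- prefix and can no longer clobber it.
curl -s "http://127.0.0.1:51200/pxf/v15/Fragmenter/getFragments" \
  -H "X-GP-USER: gpadmin" \
  -H "X-GP-OPTIONS-USER: jdbc_app_user" \
  -H "X-GP-DATA-DIR: /data/example"
{code}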



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HAWQ-1579) x When you enable pxf DEBUG logging you might get annoying exceptions

2018-01-16 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani resolved HAWQ-1579.

   Resolution: Fixed
Fix Version/s: 2.4.0.0-incubating

> x When you enable pxf DEBUG logging you might get annoying exceptions
> -
>
> Key: HAWQ-1579
> URL: https://issues.apache.org/jira/browse/HAWQ-1579
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: External Tables
>Reporter: Dmitriy Dorofeev
>Assignee: Shivram Mani
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> When you enable DEBUG logging you might get annoying exceptions:
>  
> {{SEVERE: The RuntimeException could not be mapped to a response, re-throwing 
> to the HTTP container java.lang.NullPointerException at 
> java.lang.String.(String.java:566) at 
> org.apache.hawq.pxf.service.FragmentsResponseFormatter.printList(FragmentsResponseFormatter.java:147)
>  at 
> org.apache.hawq.pxf.service.FragmentsResponseFormatter.formatResponse(FragmentsResponseFormatter.java:54)
>  at 
> org.apache.hawq.pxf.service.rest.FragmenterResource.getFragments(FragmenterResource.java:88)
>  }}
> {{ }}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1579) x When you enable pxf DEBUG logging you might get annoying exceptions

2018-01-16 Thread Shivram Mani (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328021#comment-16328021
 ] 

Shivram Mani commented on HAWQ-1579:


Merged to master 
https://github.com/apache/incubator-hawq/commit/0d620e431026834dd70c9e0d63edf8bb28b38227

> x When you enable pxf DEBUG logging you might get annoying exceptions
> -
>
> Key: HAWQ-1579
> URL: https://issues.apache.org/jira/browse/HAWQ-1579
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: External Tables
>Reporter: Dmitriy Dorofeev
>Assignee: Radar Lei
>Priority: Major
>
> When you enable DEBUG logging you might get annoying exceptions:
>  
> {{SEVERE: The RuntimeException could not be mapped to a response, re-throwing 
> to the HTTP container java.lang.NullPointerException at 
> java.lang.String.(String.java:566) at 
> org.apache.hawq.pxf.service.FragmentsResponseFormatter.printList(FragmentsResponseFormatter.java:147)
>  at 
> org.apache.hawq.pxf.service.FragmentsResponseFormatter.formatResponse(FragmentsResponseFormatter.java:54)
>  at 
> org.apache.hawq.pxf.service.rest.FragmenterResource.getFragments(FragmenterResource.java:88)
>  }}
> {{ }}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HAWQ-1579) x When you enable pxf DEBUG logging you might get annoying exceptions

2018-01-16 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani reassigned HAWQ-1579:
--

Assignee: Shivram Mani  (was: Radar Lei)

> x When you enable pxf DEBUG logging you might get annoying exceptions
> -
>
> Key: HAWQ-1579
> URL: https://issues.apache.org/jira/browse/HAWQ-1579
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: External Tables
>Reporter: Dmitriy Dorofeev
>Assignee: Shivram Mani
>Priority: Major
>
> When you enable DEBUG logging you might get annoying exceptions:
>  
> {{SEVERE: The RuntimeException could not be mapped to a response, re-throwing 
> to the HTTP container java.lang.NullPointerException at 
> java.lang.String.(String.java:566) at 
> org.apache.hawq.pxf.service.FragmentsResponseFormatter.printList(FragmentsResponseFormatter.java:147)
>  at 
> org.apache.hawq.pxf.service.FragmentsResponseFormatter.formatResponse(FragmentsResponseFormatter.java:54)
>  at 
> org.apache.hawq.pxf.service.rest.FragmenterResource.getFragments(FragmenterResource.java:88)
>  }}
> {{ }}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HAWQ-1571) Support Hive's OpenCSVSerde

2017-12-12 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani reassigned HAWQ-1571:
--

Assignee: Shivram Mani  (was: Ed Espino)

> Support Hive's OpenCSVSerde
> ---
>
> Key: HAWQ-1571
> URL: https://issues.apache.org/jira/browse/HAWQ-1571
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>
> Support reading Hive tables created with OpenCSVSerde



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HAWQ-1571) Support Hive's OpenCSVSerde

2017-12-12 Thread Shivram Mani (JIRA)
Shivram Mani created HAWQ-1571:
--

 Summary: Support Hive's OpenCSVSerde
 Key: HAWQ-1571
 URL: https://issues.apache.org/jira/browse/HAWQ-1571
 Project: Apache HAWQ
  Issue Type: Bug
  Components: PXF
Reporter: Shivram Mani
Assignee: Ed Espino


Support reading Hive tables created with OpenCSVSerde



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1543) PXF port,heap size etc should be configurable through pxf-env

2017-11-06 Thread Shivram Mani (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16240723#comment-16240723
 ] 

Shivram Mani commented on HAWQ-1543:


Introduced PXF_JVM_OPTS, which allows users to configure the heap size. The 
default maximum heap allocation is 2G and the initial heap allocation is 1G.
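For reference, a pxf-env.sh entry matching those defaults might look like the following; the -Xmx/-Xms values mirror the 2G/1G defaults mentioned above, and the exact line is a sketch rather than the shipped template:

{code}
# pxf-env.sh -- JVM options picked up by the PXF service on start
# 2G max heap / 1G initial heap, per the defaults described above
export PXF_JVM_OPTS="-Xmx2g -Xms1g"
{code}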

> PXF port,heap size etc should be configurable through pxf-env
> -
>
> Key: HAWQ-1543
> URL: https://issues.apache.org/jira/browse/HAWQ-1543
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>Priority: Trivial
> Fix For: 2.3.0.0-incubating
>
>
> Currently the configs in pxf-env take effect in the PXF service only upon pxf 
> init. They should also be consumed upon pxf start.
> Also introduce a configuration for the PXF heap size.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HAWQ-1543) PXF port,heap size etc should be configurable through pxf-env

2017-11-03 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani resolved HAWQ-1543.

   Resolution: Fixed
Fix Version/s: 2.3.0.0-incubating

> PXF port,heap size etc should be configurable through pxf-env
> -
>
> Key: HAWQ-1543
> URL: https://issues.apache.org/jira/browse/HAWQ-1543
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
> Fix For: 2.3.0.0-incubating
>
>
> Currently the configs in pxf-env take effect in the PXF service only upon pxf 
> init. They should also be consumed upon pxf start.
> Also introduce a configuration for the PXF heap size.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HAWQ-1543) PXF port,heap size etc should be configurable through pxf-env

2017-11-02 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani reassigned HAWQ-1543:
--

Assignee: Shivram Mani  (was: Ed Espino)

> PXF port,heap size etc should be configurable through pxf-env
> -
>
> Key: HAWQ-1543
> URL: https://issues.apache.org/jira/browse/HAWQ-1543
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>
> Currently the configs in pxf-env take effect in the PXF service only upon pxf 
> init. They should also be consumed upon pxf start.
> Also introduce a configuration for the PXF heap size.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1543) PXF port,heap size etc should be configurable through pxf-env

2017-11-02 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani updated HAWQ-1543:
---
Issue Type: Improvement  (was: Bug)

> PXF port,heap size etc should be configurable through pxf-env
> -
>
> Key: HAWQ-1543
> URL: https://issues.apache.org/jira/browse/HAWQ-1543
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Ed Espino
>
> Currently the configs in pxf-env take effect in the PXF service only upon pxf 
> init. They should also be consumed upon pxf start.
> Also introduce a configuration for the PXF heap size.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HAWQ-1543) PXF port,heap size etc should be configurable through pxf-env

2017-11-02 Thread Shivram Mani (JIRA)
Shivram Mani created HAWQ-1543:
--

 Summary: PXF port,heap size etc should be configurable through 
pxf-env
 Key: HAWQ-1543
 URL: https://issues.apache.org/jira/browse/HAWQ-1543
 Project: Apache HAWQ
  Issue Type: Bug
  Components: PXF
Reporter: Shivram Mani
Assignee: Ed Espino


Currently the configs in pxf-env take effect in the PXF service only upon pxf 
init. They should also be consumed upon pxf start.
Also introduce a configuration for the PXF heap size.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (HAWQ-1535) PXF make install should also generate non versioned jars

2017-11-02 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani closed HAWQ-1535.
--

> PXF make install should also generate non versioned jars
> 
>
> Key: HAWQ-1535
> URL: https://issues.apache.org/jira/browse/HAWQ-1535
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> Applications that intend to use PXF wouldn't necessarily be aware of the 
> version number of the PXF component jars (e.g. pxf-service, pxf-hdfs, etc.). It 
> would be useful if the make install command generated non-versioned symlinks to 
> the versioned jars so that client applications can use non-versioned PXF jars.
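A sketch of the kind of symlink 'make install' could generate; the version number below is purely illustrative:

{code}
# Hypothetical example: expose versioned PXF jars under stable, unversioned names
cd "$PXF_HOME/lib"
for jar in pxf-*-3.3.0.0.jar; do           # 3.3.0.0 is an illustrative version
  ln -sf "$jar" "${jar%-3.3.0.0.jar}.jar"  # e.g. pxf-service-3.3.0.0.jar -> pxf-service.jar
done
{code}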



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (HAWQ-1539) PXF Support for HBase 1.2.0

2017-11-02 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani closed HAWQ-1539.
--

> PXF Support for HBase 1.2.0
> ---
>
> Key: HAWQ-1539
> URL: https://issues.apache.org/jira/browse/HAWQ-1539
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Ed Espino
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> As part of HBase 1.2.x when PXF connects to hbase we see the following error 
> due to the missing metrics-core jar.
> {code}
> SEVERE: The exception contained within MappableContainerException could not 
> be mapped to a response, re-throwing to the HTTP container
> org.apache.hadoop.hbase.MasterNotRunningException: 
> com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: 
> com/yammer/metrics/core/Gauge
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1681)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1701)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1858)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isMasterRunning(ConnectionManager.java:962)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:3030)
> at 
> org.apache.hawq.pxf.plugins.hbase.HBaseDataFragmenter.getFragments(HBaseDataFragmenter.java:85)
> at 
> org.apache.hawq.pxf.service.rest.FragmenterResource.getFragments(FragmenterResource.java:84)
> {code}
> This jar needs to be added to the pxf-private classpath file
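A sketch of the fix described above; the jar location is an assumption that depends on how the local HBase distribution is laid out:

{code}
# Append the HBase metrics-core jar to PXF's private classpath
# (path below is illustrative -- adjust to where your HBase distribution keeps it)
echo "/usr/lib/hbase/lib/metrics-core-*.jar" >> "$PXF_HOME/conf/pxf-private.classpath"
{code}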



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (HAWQ-1297) Make PXF install ready from source code

2017-11-02 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani closed HAWQ-1297.
--

> Make PXF install ready from source code
> ---
>
> Key: HAWQ-1297
> URL: https://issues.apache.org/jira/browse/HAWQ-1297
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>Priority: Major
> Fix For: 2.1.0.0-incubating
>
>
> Currently PXF has certain strict assumptions about the underlying Hadoop 
> distributions and the paths used by those distributions. The goal of this task 
> is to make the PXF build and service scripts usable even from source code without 
> any assumptions about the underlying Hadoop distribution.
> Introduce 'make install' functionality which invokes './gradlew install'. The 
> user will need to configure PXF_HOME to a location intended to serve as the 
> PXF deploy path. If this is not configured, GPHOME/pxf is used as the 
> default deploy path.
> The purpose of this functionality is the following:
> 1. Download Apache Tomcat (tomcatGet) and copy it under $PXF_HOME/apache-tomcat
> 2. Consolidate all PXF component jars under $PXF_HOME/lib
> 3. Consolidate all PXF configuration files under $PXF_HOME/conf
> 4. Consolidate PXF scripts under $PXF_HOME/bin
> 5. Place Tomcat template files under tomcat-templates/
> Following this, the user needs to update the configuration files under 
> $PXF_HOME/conf based on the user environment and Hadoop directory layout. 
> Specifically, update the following files:
> -> pxf-private.classpath
> -> pxf-log4j.properties
> -> pxf-env.sh
> Init PXF
> $PXF_HOME/bin/pxf init
> Start PXF
> $PXF_HOME/bin/pxf start
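Putting the steps above together, the intended workflow from a source checkout looks roughly like this; the PXF_HOME location is the user's choice, shown here with an illustrative path:

{code}
# Illustrative end-to-end flow for building and running PXF from source
export PXF_HOME=/opt/pxf   # deploy location; defaults to $GPHOME/pxf if unset
make install               # invokes ./gradlew install and lays out lib/, conf/, bin/, apache-tomcat/

# Edit $PXF_HOME/conf/pxf-private.classpath, pxf-log4j.properties and pxf-env.sh
# to match the local environment and Hadoop directory layout, then:
$PXF_HOME/bin/pxf init
$PXF_HOME/bin/pxf start
{code}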



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (HAWQ-1541) PXF configs shouldn't be part of pxf webapp

2017-11-02 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani closed HAWQ-1541.
--

> PXF configs shouldn't be part of pxf webapp
> ---
>
> Key: HAWQ-1541
> URL: https://issues.apache.org/jira/browse/HAWQ-1541
> Project: Apache HAWQ
>  Issue Type: Improvement
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> The following config files are currently present as part of the pxf.war file 
> under webapps/pxf/WEB-INF/classes/. 
> {code}
> pxf-log4j.properties pxf-private.classpath
> pxf-profiles-default.xml pxf-profiles.xml pxf-public.classpath
> {code}
> This conflicts with updates from the user in pxf/conf as the webapp picks the 
> internal conf files when pxf restarts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HAWQ-1541) PXF configs shouldn't be part of pxf webapp

2017-11-02 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani resolved HAWQ-1541.

   Resolution: Fixed
Fix Version/s: 2.3.0.0-incubating

> PXF configs shouldn't be part of pxf webapp
> ---
>
> Key: HAWQ-1541
> URL: https://issues.apache.org/jira/browse/HAWQ-1541
> Project: Apache HAWQ
>  Issue Type: Improvement
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> The following config files are currently present as part of the pxf.war file 
> under webapps/pxf/WEB-INF/classes/. 
> {code}
> pxf-log4j.properties pxf-private.classpath
> pxf-profiles-default.xml pxf-profiles.xml pxf-public.classpath
> {code}
> This conflicts with updates from the user in pxf/conf as the webapp picks the 
> internal conf files when pxf restarts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HAWQ-1535) PXF make install should also generate non versioned jars

2017-11-02 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani resolved HAWQ-1535.

   Resolution: Fixed
Fix Version/s: 2.3.0.0-incubating

> PXF make install should also generate non versioned jars
> 
>
> Key: HAWQ-1535
> URL: https://issues.apache.org/jira/browse/HAWQ-1535
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> Applications that intend to use PXF wouldn't necessarily be aware of the 
> version number of the PXF component jars (e.g. pxf-service, pxf-hdfs, etc.). It 
> would be useful if the make install command generated non-versioned symlinks to 
> the versioned jars so that client applications can use non-versioned PXF jars.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HAWQ-1539) PXF Support for HBase 1.2.0

2017-11-02 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani resolved HAWQ-1539.

   Resolution: Fixed
Fix Version/s: 2.3.0.0-incubating

> PXF Support for HBase 1.2.0
> ---
>
> Key: HAWQ-1539
> URL: https://issues.apache.org/jira/browse/HAWQ-1539
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Ed Espino
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> As part of HBase 1.2.x when PXF connects to hbase we see the following error 
> due to the missing metrics-core jar.
> {code}
> SEVERE: The exception contained within MappableContainerException could not 
> be mapped to a response, re-throwing to the HTTP container
> org.apache.hadoop.hbase.MasterNotRunningException: 
> com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: 
> com/yammer/metrics/core/Gauge
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1681)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1701)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1858)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isMasterRunning(ConnectionManager.java:962)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:3030)
> at 
> org.apache.hawq.pxf.plugins.hbase.HBaseDataFragmenter.getFragments(HBaseDataFragmenter.java:85)
> at 
> org.apache.hawq.pxf.service.rest.FragmenterResource.getFragments(FragmenterResource.java:84)
> {code}
> This jar needs to be added to the pxf-private classpath file



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HAWQ-1541) PXF configs shouldn't be part of pxf webapp

2017-10-26 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani reassigned HAWQ-1541:
--

Assignee: Shivram Mani  (was: Radar Lei)

> PXF configs shouldn't be part of pxf webapp
> ---
>
> Key: HAWQ-1541
> URL: https://issues.apache.org/jira/browse/HAWQ-1541
> Project: Apache HAWQ
>  Issue Type: Improvement
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>
> The following config files are currently present as part of the pxf.war file 
> under webapps/pxf/WEB-INF/classes/. 
> {code}
> pxf-log4j.properties pxf-private.classpath
> pxf-profiles-default.xml pxf-profiles.xml pxf-public.classpath
> {code}
> This conflicts with updates from the user in pxf/conf as the webapp picks the 
> internal conf files when pxf restarts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HAWQ-1541) PXF configs shouldn't be part of pxf webapp

2017-10-26 Thread Shivram Mani (JIRA)
Shivram Mani created HAWQ-1541:
--

 Summary: PXF configs shouldn't be part of pxf webapp
 Key: HAWQ-1541
 URL: https://issues.apache.org/jira/browse/HAWQ-1541
 Project: Apache HAWQ
  Issue Type: Improvement
Reporter: Shivram Mani
Assignee: Radar Lei


The following config files are currently present as part of the pxf.war file 
under webapps/pxf/WEB-INF/classes/. 
{code}
pxf-log4j.properties pxf-private.classpath
pxf-profiles-default.xml pxf-profiles.xml pxf-public.classpath
{code}

This conflicts with updates from the user in pxf/conf as the webapp picks the 
internal conf files when pxf restarts.
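As a quick way to check whether a given pxf.war still bundles its own copies of these files, something like the following can be used; the war path is an assumption based on the $PXF_HOME/apache-tomcat layout described elsewhere in this archive:

{code}
# List config files bundled inside the deployed pxf webapp (war path is illustrative)
unzip -l "$PXF_HOME/apache-tomcat/webapps/pxf.war" \
  | grep -E 'pxf-(log4j\.properties|private\.classpath|public\.classpath|profiles.*\.xml)'
{code}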



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1539) PXF Support for HBase 1.2.0

2017-10-18 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani updated HAWQ-1539:
---
Description: 
As part of HBase 1.2.x when PXF connects to hbase we see the following error 
due to the missing metrics-core jar.

{code}
SEVERE: The exception contained within MappableContainerException could not be 
mapped to a response, re-throwing to the HTTP container
org.apache.hadoop.hbase.MasterNotRunningException: 
com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: 
com/yammer/metrics/core/Gauge
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1681)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1701)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1858)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isMasterRunning(ConnectionManager.java:962)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:3030)
at 
org.apache.hawq.pxf.plugins.hbase.HBaseDataFragmenter.getFragments(HBaseDataFragmenter.java:85)
at 
org.apache.hawq.pxf.service.rest.FragmenterResource.getFragments(FragmenterResource.java:84)
{code}

This jar needs to be added to the pxf-private classpath file

  was:
As part of HBase 1.2.x when PXF connects to hbase we see the following error 
due to the missing metrics-core jar.
{code}
SEVERE: The exception contained within MappableContainerException could not be 
mapped to a response, re-throwing to the HTTP container
org.apache.hadoop.hbase.MasterNotRunningException: 
com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: 
com/yammer/metrics/core/Gauge
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1681)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1701)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1858)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isMasterRunning(ConnectionManager.java:962)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:3030)
at 
org.apache.hawq.pxf.plugins.hbase.HBaseDataFragmenter.getFragments(HBaseDataFragmenter.java:85)
at 
org.apache.hawq.pxf.service.rest.FragmenterResource.getFragments(FragmenterResource.java:84)
{code}

This jar needs to be added to the pxf-private classpath file


> PXF Support for HBase 1.2.0
> ---
>
> Key: HAWQ-1539
> URL: https://issues.apache.org/jira/browse/HAWQ-1539
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Ed Espino
>
> As part of HBase 1.2.x when PXF connects to hbase we see the following error 
> due to the missing metrics-core jar.
> {code}
> SEVERE: The exception contained within MappableContainerException could not 
> be mapped to a response, re-throwing to the HTTP container
> org.apache.hadoop.hbase.MasterNotRunningException: 
> com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: 
> com/yammer/metrics/core/Gauge
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1681)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1701)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1858)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isMasterRunning(ConnectionManager.java:962)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:3030)
> at 
> org.apache.hawq.pxf.plugins.hbase.HBaseDataFragmenter.getFragments(HBaseDataFragmenter.java:85)
> at 
> org.apache.hawq.pxf.service.rest.FragmenterResource.getFragments(FragmenterResource.java:84)
> {code}
> This jar needs to be added to the pxf-private classpath file



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1539) PXF Support for HBase 1.2.0

2017-10-18 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani updated HAWQ-1539:
---
Description: 
As part of HBase 1.2.x when PXF connects to hbase we see the following error 
due to the missing metrics-core jar.
{code}
SEVERE: The exception contained within MappableContainerException could not be 
mapped to a response, re-throwing to the HTTP container
org.apache.hadoop.hbase.MasterNotRunningException: 
com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: 
com/yammer/metrics/core/Gauge
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1681)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1701)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1858)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isMasterRunning(ConnectionManager.java:962)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:3030)
at 
org.apache.hawq.pxf.plugins.hbase.HBaseDataFragmenter.getFragments(HBaseDataFragmenter.java:85)
at 
org.apache.hawq.pxf.service.rest.FragmenterResource.getFragments(FragmenterResource.java:84)
{code}

This jar needs to be added to the pxf-private classpath file

  was:
As part of HBase 1.2.x when PXF connects to hbase we see the following error 
due to the missing metrics-core jar.

SEVERE: The exception contained within MappableContainerException could not be 
mapped to a response, re-throwing to the HTTP container
org.apache.hadoop.hbase.MasterNotRunningException: 
com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: 
com/yammer/metrics/core/Gauge
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1681)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1701)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1858)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isMasterRunning(ConnectionManager.java:962)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:3030)
at 
org.apache.hawq.pxf.plugins.hbase.HBaseDataFragmenter.getFragments(HBaseDataFragmenter.java:85)
at 
org.apache.hawq.pxf.service.rest.FragmenterResource.getFragments(FragmenterResource.java:84)


> PXF Support for HBase 1.2.0
> ---
>
> Key: HAWQ-1539
> URL: https://issues.apache.org/jira/browse/HAWQ-1539
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Ed Espino
>
> As part of HBase 1.2.x when PXF connects to hbase we see the following error 
> due to the missing metrics-core jar.
> {code}
> SEVERE: The exception contained within MappableContainerException could not 
> be mapped to a response, re-throwing to the HTTP container
> org.apache.hadoop.hbase.MasterNotRunningException: 
> com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: 
> com/yammer/metrics/core/Gauge
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1681)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1701)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1858)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isMasterRunning(ConnectionManager.java:962)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:3030)
> at 
> org.apache.hawq.pxf.plugins.hbase.HBaseDataFragmenter.getFragments(HBaseDataFragmenter.java:85)
> at 
> org.apache.hawq.pxf.service.rest.FragmenterResource.getFragments(FragmenterResource.java:84)
> {code}
> This jar needs to be added to the pxf-private classpath file



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1539) PXF Support for HBase 1.2.0

2017-10-18 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani updated HAWQ-1539:
---
Summary: PXF Support for HBase 1.2.0  (was: PXF Support for HBase 1.2.0 )

> PXF Support for HBase 1.2.0
> ---
>
> Key: HAWQ-1539
> URL: https://issues.apache.org/jira/browse/HAWQ-1539
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Ed Espino
>
> As part of HBase 1.2.x when PXF connects to hbase we see the following error 
> due to the missing metrics-core jar.
> SEVERE: The exception contained within MappableContainerException could not 
> be mapped to a response, re-throwing to the HTTP container
> org.apache.hadoop.hbase.MasterNotRunningException: 
> com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: 
> com/yammer/metrics/core/Gauge
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1681)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1701)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1858)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isMasterRunning(ConnectionManager.java:962)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:3030)
> at 
> org.apache.hawq.pxf.plugins.hbase.HBaseDataFragmenter.getFragments(HBaseDataFragmenter.java:85)
> at 
> org.apache.hawq.pxf.service.rest.FragmenterResource.getFragments(FragmenterResource.java:84)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1539) PXF Support for HBase 1.2.0

2017-10-18 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani updated HAWQ-1539:
---
Description: 
As part of HBase 1.2.x when PXF connects to hbase we see the following error 
due to the missing metrics-core jar.

SEVERE: The exception contained within MappableContainerException could not be 
mapped to a response, re-throwing to the HTTP container
org.apache.hadoop.hbase.MasterNotRunningException: 
com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: 
com/yammer/metrics/core/Gauge
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1681)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1701)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1858)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isMasterRunning(ConnectionManager.java:962)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:3030)
at 
org.apache.hawq.pxf.plugins.hbase.HBaseDataFragmenter.getFragments(HBaseDataFragmenter.java:85)
at 
org.apache.hawq.pxf.service.rest.FragmenterResource.getFragments(FragmenterResource.java:84)

> PXF Support for HBase 1.2.0 
> 
>
> Key: HAWQ-1539
> URL: https://issues.apache.org/jira/browse/HAWQ-1539
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Ed Espino
>
> As part of HBase 1.2.x when PXF connects to hbase we see the following error 
> due to the missing metrics-core jar.
> SEVERE: The exception contained within MappableContainerException could not 
> be mapped to a response, re-throwing to the HTTP container
> org.apache.hadoop.hbase.MasterNotRunningException: 
> com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: 
> com/yammer/metrics/core/Gauge
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1681)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1701)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1858)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isMasterRunning(ConnectionManager.java:962)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:3030)
> at 
> org.apache.hawq.pxf.plugins.hbase.HBaseDataFragmenter.getFragments(HBaseDataFragmenter.java:85)
> at 
> org.apache.hawq.pxf.service.rest.FragmenterResource.getFragments(FragmenterResource.java:84)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1539) PXF Support for HBase 1.2.0

2017-10-18 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani updated HAWQ-1539:
---
Issue Type: Improvement  (was: Bug)

> PXF Support for HBase 1.2.0 
> 
>
> Key: HAWQ-1539
> URL: https://issues.apache.org/jira/browse/HAWQ-1539
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Ed Espino
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HAWQ-1539) PXF Support for HBase 1.2.0

2017-10-18 Thread Shivram Mani (JIRA)
Shivram Mani created HAWQ-1539:
--

 Summary: PXF Support for HBase 1.2.0 
 Key: HAWQ-1539
 URL: https://issues.apache.org/jira/browse/HAWQ-1539
 Project: Apache HAWQ
  Issue Type: Bug
  Components: PXF
Reporter: Shivram Mani
Assignee: Ed Espino






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HAWQ-1538) Install internal profiles definition file in conf directory

2017-10-12 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani reassigned HAWQ-1538:
--

Assignee: Shivram Mani  (was: Ed Espino)

> Install internal profiles definition file in conf directory
> ---
>
> Key: HAWQ-1538
> URL: https://issues.apache.org/jira/browse/HAWQ-1538
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Shivram Mani
> Fix For: 2.3.0.0-incubating
>
>
> As a user who installs PXF under some directory, I want to have an empty file 
> installed under conf, which I can edit when I want to define my custom 
> profiles.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (HAWQ-1536) pxf-json unit test failure due to json list comparison

2017-10-09 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani closed HAWQ-1536.
--

> pxf-json unit test failure due to json list comparison
> --
>
> Key: HAWQ-1536
> URL: https://issues.apache.org/jira/browse/HAWQ-1536
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
> Fix For: backlog
>
>
> We see the following error while unit testing PXF JSON profile on certain 
> environments.
> {code}
> org.apache.hawq.pxf.plugins.json.parser.PartitionedJsonParserNoSeekTest > 
> testNoSeek FAILED
> org.junit.ComparisonFailure at PartitionedJsonParserNoSeekTest.java:71
> 16 tests completed, 1 failed
> :pxf-json:test FAILED
> {code}
> The reason for the failure is that the list of elements in the JSON data 
> differs from the expected response. In non-GCP environments the parser always 
> returned objects in the same order as they appeared in the raw JSON file. On 
> the GCP environment that doesn't seem to be the case. This could be related 
> to the JVM version or the Jackson library version.
> Will be fixing the unit test to not have any order expectation in the results 
> returned.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HAWQ-1536) pxf-json unit test failure due to json list comparison

2017-10-09 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani resolved HAWQ-1536.

   Resolution: Duplicate
Fix Version/s: backlog

> pxf-json unit test failure due to json list comparison
> --
>
> Key: HAWQ-1536
> URL: https://issues.apache.org/jira/browse/HAWQ-1536
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
> Fix For: backlog
>
>
> We see the following error while unit testing PXF JSON profile on certain 
> environments.
> {code}
> org.apache.hawq.pxf.plugins.json.parser.PartitionedJsonParserNoSeekTest > 
> testNoSeek FAILED
> org.junit.ComparisonFailure at PartitionedJsonParserNoSeekTest.java:71
> 16 tests completed, 1 failed
> :pxf-json:test FAILED
> {code}
> The reason for the failure is that the list of elements in the JSON data 
> differs from the expected response. In non-GCP environments the parser always 
> returned objects in the same order as they appeared in the raw JSON file. On 
> the GCP environment that doesn't seem to be the case. This could be related 
> to the JVM version or the Jackson library version.
> Will be fixing the unit test to not have any order expectation in the results 
> returned.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HAWQ-1536) pxf-json unit test failure due to json list comparison

2017-10-09 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani reassigned HAWQ-1536:
--

Assignee: Shivram Mani  (was: Ed Espino)

> pxf-json unit test failure due to json list comparison
> --
>
> Key: HAWQ-1536
> URL: https://issues.apache.org/jira/browse/HAWQ-1536
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>
> We see the following error while unit testing PXF JSON profile on certain 
> environments.
> {code}
> org.apache.hawq.pxf.plugins.json.parser.PartitionedJsonParserNoSeekTest > 
> testNoSeek FAILED
> org.junit.ComparisonFailure at PartitionedJsonParserNoSeekTest.java:71
> 16 tests completed, 1 failed
> :pxf-json:test FAILED
> {code}
> The reason for the failure is that the list of elements in the JSON data 
> differs from the expected response. In non-GCP environments the parser always 
> returned objects in the same order as they appeared in the raw JSON file. On 
> the GCP environment that doesn't seem to be the case. This could be related 
> to the JVM version or the Jackson library version.
> Will be fixing the unit test to not have any order expectation in the results 
> returned.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HAWQ-1536) pxf-json unit test failure due to json list comparison

2017-10-09 Thread Shivram Mani (JIRA)
Shivram Mani created HAWQ-1536:
--

 Summary: pxf-json unit test failure due to json list comparison
 Key: HAWQ-1536
 URL: https://issues.apache.org/jira/browse/HAWQ-1536
 Project: Apache HAWQ
  Issue Type: Bug
  Components: PXF
Reporter: Shivram Mani
Assignee: Ed Espino


We see the following error while unit testing PXF JSON profile on certain 
environments.
{code}
org.apache.hawq.pxf.plugins.json.parser.PartitionedJsonParserNoSeekTest > 
testNoSeek FAILED
org.junit.ComparisonFailure at PartitionedJsonParserNoSeekTest.java:71

16 tests completed, 1 failed
:pxf-json:test FAILED
{code}
The reason for the failure is that the list of elements in the JSON data 
differs from the expected response. In non-GCP environments the parser always 
returned objects in the same order as they appeared in the raw JSON file. On 
the GCP environment that doesn't seem to be the case. This could be related to 
the JVM version or the Jackson library version.

Will be fixing the unit test to not have any order expectation in the results 
returned.
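
To reproduce just this failure while iterating on the fix, something like the 
following should work (assuming the Gradle wrapper in the pxf directory; the 
exact invocation is illustrative):
{code}
# Illustrative only: re-run only the failing test class
./gradlew :pxf-json:test --tests '*PartitionedJsonParserNoSeekTest'
{code}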



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HAWQ-1535) PXF make install should also generate non versioned jars

2017-10-03 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani reassigned HAWQ-1535:
--

Assignee: Shivram Mani  (was: Ed Espino)

> PXF make install should also generate non versioned jars
> 
>
> Key: HAWQ-1535
> URL: https://issues.apache.org/jira/browse/HAWQ-1535
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>
> Applications that intend to use PXF wouldn't necessarily be aware of the 
> version number of the pxf component jars (e.g. pxf-service, pxf-hdfs, etc.). 
> It would be useful if the make install command generated non-versioned 
> symlinks to the versioned jars so that client applications can use 
> non-versioned pxf jars.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HAWQ-1535) PXF make install should also generate non versioned jars

2017-10-03 Thread Shivram Mani (JIRA)
Shivram Mani created HAWQ-1535:
--

 Summary: PXF make install should also generate non versioned jars
 Key: HAWQ-1535
 URL: https://issues.apache.org/jira/browse/HAWQ-1535
 Project: Apache HAWQ
  Issue Type: Improvement
  Components: PXF
Reporter: Shivram Mani
Assignee: Ed Espino


Applications that intend to use PXF wouldn't necessarily be aware of the 
version number of the pxf component jars (e.g. pxf-service, pxf-hdfs, etc.). It 
would be useful if the make install command generated non-versioned symlinks to 
the versioned jars so that client applications can use non-versioned pxf jars.
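
A minimal sketch of what the install step could do (the install directory and 
the 3.2.1.0 version below are illustrative, not the actual Makefile logic):
{code}
# Illustrative only: create non-versioned symlinks next to the versioned jars
cd /usr/lib/pxf
for jar in pxf-service pxf-hdfs pxf-hbase pxf-hive pxf-json pxf-jdbc; do
    ln -sf ${jar}-3.2.1.0.jar ${jar}.jar
done
{code}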



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1233) Turn PXF into Postgres eXtension Framework

2017-09-14 Thread Shivram Mani (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16166936#comment-16166936
 ] 

Shivram Mani commented on HAWQ-1233:


I think a more viable option would be to make PXF a database-agnostic extension 
framework rather than one catering only to HAWQ or Postgres. Greenplum DB also 
has the use case of leveraging PXF for external Hadoop data connectivity.

> Turn PXF into Postgres eXtension Framework
> --
>
> Key: HAWQ-1233
> URL: https://issues.apache.org/jira/browse/HAWQ-1233
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: PXF
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
>
> Even though PXF was developed as an external tables framework for HAWQ, it 
> seems that it could be an extremely useful standalone service connecting the 
> Postgres ecosystem to the Hadoop ecosystem. 
> This is an umbrella JIRA for exploring a possibility of turning PXF into a 
> Postgres eXtension Framework



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HAWQ-1490) PXF dummy text profile for non-hadoop testing.

2017-07-12 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani resolved HAWQ-1490.

Resolution: Fixed

> PXF dummy text profile for non-hadoop testing.
> --
>
> Key: HAWQ-1490
> URL: https://issues.apache.org/jira/browse/HAWQ-1490
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF, Tests
>Reporter: John Gaskin
>Assignee: Shivram Mani
> Fix For: 2.3.0.0-incubating
>
>
> A user should be able to create a PXF ext table with Demo profile that allows 
> her to check whether PXF service is functional in a given system without 
> relying on any backend data source.
> This would serve as a profile to test PXF service/framework. The demo profile 
> could return some configured static data while invoking all the necessary 
> APIs/functions for the smoke test.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (HAWQ-1490) PXF dummy text profile for non-hadoop testing.

2017-07-12 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani closed HAWQ-1490.
--

> PXF dummy text profile for non-hadoop testing.
> --
>
> Key: HAWQ-1490
> URL: https://issues.apache.org/jira/browse/HAWQ-1490
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF, Tests
>Reporter: John Gaskin
>Assignee: Shivram Mani
> Fix For: 2.3.0.0-incubating
>
>
> A user should be able to create a PXF ext table with Demo profile that allows 
> her to check whether PXF service is functional in a given system without 
> relying on any backend data source.
> This would serve as a profile to test PXF service/framework. The demo profile 
> could return some configured static data while invoking all the necessary 
> APIs/functions for the smoke test.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1490) PXF dummy text profile for non-hadoop testing.

2017-06-27 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani updated HAWQ-1490:
---
Description: 
A user should be able to create a PXF ext table with Demo profile that allows 
her to check whether PXF service is functional in a given system without 
relying on any backend data source.
This would serve as a profile to test PXF service/framework. The demo profile 
could return some configured static data while invoking all the necessary 
APIs/functions for the smoke test.

  was:
A user should be able to create a PXF ext table with Demo profile that allows 
her to check whether PXF service is functional in a given system without 
relying on any backend data source. 
The demo profile could return some configured static data while invoking all 
the necessary APIs/functions for the smoke test.


> PXF dummy text profile for non-hadoop testing.
> --
>
> Key: HAWQ-1490
> URL: https://issues.apache.org/jira/browse/HAWQ-1490
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF, Tests
>Reporter: John Gaskin
>Assignee: Shivram Mani
> Fix For: 2.3.0.0-incubating
>
>
> A user should be able to create a PXF ext table with Demo profile that allows 
> her to check whether PXF service is functional in a given system without 
> relying on any backend data source.
> This would serve as a profile to test PXF service/framework. The demo profile 
> could return some configured static data while invoking all the necessary 
> APIs/functions for the smoke test.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HAWQ-1490) PXF dummy text profile for non-hadoop testing.

2017-06-27 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani reassigned HAWQ-1490:
--

Assignee: Shivram Mani  (was: Vineet Goel)

> PXF dummy text profile for non-hadoop testing.
> --
>
> Key: HAWQ-1490
> URL: https://issues.apache.org/jira/browse/HAWQ-1490
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF, Tests
>Reporter: John Gaskin
>Assignee: Shivram Mani
> Fix For: 2.3.0.0-incubating
>
>
> A user should be able to create a PXF ext table with Demo profile that allows 
> her to check whether PXF service is functional in a given system without 
> relying on any backend data source. The demo profile could return some 
> configured static data while invoking all the necessary APIs/functions for 
> the smoke test. User may wish to modify this preconfigured data file such 
> that the query then returns the modified content of the file



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1484) Spin PXF into a Separate Project for Data Access

2017-06-15 Thread Shivram Mani (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051065#comment-16051065
 ] 

Shivram Mani commented on HAWQ-1484:


[~sirinath] when you say separate project, do you mean a separate top-level 
project with its own repository?

> Spin PXF into a Separate Project for Data Access
> 
>
> Key: HAWQ-1484
> URL: https://issues.apache.org/jira/browse/HAWQ-1484
> Project: Apache HAWQ
>  Issue Type: New Feature
>Reporter: Suminda Dharmasena
>Assignee: Radar Lei
>
> Can PXF be spun off into a separate project where it can be used as a basis 
> for other data access projects?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (HAWQ-1427) PXF JSON Profile lang3 dependency error

2017-04-07 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani closed HAWQ-1427.
--

> PXF JSON Profile lang3 dependency error
> ---
>
> Key: HAWQ-1427
> URL: https://issues.apache.org/jira/browse/HAWQ-1427
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
> Fix For: 2.3.0.0-incubating
>
>
> Issue with querying JSON data with PXF 
> Getting the following error
> ```
> ERROR:  remote component error (500) from '10.101.170.74:51200':  type  
> Exception report   message   java.lang.Exception: 
> java.lang.NoClassDefFoundError: org/apache/commons/lang3/StringUtils
> description   The server encountered an internal error that prevented it from 
> fulfilling this request.exception   javax.servlet.ServletException: 
> java.lang.Exception: java.lang.NoClassDefFoundError: 
> org/apache/commons/lang3/StringUtils (libchurl.c:897)  (seg9 
> AYCAPSU01TS205:4 pid=500623) (dispatcher.c:1801)
> ```
> We solved it by manually adding `/usr/local/lib/commons-lang3-3.1.jar` on 
> Advanced pxf-public-classpath through Ambari. We were also able to reproduce it 
> internally just by following the example from the documentation 
> (http://hdb.docs.pivotal.io/212/hawq/pxf/JsonPXF.html).
> Also noticed the libraries loaded for pxf-service are 2.6 instead:
> ```[root@hdp1 ~]# pgrep -fl pxf-service | cut -f1 -d' ' | xargs lsof -p  | 
> grep commons-lang
> java372246  pxf  memREG8,2   284220  1319168 
> /usr/hdp/2.5.0.0-1245/hadoop/lib/commons-lang-2.6.jar
> java372246  pxf   62r   REG8,2   284220  1319168 
> /usr/hdp/2.5.0.0-1245/hadoop/lib/commons-lang-2.6.jar
> ```
> So for some reason PXF needs `commons-lang3-3.1.jar`?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (HAWQ-1427) PXF JSON Profile lang3 dependency error

2017-04-07 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani resolved HAWQ-1427.

   Resolution: Fixed
Fix Version/s: 2.3.0.0-incubating

Switched to using the available commons.lang instead of lang3

> PXF JSON Profile lang3 dependency error
> ---
>
> Key: HAWQ-1427
> URL: https://issues.apache.org/jira/browse/HAWQ-1427
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
> Fix For: 2.3.0.0-incubating
>
>
> Issue with querying JSON data with PXF 
> Getting the following error
> ```
> ERROR:  remote component error (500) from '10.101.170.74:51200':  type  
> Exception report   message   java.lang.Exception: 
> java.lang.NoClassDefFoundError: org/apache/commons/lang3/StringUtils
> description   The server encountered an internal error that prevented it from 
> fulfilling this request.exception   javax.servlet.ServletException: 
> java.lang.Exception: java.lang.NoClassDefFoundError: 
> org/apache/commons/lang3/StringUtils (libchurl.c:897)  (seg9 
> AYCAPSU01TS205:4 pid=500623) (dispatcher.c:1801)
> ```
> We solved it by manually adding `/usr/local/lib/commons-lang3-3.1.jar` on 
> Advanced pxf-public-classpath through Ambari. We were also able to reproduce it 
> internally just by following the example from the documentation 
> (http://hdb.docs.pivotal.io/212/hawq/pxf/JsonPXF.html).
> Also noticed the libraries loaded for pxf-service are 2.6 instead:
> ```[root@hdp1 ~]# pgrep -fl pxf-service | cut -f1 -d' ' | xargs lsof -p  | 
> grep commons-lang
> java372246  pxf  memREG8,2   284220  1319168 
> /usr/hdp/2.5.0.0-1245/hadoop/lib/commons-lang-2.6.jar
> java372246  pxf   62r   REG8,2   284220  1319168 
> /usr/hdp/2.5.0.0-1245/hadoop/lib/commons-lang-2.6.jar
> ```
> So for some reason PXF needs `commons-lang3-3.1.jar`?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1427) PXF JSON Profile lang3 dependency error

2017-04-07 Thread Shivram Mani (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15961084#comment-15961084
 ] 

Shivram Mani commented on HAWQ-1427:


This class is available as part of the hive-exec jar, which is configured as a 
runtime dependency in pxf-private.classpath. This issue would occur on clusters 
that don't have the Hive rpms installed.
Will switch to using the preexisting org.apache.commons.lang utility instead.
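
A quick way to confirm that the commons-lang jar already on the classpath 
provides the needed class (jar path taken from the lsof output in the issue 
description; the check itself is just an illustration):
{code}
# Illustrative only: verify StringUtils is present in the commons-lang 2.6 jar
# that the pxf-service JVM already loads
unzip -l /usr/hdp/2.5.0.0-1245/hadoop/lib/commons-lang-2.6.jar \
    | grep org/apache/commons/lang/StringUtils
{code}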

> PXF JSON Profile lang3 dependency error
> ---
>
> Key: HAWQ-1427
> URL: https://issues.apache.org/jira/browse/HAWQ-1427
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>
> Issue with querying JSON data with PXF 
> Getting the following error
> ```
> ERROR:  remote component error (500) from '10.101.170.74:51200':  type  
> Exception report   message   java.lang.Exception: 
> java.lang.NoClassDefFoundError: org/apache/commons/lang3/StringUtils
> description   The server encountered an internal error that prevented it from 
> fulfilling this request.exception   javax.servlet.ServletException: 
> java.lang.Exception: java.lang.NoClassDefFoundError: 
> org/apache/commons/lang3/StringUtils (libchurl.c:897)  (seg9 
> AYCAPSU01TS205:4 pid=500623) (dispatcher.c:1801)
> ```
> We solved it by manually adding `/usr/local/lib/commons-lang3-3.1.jar` on 
> Advanced pxf-public-classpath through Ambari. We were also able to reproduce it 
> internally just by following the example from the documentation 
> (http://hdb.docs.pivotal.io/212/hawq/pxf/JsonPXF.html).
> Also noticed the libraries loaded for pxf-service are 2.6 instead:
> ```[root@hdp1 ~]# pgrep -fl pxf-service | cut -f1 -d' ' | xargs lsof -p  | 
> grep commons-lang
> java372246  pxf  memREG8,2   284220  1319168 
> /usr/hdp/2.5.0.0-1245/hadoop/lib/commons-lang-2.6.jar
> java372246  pxf   62r   REG8,2   284220  1319168 
> /usr/hdp/2.5.0.0-1245/hadoop/lib/commons-lang-2.6.jar
> ```
> So for some reason PXF needs `commons-lang3-3.1.jar`?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HAWQ-1427) PXF JSON Profile lang3 dependency error

2017-04-07 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani reassigned HAWQ-1427:
--

Assignee: Shivram Mani  (was: Ed Espino)

> PXF JSON Profile lang3 dependency error
> ---
>
> Key: HAWQ-1427
> URL: https://issues.apache.org/jira/browse/HAWQ-1427
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>
> Issue with querying JSON data with PXF 
> Getting the following error
> ```
> ERROR:  remote component error (500) from '10.101.170.74:51200':  type  
> Exception report   message   java.lang.Exception: 
> java.lang.NoClassDefFoundError: org/apache/commons/lang3/StringUtils
> description   The server encountered an internal error that prevented it from 
> fulfilling this request.exception   javax.servlet.ServletException: 
> java.lang.Exception: java.lang.NoClassDefFoundError: 
> org/apache/commons/lang3/StringUtils (libchurl.c:897)  (seg9 
> AYCAPSU01TS205:4 pid=500623) (dispatcher.c:1801)
> ```
> We solved it by manually adding `/usr/local/lib/commons-lang3-3.1.jar` on 
> Advanced pxf-public-classpath through Ambari. We were also able to reproduce it 
> internally just by following the example from the documentation 
> (http://hdb.docs.pivotal.io/212/hawq/pxf/JsonPXF.html).
> Also noticed the libraries loaded for pxf-service are 2.6 instead:
> ```[root@hdp1 ~]# pgrep -fl pxf-service | cut -f1 -d' ' | xargs lsof -p  | 
> grep commons-lang
> java372246  pxf  memREG8,2   284220  1319168 
> /usr/hdp/2.5.0.0-1245/hadoop/lib/commons-lang-2.6.jar
> java372246  pxf   62r   REG8,2   284220  1319168 
> /usr/hdp/2.5.0.0-1245/hadoop/lib/commons-lang-2.6.jar
> ```
> So for some reason PXF needs `commons-lang3-3.1.jar`?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1427) PXF JSON Profile lang3 dependency error

2017-04-07 Thread Shivram Mani (JIRA)
Shivram Mani created HAWQ-1427:
--

 Summary: PXF JSON Profile lang3 dependency error
 Key: HAWQ-1427
 URL: https://issues.apache.org/jira/browse/HAWQ-1427
 Project: Apache HAWQ
  Issue Type: Bug
  Components: PXF
Reporter: Shivram Mani
Assignee: Ed Espino


Issue with querying JSON data with PXF 
Getting the following error
```
ERROR:  remote component error (500) from '10.101.170.74:51200':  type  
Exception report   message   java.lang.Exception: 
java.lang.NoClassDefFoundError: org/apache/commons/lang3/StringUtils
description   The server encountered an internal error that prevented it from 
fulfilling this request.exception   javax.servlet.ServletException: 
java.lang.Exception: java.lang.NoClassDefFoundError: 
org/apache/commons/lang3/StringUtils (libchurl.c:897)  (seg9 
AYCAPSU01TS205:4 pid=500623) (dispatcher.c:1801)
```
We solved it by manually adding `/usr/local/lib/commons-lang3-3.1.jar` on 
Advanced pxf-public-classpath through Ambari. We were also able to reproduce it 
internally just by following the example from the documentation 
(http://hdb.docs.pivotal.io/212/hawq/pxf/JsonPXF.html).
Also noticed the libraries loaded for pxf-service are 2.6 instead:
```[root@hdp1 ~]# pgrep -fl pxf-service | cut -f1 -d' ' | xargs lsof -p  | grep 
commons-lang
java372246  pxf  memREG8,2   284220  1319168 
/usr/hdp/2.5.0.0-1245/hadoop/lib/commons-lang-2.6.jar
java372246  pxf   62r   REG8,2   284220  1319168 
/usr/hdp/2.5.0.0-1245/hadoop/lib/commons-lang-2.6.jar
```
So for some reason PXF needs `commons-lang3-3.1.jar`?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1421) Improve PXF rpm package name format and dependencies

2017-03-31 Thread Shivram Mani (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951674#comment-15951674
 ] 

Shivram Mani commented on HAWQ-1421:


Just to summarize the discussion within the pull request:
An Apache-compliant rpm will be the default result of running make rpm.
If you need it to be HDP distribution compliant, make sure the HDP rpms (along 
with the virtual rpms) are installed or made available in a repo.
This is to keep the dependency generic across Apache and every other Hadoop 
distribution.
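
One way to sanity-check which dependencies a built rpm ended up with (package 
path copied from the build output further down this thread; the command is 
just an illustration):
{code}
# Illustrative only: list the runtime dependencies recorded in the built rpm
rpm -qpR build/distributions/pxf-hdfs-3.2.1.0-1.el6.noarch.rpm
{code}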

> Improve PXF rpm package name format and dependencies
> 
>
> Key: HAWQ-1421
> URL: https://issues.apache.org/jira/browse/HAWQ-1421
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Build, PXF
>Reporter: Radar Lei
>Assignee: Shivram Mani
> Fix For: 2.2.0.0-incubating
>
>
> If we build pxf rpm package by 'make rpm', we will get below pxf packages:
> {quote}
>   apache-tomcat-7.0.62-el6.noarch.rpm
>   pxf-3.2.1.0-root.el6.noarch.rpm
>   pxf-hbase_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-hdfs_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-hive_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-jdbc_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-json_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-service_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
> {quote}
> These rpm packages have dependencies on Apache Hadoop components only, some 
> other Hadoop distributions can't satisfy them. E.g.:
> {quote}
> rpm -ivh pxf-hdfs_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
> error: Failed dependencies:
>   pxf-service_3_2_1_0 >= 3.2.1.0 is needed by 
> pxf-hdfs_3_2_1_0-0:3.2.1.0-root.el6.noarch
>   hadoop >= 2.7.1 is needed by pxf-hdfs_3_2_1_0-0:3.2.1.0-root.el6.noarch
>   hadoop-mapreduce >= 2.7.1 is needed by 
> pxf-hdfs_3_2_1_0-0:3.2.1.0-root.el6.noarch
> {quote}
> We'd better make the rpm package name format and dependencies better. 
>   1. Remove the version string like '3_2_1_0'.
>   2. Remove the user name from the build environment.
>   3. Consider whether we need to include the apache-tomcat rpm package into 
> the HAWQ rpm release tarball.
>   4. Improve the hard-coded 'el6' string. (This might be optional.)
>   5. Improve the dependencies, including the dependencies between these pxf 
> rpm packages.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HAWQ-1421) Improve PXF rpm package name format and dependencies

2017-03-30 Thread Shivram Mani (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949834#comment-15949834
 ] 

Shivram Mani edited comment on HAWQ-1421 at 3/30/17 9:11 PM:
-

[~rlei] if you need to build the RPM to work with hdp distribution, you will 
need to set the following variable
{code}
export HD=hdp
make rpm
{code}
The default behavior when you run make rpm would be to produce apache hadoop 
compliant rpms.

If the user plans to run on hadoop without rpm, he can simply resort to 
building from PXF source using make install as mentioned in 
https://cwiki.apache.org/confluence/display/HAWQ/PXF+Build+and+Install
Are you suggesting another approach ?


was (Author: shivram):
[~rlei] if you need to build the RPM to work with hdp distribution, you will 
need to set the following variable
{code}
HD=hdp
make rpm
{code}
The default behavior when you run make rpm would be to produce apache hadoop 
compliant rpms.

If the user plans to run on hadoop without rpm, he can simply resort to 
building from PXF source using make install as mentioned in 
https://cwiki.apache.org/confluence/display/HAWQ/PXF+Build+and+Install
Are you suggesting another approach ?

> Improve PXF rpm package name format and dependencies
> 
>
> Key: HAWQ-1421
> URL: https://issues.apache.org/jira/browse/HAWQ-1421
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Build, PXF
>Reporter: Radar Lei
>Assignee: Shivram Mani
> Fix For: 2.2.0.0-incubating
>
>
> If we build pxf rpm package by 'make rpm', we will get below pxf packages:
> {quote}
>   apache-tomcat-7.0.62-el6.noarch.rpm
>   pxf-3.2.1.0-root.el6.noarch.rpm
>   pxf-hbase_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-hdfs_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-hive_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-jdbc_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-json_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-service_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
> {quote}
> These rpm packages have dependencies on Apache Hadoop components only, some 
> other Hadoop distributions can't satisfy them. E.g.:
> {quote}
> rpm -ivh pxf-hdfs_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
> error: Failed dependencies:
>   pxf-service_3_2_1_0 >= 3.2.1.0 is needed by 
> pxf-hdfs_3_2_1_0-0:3.2.1.0-root.el6.noarch
>   hadoop >= 2.7.1 is needed by pxf-hdfs_3_2_1_0-0:3.2.1.0-root.el6.noarch
>   hadoop-mapreduce >= 2.7.1 is needed by 
> pxf-hdfs_3_2_1_0-0:3.2.1.0-root.el6.noarch
> {quote}
> We'd better make the rpm package name format and dependencies better. 
>   1. Remove the version string like '3_2_1_0'.
>   2. Remove the user name from the build environment.
>   3. Consider whether we need to include the apache-tomcat rpm package into 
> the HAWQ rpm release tarball.
>   4. Improve the hard-coded 'el6' string. (This might be optional.)
>   5. Improve the dependencies, including the dependencies between these pxf 
> rpm packages.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1421) Improve PXF rpm package name format and dependencies

2017-03-30 Thread Shivram Mani (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949834#comment-15949834
 ] 

Shivram Mani commented on HAWQ-1421:


[~rlei] if you need to build the RPM to work with hdp distribution, you will 
need to set the following variable
{code}
HD=hdp
make rpm
{code}
The default behavior when you run make rpm would be to produce apache hadoop 
compliant rpms.

If the user plans to run on hadoop without rpm, he can simply resort to 
building from PXF source using make install as mentioned in 
https://cwiki.apache.org/confluence/display/HAWQ/PXF+Build+and+Install
Are you suggesting another approach ?

> Improve PXF rpm package name format and dependencies
> 
>
> Key: HAWQ-1421
> URL: https://issues.apache.org/jira/browse/HAWQ-1421
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Build, PXF
>Reporter: Radar Lei
>Assignee: Shivram Mani
> Fix For: 2.2.0.0-incubating
>
>
> If we build pxf rpm package by 'make rpm', we will get below pxf packages:
> {quote}
>   apache-tomcat-7.0.62-el6.noarch.rpm
>   pxf-3.2.1.0-root.el6.noarch.rpm
>   pxf-hbase_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-hdfs_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-hive_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-jdbc_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-json_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-service_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
> {quote}
> These rpm packages have dependencies on Apache Hadoop components only, some 
> other Hadoop distributions can't satisfy them. E.g.:
> {quote}
> rpm -ivh pxf-hdfs_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
> error: Failed dependencies:
>   pxf-service_3_2_1_0 >= 3.2.1.0 is needed by 
> pxf-hdfs_3_2_1_0-0:3.2.1.0-root.el6.noarch
>   hadoop >= 2.7.1 is needed by pxf-hdfs_3_2_1_0-0:3.2.1.0-root.el6.noarch
>   hadoop-mapreduce >= 2.7.1 is needed by 
> pxf-hdfs_3_2_1_0-0:3.2.1.0-root.el6.noarch
> {quote}
> We'd better make the rpm package name format and dependencies better. 
>   1. Remove the version string like '3_2_1_0'.
>   2. Remove the user name from the build environment.
>   3. Consider whether we need to include the apache-tomcat rpm package into 
> the HAWQ rpm release tarball.
>   4. Improve the hard-coded 'el6' string. (This might be optional.)
>   5. Improve the dependencies, including the dependencies between these pxf 
> rpm packages.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HAWQ-1421) Improve PXF rpm package name format and dependencies

2017-03-29 Thread Shivram Mani (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15948210#comment-15948210
 ] 

Shivram Mani edited comment on HAWQ-1421 at 3/30/17 1:18 AM:
-

The following is the output on running make rpm for PXF
{code}
BUILD SUCCESSFUL
Total time: 48.714 secs
shivram:pxf shivram$ vim build/distributions/pxf-
pxf-3.2.1.0-shivram.el6.noarch.rpm  
pxf-hbase-3.2.1.0-shivram.el6.noarch.rpm
pxf-hdfs-3.2.1.0-shivram.el6.noarch.rpm 
pxf-hive-3.2.1.0-shivram.el6.noarch.rpm 
pxf-jdbc-3.2.1.0-shivram.el6.noarch.rpm 
pxf-json-3.2.1.0-shivram.el6.noarch.rpm 
pxf-service-3.2.1.0-shivram.el6.noarch.rpm
{code}
When the build number is set:
{code}
export BUILD_NUMBER=1
BUILD SUCCESSFUL
Total time: 43.94 secs
shivram:pxf shivram$ vim build/distributions/pxf-
pxf-3.2.1.0-1.el6.noarch.rpm  pxf-hbase-3.2.1.0-1.el6.noarch.rpm
pxf-hdfs-3.2.1.0-1.el6.noarch.rpm pxf-hive-3.2.1.0-1.el6.noarch.rpm 
pxf-jdbc-3.2.1.0-1.el6.noarch.rpm pxf-json-3.2.1.0-1.el6.noarch.rpm 
pxf-service-3.2.1.0-1.el6.noarch.rpm
{code}


was (Author: shivram):
The following is the output on running make rpm for PXF
```
BUILD SUCCESSFUL
Total time: 48.714 secs
shivram:pxf shivram$ vim build/distributions/pxf-
pxf-3.2.1.0-shivram.el6.noarch.rpm  
pxf-hbase-3.2.1.0-shivram.el6.noarch.rpm
pxf-hdfs-3.2.1.0-shivram.el6.noarch.rpm 
pxf-hive-3.2.1.0-shivram.el6.noarch.rpm 
pxf-jdbc-3.2.1.0-shivram.el6.noarch.rpm 
pxf-json-3.2.1.0-shivram.el6.noarch.rpm 
pxf-service-3.2.1.0-shivram.el6.noarch.rpm
```
When the build number is set:
```
export BUILD_NUMBER=1
BUILD SUCCESSFUL
Total time: 43.94 secs
shivram:pxf shivram$ vim build/distributions/pxf-
pxf-3.2.1.0-1.el6.noarch.rpm  pxf-hbase-3.2.1.0-1.el6.noarch.rpm
pxf-hdfs-3.2.1.0-1.el6.noarch.rpm pxf-hive-3.2.1.0-1.el6.noarch.rpm 
pxf-jdbc-3.2.1.0-1.el6.noarch.rpm pxf-json-3.2.1.0-1.el6.noarch.rpm 
pxf-service-3.2.1.0-1.el6.noarch.rpm
```

> Improve PXF rpm package name format and dependencies
> 
>
> Key: HAWQ-1421
> URL: https://issues.apache.org/jira/browse/HAWQ-1421
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Build, PXF
>Reporter: Radar Lei
>Assignee: Shivram Mani
> Fix For: 2.2.0.0-incubating
>
>
> If we build pxf rpm package by 'make rpm', we will get below pxf packages:
> {quote}
>   apache-tomcat-7.0.62-el6.noarch.rpm
>   pxf-3.2.1.0-root.el6.noarch.rpm
>   pxf-hbase_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-hdfs_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-hive_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-jdbc_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-json_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-service_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
> {quote}
> These rpm packages have dependencies on Apache Hadoop components only, some 
> other Hadoop distributions can't satisfy them. E.g.:
> {quote}
> rpm -ivh pxf-hdfs_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
> error: Failed dependencies:
>   pxf-service_3_2_1_0 >= 3.2.1.0 is needed by 
> pxf-hdfs_3_2_1_0-0:3.2.1.0-root.el6.noarch
>   hadoop >= 2.7.1 is needed by pxf-hdfs_3_2_1_0-0:3.2.1.0-root.el6.noarch
>   hadoop-mapreduce >= 2.7.1 is needed by 
> pxf-hdfs_3_2_1_0-0:3.2.1.0-root.el6.noarch
> {quote}
> We'd better make the rpm package name format and dependencies better. 
>   1. Remove the version string like '3_2_1_0'.
>   2. Remove the user name from the build environment.
>   3. Consider whether we need to include the apache-tomcat rpm package into 
> the HAWQ rpm release tarball.
>   4. Improve the hard-coded 'el6' string. (This might be optional.)
>   5. Improve the dependencies, including the dependencies between these pxf 
> rpm packages.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1421) Improve PXF rpm package name format and dependencies

2017-03-29 Thread Shivram Mani (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15948210#comment-15948210
 ] 

Shivram Mani commented on HAWQ-1421:


The following is the output on running make rpm for PXF
```
BUILD SUCCESSFUL
Total time: 48.714 secs
shivram:pxf shivram$ vim build/distributions/pxf-
pxf-3.2.1.0-shivram.el6.noarch.rpm  
pxf-hbase-3.2.1.0-shivram.el6.noarch.rpm
pxf-hdfs-3.2.1.0-shivram.el6.noarch.rpm 
pxf-hive-3.2.1.0-shivram.el6.noarch.rpm 
pxf-jdbc-3.2.1.0-shivram.el6.noarch.rpm 
pxf-json-3.2.1.0-shivram.el6.noarch.rpm 
pxf-service-3.2.1.0-shivram.el6.noarch.rpm
```
When the build number is set:
```
export BUILD_NUMBER=1
BUILD SUCCESSFUL
Total time: 43.94 secs
shivram:pxf shivram$ vim build/distributions/pxf-
pxf-3.2.1.0-1.el6.noarch.rpm  pxf-hbase-3.2.1.0-1.el6.noarch.rpm
pxf-hdfs-3.2.1.0-1.el6.noarch.rpm pxf-hive-3.2.1.0-1.el6.noarch.rpm 
pxf-jdbc-3.2.1.0-1.el6.noarch.rpm pxf-json-3.2.1.0-1.el6.noarch.rpm 
pxf-service-3.2.1.0-1.el6.noarch.rpm
```

> Improve PXF rpm package name format and dependencies
> 
>
> Key: HAWQ-1421
> URL: https://issues.apache.org/jira/browse/HAWQ-1421
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Build, PXF
>Reporter: Radar Lei
>Assignee: Shivram Mani
> Fix For: 2.2.0.0-incubating
>
>
> If we build pxf rpm package by 'make rpm', we will get below pxf packages:
> {quote}
>   apache-tomcat-7.0.62-el6.noarch.rpm
>   pxf-3.2.1.0-root.el6.noarch.rpm
>   pxf-hbase_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-hdfs_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-hive_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-jdbc_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-json_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-service_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
> {quote}
> These rpm packages have dependencies on Apache Hadoop components only, some 
> other Hadoop distributions can't satisfy them. E.g.:
> {quote}
> rpm -ivh pxf-hdfs_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
> error: Failed dependencies:
>   pxf-service_3_2_1_0 >= 3.2.1.0 is needed by 
> pxf-hdfs_3_2_1_0-0:3.2.1.0-root.el6.noarch
>   hadoop >= 2.7.1 is needed by pxf-hdfs_3_2_1_0-0:3.2.1.0-root.el6.noarch
>   hadoop-mapreduce >= 2.7.1 is needed by 
> pxf-hdfs_3_2_1_0-0:3.2.1.0-root.el6.noarch
> {quote}
> We'd better make the rpm package name format and dependencies better. 
>   1. Remove the version string like '3_2_1_0'.
>   2. Remove the user name from the build environment.
>   3. Consider whether we need to include the apache-tomcat rpm package into 
> the HAWQ rpm release tarball.
>   4. Improve the hard-coded 'el6' string. (This might be optional.)
>   5. Improve the dependencies, including the dependencies between these pxf 
> rpm packages.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1421) Improve PXF rpm package name format and dependencies

2017-03-29 Thread Shivram Mani (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15948176#comment-15948176
 ] 

Shivram Mani commented on HAWQ-1421:


[~rlei] this is good feedback.
We can remove the version information from the rpm name.

With regards to the username, we will resort to using the system's user.name 
only if the BUILD_NUMBER variable isn't set. Ideally the BUILD_NUMBER needs to 
be set. As a default I think it is fine if we stick to the username; otherwise 
we will have to default to a hardcoded value of 1, which isn't good.

We do need to include the Apache Tomcat rpm as this is not publicly available; 
Tomcat is only distributed via source.

I don't think we need to change any pxf rpm dependencies. Having a dependency 
on Hadoop rpms works with all well-known Hadoop distributions.

> Improve PXF rpm package name format and dependencies
> 
>
> Key: HAWQ-1421
> URL: https://issues.apache.org/jira/browse/HAWQ-1421
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Build, PXF
>Reporter: Radar Lei
>Assignee: Shivram Mani
> Fix For: 2.2.0.0-incubating
>
>
> If we build pxf rpm package by 'make rpm', we will get below pxf packages:
> {quote}
>   apache-tomcat-7.0.62-el6.noarch.rpm
>   pxf-3.2.1.0-root.el6.noarch.rpm
>   pxf-hbase_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-hdfs_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-hive_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-jdbc_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-json_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-service_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
> {quote}
> These rpm packages have dependencies on Apache Hadoop components only, some 
> other Hadoop distributions can't satisfy them. E.g.:
> {quote}
> rpm -ivh pxf-hdfs_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
> error: Failed dependencies:
>   pxf-service_3_2_1_0 >= 3.2.1.0 is needed by 
> pxf-hdfs_3_2_1_0-0:3.2.1.0-root.el6.noarch
>   hadoop >= 2.7.1 is needed by pxf-hdfs_3_2_1_0-0:3.2.1.0-root.el6.noarch
>   hadoop-mapreduce >= 2.7.1 is needed by 
> pxf-hdfs_3_2_1_0-0:3.2.1.0-root.el6.noarch
> {quote}
> We'd better make the rpm package name format and dependencies better. 
>   1. Remove the version string like '3_2_1_0'.
>   2. Remove the user name from the build environment.
>   3. Consider whether we need to include the apache-tomcat rpm package into 
> the HAWQ rpm release tarball.
>   4. Improve the hard-coded 'el6' string. (This might be optional.)
>   5. Improve the dependencies, including the dependencies between these pxf 
> rpm packages.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HAWQ-1421) Improve PXF rpm package name format and dependencies

2017-03-29 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani reassigned HAWQ-1421:
--

Assignee: Shivram Mani  (was: Vineet Goel)

> Improve PXF rpm package name format and dependencies
> 
>
> Key: HAWQ-1421
> URL: https://issues.apache.org/jira/browse/HAWQ-1421
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Build, PXF
>Reporter: Radar Lei
>Assignee: Shivram Mani
> Fix For: 2.2.0.0-incubating
>
>
> If we build pxf rpm package by 'make rpm', we will get below pxf packages:
> {quote}
>   apache-tomcat-7.0.62-el6.noarch.rpm
>   pxf-3.2.1.0-root.el6.noarch.rpm
>   pxf-hbase_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-hdfs_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-hive_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-jdbc_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-json_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-service_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
> {quote}
> These rpm packages have dependencies on Apache Hadoop components only, some 
> other Hadoop distributions can't satisfy them. E.g.:
> {quote}
> rpm -ivh pxf-hdfs_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
> error: Failed dependencies:
>   pxf-service_3_2_1_0 >= 3.2.1.0 is needed by 
> pxf-hdfs_3_2_1_0-0:3.2.1.0-root.el6.noarch
>   hadoop >= 2.7.1 is needed by pxf-hdfs_3_2_1_0-0:3.2.1.0-root.el6.noarch
>   hadoop-mapreduce >= 2.7.1 is needed by 
> pxf-hdfs_3_2_1_0-0:3.2.1.0-root.el6.noarch
> {quote}
> We'd better make the rpm package name format and dependencies better. 
>   1. Remove the version string like '3_2_1_0'.
>   2. Remove the user name from the build environment.
>   3. Consider whether we need to include the apache-tomcat rpm package into 
> the HAWQ rpm release tarball.
>   4. Improve the hard-coded 'el6' string. (This might be optional.)
>   5. Improve the dependencies, including the dependencies between these pxf 
> rpm packages.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1108) Add JDBC PXF Plugin

2017-03-16 Thread Shivram Mani (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15928264#comment-15928264
 ] 

Shivram Mani commented on HAWQ-1108:


[~michael.andre.pearce] since you are the reporter of this jira, please 
resolve/close it once you verify that the JDBC Plugin feature is enabled.

> Add JDBC PXF Plugin
> ---
>
> Key: HAWQ-1108
> URL: https://issues.apache.org/jira/browse/HAWQ-1108
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Michael Andre Pearce (IG)
>Assignee: Devin Jia
>
> On the back of the work in :
> https://issues.apache.org/jira/browse/HAWQ-779
> We would like to add to Hawq Plugins a JDBC implementation.
> There are currently two noted implementations openly available on GitHub.
> 1) https://github.com/kojec/pxf-field/tree/master/jdbc-pxf-ext
> 2) https://github.com/inspur-insight/pxf-plugin/tree/master/pxf-jdbc
> The latter (2) is an improved version of the former (1) and also what 
> HAWQ-779 changes were to support.
> [~jiadx] would you be happy to contribute the source as apache 2 license open 
> source?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1108) Add JDBC PXF Plugin

2017-03-14 Thread Shivram Mani (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925328#comment-15925328
 ] 

Shivram Mani commented on HAWQ-1108:


No problem. 
We will be marking this as a beta-only feature until we add the required number 
of automation tests.

> Add JDBC PXF Plugin
> ---
>
> Key: HAWQ-1108
> URL: https://issues.apache.org/jira/browse/HAWQ-1108
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Michael Andre Pearce (IG)
>Assignee: Devin Jia
>
> On the back of the work in :
> https://issues.apache.org/jira/browse/HAWQ-779
> We would like to add to Hawq Plugins a JDBC implementation.
> There are currently two noted implementations openly available on GitHub.
> 1) https://github.com/kojec/pxf-field/tree/master/jdbc-pxf-ext
> 2) https://github.com/inspur-insight/pxf-plugin/tree/master/pxf-jdbc
> The latter (2) is an improved version of the former (1) and also what 
> HAWQ-779 changes were to support.
> [~jiadx] would you be happy to contribute the source as apache 2 license open 
> source?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1108) Add JDBC PXF Plugin

2017-03-14 Thread Shivram Mani (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925019#comment-15925019
 ] 

Shivram Mani commented on HAWQ-1108:


I've merged it to master.

> Add JDBC PXF Plugin
> ---
>
> Key: HAWQ-1108
> URL: https://issues.apache.org/jira/browse/HAWQ-1108
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Michael Andre Pearce (IG)
>Assignee: Devin Jia
>
> On the back of the work in :
> https://issues.apache.org/jira/browse/HAWQ-779
> We would like to add to Hawq Plugins a JDBC implementation.
> There are currently two noted implementations openly available on GitHub.
> 1) https://github.com/kojec/pxf-field/tree/master/jdbc-pxf-ext
> 2) https://github.com/inspur-insight/pxf-plugin/tree/master/pxf-jdbc
> The latter (2) is an improved version of the former (1) and also what 
> HAWQ-779 changes were to support.
> [~jiadx] would you be happy to contribute the source as apache 2 license open 
> source?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1108) Add JDBC PXF Plugin

2017-03-14 Thread Shivram Mani (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15924858#comment-15924858
 ] 

Shivram Mani commented on HAWQ-1108:


Kavinder had very clearly suggested that Devin squash his commits into one 
commit to make it easy for us to merge to master while preserving the original 
author of the code.
We can alternatively do this ourselves and try using git hooks to modify the 
author prior to merge.
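
A rough sketch of the "do it ourselves" option (branch name, author string and 
commit message are placeholders, not taken from the actual PR):
{code}
# Illustrative only: squash the contributed branch into a single commit on the
# maintainer side while keeping the original author attribution
git checkout master
git merge --squash contributor-branch
git commit --author="Devin Jia <author@example.com>" \
    -m "HAWQ-1108. Add JDBC PXF Plugin"
{code}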

> Add JDBC PXF Plugin
> ---
>
> Key: HAWQ-1108
> URL: https://issues.apache.org/jira/browse/HAWQ-1108
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Michael Andre Pearce (IG)
>Assignee: Devin Jia
>
> On the back of the work in :
> https://issues.apache.org/jira/browse/HAWQ-779
> We would like to add to Hawq Plugins a JDBC implementation.
> There are currently two noted implementations openly available on GitHub:
> 1) https://github.com/kojec/pxf-field/tree/master/jdbc-pxf-ext
> 2) https://github.com/inspur-insight/pxf-plugin/tree/master/pxf-jdbc
> The latter (2) is an improved version of the former (1) and is also what the 
> HAWQ-779 changes were intended to support.
> [~jiadx] would you be happy to contribute the source as open source under the 
> Apache 2 license?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1108) Add JDBC PXF Plugin

2017-02-22 Thread Shivram Mani (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879609#comment-15879609
 ] 

Shivram Mani commented on HAWQ-1108:


My bad, I was waiting for a notification from you to do a final review. Can you 
rebase it against the latest master and address the javadoc compile issues 
mentioned in the PR? We can merge it as soon as this is done.
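
A minimal sketch of the requested steps (the remote URL and branch name are 
placeholders; the javadoc task assumes the standard Gradle java plugin and that 
the PXF build lives under pxf/):

{code}
# rebase the PR branch on the latest Apache master
git fetch https://github.com/apache/incubator-hawq.git master
git rebase FETCH_HEAD
# reproduce the javadoc compile issues locally before pushing
cd pxf && ./gradlew javadoc
# update the PR (placeholder branch name)
git push --force-with-lease origin jdbc-plugin-branch
{code}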

> Add JDBC PXF Plugin
> ---
>
> Key: HAWQ-1108
> URL: https://issues.apache.org/jira/browse/HAWQ-1108
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Michael Andre Pearce (IG)
>Assignee: Devin Jia
>
> On the back of the work in :
> https://issues.apache.org/jira/browse/HAWQ-779
> We would like to add to Hawq Plugins a JDBC implementation.
> There are currently two noted implementations openly available on GitHub:
> 1) https://github.com/kojec/pxf-field/tree/master/jdbc-pxf-ext
> 2) https://github.com/inspur-insight/pxf-plugin/tree/master/pxf-jdbc
> The latter (2) is an improved version of the former (1) and is also what the 
> HAWQ-779 changes were intended to support.
> [~jiadx] would you be happy to contribute the source as open source under the 
> Apache 2 license?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1346) If using WebHdfsFileSystem as default Filesystem, it will cause a cast type exception

2017-02-22 Thread Shivram Mani (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15879151#comment-15879151
 ] 

Shivram Mani commented on HAWQ-1346:


Awesome. Feel free to submit a pull request. There is value in adding a new 
profile using LineRecordReader for other default FS implementations.

> If using WebHdfsFileSystem as default Filesystem, it will cause a cast type 
> exception
> --
>
> Key: HAWQ-1346
> URL: https://issues.apache.org/jira/browse/HAWQ-1346
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Tian Hong Wang
>Assignee: Shivram Mani
>Priority: Critical
>
> In 
> incubator-hawq/pxf/pxf-hdfs/src/main/java/org/apache/hawq/pxf/plugins/hdfs/ChunkRecordReader.java:
> private DFSInputStream getInputStream() {
>   return (DFSInputStream) (fileIn.getWrappedStream());
>  }
> If using WebHdfsFileSystem as default Filesystem, it will cause a cast type 
> exception from WebHdfsInputStream to DFSInputStream.
> The following is the detailed exception.
> java.lang.ClassCastException: 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$WebHdfsInputStream cannot be 
> cast to org.apache.hadoop.hdfs.DFSInputStream
> at 
> org.apache.hawq.pxf.plugins.hdfs.ChunkRecordReader.getInputStream(ChunkRecordReader.java:76)
> at 
> org.apache.hawq.pxf.plugins.hdfs.ChunkRecordReader.<init>(ChunkRecordReader.java:112)
> at 
> org.apache.hawq.pxf.plugins.hdfs.LineBreakAccessor.getReader(LineBreakAccessor.java:64)
> at 
> org.apache.hawq.pxf.plugins.hdfs.HdfsSplittableDataAccessor.getNextSplit(HdfsSplittableDataAccessor.java:114)
> at 
> org.apache.hawq.pxf.plugins.hdfs.HdfsSplittableDataAccessor.openForRead(HdfsSplittableDataAccessor.java:83)
> at 
> org.apache.hawq.pxf.service.ReadBridge.beginIteration(ReadBridge.java:73)
> at 
> org.apache.hawq.pxf.service.rest.BridgeResource$1.write(BridgeResource.java:132)
> at 
> com.sun.jersey.core.impl.provider.entity.StreamingOutputProvider.writeTo(StreamingOutputProvider.java:71)
> at 
> com.sun.jersey.core.impl.provider.entity.StreamingOutputProvider.writeTo(StreamingOutputProvider.java:57)
> at 
> com.sun.jersey.spi.container.ContainerResponse.write(ContainerResponse.java:306)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1437)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:731)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
> at 
> org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
> at 
> org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:505)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:170)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1346) If using WebHdfsFileSystem as default Filesystem, it will cause a cast type exception

2017-02-21 Thread Shivram Mani (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876624#comment-15876624
 ] 

Shivram Mani commented on HAWQ-1346:


ChunkRecordReader was introduced to improve performance when accessing text 
files. This reader uses DFSInputStream for performance reasons and hence 
wouldn't allow you to use HTTPFSFileSystem or WebHdfsFileSystem as the 
defaultFS.

You can introduce a new PXF profile that uses an Accessor similar to 
LineBreakAccessor but backed by LineRecordReader instead of ChunkRecordReader. 
LineRecordReader doesn't impose this restriction and should work with 
WebHdfsFileSystem (we used to use this in an earlier version of PXF). The only 
downside is that you will not get the same optimized performance as you do 
with ChunkRecordReader.
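
A quick way to check which filesystem implementation your cluster defaults to 
(a sketch, assuming the standard Hadoop client tools are on the PATH):

{code}
# prints e.g. hdfs://namenode:8020 or webhdfs://namenode:50070
hdfs getconf -confKey fs.defaultFS
{code}

If this prints a webhdfs:// URI, the LineBreakAccessor/ChunkRecordReader path 
shown in the stack trace above will fail, and a LineRecordReader-based accessor 
would be the workaround.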

> If using WebHdfsFileSystem as default Filesystem, it will cause a cast type 
> exception
> --
>
> Key: HAWQ-1346
> URL: https://issues.apache.org/jira/browse/HAWQ-1346
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Tian Hong Wang
>Assignee: Shivram Mani
>Priority: Critical
>
> In 
> incubator-hawq/pxf/pxf-hdfs/src/main/java/org/apache/hawq/pxf/plugins/hdfs/ChunkRecordReader.java:
> private DFSInputStream getInputStream() {
>   return (DFSInputStream) (fileIn.getWrappedStream());
>  }
> If using WebHdfsFileSystem as default Filesystem, it will cause a cast type 
> exception from WebHdfsInputStream to DFSInputStream.
> The following is the detailed exception.
> java.lang.ClassCastException: 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$WebHdfsInputStream cannot be 
> cast to org.apache.hadoop.hdfs.DFSInputStream
> at 
> org.apache.hawq.pxf.plugins.hdfs.ChunkRecordReader.getInputStream(ChunkRecordReader.java:76)
> at 
> org.apache.hawq.pxf.plugins.hdfs.ChunkRecordReader.<init>(ChunkRecordReader.java:112)
> at 
> org.apache.hawq.pxf.plugins.hdfs.LineBreakAccessor.getReader(LineBreakAccessor.java:64)
> at 
> org.apache.hawq.pxf.plugins.hdfs.HdfsSplittableDataAccessor.getNextSplit(HdfsSplittableDataAccessor.java:114)
> at 
> org.apache.hawq.pxf.plugins.hdfs.HdfsSplittableDataAccessor.openForRead(HdfsSplittableDataAccessor.java:83)
> at 
> org.apache.hawq.pxf.service.ReadBridge.beginIteration(ReadBridge.java:73)
> at 
> org.apache.hawq.pxf.service.rest.BridgeResource$1.write(BridgeResource.java:132)
> at 
> com.sun.jersey.core.impl.provider.entity.StreamingOutputProvider.writeTo(StreamingOutputProvider.java:71)
> at 
> com.sun.jersey.core.impl.provider.entity.StreamingOutputProvider.writeTo(StreamingOutputProvider.java:57)
> at 
> com.sun.jersey.spi.container.ContainerResponse.write(ContainerResponse.java:306)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1437)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:731)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
> at 
> org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
> at 
> org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:505)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:170)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (HAWQ-1344) Rollback Apache HAWQ version to 2.1.0.0

2017-02-18 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani resolved HAWQ-1344.

Resolution: Fixed

> Rollback Apache HAWQ version to 2.1.0.0
> ---
>
> Key: HAWQ-1344
> URL: https://issues.apache.org/jira/browse/HAWQ-1344
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Shivram Mani
>Assignee: Shivram Mani
> Fix For: 2.1.0.0-incubating
>
>
> Apache HAWQ version which was set to 2.2.0.0 will now move back to its 
> original 2.1.0.0 version
> Set PXF version to 3.2



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (HAWQ-1344) Rollback Apache HAWQ version to 2.1.0.0

2017-02-18 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani closed HAWQ-1344.
--

> Rollback Apache HAWQ version to 2.1.0.0
> ---
>
> Key: HAWQ-1344
> URL: https://issues.apache.org/jira/browse/HAWQ-1344
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Shivram Mani
>Assignee: Shivram Mani
> Fix For: 2.1.0.0-incubating
>
>
> Apache HAWQ version which was set to 2.2.0.0 will now move back to its 
> original 2.1.0.0 version
> Set PXF version to 3.2



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HAWQ-1344) Rollback Apache HAWQ version to 2.1.0.0

2017-02-17 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani updated HAWQ-1344:
---
Description: 
Apache HAWQ version which was set to 2.2.0.0 will now move back to its original 
2.1.0.0 version
Set PXF version to 3.2

  was:Apache HAWQ version which was set to 2.2.0.0 will now move back to its 
original 2.1.0.0 version


> Rollback Apache HAWQ version to 2.1.0.0
> ---
>
> Key: HAWQ-1344
> URL: https://issues.apache.org/jira/browse/HAWQ-1344
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Shivram Mani
>Assignee: Shivram Mani
> Fix For: 2.2.0.0-incubating
>
>
> Apache HAWQ version which was set to 2.2.0.0 will now move back to its 
> original 2.1.0.0 version
> Set PXF version to 3.2



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HAWQ-1344) Rollback Apache HAWQ version to 2.1.0.0

2017-02-17 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani reassigned HAWQ-1344:
--

Assignee: Shivram Mani  (was: Ed Espino)

> Rollback Apache HAWQ version to 2.1.0.0
> ---
>
> Key: HAWQ-1344
> URL: https://issues.apache.org/jira/browse/HAWQ-1344
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Shivram Mani
>Assignee: Shivram Mani
> Fix For: 2.2.0.0-incubating
>
>
> Apache HAWQ version which was set to 2.2.0.0 will now move back to its 
> original 2.1.0.0 version



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1344) Rollback Apache HAWQ version to 2.1.0.0

2017-02-17 Thread Shivram Mani (JIRA)
Shivram Mani created HAWQ-1344:
--

 Summary: Rollback Apache HAWQ version to 2.1.0.0
 Key: HAWQ-1344
 URL: https://issues.apache.org/jira/browse/HAWQ-1344
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Build
Reporter: Shivram Mani
Assignee: Ed Espino
 Fix For: 2.2.0.0-incubating


Apache HAWQ version which was set to 2.2.0.0 will now move back to its original 
2.1.0.0 version



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HAWQ-1331) Update HAWQ and PXF versions

2017-02-14 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani reassigned HAWQ-1331:
--

Assignee: Shivram Mani  (was: Alexander Denissov)

> Update HAWQ and PXF versions
> 
>
> Key: HAWQ-1331
> URL: https://issues.apache.org/jira/browse/HAWQ-1331
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Ambari, Build, PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>
> Update HAWQ version to 2.2.0.0
> Update PXF version to 3.2.0.0
> The change must be made in core hawq, pxf, ambari plugin and ranger plugin



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HAWQ-1331) Update HAWQ and PXF versions

2017-02-14 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani updated HAWQ-1331:
---
Issue Type: Task  (was: Bug)

> Update HAWQ and PXF versions
> 
>
> Key: HAWQ-1331
> URL: https://issues.apache.org/jira/browse/HAWQ-1331
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Ambari, Build, PXF
>Reporter: Shivram Mani
>Assignee: Alexander Denissov
>
> Update HAWQ version to 2.2.0.0
> Update PXF version to 3.2.0.0
> The change must be made in core hawq, pxf, ambari plugin and ranger plugin



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1331) Update HAWQ and PXF versions

2017-02-14 Thread Shivram Mani (JIRA)
Shivram Mani created HAWQ-1331:
--

 Summary: Update HAWQ and PXF versions
 Key: HAWQ-1331
 URL: https://issues.apache.org/jira/browse/HAWQ-1331
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Ambari, Build, PXF
Reporter: Shivram Mani
Assignee: Alexander Denissov


Update HAWQ version to 2.2.0.0
Update PXF version to 3.2.0.0
The change must be made in core hawq, pxf, ambari plugin and ranger plugin



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (HAWQ-1320) Remove PXF version references from the code repo

2017-02-10 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani closed HAWQ-1320.
--
Resolution: Fixed

> Remove PXF version references from the code repo
> 
>
> Key: HAWQ-1320
> URL: https://issues.apache.org/jira/browse/HAWQ-1320
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: PXF
>Reporter: Vineet Goel
>Assignee: Shivram Mani
> Fix For: 2.1.0.0-incubating
>
>
> Currently, the PXF version is statically defined in the codebase in the 
> gradle.properties file. It would be better to pass these version strings 
> dynamically while running make.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HAWQ-1320) Remove PXF version references from the code repo

2017-02-10 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani reassigned HAWQ-1320:
--

Assignee: Shivram Mani  (was: Ed Espino)

> Remove PXF version references from the code repo
> 
>
> Key: HAWQ-1320
> URL: https://issues.apache.org/jira/browse/HAWQ-1320
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: PXF
>Reporter: Vineet Goel
>Assignee: Shivram Mani
> Fix For: 2.1.0.0-incubating
>
>
> Currently, the PXF version is statically defined in the codebase in the 
> gradle.properties file. It would be better to pass these version strings 
> dynamically while running make.
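
A hedged sketch of what passing the version at build time could look like; the 
use of a Gradle project property here is only an illustration, not the agreed 
implementation, and it assumes the PXF Gradle build under pxf/:

{code}
# override the version normally read from gradle.properties
cd pxf && ./gradlew clean install -Pversion=3.2.0.0
{code}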



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (HAWQ-1309) PXF Service must default to port 51200 and user pxf

2017-02-02 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani resolved HAWQ-1309.

Resolution: Fixed
  Assignee: Shivram Mani  (was: Ed Espino)

> PXF Service must default to port 51200 and user pxf
> ---
>
> Key: HAWQ-1309
> URL: https://issues.apache.org/jira/browse/HAWQ-1309
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
> Fix For: 2.1.0.0-incubating
>
>
> When configuration managers such as Ambari set up HAWQ/PXF, they override the 
> default configurations with their own template. Ambari templates may not 
> include the PXF_PORT or PXF_USER configurations. Set their default values to 
> 51200 and pxf respectively.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HAWQ-1309) PXF Service must default to port 51200 and user pxf

2017-02-02 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani updated HAWQ-1309:
---
Summary: PXF Service must default to port 51200 and user pxf  (was: PXF 
Service must default to port 51200)

> PXF Service must default to port 51200 and user pxf
> ---
>
> Key: HAWQ-1309
> URL: https://issues.apache.org/jira/browse/HAWQ-1309
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Ed Espino
> Fix For: 2.1.0.0-incubating
>
>
> When configuration managers such as Ambari set up HAWQ/PXF, they override the 
> default configurations with their own template. Ambari templates may not 
> include the PXF_PORT or PXF_USER configurations. Set their default values to 
> 51200 and pxf respectively.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1309) PXF Service must default to port 51200

2017-02-02 Thread Shivram Mani (JIRA)
Shivram Mani created HAWQ-1309:
--

 Summary: PXF Service must default to port 51200
 Key: HAWQ-1309
 URL: https://issues.apache.org/jira/browse/HAWQ-1309
 Project: Apache HAWQ
  Issue Type: Bug
  Components: PXF
Reporter: Shivram Mani
Assignee: Ed Espino
 Fix For: 2.1.0.0-incubating


When configuration managers such as Ambari set up HAWQ/PXF, they override the 
default configurations with their own template. Ambari templates may not 
include the PXF_PORT or PXF_USER configurations. Set their default values to 
51200 and pxf respectively.
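
A minimal sketch of how such defaults could be expressed in pxf-env.sh (the 
variable names come from the description above; the exact file and fallback 
syntax are assumptions):

{code}
# pxf-env.sh -- fall back to defaults when the Ambari template omits them
export PXF_PORT=${PXF_PORT:-51200}
export PXF_USER=${PXF_USER:-pxf}
{code}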



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (HAWQ-1302) PXF RPM install does not copy correct classpath

2017-02-02 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani resolved HAWQ-1302.

Resolution: Fixed

> PXF RPM install does not copy correct classpath
> ---
>
> Key: HAWQ-1302
> URL: https://issues.apache.org/jira/browse/HAWQ-1302
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Shivram Mani
> Fix For: 2.1.0.0-incubating
>
>
> Since the changes in 
> [HAWQ-1297|https://issues.apache.org/jira/browse/HAWQ-1297], the new 
> pxf-private.classpath results in a specific distribution's classpath file, 
> pxf-private[distro].classpath, not being successfully renamed by gradle to 
> pxf-private.classpath.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (HAWQ-1302) PXF RPM install does not copy correct classpath

2017-02-02 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani closed HAWQ-1302.
--

> PXF RPM install does not copy correct classpath
> ---
>
> Key: HAWQ-1302
> URL: https://issues.apache.org/jira/browse/HAWQ-1302
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Shivram Mani
> Fix For: 2.1.0.0-incubating
>
>
> Since the changes in 
> [HAWQ-1297|https://issues.apache.org/jira/browse/HAWQ-1297], the new 
> pxf-private.classpath results in a specific distribution's classpath file, 
> pxf-private[distro].classpath, not being successfully renamed by gradle to 
> pxf-private.classpath.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] (HAWQ-1302) PXF RPM install does not copy correct classpath

2017-01-31 Thread Shivram Mani (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15847405#comment-15847405
 ] 

Shivram Mani commented on HAWQ-1302:


We have introduced an additional pxf-private.classpath file to serve the 
purpose of a non-distribution (hdp) installation or an installation without the 
PXF rpm.

On investigating this, the following occurs as part of the PXF rpm creation:
{code}
from("src/main/resources/pxf-private${hddist}.classpath") {
    into("/etc/pxf-${project.version}/conf")
    rename("pxf-private${hddist}.classpath", "pxf-private.classpath")
}
{code}

The above rename action fails, and the PXF webapp then uses a classpath file 
not intended for HDP-based installations.
This will need to be fixed so that the above action overrides the existing 
pxf-private.classpath file.
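
One hedged way to verify the fix is to inspect the conf payload of the built 
rpm (the rpm file name below is a placeholder):

{code}
# extract the rpm payload into the current directory and list the conf files;
# only the renamed pxf-private.classpath should be present, not
# pxf-private[distro].classpath
rpm2cpio pxf-service-*.rpm | cpio -idmv
ls etc/pxf-*/conf/
{code}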

> PXF RPM install does not copy correct classpath
> ---
>
> Key: HAWQ-1302
> URL: https://issues.apache.org/jira/browse/HAWQ-1302
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Shivram Mani
> Fix For: 2.1.0.0-incubating
>
>
> Since the changes in 
> [HAWQ-1297|https://issues.apache.org/jira/browse/HAWQ-1297], the new 
> pxf-private.classpath results in a specific distribution's classpath file, 
> pxf-private[distro].classpath, not being successfully renamed by gradle to 
> pxf-private.classpath.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] (HAWQ-1287) Document PXF Setup/Build Instructions

2017-01-30 Thread Shivram Mani (JIRA)
Shivram Mani resolved HAWQ-1287 as Fixed

https://cwiki.apache.org/confluence/display/HAWQ/PXF+Build+and+Install

Apache HAWQ / HAWQ-1287
Document PXF Setup/Build Instructions

Change By: Shivram Mani
Resolution: Fixed
Fix Version/s: 2.1.0.0-incubating
Status: In Progress -> Resolved


--
This message was sent by Atlassian JIRA (v6.3.15#6346-sha1:dbc023d)



[jira] (HAWQ-1297) Make PXF install ready from source code

2017-01-30 Thread Shivram Mani (JIRA)
Shivram Mani resolved HAWQ-1297 as Fixed

Apache HAWQ / HAWQ-1297
Make PXF install ready from source code

Change By: Shivram Mani
Resolution: Fixed
Fix Version/s: 2.2.0.0-incubating
Fix Version/s: 2.1.0.0-incubating
Status: In Progress -> Resolved


--
This message was sent by Atlassian JIRA (v6.3.15#6346-sha1:dbc023d)



[jira] [Assigned] (HAWQ-1297) Make PXF install ready from source code

2017-01-26 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani reassigned HAWQ-1297:
--

Assignee: Shivram Mani  (was: Ed Espino)

> Make PXF install ready from source code
> ---
>
> Key: HAWQ-1297
> URL: https://issues.apache.org/jira/browse/HAWQ-1297
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>
> Currently PXF has certain strict assumptions about the underlying hadoop 
> distributions and the paths used by the distributions. The goal of this task 
> is to make PXF build and service scripts usable even from source code without 
> any assumptions on the underlying hadoop distribution.
> Introduce 'make install' functionality which invokes './gradlew install'. The 
> user will need to configure PXF_HOME to a location intended to serve as the 
> deploy PXF path. If this is not configured it would use GPHOME/pxf as the 
> default deploy path.
> The purpose of this functionality will be to do the following:
> 1. Download apache tomcat (tomcatGet) and copy it under $PXF_HOME/apache-tomcat
> 2. Consolidate all PXF component jars under $PXF_HOME/lib
> 3. Consolidate all PXF configuration files under $PXF_HOME/conf
> 4. Consolidate PXF scripts under $PXF_HOME/bin
> 5. Place Tomcat template files under tomcat-templates/
> Following this, the user would need to update the configuration files under 
> $PXF_HOME/conf based on the user environment and hadoop directory layout. 
> Specifically, update the following files:
> -> pxf-private.classpath
> -> pxf-log4j.properties
> -> pxf-env.sh
> Init PXF
> bin/pxf init
> Start PXF
> bin/pxf start



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-1297) Make PXF install ready from source code

2017-01-26 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani updated HAWQ-1297:
---
Description: 
Currently PXF has certain strict assumptions about the underlying hadoop 
distributions and the paths used by the distributions. The goal of this task is 
to make PXF build and service scripts usable even from source code without any 
assumptions on the underlying hadoop distribution.

Introduce 'make install' functionality which invokes './gradlew install'. The 
user will need to configure PXF_HOME to a location intended to serve as the 
deploy PXF path. If this is not configured it would use GPHOME/pxf as the 
default deploy path.

The purpose of this functionality will be to do the following:
1. Download apache tomcat (tomcatGet) and copy it under $PXF_HOME/apache-tomcat
2. Consolidate all PXF component jars under $PXF_HOME/lib
3. Consolidate all PXF configuration files under $PXF_HOME/conf
4. Consolidate PXF scripts under $PXF_HOME/bin
5. Place Tomcat template files under tomcat-templates/

Following this, the user would need to update the configuration files under 
$PXF_HOME/conf based on the user environment and hadoop directory layout. 
Specifically, update the following files:
-> pxf-private.classpath
-> pxf-log4j.properties
-> pxf-env.sh

Init PXF
$PXF_HOME/bin/pxf init

Start PXF
$PXF_HOME/bin/pxf start

  was:
Currently PXF has certain strict assumptions about the underlying hadoop 
distributions and the paths used by the distributions. The goal of this task is 
to make PXF build and service scripts usable even from source code without any 
assumptions on the underlying hadoop distribution.

Introduce 'make install' functionality which invokes './gradlew install'. The 
user will need to configure PXF_HOME to a location intended to serve as the 
deploy PXF path. If this is not configured it would use GPHOME/pxf as the 
default deploy path.

The purpose of this functionality will be to do the following:
1. Download apache tomcat (tomcatGet) and copy it under $PXF_HOME/apache-tomcat
2. Consolidate all PXF component jars under $PXF_HOME/lib
3. Consolidate all PXF configuration files under $PXF_HOME/conf
4. Consolidate PXF scripts under $PXF_HOME/bin
5. Place Tomcat template files under tomcat-templates/

Following this, the user would need to update the configuration files under 
$PXF_HOME/conf based on the user environment and hadoop directory layout. 
Specifically, update the following files:
-> pxf-private.classpath
-> pxf-log4j.properties
-> pxf-env.sh

Init PXF
bin/pxf init

Start PXF
bin/pxf start


> Make PXF install ready from source code
> ---
>
> Key: HAWQ-1297
> URL: https://issues.apache.org/jira/browse/HAWQ-1297
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>
> Currently PXF has certain strict assumptions about the underlying hadoop 
> distributions and the paths used by the distributions. The goal of this task 
> is to make PXF build and service scripts usable even from source code without 
> any assumptions on the underlying hadoop distribution.
> Introduce 'make install' functionality which invokes './gradlew install'. The 
> user will need to configure PXF_HOME to a location intended to serve as the 
> deploy PXF path. If this is not configured it would use GPHOME/pxf as the 
> default deploy path.
> The purpose of this functionality will be to do the following:
> 1. Download apache tomcat (tomcatGet) and copy it under $PXF_HOME/apache-tomcat
> 2. Consolidate all PXF component jars under $PXF_HOME/lib
> 3. Consolidate all PXF configuration files under $PXF_HOME/conf
> 4. Consolidate PXF scripts under $PXF_HOME/bin
> 5. Place Tomcat template files under tomcat-templates/
> Following this, the user would need to update the configuration files under 
> $PXF_HOME/conf based on the user environment and hadoop directory layout. 
> Specifically, update the following files:
> -> pxf-private.classpath
> -> pxf-log4j.properties
> -> pxf-env.sh
> Init PXF
> $PXF_HOME/bin/pxf init
> Start PXF
> $PXF_HOME/bin/pxf start



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-1297) Make PXF install ready from source code

2017-01-25 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani updated HAWQ-1297:
---
Summary: Make PXF install ready from source code  (was: Update PXF build 
and service scripts to make it installable directly from source code)

> Make PXF install ready from source code
> ---
>
> Key: HAWQ-1297
> URL: https://issues.apache.org/jira/browse/HAWQ-1297
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Ed Espino
>
> Currently PXF has certain strict assumptions about the underlying hadoop 
> distributions and the paths used by the distributions. The goal of this task 
> is to make PXF build and service scripts usable even from source code without 
> any assumptions on the underlying hadoop distribution.
> Introduce 'make install' functionality which invokes './gradlew install'. The 
> user will need to configure PXF_HOME to a location intended to serve as the 
> deploy PXF path. If this is not configured it would use GPHOME/pxf as the 
> default deploy path.
> The purpose of this functionality will be to do the following:
> 1. Download apache tomcat (tomcatGet) and copy it under $PXF_HOME/apache-tomcat
> 2. Consolidate all PXF component jars under $PXF_HOME/lib
> 3. Consolidate all PXF configuration files under $PXF_HOME/conf
> 4. Consolidate PXF scripts under $PXF_HOME/bin
> 5. Place Tomcat template files under tomcat-templates/
> Following this, the user would need to update the configuration files under 
> $PXF_HOME/conf based on the user environment and hadoop directory layout. 
> Specifically, update the following files:
> -> pxf-private.classpath
> -> pxf-log4j.properties
> -> pxf-env.sh
> Init PXF
> bin/pxf init
> Start PXF
> bin/pxf start



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HAWQ-1297) Update PXF build and service scripts to make it installable directly from source code

2017-01-25 Thread Shivram Mani (JIRA)
Shivram Mani created HAWQ-1297:
--

 Summary: Update PXF build and service scripts to make it 
installable directly from source code
 Key: HAWQ-1297
 URL: https://issues.apache.org/jira/browse/HAWQ-1297
 Project: Apache HAWQ
  Issue Type: Sub-task
  Components: PXF
Reporter: Shivram Mani
Assignee: Ed Espino


Currently PXF has certain strict assumptions about the underlying hadoop 
distributions and the paths used by the distributions. The goal of this task is 
to make PXF build and service scripts usable even from source code without any 
assumptions on the underlying hadoop distribution.

Introduce 'make install' functionality which invokes './gradlew install'. The 
user will need to configure PXF_HOME to a location intended to serve as the 
deploy PXF path. If this is not configured it would use GPHOME/pxf as the 
default deploy path.

The purpose of this functionality will be to do the following:
1. Download apache tomcat (tomcatGet) and copy it under $PXF_HOME/apache-tomcat
2. Consolidate all PXF component jars under $PXF_HOME/lib
3. Consolidate all PXF configuration files under $PXF_HOME/conf
4. Consolidate PXF scripts under $PXF_HOME/bin
5. Place Tomcat template files under tomcat-templates/

Following this, the user would need to update the configuration files under 
$PXF_HOME/conf based on the user environment and hadoop directory layout. 
Specifically, update the following files:
-> pxf-private.classpath
-> pxf-log4j.properties
-> pxf-env.sh

Init PXF
bin/pxf init

Start PXF
bin/pxf start
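
For convenience, the workflow described above, end to end, as shell commands 
(the PXF_HOME path is just an example; it defaults to $GPHOME/pxf if unset):

{code}
export PXF_HOME=/usr/local/pxf     # example deploy location
make install                       # invokes ./gradlew install under the hood

# adjust the generated configuration for your environment and hadoop layout
$EDITOR $PXF_HOME/conf/pxf-private.classpath
$EDITOR $PXF_HOME/conf/pxf-log4j.properties
$EDITOR $PXF_HOME/conf/pxf-env.sh

$PXF_HOME/bin/pxf init
$PXF_HOME/bin/pxf start
{code}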



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HAWQ-1287) Document PXF Setup/Build Instructions

2017-01-20 Thread Shivram Mani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani reassigned HAWQ-1287:
--

Assignee: Shivram Mani  (was: David Yozie)

> Document PXF Setup/Build Instructions
> -
>
> Key: HAWQ-1287
> URL: https://issues.apache.org/jira/browse/HAWQ-1287
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Documentation, PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>
> Currently, the HAWQ build instructions at 
> https://cwiki.apache.org/confluence/display/HAWQ/Build+and+Install include 
> information on setting up Hadoop apart from compiling and installing HAWQ from 
> source code. This provides the means to try and test HAWQ as well.
> We would need to extend this to also include instructions to set up PXF along 
> with instructions to set up Hive/HBase and other services that PXF can 
> integrate with.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HAWQ-1287) Document PXF Setup/Build Instructions

2017-01-19 Thread Shivram Mani (JIRA)
Shivram Mani created HAWQ-1287:
--

 Summary: Document PXF Setup/Build Instructions
 Key: HAWQ-1287
 URL: https://issues.apache.org/jira/browse/HAWQ-1287
 Project: Apache HAWQ
  Issue Type: Improvement
  Components: Documentation, PXF
Reporter: Shivram Mani
Assignee: David Yozie


Currently, the HAWQ build instructions at 
https://cwiki.apache.org/confluence/display/HAWQ/Build+and+Install include 
information on setting up Hadoop apart from compiling and installing HAWQ from 
source code. This provides the means to try and test HAWQ as well.
We would need to extend this to also include instructions to set up PXF along 
with instructions to set up Hive/HBase and other services that PXF can 
integrate with.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-1108) Add JDBC PXF Plugin

2017-01-11 Thread Shivram Mani (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15819189#comment-15819189
 ] 

Shivram Mani commented on HAWQ-1108:


Yes, we can always iteratively improve this profile. [~jiadx] can you rebase 
your PR branch against the latest master before we proceed with the merge?

> Add JDBC PXF Plugin
> ---
>
> Key: HAWQ-1108
> URL: https://issues.apache.org/jira/browse/HAWQ-1108
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Michael Andre Pearce (IG)
>Assignee: Devin Jia
>
> On the back of the work in :
> https://issues.apache.org/jira/browse/HAWQ-779
> We would like to add to Hawq Plugins a JDBC implementation.
> There are currently two noted implementations openly available on GitHub:
> 1) https://github.com/kojec/pxf-field/tree/master/jdbc-pxf-ext
> 2) https://github.com/inspur-insight/pxf-plugin/tree/master/pxf-jdbc
> The latter (2) is an improved version of the former (1) and is also what the 
> HAWQ-779 changes were intended to support.
> [~jiadx] would you be happy to contribute the source as open source under the 
> Apache 2 license?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-1108) Add JDBC PXF Plugin

2017-01-11 Thread Shivram Mani (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15819037#comment-15819037
 ] 

Shivram Mani commented on HAWQ-1108:


[~jiadx] [~michael.andre.pearce] based on your other requests, HAWQ provides 
both the Column Projection and Predicate Information to PXF 
(https://issues.apache.org/jira/browse/HAWQ-583). It would be great if these 
attributes were used in the JDBC profile as well. It could either be done in 
the first cut, or in the next updated version. Thoughts?

> Add JDBC PXF Plugin
> ---
>
> Key: HAWQ-1108
> URL: https://issues.apache.org/jira/browse/HAWQ-1108
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Michael Andre Pearce (IG)
>Assignee: Devin Jia
>
> On the back of the work in :
> https://issues.apache.org/jira/browse/HAWQ-779
> We would like to add to Hawq Plugins a JDBC implementation.
> There are currently two noted implementations openly available on GitHub:
> 1) https://github.com/kojec/pxf-field/tree/master/jdbc-pxf-ext
> 2) https://github.com/inspur-insight/pxf-plugin/tree/master/pxf-jdbc
> The latter (2) is an improved version of the former (1) and is also what the 
> HAWQ-779 changes were intended to support.
> [~jiadx] would you be happy to contribute the source as open source under the 
> Apache 2 license?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-1108) Add JDBC PXF Plugin

2017-01-10 Thread Shivram Mani (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816344#comment-15816344
 ] 

Shivram Mani commented on HAWQ-1108:


[~michael.andre.pearce] I understand the frustration with the delay. The merge 
certification would have been a whole lot easier if the automation test code 
base were also open sourced.
[~vVineet] at this point the JDBC Plugin code has passed code review. I am OK 
with merging this to master and qualifying it as beta, and over time we can 
improve the automation tests for this new profile.

> Add JDBC PXF Plugin
> ---
>
> Key: HAWQ-1108
> URL: https://issues.apache.org/jira/browse/HAWQ-1108
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Michael Andre Pearce (IG)
>Assignee: Devin Jia
>
> On the back of the work in :
> https://issues.apache.org/jira/browse/HAWQ-779
> We would like to add to Hawq Plugins a JDBC implementation.
> There are currently two noted implementations openly available on GitHub:
> 1) https://github.com/kojec/pxf-field/tree/master/jdbc-pxf-ext
> 2) https://github.com/inspur-insight/pxf-plugin/tree/master/pxf-jdbc
> The latter (2) is an improved version of the former (1) and is also what the 
> HAWQ-779 changes were intended to support.
> [~jiadx] would you be happy to contribute the source as open source under the 
> Apache 2 license?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

