[jira] [Updated] (HAWQ-1304) documentation changes for HAWQ-1228

2017-01-31 Thread Lisa Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisa Owen updated HAWQ-1304:

Affects Version/s: 2.1.0.0-incubating

> documentation changes for HAWQ-1228
> ---
>
> Key: HAWQ-1304
> URL: https://issues.apache.org/jira/browse/HAWQ-1304
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Documentation
>Affects Versions: 2.1.0.0-incubating
>Reporter: Lisa Owen
>Assignee: David Yozie
>Priority: Minor
>
> - new pxf-profiles.xml outputFormat parameter
> - Hive table access via external table and HCatalog now uses the optimal 
> profile for each fragment
> - others



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HAWQ-1304) documentation changes for HAWQ-1228

2017-01-31 Thread Lisa Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisa Owen updated HAWQ-1304:

Priority: Minor  (was: Major)

> documentation changes for HAWQ-1228
> ---
>
> Key: HAWQ-1304
> URL: https://issues.apache.org/jira/browse/HAWQ-1304
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Documentation
>Affects Versions: 2.1.0.0-incubating
>Reporter: Lisa Owen
>Assignee: David Yozie
>Priority: Minor
>
> - new pxf-profiles.xml outputFormat parameter
> - Hive table access via external table and HCatalog now uses the optimal 
> profile for each fragment
> - others





[jira] [Created] (HAWQ-1304) documentation changes for HAWQ-1228

2017-01-31 Thread Lisa Owen (JIRA)
Lisa Owen created HAWQ-1304:
---

 Summary: documentation changes for HAWQ-1228
 Key: HAWQ-1304
 URL: https://issues.apache.org/jira/browse/HAWQ-1304
 Project: Apache HAWQ
  Issue Type: New Feature
  Components: Documentation
Reporter: Lisa Owen
Assignee: David Yozie


- new pxf-profiles.xml outputFormat parameter
- Hive table access via external table and HCatalog now uses the optimal 
profile for each fragment
- others





[jira] (HAWQ-1303) Load each partition as separate table for heterogeneous tables in HCatalog

2017-01-31 Thread Oleksandr Diachenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Diachenko updated HAWQ-1303:
--
Description: 
Changes introduced in HAWQ-1228 made HAWQ use the optimal profile/format for 
Hive tables. But there is a limitation: when HAWQ loads Hive tables into memory, 
it loads each one as a single table even if the table has multiple partitions 
with different output formats (GPDBWritable, TEXT), so it currently uses the 
GPDBWritable format in that case. The idea is to load each set of partitions 
sharing one output format as a separate table, so that the optimal output 
format, if not the optimal profile, can be used.

Example: 
We have a Hive table with four partitions in the following formats: Text, RC, 
ORC, SequenceFile.
Currently, HAWQ will load it into memory with the GPDBWritable format.
The GPDBWritable format is optimal for the HiveORC and Hive profiles, but not 
for the HiveText and HiveRC profiles.

With the proposed changes, HAWQ should load two tables, with the TEXT and 
GPDBWritable formats, and use the following profile/format pairs to read the 
partitions: HiveText/TEXT, HiveRC/TEXT, HiveORC/GPDBWritable, Hive/GPDBWritable.
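The proposed grouping can be sketched as follows. This is a minimal illustration only; the class and method names are hypothetical, not the actual PXF code. Partitions are bucketed by the output format that is optimal for their profile, so a four-partition table collapses into two in-memory tables:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PartitionGrouping {
    // Illustrative mapping from Hive profile to its optimal output format,
    // as described above: TEXT for HiveText/HiveRC, GPDBWritable otherwise.
    static final Map<String, String> FORMAT_BY_PROFILE = new LinkedHashMap<>();
    static {
        FORMAT_BY_PROFILE.put("HiveText", "TEXT");
        FORMAT_BY_PROFILE.put("HiveRC", "TEXT");
        FORMAT_BY_PROFILE.put("HiveORC", "GPDBWritable");
        FORMAT_BY_PROFILE.put("Hive", "GPDBWritable");
    }

    // Group partitions (keyed by the profile used to read them) into one
    // in-memory table per output format.
    static Map<String, List<String>> groupByOutputFormat(Map<String, String> profileByPartition) {
        Map<String, List<String>> tables = new LinkedHashMap<>();
        profileByPartition.forEach((partition, profile) ->
                tables.computeIfAbsent(FORMAT_BY_PROFILE.get(profile), f -> new ArrayList<>())
                      .add(partition));
        return tables;
    }

    public static void main(String[] args) {
        Map<String, String> parts = new LinkedHashMap<>();
        parts.put("part_text", "HiveText");
        parts.put("part_rc", "HiveRC");
        parts.put("part_orc", "HiveORC");
        parts.put("part_seq", "Hive");
        // Four partitions collapse into two tables:
        // {TEXT=[part_text, part_rc], GPDBWritable=[part_orc, part_seq]}
        System.out.println(groupByOutputFormat(parts));
    }
}
```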


> Load each partition as separate table for heterogeneous tables in HCatalog
> -
>
> Key: HAWQ-1303
> URL: https://issues.apache.org/jira/browse/HAWQ-1303
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Hcatalog, PXF
>Reporter: Oleksandr Diachenko
>Assignee: Ed Espino
>
> Changes introduced in HAWQ-1228 made HAWQ use the optimal profile/format for 
> Hive tables. But there is a limitation: when HAWQ loads Hive tables into 
> memory, it loads each one as a single table even if the table has multiple 
> partitions with different output formats (GPDBWritable, TEXT), so it currently 
> uses the GPDBWritable format in that case. The idea is to load each set of 
> partitions sharing one output format as a separate table, so that the optimal 
> output format, if not the optimal profile, can be used.
> Example: 
> We have a Hive table with four partitions in the following formats: Text, RC, 
> ORC, SequenceFile.
> Currently, HAWQ will load it into memory with the GPDBWritable format.
> The GPDBWritable format is optimal for the HiveORC and Hive profiles, but not 
> for the HiveText and HiveRC profiles.
> With the proposed changes, HAWQ should load two tables, with the TEXT and 
> GPDBWritable formats, and use the following profile/format pairs to read the 
> partitions: HiveText/TEXT, HiveRC/TEXT, HiveORC/GPDBWritable, Hive/GPDBWritable.





[jira] (HAWQ-1303) Load each partition as separate table for heterogeneous tables in HCatalog

2017-01-31 Thread Oleksandr Diachenko (JIRA)
Oleksandr Diachenko created HAWQ-1303:
-

 Summary: Load each partition as separate table for heterogeneous 
tables in HCatalog
 Key: HAWQ-1303
 URL: https://issues.apache.org/jira/browse/HAWQ-1303
 Project: Apache HAWQ
  Issue Type: Improvement
  Components: Hcatalog, PXF
Reporter: Oleksandr Diachenko
Assignee: Ed Espino








[jira] (HAWQ-1228) Use profile based on file format in HCatalog integration (HiveRC, HiveText profiles)

2017-01-31 Thread Oleksandr Diachenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Diachenko resolved HAWQ-1228.
---
Resolution: Fixed

> Use profile based on file format in HCatalog integration (HiveRC, HiveText 
> profiles)
> ---
>
> Key: HAWQ-1228
> URL: https://issues.apache.org/jira/browse/HAWQ-1228
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
> Fix For: 2.1.0.0-incubating
>
>
> To leverage the changes introduced in HAWQ-1177, expand the optimization to 
> the other Hive profiles. Additional information needs to be included in the 
> user metadata (e.g. DELIMITER, etc.).
> Changes needed:
> * Enhance the Metadata API to add new attributes: outputFormats, 
> outputParameters;
> * The Hive and HiveORC profiles should support just the GPDBWritable format;
> * The HiveText and HiveRC profiles should support both the TEXT and 
> GPDBWritable formats;
> * Unify the HiveUserData data structure to be the same across all Hive 
> profiles;
> * The Bridge should read fragments using the optimal profile taken from the 
> fragment information;
> * The optimal profile should be determined from the file's input format 
> (org.apache.hadoop.hive.ql.io.orc.OrcInputFormat - HiveORC, 
> org.apache.hadoop.hive.ql.io.RCFileInputFormat - HiveRC, 
> org.apache.hadoop.mapred.TextInputFormat - HiveText);
> * The default profile is Hive;
> * If a Hive table has org.apache.hadoop.mapred.TextInputFormat but also has 
> some complex types, the Hive profile should be used (limitation to be 
> addressed in HAWQ-1265);
> * If a table is homogeneous (all input files have the same output format), the 
> Bridge uses the same format as the table; otherwise, if the table is 
> heterogeneous, GPDBWritable should be used;
> * Add a new property, outputFormat, to pxf-profiles-default.xml, specifying 
> the default output format of each profile.
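The input-format-to-profile selection described in the bullets above can be sketched as follows. The class and method names here are illustrative assumptions, not the real PXF API; only the mapping itself comes from the issue description:

```java
import java.util.HashMap;
import java.util.Map;

public class ProfileSelector {
    // Mapping from a Hive file's input format class to the optimized PXF
    // profile, per the list above. Any other input format falls back to the
    // generic Hive profile.
    private static final Map<String, String> PROFILE_BY_INPUT_FORMAT = new HashMap<>();
    static {
        PROFILE_BY_INPUT_FORMAT.put("org.apache.hadoop.hive.ql.io.orc.OrcInputFormat", "HiveORC");
        PROFILE_BY_INPUT_FORMAT.put("org.apache.hadoop.hive.ql.io.RCFileInputFormat", "HiveRC");
        PROFILE_BY_INPUT_FORMAT.put("org.apache.hadoop.mapred.TextInputFormat", "HiveText");
    }

    static String profileFor(String inputFormatClass) {
        // Default profile is Hive when no optimized profile matches.
        return PROFILE_BY_INPUT_FORMAT.getOrDefault(inputFormatClass, "Hive");
    }

    public static void main(String[] args) {
        System.out.println(profileFor("org.apache.hadoop.mapred.TextInputFormat"));  // HiveText
        System.out.println(profileFor("some.other.InputFormat"));                    // Hive
    }
}
```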





[jira] (HAWQ-1228) Use profile based on file format in HCatalog integration (HiveRC, HiveText profiles)

2017-01-31 Thread Oleksandr Diachenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Diachenko updated HAWQ-1228:
--
Fix Version/s: 2.1.0.0-incubating

> Use profile based on file format in HCatalog integration (HiveRC, HiveText 
> profiles)
> ---
>
> Key: HAWQ-1228
> URL: https://issues.apache.org/jira/browse/HAWQ-1228
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
> Fix For: 2.1.0.0-incubating
>
>
> To leverage the changes introduced in HAWQ-1177, expand the optimization to 
> the other Hive profiles. Additional information needs to be included in the 
> user metadata (e.g. DELIMITER, etc.).
> Changes needed:
> * Enhance the Metadata API to add new attributes: outputFormats, 
> outputParameters;
> * The Hive and HiveORC profiles should support just the GPDBWritable format;
> * The HiveText and HiveRC profiles should support both the TEXT and 
> GPDBWritable formats;
> * Unify the HiveUserData data structure to be the same across all Hive 
> profiles;
> * The Bridge should read fragments using the optimal profile taken from the 
> fragment information;
> * The optimal profile should be determined from the file's input format 
> (org.apache.hadoop.hive.ql.io.orc.OrcInputFormat - HiveORC, 
> org.apache.hadoop.hive.ql.io.RCFileInputFormat - HiveRC, 
> org.apache.hadoop.mapred.TextInputFormat - HiveText);
> * The default profile is Hive;
> * If a Hive table has org.apache.hadoop.mapred.TextInputFormat but also has 
> some complex types, the Hive profile should be used (limitation to be 
> addressed in HAWQ-1265);
> * If a table is homogeneous (all input files have the same output format), the 
> Bridge uses the same format as the table; otherwise, if the table is 
> heterogeneous, GPDBWritable should be used;
> * Add a new property, outputFormat, to pxf-profiles-default.xml, specifying 
> the default output format of each profile.





[jira] (HAWQ-1228) Use profile based on file format in HCatalog integration (HiveRC, HiveText profiles)

2017-01-31 Thread Oleksandr Diachenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Diachenko updated HAWQ-1228:
--
Description: 
To leverage the changes introduced in HAWQ-1177, expand the optimization to the 
other Hive profiles. Additional information needs to be included in the user 
metadata (e.g. DELIMITER, etc.).

Changes needed:
* Enhance the Metadata API to add new attributes: outputFormats, 
outputParameters;
* The Hive and HiveORC profiles should support just the GPDBWritable format;
* The HiveText and HiveRC profiles should support both the TEXT and GPDBWritable 
formats;
* Unify the HiveUserData data structure to be the same across all Hive profiles;
* The Bridge should read fragments using the optimal profile taken from the 
fragment information;
* The optimal profile should be determined from the file's input format 
(org.apache.hadoop.hive.ql.io.orc.OrcInputFormat - HiveORC, 
org.apache.hadoop.hive.ql.io.RCFileInputFormat - HiveRC, 
org.apache.hadoop.mapred.TextInputFormat - HiveText);
* The default profile is Hive;
* If a Hive table has org.apache.hadoop.mapred.TextInputFormat but also has some 
complex types, the Hive profile should be used (limitation to be addressed in 
HAWQ-1265);
* If a table is homogeneous (all input files have the same output format), the 
Bridge uses the same format as the table; otherwise, if the table is 
heterogeneous, GPDBWritable should be used;
* Add a new property, outputFormat, to pxf-profiles-default.xml, specifying the 
default output format of each profile.

  was:
To leverage the changes introduced in HAWQ-1177, expand the optimization to the 
other Hive profiles. Additional information needs to be included in the user 
metadata (e.g. DELIMITER, etc.).

Changes needed:
* Enhance the Metadata API to add new attributes: outputFormats, 
outputParameters;
* The Hive and HiveORC profiles should support just the GPDBWritable format;
* The HiveText and HiveRC profiles should support both the TEXT and GPDBWritable 
formats;
* Unify the HiveUserData data structure to be the same across all Hive profiles;
* The Bridge should read fragments using the optimal profile taken from the 
fragment information;
* The optimal profile should be determined from the file's input format 
(org.apache.hadoop.hive.ql.io.orc.OrcInputFormat - HiveORC, 
org.apache.hadoop.hive.ql.io.RCFileInputFormat - HiveRC, 
org.apache.hadoop.mapred.TextInputFormat - HiveText);
* The default profile is Hive;
* If a Hive table has org.apache.hadoop.mapred.TextInputFormat but also has some 
complex types, the Hive profile should be used (limitation to be addressed in 
HAWQ-1265);
* If a table is homogeneous (all input files have the same output format), the 
Bridge uses the same format as the table; otherwise, if the table is 
heterogeneous, GPDBWritable should be used;


> Use profile based on file format in HCatalog integration (HiveRC, HiveText 
> profiles)
> ---
>
> Key: HAWQ-1228
> URL: https://issues.apache.org/jira/browse/HAWQ-1228
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
>
> To leverage the changes introduced in HAWQ-1177, expand the optimization to 
> the other Hive profiles. Additional information needs to be included in the 
> user metadata (e.g. DELIMITER, etc.).
> Changes needed:
> * Enhance the Metadata API to add new attributes: outputFormats, 
> outputParameters;
> * The Hive and HiveORC profiles should support just the GPDBWritable format;
> * The HiveText and HiveRC profiles should support both the TEXT and 
> GPDBWritable formats;
> * Unify the HiveUserData data structure to be the same across all Hive 
> profiles;
> * The Bridge should read fragments using the optimal profile taken from the 
> fragment information;
> * The optimal profile should be determined from the file's input format 
> (org.apache.hadoop.hive.ql.io.orc.OrcInputFormat - HiveORC, 
> org.apache.hadoop.hive.ql.io.RCFileInputFormat - HiveRC, 
> org.apache.hadoop.mapred.TextInputFormat - HiveText);
> * The default profile is Hive;
> * If a Hive table has org.apache.hadoop.mapred.TextInputFormat but also has 
> some complex types, the Hive profile should be used (limitation to be 
> addressed in HAWQ-1265);
> * If a table is homogeneous (all input files have the same output format), the 
> Bridge uses the same format as the table; otherwise, if the table is 
> heterogeneous, GPDBWritable should be used;
> * Add a new property, outputFormat, to pxf-profiles-default.xml, specifying 
> the default output format of each profile.





[jira] (HAWQ-1228) Use profile based on file format in HCatalog integration (HiveRC, HiveText profiles)

2017-01-31 Thread Oleksandr Diachenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Diachenko updated HAWQ-1228:
--
Description: 
To leverage the changes introduced in HAWQ-1177, expand the optimization to the 
other Hive profiles. Additional information needs to be included in the user 
metadata (e.g. DELIMITER, etc.).

Changes needed:
* Enhance the Metadata API to add new attributes: outputFormats, 
outputParameters;
* The Hive and HiveORC profiles should support just the GPDBWritable format;
* The HiveText and HiveRC profiles should support both the TEXT and GPDBWritable 
formats;
* Unify the HiveUserData data structure to be the same across all Hive profiles;
* The Bridge should read fragments using the optimal profile taken from the 
fragment information;
* The optimal profile should be determined from the file's input format 
(org.apache.hadoop.hive.ql.io.orc.OrcInputFormat - HiveORC, 
org.apache.hadoop.hive.ql.io.RCFileInputFormat - HiveRC, 
org.apache.hadoop.mapred.TextInputFormat - HiveText);
* The default profile is Hive;
* If a Hive table has org.apache.hadoop.mapred.TextInputFormat but also has some 
complex types, the Hive profile should be used (limitation to be addressed in 
HAWQ-1265);
* If a table is homogeneous (all input files have the same output format), the 
Bridge uses the same format as the table; otherwise, if the table is 
heterogeneous, GPDBWritable should be used;

  was:
To leverage the changes introduced in HAWQ-1177, expand the optimization to the 
other Hive profiles. Additional information needs to be included in the user 
metadata (e.g. DELIMITER, etc.).

Changes needed:
* Enhance the Metadata API to add new attributes: outputFormats, 
outputParameters;
* The Hive and HiveORC profiles should support just the GPDBWritable format;
* The HiveText and HiveRC profiles should support both the TEXT and GPDBWritable 
formats;
* Unify the HiveUserData data structure to be the same across all Hive profiles;
* The Bridge should read fragments using the optimal profile taken from the 
fragment information;
* The optimal profile should be determined from the file's input format 
(org.apache.hadoop.hive.ql.io.orc.OrcInputFormat - HiveORC, 
org.apache.hadoop.hive.ql.io.RCFileInputFormat - HiveRC, 
org.apache.hadoop.mapred.TextInputFormat - HiveText);
* The default profile is Hive;
* If a Hive table has org.apache.hadoop.mapred.TextInputFormat but also has some 
complex types, the Hive profile should be used;
* If a table is homogeneous (all input files have the same output format), the 
Bridge uses the same format as the table; otherwise, if the table is 
heterogeneous, GPDBWritable should be used;


> Use profile based on file format in HCatalog integration (HiveRC, HiveText 
> profiles)
> ---
>
> Key: HAWQ-1228
> URL: https://issues.apache.org/jira/browse/HAWQ-1228
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
>
> To leverage the changes introduced in HAWQ-1177, expand the optimization to 
> the other Hive profiles. Additional information needs to be included in the 
> user metadata (e.g. DELIMITER, etc.).
> Changes needed:
> * Enhance the Metadata API to add new attributes: outputFormats, 
> outputParameters;
> * The Hive and HiveORC profiles should support just the GPDBWritable format;
> * The HiveText and HiveRC profiles should support both the TEXT and 
> GPDBWritable formats;
> * Unify the HiveUserData data structure to be the same across all Hive 
> profiles;
> * The Bridge should read fragments using the optimal profile taken from the 
> fragment information;
> * The optimal profile should be determined from the file's input format 
> (org.apache.hadoop.hive.ql.io.orc.OrcInputFormat - HiveORC, 
> org.apache.hadoop.hive.ql.io.RCFileInputFormat - HiveRC, 
> org.apache.hadoop.mapred.TextInputFormat - HiveText);
> * The default profile is Hive;
> * If a Hive table has org.apache.hadoop.mapred.TextInputFormat but also has 
> some complex types, the Hive profile should be used (limitation to be 
> addressed in HAWQ-1265);
> * If a table is homogeneous (all input files have the same output format), the 
> Bridge uses the same format as the table; otherwise, if the table is 
> heterogeneous, GPDBWritable should be used;





[jira] (HAWQ-1228) Use profile based on file format in HCatalog integration (HiveRC, HiveText profiles)

2017-01-31 Thread Oleksandr Diachenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Diachenko updated HAWQ-1228:
--
Description: 
To leverage the changes introduced in HAWQ-1177, expand the optimization to the 
other Hive profiles. Additional information needs to be included in the user 
metadata (e.g. DELIMITER, etc.).

Changes needed:
* Enhance the Metadata API to add new attributes: outputFormats, 
outputParameters;
* The Hive and HiveORC profiles should support just the GPDBWritable format;
* The HiveText and HiveRC profiles should support both the TEXT and GPDBWritable 
formats;
* Unify the HiveUserData data structure to be the same across all Hive profiles;
* The Bridge should read fragments using the optimal profile taken from the 
fragment information;
* The optimal profile should be determined from the file's input format 
(org.apache.hadoop.hive.ql.io.orc.OrcInputFormat - HiveORC, 
org.apache.hadoop.hive.ql.io.RCFileInputFormat - HiveRC, 
org.apache.hadoop.mapred.TextInputFormat - HiveText);
* The default profile is Hive;
* If a Hive table has org.apache.hadoop.mapred.TextInputFormat but also has some 
complex types, the Hive profile should be used;
* If a table is homogeneous (all input files have the same output format), the 
Bridge uses the same format as the table; otherwise, if the table is 
heterogeneous, GPDBWritable should be used;

  was:
To leverage the changes introduced in HAWQ-1177, expand the optimization to the 
other Hive profiles. Additional information needs to be included in the user 
metadata (e.g. DELIMITER, etc.).
The change should support homogeneous tables only for now. A homogeneous table 
in this context means a table which has no partitions, or whose partitions all 
use the same storage format. For heterogeneous tables HAWQ should still use the 
Hive profile.


> Use profile based on file format in HCatalog integration (HiveRC, HiveText 
> profiles)
> ---
>
> Key: HAWQ-1228
> URL: https://issues.apache.org/jira/browse/HAWQ-1228
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
>
> To leverage the changes introduced in HAWQ-1177, expand the optimization to 
> the other Hive profiles. Additional information needs to be included in the 
> user metadata (e.g. DELIMITER, etc.).
> Changes needed:
> * Enhance the Metadata API to add new attributes: outputFormats, 
> outputParameters;
> * The Hive and HiveORC profiles should support just the GPDBWritable format;
> * The HiveText and HiveRC profiles should support both the TEXT and 
> GPDBWritable formats;
> * Unify the HiveUserData data structure to be the same across all Hive 
> profiles;
> * The Bridge should read fragments using the optimal profile taken from the 
> fragment information;
> * The optimal profile should be determined from the file's input format 
> (org.apache.hadoop.hive.ql.io.orc.OrcInputFormat - HiveORC, 
> org.apache.hadoop.hive.ql.io.RCFileInputFormat - HiveRC, 
> org.apache.hadoop.mapred.TextInputFormat - HiveText);
> * The default profile is Hive;
> * If a Hive table has org.apache.hadoop.mapred.TextInputFormat but also has 
> some complex types, the Hive profile should be used;
> * If a table is homogeneous (all input files have the same output format), the 
> Bridge uses the same format as the table; otherwise, if the table is 
> heterogeneous, GPDBWritable should be used;





[jira] (HAWQ-1302) PXF RPM install does not copy correct classpath

2017-01-31 Thread Shivram Mani (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15847405#comment-15847405
 ] 

Shivram Mani commented on HAWQ-1302:


We have introduced an additional pxf-private.classpath file to serve the purpose 
of a non-distribution (HDP) setup or an installation without the PXF RPM.

On investigating this, the following occurs as part of the PXF RPM creation:
{code}
from("src/main/resources/pxf-private${hddist}.classpath") {
    into("/etc/pxf-${project.version}/conf")
    rename("pxf-private${hddist}.classpath", "pxf-private.classpath")
}
{code}

The above rename action would fail, and the PXF webapp would then use a 
classpath file not intended for HDP-based installations.
This will need to be fixed so that the above action overrides the existing 
pxf-private.classpath file.

> PXF RPM install does not copy correct classpath
> ---
>
> Key: HAWQ-1302
> URL: https://issues.apache.org/jira/browse/HAWQ-1302
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Shivram Mani
> Fix For: 2.1.0.0-incubating
>
>
> Since the changes in 
> [HAWQ-1297|https://issues.apache.org/jira/browse/HAWQ-1297], the new 
> pxf-private.classpath causes the distribution-specific classpath file 
> pxf-private[distro].classpath to not be successfully renamed by Gradle to 
> pxf-private.classpath.





[jira] (HAWQ-1302) PXF RPM install does not copy correct classpath

2017-01-31 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal reassigned HAWQ-1302:
---

Assignee: Shivram Mani  (was: Ed Espino)

> PXF RPM install does not copy correct classpath
> ---
>
> Key: HAWQ-1302
> URL: https://issues.apache.org/jira/browse/HAWQ-1302
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Shivram Mani
> Fix For: 2.1.0.0-incubating
>
>
> Since the changes in 
> [HAWQ-1297|https://issues.apache.org/jira/browse/HAWQ-1297], the new 
> pxf-private.classpath causes the distribution-specific classpath file 
> pxf-private[distro].classpath to not be successfully renamed by Gradle to 
> pxf-private.classpath.





[GitHub] incubator-hawq pull request #1102: HAWQ-1297. Make PXF install ready from so...

2017-01-31 Thread shivzone
Github user shivzone closed the pull request at:

https://github.com/apache/incubator-hawq/pull/1102


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---