[jira] [Commented] (HAWQ-781) Move src/postgres to depends/thirdparty/postgres
[ https://issues.apache.org/jira/browse/HAWQ-781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15320042#comment-15320042 ]

ASF GitHub Bot commented on HAWQ-781:
-------------------------------------

Github user paul-guo- closed the pull request at:

    https://github.com/apache/incubator-hawq/pull/694

> Move src/postgres to depends/thirdparty/postgres
> ------------------------------------------------
>
> Key: HAWQ-781
> URL: https://issues.apache.org/jira/browse/HAWQ-781
> Project: Apache HAWQ
> Issue Type: Improvement
> Components: Build
> Reporter: Paul Guo
> Assignee: Paul Guo
> Fix For: 2.0.0-beta-incubating
>
> We discussed this offline. We git-submoduled src/postgres; its only purpose
> is to support the pgcrypto functionality. It looks a bit ugly to store the
> upstream postgres code under src/, so we'd better put it under /depends.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
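[Editor's note, not part of the ticket: the relocation described above can be
sketched as follows, assuming git >= 1.8.5, where `git mv` understands
submodules and rewrites .gitmodules itself. The scratch superproject and
submodule below are stand-ins for incubator-hawq and src/postgres.]

```shell
# Sketch (hedged): relocating a git submodule with git >= 1.8.5.
# A throwaway superproject + submodule stand in for the real repos.
set -e
tmp=$(mktemp -d)

# Build a submodule repo with one commit.
git init -q "$tmp/sub"
git -C "$tmp/sub" -c user.name=t -c user.email=t@t commit -q --allow-empty -m init

# Build the superproject and add the submodule at src/postgres.
# protocol.file.allow=always permits local-path submodule clones on newer git;
# older git ignores the unknown key.
git init -q "$tmp/main"
cd "$tmp/main"
git -c protocol.file.allow=always submodule add "$tmp/sub" src/postgres

# The actual move from the ticket: git mv on a submodule updates the working
# tree, the index gitlink, and the path recorded in .gitmodules.
mkdir -p depends/thirdparty
git mv src/postgres depends/thirdparty/postgres

grep 'path = ' .gitmodules   # now points at depends/thirdparty/postgres
```

[On git older than 1.8.5 one would instead edit the `path` entry in
.gitmodules by hand, move the directory, and run `git submodule sync`.]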
[jira] [Commented] (HAWQ-780) Remove quicklz compression related code but keep related meta data in short term.
[ https://issues.apache.org/jira/browse/HAWQ-780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15320041#comment-15320041 ]

ASF GitHub Bot commented on HAWQ-780:
-------------------------------------

Github user paul-guo- closed the pull request at:

    https://github.com/apache/incubator-hawq/pull/692

> Remove quicklz compression related code but keep related meta data in short
> term.
> ---------------------------------------------------------------------------
>
> Key: HAWQ-780
> URL: https://issues.apache.org/jira/browse/HAWQ-780
> Project: Apache HAWQ
> Issue Type: Bug
> Components: Storage
> Reporter: Paul Guo
> Assignee: Paul Guo
> Fix For: 2.0.0-beta-incubating
>
> To avoid a potential license issue, we'd better remove it. Given we have
> snappy support now, there is no problem doing this.
[GitHub] incubator-hawq pull request #692: HAWQ-780. Remove quicklz compression relat...
Github user paul-guo- closed the pull request at:

    https://github.com/apache/incubator-hawq/pull/692

---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---
[jira] [Closed] (HAWQ-791) remove parquet related test from installcheck-good
[ https://issues.apache.org/jira/browse/HAWQ-791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhenglin tao closed HAWQ-791.
-----------------------------

> remove parquet related test from installcheck-good
> --------------------------------------------------
>
> Key: HAWQ-791
> URL: https://issues.apache.org/jira/browse/HAWQ-791
> Project: Apache HAWQ
> Issue Type: Test
> Components: Tests
> Reporter: zhenglin tao
> Assignee: zhenglin tao
> Fix For: 2.0.0
[jira] [Resolved] (HAWQ-791) remove parquet related test from installcheck-good
[ https://issues.apache.org/jira/browse/HAWQ-791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhenglin tao resolved HAWQ-791.
-------------------------------
    Resolution: Fixed
    Fix Version/s: 2.0.0

> remove parquet related test from installcheck-good
> --------------------------------------------------
[jira] [Assigned] (HAWQ-793) Temporarily remove the snappy info in metadata but keep the snappy support for row oriented storage.
[ https://issues.apache.org/jira/browse/HAWQ-793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Paul Guo reassigned HAWQ-793:
-----------------------------
    Assignee: Paul Guo (was: Lei Chang)

> Temporarily remove the snappy info in metadata but keep the snappy support
> for row oriented storage.
> --------------------------------------------------------------------------
>
> Key: HAWQ-793
> URL: https://issues.apache.org/jira/browse/HAWQ-793
> Project: Apache HAWQ
> Issue Type: Bug
> Components: Storage
> Reporter: Paul Guo
> Assignee: Paul Guo
>
> In HAWQ-774 we added snappy support for the row oriented storage; however,
> to make the change friendlier to upgrades, we temporarily need a hack to
> keep the related metadata unmodified.
[jira] [Comment Edited] (HAWQ-780) Remove quicklz compression related code but keep related meta data in short term.
[ https://issues.apache.org/jira/browse/HAWQ-780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15318031#comment-15318031 ]

Paul Guo edited comment on HAWQ-780 at 6/8/16 5:32 AM:
-------------------------------------------------------

We do not delete the related entries in the pg_compression and pg_proc
tables, to be more upgrade-friendly. This is a short-term plan; in the long
run, we will remove them from the tables as well.

was (Author: paul guo):
We do not delete related entries in pg_compresion, pg_proc tables to be more
upgrading friendly. This is a short plan. In the long run, we will remote
them from the tables also.

> Remove quicklz compression related code but keep related meta data in short
> term.
> ---------------------------------------------------------------------------
[jira] [Updated] (HAWQ-781) Move src/postgres to depends/thirdparty/postgres
[ https://issues.apache.org/jira/browse/HAWQ-781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Paul Guo updated HAWQ-781:
--------------------------
    Fix Version/s: 2.0.0-beta-incubating

> Move src/postgres to depends/thirdparty/postgres
> ------------------------------------------------
[jira] [Closed] (HAWQ-781) Move src/postgres to depends/thirdparty/postgres
[ https://issues.apache.org/jira/browse/HAWQ-781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Paul Guo closed HAWQ-781.
-------------------------
    Resolution: Fixed

> Move src/postgres to depends/thirdparty/postgres
> ------------------------------------------------
[GitHub] incubator-hawq issue #654: Explicitly initialize GPOPT and its dependencies.
Github user wengyanqing commented on the issue:

    https://github.com/apache/incubator-hawq/pull/654

    The PR has been reverted because the compile failed. Please check the
    build with ORCA on. After you fix it, I'll merge it again.
[jira] [Commented] (HAWQ-781) Move src/postgres to depends/thirdparty/postgres
[ https://issues.apache.org/jira/browse/HAWQ-781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15320030#comment-15320030 ]

ASF GitHub Bot commented on HAWQ-781:
-------------------------------------

Github user yaoj2 commented on the issue:

    https://github.com/apache/incubator-hawq/pull/694

    +1

> Move src/postgres to depends/thirdparty/postgres
> ------------------------------------------------
[GitHub] incubator-hawq issue #694: HAWQ-781. Move src/postgres to depends/thirdparty...
Github user yaoj2 commented on the issue:

    https://github.com/apache/incubator-hawq/pull/694

    +1
[jira] [Created] (HAWQ-794) Add back snappy to related system tables in the future
Paul Guo created HAWQ-794:
--------------------------
    Summary: Add back snappy to related system tables in the future
    Key: HAWQ-794
    URL: https://issues.apache.org/jira/browse/HAWQ-794
    Project: Apache HAWQ
    Issue Type: Bug
    Components: Storage
    Reporter: Paul Guo
    Assignee: Lei Chang

See HAWQ-793 for the context.
[jira] [Created] (HAWQ-793) Temporarily remove the snappy info in metadata but keep the snappy support for row oriented storage.
Paul Guo created HAWQ-793:
--------------------------
    Summary: Temporarily remove the snappy info in metadata but keep the snappy support for row oriented storage.
    Key: HAWQ-793
    URL: https://issues.apache.org/jira/browse/HAWQ-793
    Project: Apache HAWQ
    Issue Type: Bug
    Components: Storage
    Reporter: Paul Guo
    Assignee: Lei Chang

In HAWQ-774 we added snappy support for the row oriented storage; however, to
make the change friendlier to upgrades, we temporarily need a hack to keep
the related metadata unmodified.
[jira] [Commented] (HAWQ-781) Move src/postgres to depends/thirdparty/postgres
[ https://issues.apache.org/jira/browse/HAWQ-781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15320006#comment-15320006 ]

ASF GitHub Bot commented on HAWQ-781:
-------------------------------------

Github user radarwave commented on the issue:

    https://github.com/apache/incubator-hawq/pull/694

    +1

> Move src/postgres to depends/thirdparty/postgres
> ------------------------------------------------
[GitHub] incubator-hawq issue #694: HAWQ-781. Move src/postgres to depends/thirdparty...
Github user radarwave commented on the issue:

    https://github.com/apache/incubator-hawq/pull/694

    +1
[jira] [Closed] (HAWQ-774) Add snappy compression support to row oriented storage
[ https://issues.apache.org/jira/browse/HAWQ-774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Paul Guo closed HAWQ-774.
-------------------------
    Resolution: Fixed

> Add snappy compression support to row oriented storage
> ------------------------------------------------------
>
> Key: HAWQ-774
> URL: https://issues.apache.org/jira/browse/HAWQ-774
> Project: Apache HAWQ
> Issue Type: New Feature
> Components: Storage
> Reporter: Paul Guo
> Assignee: Paul Guo
> Labels: oss
> Fix For: 2.0.0-beta-incubating
>
> We'd better remove the quicklz compression for the license reason, so we
> need a new compression algorithm with high compression speed and a
> reasonable compression ratio. Google snappy is a good choice.
[GitHub] incubator-hawq pull request #697: HAWQ-792. Orca on causes different error m...
GitHub user jiny2 opened a pull request:

    https://github.com/apache/incubator-hawq/pull/697

    HAWQ-792. Orca on causes different error message when insert null value to a not null col

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/jiny2/incubator-hawq HAWQ-792

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-hawq/pull/697.patch

To close this pull request, make a commit to your master/trunk branch with
(at least) the following in the commit message:

    This closes #697

commit 1009234c468d68ff6b8967155b13e3597ef44a14
Author: YI JIN
Date:   2016-06-08T04:32:34Z

    HAWQ-792. Orca on causes different error message when insert null value to a not null col
[jira] [Commented] (HAWQ-792) Orca on causes different error message when insert null value to a not null col
[ https://issues.apache.org/jira/browse/HAWQ-792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319979#comment-15319979 ]

ASF GitHub Bot commented on HAWQ-792:
-------------------------------------

GitHub user jiny2 opened a pull request:

    https://github.com/apache/incubator-hawq/pull/697

    HAWQ-792. Orca on causes different error message when insert null value to a not null col

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/jiny2/incubator-hawq HAWQ-792

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-hawq/pull/697.patch

To close this pull request, make a commit to your master/trunk branch with
(at least) the following in the commit message:

    This closes #697

commit 1009234c468d68ff6b8967155b13e3597ef44a14
Author: YI JIN
Date:   2016-06-08T04:32:34Z

    HAWQ-792. Orca on causes different error message when insert null value to a not null col

> Orca on causes different error message when insert null value to a not null
> col
> ---------------------------------------------------------------------------
>
> Key: HAWQ-792
> URL: https://issues.apache.org/jira/browse/HAWQ-792
> Project: Apache HAWQ
> Issue Type: Bug
> Components: Core
> Reporter: Yi Jin
> Assignee: Yi Jin
>
> INSERT INTO serialtest VALUES('wrong',NULL);
> psql:/tmp/TestQuerySequence_TestSequenceCreateSerialColumn.sql:4: ERROR: One
> or more assertions failed (seg0 localhost:4 pid=11362)
> DETAIL: Not null constraint for column f2 of table serialtest was violated
[jira] [Created] (HAWQ-792) Orca on causes different error message when insert null value to a not null col
Yi Jin created HAWQ-792:
------------------------
    Summary: Orca on causes different error message when insert null value to a not null col
    Key: HAWQ-792
    URL: https://issues.apache.org/jira/browse/HAWQ-792
    Project: Apache HAWQ
    Issue Type: Bug
    Components: Core
    Reporter: Yi Jin
    Assignee: Lei Chang

INSERT INTO serialtest VALUES('wrong',NULL);
psql:/tmp/TestQuerySequence_TestSequenceCreateSerialColumn.sql:4: ERROR: One or more assertions failed (seg0 localhost:4 pid=11362)
DETAIL: Not null constraint for column f2 of table serialtest was violated
[jira] [Assigned] (HAWQ-792) Orca on causes different error message when insert null value to a not null col
[ https://issues.apache.org/jira/browse/HAWQ-792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yi Jin reassigned HAWQ-792:
---------------------------
    Assignee: Yi Jin (was: Lei Chang)

> Orca on causes different error message when insert null value to a not null
> col
> ---------------------------------------------------------------------------
[jira] [Commented] (HAWQ-713) lc_numeric guc doesn't behave as expected after some time
[ https://issues.apache.org/jira/browse/HAWQ-713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319904#comment-15319904 ]

ASF GitHub Bot commented on HAWQ-713:
-------------------------------------

Github user wengyanqing commented on the issue:

    https://github.com/apache/incubator-hawq/pull/652

    merge done

> lc_numeric guc doesn't behave as expected after some time
> ---------------------------------------------------------
>
> Key: HAWQ-713
> URL: https://issues.apache.org/jira/browse/HAWQ-713
> Project: Apache HAWQ
> Issue Type: Bug
> Components: Core
> Reporter: Karthikeyan Jambu Rajaraman
> Assignee: Lei Chang
> Attachments: lc_numeric_check.out, lc_numeric_check.sql
>
> Create a simple table with a '1,1' value in it, whose interpretation changes
> based on lc_numeric: by default we expect 11, but 1.1 under de_DE.
> {code}
> gpadmin=# \d tbl_lc_numeric_test
> Table "public.tbl_lc_numeric_test"
> Column | Type                  | Modifiers
> -------+-----------------------+----------
> a      | text                  |
> s      | character varying(50) |
> Distributed by: (a)
> gpadmin=# select * from tbl_lc_numeric_test;
>   a  | s
> -----+-----------
>  3   | blablabla
>  1,1 | bla
>  2   | blabla
> (3 rows)
> {code}
> When lc_numeric is changed to 'de_DE.utf8' as shown below, '1,1' is printed
> as '1.1' as expected.
> {code}
> gpadmin=# set lc_numeric='de_DE.utf8';
> SET
> gpadmin=# show lc_numeric;
> lc_numeric
> ------------
> de_DE.utf8
> (1 row)
> gpadmin=# \echo `date`
> Thu Apr 21 10:05:00 PDT 2016
> gpadmin=# select to_number(a,'99D9')::numeric(10,5), s from tbl_lc_numeric_test;
> to_number | s
> ----------+-----------
>       1.1 | bla
>       3.0 | blablabla
>       2.0 | blabla
> (3 rows)
> {code}
> But if we run the above select again after some time (after the idle gang
> timeout), we see the value 11 instead of 1.1.
> {code}
> gpadmin=# \echo `date`
> Thu Apr 21 10:05:30 PDT 2016
> gpadmin=# select to_number(a,'99D9')::numeric(10,5), s from tbl_lc_numeric_test;
> to_number | s
> ----------+-----------
>       3.0 | blablabla
>       2.0 | blabla
>      11.0 | bla
> (3 rows)
> {code}
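[Editor's note: a toy model of the symptom above, in plain shell rather than
HAWQ code. The `to_number` helper here is hypothetical; it only illustrates
how the same literal "1,1" becomes 1.1 when the comma is treated as the
decimal separator (the de_DE session) but 11 when it is treated as a group
separator and dropped (the default a segment falls back to once the idle
gang is rebuilt and the session GUC is lost).]

```shell
# Toy model of the lc_numeric symptom (hypothetical helper, not HAWQ code):
# parse a value under a given decimal-separator convention.
to_number() {
  value=$1; decimal_sep=$2
  if [ "$decimal_sep" = "," ]; then
    # de_DE-style session: the comma is the decimal point.
    echo "$value" | tr ',' '.'
  else
    # Default session: the comma is a group separator and is dropped.
    echo "$value" | tr -d ','
  fi
}

to_number "1,1" ","   # de_DE session           -> 1.1
to_number "1,1" "."   # after the gang restarts -> 11
```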
[jira] [Commented] (HAWQ-791) remove parquet related test from installcheck-good
[ https://issues.apache.org/jira/browse/HAWQ-791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319903#comment-15319903 ]

ASF GitHub Bot commented on HAWQ-791:
-------------------------------------

Github user ztao1987 closed the pull request at:

    https://github.com/apache/incubator-hawq/pull/696

> remove parquet related test from installcheck-good
> --------------------------------------------------
[GitHub] incubator-hawq issue #652: HAWQ-713 Make lc_numeric guc to have GUC_GPDB_ADD...
Github user wengyanqing commented on the issue:

    https://github.com/apache/incubator-hawq/pull/652

    merge done
[GitHub] incubator-hawq pull request #696: HAWQ-791. remove parquet related test from...
Github user ztao1987 closed the pull request at:

    https://github.com/apache/incubator-hawq/pull/696
[GitHub] incubator-hawq issue #696: HAWQ-791. remove parquet related test from instal...
Github user yaoj2 commented on the issue:

    https://github.com/apache/incubator-hawq/pull/696

    +1
[jira] [Commented] (HAWQ-791) remove parquet related test from installcheck-good
[ https://issues.apache.org/jira/browse/HAWQ-791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319899#comment-15319899 ]

ASF GitHub Bot commented on HAWQ-791:
-------------------------------------

Github user yaoj2 commented on the issue:

    https://github.com/apache/incubator-hawq/pull/696

    +1

> remove parquet related test from installcheck-good
> --------------------------------------------------
[GitHub] incubator-hawq pull request #696: HAWQ-791. remove parquet related test from...
GitHub user ztao1987 opened a pull request:

    https://github.com/apache/incubator-hawq/pull/696

    HAWQ-791. remove parquet related test from installcheck-good.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/ztao1987/incubator-hawq HAWQ-791

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-hawq/pull/696.patch

To close this pull request, make a commit to your master/trunk branch with
(at least) the following in the commit message:

    This closes #696

commit 7f1c6b56bdf7a73c2c10e3990ddcf97d5d6fc79c
Author: ztao1987
Date:   2016-06-08T02:58:14Z

    HAWQ-791. remove parquet related test from installcheck-good.
[jira] [Commented] (HAWQ-791) remove parquet related test from installcheck-good
[ https://issues.apache.org/jira/browse/HAWQ-791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319897#comment-15319897 ]

ASF GitHub Bot commented on HAWQ-791:
-------------------------------------

GitHub user ztao1987 opened a pull request:

    https://github.com/apache/incubator-hawq/pull/696

    HAWQ-791. remove parquet related test from installcheck-good.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/ztao1987/incubator-hawq HAWQ-791

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-hawq/pull/696.patch

To close this pull request, make a commit to your master/trunk branch with
(at least) the following in the commit message:

    This closes #696

commit 7f1c6b56bdf7a73c2c10e3990ddcf97d5d6fc79c
Author: ztao1987
Date:   2016-06-08T02:58:14Z

    HAWQ-791. remove parquet related test from installcheck-good.

> remove parquet related test from installcheck-good
> --------------------------------------------------
[jira] [Created] (HAWQ-791) remove parquet related test from installcheck-good
zhenglin tao created HAWQ-791:
------------------------------
    Summary: remove parquet related test from installcheck-good
    Key: HAWQ-791
    URL: https://issues.apache.org/jira/browse/HAWQ-791
    Project: Apache HAWQ
    Issue Type: Test
    Components: Tests
    Reporter: zhenglin tao
    Assignee: Jiali Yao
[GitHub] incubator-hawq issue #661: Deadcode #119780063
Github user wengyanqing commented on the issue:

    https://github.com/apache/incubator-hawq/pull/661

    @vraghavan78, your PR#661 can't build successfully. Please fix it and
    then I'll merge it. Please also help check the related PR#644 and PR#646.
[jira] [Created] (HAWQ-790) Remove CTranslatorPlStmtToDXL Deadcode [#119102697]
Ivan Weng created HAWQ-790:
---------------------------
    Summary: Remove CTranslatorPlStmtToDXL Deadcode [#119102697]
    Key: HAWQ-790
    URL: https://issues.apache.org/jira/browse/HAWQ-790
    Project: Apache HAWQ
    Issue Type: Task
    Reporter: Ivan Weng
    Assignee: Lei Chang
[jira] [Resolved] (HAWQ-789) Explicitly initialize GPOPT and its dependencies
[ https://issues.apache.org/jira/browse/HAWQ-789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan Weng resolved HAWQ-789.
----------------------------
    Resolution: Fixed

> Explicitly initialize GPOPT and its dependencies
> ------------------------------------------------
>
> Key: HAWQ-789
> URL: https://issues.apache.org/jira/browse/HAWQ-789
> Project: Apache HAWQ
> Issue Type: New Feature
> Reporter: Ivan Weng
> Assignee: Lei Chang
[jira] [Created] (HAWQ-789) Explicitly initialize GPOPT and its dependencies
Ivan Weng created HAWQ-789:
---------------------------
    Summary: Explicitly initialize GPOPT and its dependencies
    Key: HAWQ-789
    URL: https://issues.apache.org/jira/browse/HAWQ-789
    Project: Apache HAWQ
    Issue Type: New Feature
    Reporter: Ivan Weng
    Assignee: Lei Chang
[jira] [Created] (HAWQ-788) Explicitly initialize GPOPT and its dependencies
Ivan Weng created HAWQ-788:
---------------------------
    Summary: Explicitly initialize GPOPT and its dependencies
    Key: HAWQ-788
    URL: https://issues.apache.org/jira/browse/HAWQ-788
    Project: Apache HAWQ
    Issue Type: New Feature
    Reporter: Ivan Weng
    Assignee: Lei Chang
[jira] [Created] (HAWQ-787) Remove QP Deadcode
Ivan Weng created HAWQ-787:
---------------------------
    Summary: Remove QP Deadcode
    Key: HAWQ-787
    URL: https://issues.apache.org/jira/browse/HAWQ-787
    Project: Apache HAWQ
    Issue Type: Task
    Reporter: Ivan Weng
    Assignee: Lei Chang
[jira] [Commented] (HAWQ-779) support more pxf filter pushdown
[ https://issues.apache.org/jira/browse/HAWQ-779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319829#comment-15319829 ]

Devin Jia commented on HAWQ-779:
--------------------------------

[~GodenYao] I have already created a pull request. As for the plug-ins I
developed (pxf-jdbc & pxf-solr: https://github.com/inspur-insight/pxf-plugin),
can they be merged into the main branch?

> support more pxf filter pushdown
> --------------------------------
>
> Key: HAWQ-779
> URL: https://issues.apache.org/jira/browse/HAWQ-779
> Project: Apache HAWQ
> Issue Type: Improvement
> Components: PXF
> Reporter: Devin Jia
> Assignee: Goden Yao
> Fix For: backlog
>
> When I use pxf in hawq, I need to read traditional relational database
> systems and solr via external tables. The project
> https://github.com/Pivotal-Field-Engineering/pxf-field/tree/master/jdbc-pxf-ext
> has only a "WriteAccessor", so I developed 2 plug-ins (project:
> https://github.com/inspur-insight/pxf-plugin). But these two plug-ins need a
> modified HAWQ:
> 1. When getting the list of fragments from the pxf service, push down the
> 'filterString'. Modify the create_pxf_plan method in
> backend/optimizer/plan/createplan.c:
> segdb_work_map = map_hddata_2gp_segments(uri_str,
>     total_segs, segs_participating,
>     relation, ctx->root->parse->jointree->quals);
> 2. Modify pxffilters.h and pxffilters.c to support the LIKE operation for
> TEXT types, operators for Date type data, and operators for Float type.
> 3. Modify org.apache.hawq.pxf.api.FilterParser.java to support the LIKE
> operator.
> I have already created a feature branch locally and tested it.
[jira] [Commented] (HAWQ-779) support more pxf filter pushdown
[ https://issues.apache.org/jira/browse/HAWQ-779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15319818#comment-15319818 ]

ASF GitHub Bot commented on HAWQ-779:
-------------------------------------

GitHub user jiadexin opened a pull request:

    https://github.com/apache/incubator-hawq/pull/695

    support more pxf filter pushdown

    https://issues.apache.org/jira/browse/HAWQ-779

    Description: When I use pxf in hawq, I need to read traditional
    relational database systems and solr via external tables. The project
    https://github.com/Pivotal-Field-Engineering/pxf-field/tree/master/jdbc-pxf-ext
    has only a "WriteAccessor", so I developed 2 plug-ins (project:
    https://github.com/inspur-insight/pxf-plugin). But these two plug-ins
    need a modified HAWQ:
    1. When getting the list of fragments from the pxf service, push down
    the 'filterString'. Modify the create_pxf_plan method in
    backend/optimizer/plan/createplan.c:
    segdb_work_map = map_hddata_2gp_segments(uri_str,
        total_segs, segs_participating,
        relation, ctx->root->parse->jointree->quals);
    2. Modify pxffilters.h and pxffilters.c to support the LIKE operation
    for TEXT types, operators for Date type data, and operators for Float
    type.
    3. Modify org.apache.hawq.pxf.api.FilterParser.java to support the LIKE
    operator.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/inspur-insight/incubator-hawq feature-pxf

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-hawq/pull/695.patch

To close this pull request, make a commit to your master/trunk branch with
(at least) the following in the commit message:

    This closes #695

commit fb00bd27021fbabc98c1c940ebb890e974496500
Author: root
Date:   2016-06-07T01:16:06Z

    HAWQ-779 Support more pxf filter pushdwon

commit caa20039e73589112c48c20a3f78c4a8f7b1f2d6
Author: Devin Jia
Date:   2016-06-08T01:04:08Z

    HAWQ-779 support more pxf filter pushdwon

> support more pxf filter pushdown
> --------------------------------
[GitHub] incubator-hawq pull request #695: support more pxf filter pushdown
GitHub user jiadexin opened a pull request: https://github.com/apache/incubator-hawq/pull/695 support more pxf filter pushdown https://issues.apache.org/jira/browse/HAWQ-779 Description: When I use PXF with HAWQ, I need to read traditional relational database systems and Solr through external tables. The project https://github.com/Pivotal-Field-Engineering/pxf-field/tree/master/jdbc-pxf-ext provides only a "WriteAccessor", so I developed two plug-ins: https://github.com/inspur-insight/pxf-plugin . However, these two plug-ins need HAWQ to be modified: 1. When getting the list of fragments from the PXF service, push down the 'filterString'. Modify the create_pxf_plan method in backend/optimizer/plan/createplan.c: segdb_work_map = map_hddata_2gp_segments(uri_str, total_segs, segs_participating, relation, ctx->root->parse->jointree->quals); 2. Modify pxffilters.h and pxffilters.c to support the LIKE operation on TEXT types and operators on Date and Float types. 3. Modify org.apache.hawq.pxf.api.FilterParser.java to support the LIKE operator. You can merge this pull request into a Git repository by running: $ git pull https://github.com/inspur-insight/incubator-hawq feature-pxf Alternatively you can review and apply these changes as the patch at: https://github.com/apache/incubator-hawq/pull/695.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #695 commit fb00bd27021fbabc98c1c940ebb890e974496500 Author: root Date: 2016-06-07T01:16:06Z HAWQ-779 Support more pxf filter pushdown commit caa20039e73589112c48c20a3f78c4a8f7b1f2d6 Author: Devin Jia Date: 2016-06-08T01:04:08Z HAWQ-779 support more pxf filter pushdown
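[Editor's note] The pushdown described in this pull request hinges on serializing the planner's quals into a 'filterString' that the PXF service can parse back into predicates, including a LIKE predicate for TEXT columns. As a rough illustration only (the token format below is hypothetical and is not PXF's actual FilterParser encoding), a minimal parser for such a string could look like this:

```python
# Minimal sketch of decoding a pushed-down filter string into predicates.
# Format (hypothetical): space-separated triples "a<col> c<const> o<op>",
# where "o7" stands for LIKE. PXF's real FilterParser uses its own encoding.

OPS = {"1": "<", "2": ">", "3": "<=", "4": ">=", "5": "=", "6": "<>", "7": "LIKE"}

def parse_filter(filter_string):
    """Turn e.g. 'a0 cfoo% o7' into [('col0', 'LIKE', 'foo%')]."""
    predicates = []
    tokens = filter_string.split()
    for i in range(0, len(tokens), 3):
        col, const, op = tokens[i:i + 3]
        # Each token is tagged by its first character: a=attribute,
        # c=constant, o=operator code.
        assert col[0] == "a" and const[0] == "c" and op[0] == "o"
        predicates.append(("col" + col[1:], OPS[op[1:]], const[1:]))
    return predicates

print(parse_filter("a0 cfoo% o7"))  # [('col0', 'LIKE', 'foo%')]
```

The point of the sketch is only that LIKE is one more operator code in the serialized form; the external plug-in can then translate it into a native query (SQL LIKE, or a Solr wildcard search).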
[GitHub] incubator-hawq pull request #:
Github user jiadexin commented on the pull request: https://github.com/apache/incubator-hawq/commit/caa20039e73589112c48c20a3f78c4a8f7b1f2d6#commitcomment-17780830 HAWQ-779. support more pxf filter pushdown
[jira] [Commented] (HAWQ-779) support more pxf filter pushdown
[ https://issues.apache.org/jira/browse/HAWQ-779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319816#comment-15319816 ] ASF GitHub Bot commented on HAWQ-779: - Github user jiadexin commented on the pull request: https://github.com/apache/incubator-hawq/commit/caa20039e73589112c48c20a3f78c4a8f7b1f2d6#commitcomment-17780830 HAWQ-779. support more pxf filter pushdown > support more pxf filter pushdown > - > > Key: HAWQ-779 > URL: https://issues.apache.org/jira/browse/HAWQ-779 > Project: Apache HAWQ > Issue Type: Improvement > Components: PXF > Reporter: Devin Jia > Assignee: Goden Yao > Fix For: backlog > > I have already created a feature branch locally and tested it.
[jira] [Created] (HAWQ-786) Framework to support pluggable format with C/C++ scanner
Lei Chang created HAWQ-786: -- Summary: Framework to support pluggable format with C/C++ scanner Key: HAWQ-786 URL: https://issues.apache.org/jira/browse/HAWQ-786 Project: Apache HAWQ Issue Type: New Feature Components: Storage Reporter: Lei Chang Assignee: Lei Chang In the current HAWQ, two native formats are supported: AO and Parquet. Now we want to support ORC, so a framework supporting pluggable formats with a C/C++ scanner is needed. It can also potentially be used for fast external data access.
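[Editor's note] A pluggable-format framework of the kind proposed here typically reduces to a registry that maps a format name (AO, Parquet, ORC, ...) to a scanner implementation behind one common read interface. The sketch below is illustrative only; the class and function names are invented, not the actual HAWQ design:

```python
# Sketch of a pluggable-scanner registry: each storage format registers a
# scanner class behind a single common interface.
class Scanner:
    def scan(self, path):
        raise NotImplementedError

_SCANNERS = {}

def register_format(name):
    """Class decorator that records a scanner under a format name."""
    def wrap(cls):
        _SCANNERS[name] = cls
        return cls
    return wrap

@register_format("orc")
class OrcScanner(Scanner):
    def scan(self, path):
        return f"scanning {path} as ORC"

def open_scanner(fmt):
    """Look up the scanner for a format; unknown formats fail loudly."""
    try:
        return _SCANNERS[fmt]()
    except KeyError:
        raise ValueError(f"no scanner registered for format {fmt!r}")

print(open_scanner("orc").scan("/data/t1"))  # scanning /data/t1 as ORC
```

With such a registry, adding ORC (or a fast external-data scanner) is a matter of registering one more class rather than touching the executor.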
[jira] [Commented] (HAWQ-762) Hive aggregation queries through PXF sometimes hang
[ https://issues.apache.org/jira/browse/HAWQ-762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319745#comment-15319745 ] Oleksandr Diachenko commented on HAWQ-762: -- [~michael.andre.pearce] could you please help reproduce this issue? Which cluster topology, table definition, and PXF heap size are you using? > Hive aggregation queries through PXF sometimes hang > --- > > Key: HAWQ-762 > URL: https://issues.apache.org/jira/browse/HAWQ-762 > Project: Apache HAWQ > Issue Type: Bug > Components: Hcatalog, PXF > Reporter: Oleksandr Diachenko > Assignee: Goden Yao > Labels: performance > > Reproduce steps: > {code} > select count(*) from hcatalog.default.hivetable; > {code} > Sometimes this query hangs, and the PXF logs show that the Hive Thrift server cannot be reached from the PXF agent, > while users can still access the Hive metastore (through HUE) and execute the same query. > After a restart of the PXF agent, the query goes through without issues.
[jira] [Created] (HAWQ-785) Failure running `make -j8 all`
Kavinder Dhaliwal created HAWQ-785: -- Summary: Failure running `make -j8 all` Key: HAWQ-785 URL: https://issues.apache.org/jira/browse/HAWQ-785 Project: Apache HAWQ Issue Type: Bug Components: libhdfs Reporter: Kavinder Dhaliwal Assignee: Lei Chang I am trying to build hawq in a local OS X 10.10 environment with gcc version 5.3.0. I can successfully run {code} ./configure CFLAGS="-O3 -g" CXXFLAGS="-O3 -g" LDFLAGS= --with-pgport=5432 --with-libedit-preferred --enable-email --enable-snmp --with-perl --with-python --with-java --with-openssl --with-pam --without-krb5 --with-gssapi --with-ldap --with-r --with-pgcrypto --enable-orca --prefix=~/hawq_install/ {code} However, when I run `make -j8 all` I get many errors related to building libhdfs3, such as {code} Undefined symbols for architecture x86_64: "google::protobuf::MessageFactory::InternalRegisterGeneratedFile(char const*, void (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&))", referenced from: Hdfs::Internal::protobuf_AddDesc_ClientDatanodeProtocol_2eproto() in ClientDatanodeProtocol.pb.cc.o Hdfs::Internal::protobuf_AddDesc_ClientNamenodeProtocol_2eproto() in ClientNamenodeProtocol.pb.cc.o Hdfs::Internal::protobuf_AddDesc_datatransfer_2eproto() in datatransfer.pb.cc.o Hdfs::Internal::protobuf_AddDesc_hdfs_2eproto() in hdfs.pb.cc.o Hdfs::Internal::protobuf_AddDesc_IpcConnectionContext_2eproto() in IpcConnectionContext.pb.cc.o Hdfs::Internal::protobuf_AddDesc_ProtobufRpcEngine_2eproto() in ProtobufRpcEngine.pb.cc.o Hdfs::Internal::protobuf_AddDesc_RpcHeader_2eproto() in RpcHeader.pb.cc.o ... ld: symbol(s) not found for architecture x86_64 collect2: error: ld returned 1 exit status make[4]: *** [src/libhdfs3.2.2.31.dylib] Error 1 make[3]: *** [src/CMakeFiles/libhdfs3-shared.dir/all] Error 2 {code}
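[Editor's note] The `std::__cxx11` in the missing symbols is a telling detail: it is the inline-namespace tag of the new libstdc++ dual ABI introduced with GCC 5. A plausible reading (my diagnosis, not confirmed in the ticket) is that the linked protobuf library was built with a pre-GCC-5 compiler or with the old ABI, while libhdfs3 was compiled by GCC 5.3 with the new one. One way to check is to scan the mangled symbols of both sides for the tag, as this small helper does:

```python
# Heuristic check for the GCC 5 dual-ABI mismatch: symbols compiled against
# the new std::string ABI carry "__cxx11" (demangled) or "B5cxx11" (mangled)
# in their names. Feed it the output of `nm <object-or-library>`.
def uses_new_string_abi(nm_output):
    """Return True if any symbol line carries the C++11-ABI tag."""
    return any("__cxx11" in line or "B5cxx11" in line
               for line in nm_output.splitlines())
```

If the libhdfs3 objects report True while the installed libprotobuf reports False, rebuilding protobuf with the same compiler, or building HAWQ with `-D_GLIBCXX_USE_CXX11_ABI=0`, would reconcile the two; again, this is a hedged suggestion based only on the symbol names above.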
[jira] [Comment Edited] (HAWQ-779) support more pxf filter pushdown
[ https://issues.apache.org/jira/browse/HAWQ-779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319137#comment-15319137 ] Goden Yao edited comment on HAWQ-779 at 6/7/16 7:07 PM: [~jiadx] Filter was earlier added in the PXF framework primarily to handle predicate push-down operations supported by the PXF HBase plugin. Adding support for LIKE is definitely useful within the PXF framework, especially in the context of a free-text index like Solr. Please do provide the PR as Goden suggested. was (Author: shivram): [~jiadx] Filter was earlier added in the PXF framework parimarily to handle predicate push down operations supported by PXF HBase plugin. Adding support for LIKE is definitely useful within the PXF framework especially in the context of free text index like Solr. Please do provide the PR as Goden suggested. > support more pxf filter pushdown > - > > Key: HAWQ-779 > URL: https://issues.apache.org/jira/browse/HAWQ-779 > Project: Apache HAWQ > Issue Type: Improvement > Components: PXF > Reporter: Devin Jia > Assignee: Goden Yao > Fix For: backlog > > I have already created a feature branch locally and tested it.
[jira] [Commented] (HAWQ-779) support more pxf filter pushdown
[ https://issues.apache.org/jira/browse/HAWQ-779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319137#comment-15319137 ] Shivram Mani commented on HAWQ-779: --- [~jiadx] Filter was earlier added in the PXF framework primarily to handle predicate push-down operations supported by the PXF HBase plugin. Adding support for LIKE is definitely useful within the PXF framework, especially in the context of a free-text index like Solr. Please do provide the PR as Goden suggested. > support more pxf filter pushdown > - > > Key: HAWQ-779 > URL: https://issues.apache.org/jira/browse/HAWQ-779 > Project: Apache HAWQ > Issue Type: Improvement > Components: PXF > Reporter: Devin Jia > Assignee: Goden Yao > Fix For: backlog > > I have already created a feature branch locally and tested it.
[jira] [Commented] (HAWQ-779) support more pxf filter pushdown
[ https://issues.apache.org/jira/browse/HAWQ-779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318836#comment-15318836 ] Goden Yao commented on HAWQ-779: Thanks [~jiadx] for your contribution and interest in PXF plugins. Can you create a PR based on the incubator-hawq code base and put the link in the JIRA? We'll have a community review and potential discussion around this change. > support more pxf filter pushdown > - > > Key: HAWQ-779 > URL: https://issues.apache.org/jira/browse/HAWQ-779 > Project: Apache HAWQ > Issue Type: Improvement > Components: PXF > Reporter: Devin Jia > Assignee: Goden Yao > Fix For: backlog > > I have already created a feature branch locally and tested it.
[jira] [Updated] (HAWQ-779) support more pxf filter pushdown
[ https://issues.apache.org/jira/browse/HAWQ-779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Goden Yao updated HAWQ-779: --- Fix Version/s: backlog > support more pxf filter pushdown > - > > Key: HAWQ-779 > URL: https://issues.apache.org/jira/browse/HAWQ-779 > Project: Apache HAWQ > Issue Type: Improvement > Components: PXF > Reporter: Devin Jia > Assignee: Goden Yao > Fix For: backlog > > I have already created a feature branch locally and tested it.
[jira] [Updated] (HAWQ-780) Remove quicklz compression related code but keep related meta data in short term.
[ https://issues.apache.org/jira/browse/HAWQ-780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Goden Yao updated HAWQ-780: --- Fix Version/s: 2.0.0-beta-incubating > Remove quicklz compression related code but keep related meta data in short > term. > - > > Key: HAWQ-780 > URL: https://issues.apache.org/jira/browse/HAWQ-780 > Project: Apache HAWQ > Issue Type: Bug > Components: Storage >Reporter: Paul Guo >Assignee: Paul Guo > Fix For: 2.0.0-beta-incubating > > > To avoid potential license issue, we'd better remove it. Given we have snappy > support now, there is no problem to do this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-644) Failure in security ha environment with certain writable external tables
[ https://issues.apache.org/jira/browse/HAWQ-644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318733#comment-15318733 ] ASF GitHub Bot commented on HAWQ-644: - Github user kavinderd commented on the issue: https://github.com/apache/incubator-hawq/pull/604 Merged > Failure in security ha environment with certain writable external tables > > > Key: HAWQ-644 > URL: https://issues.apache.org/jira/browse/HAWQ-644 > Project: Apache HAWQ > Issue Type: Bug > Components: PXF, Security >Reporter: Goden Yao >Assignee: Goden Yao > > In a Secure HA environment: > Few tests which tests writable table fail due to empty dfs_address prior to > getting the delegation token in the segment. > On an initial investigation, the shared_path seems to not be set by the hawq > master. > Log from the specific segment. The hdfs path available in the segment is > empty and hence the failure. > {code} > 2016-04-08 22:53:11.034661 > UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08 > 22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","PXF > received from configuration HA Namenode-1 having rpc-address > and rest-address > ","External table > readable_table, line 1 of > pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: > ","INSERT INTO writable_table SELECT * FROM > readable_table;",0,,"ha_config.c",157, > 2016-04-08 22:53:11.034699 > UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08 > 22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","PXF > received from configuration HA Namenode-2 having rpc-address > and rest-address > ","External table > readable_table, line 1 of > pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: > ","INSERT INTO writable_table SELECT * FROM > readable_table;",0,,"ha_config.c",157, > 2016-04-08 22:53:11.034785 > UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08 > 22:52:25 
UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","locating > token for 0^N\","External table readable_table, line 1 of > pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: > ","INSERT INTO writable_table SELECT * FROM > readable_table;",0,,"gpbridgeapi.c",521, > 2016-04-08 22:53:11.034871 > UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08 > 22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"WARNING","01000","internal > error HdfsParsePath: no filesystem protocol found in path > ""0^N\^A""","External table readable_table, line 1 of > pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: > ","INSERT INTO writable_table SELECT * FROM > readable_table;",0,,"fd.c",2501, > 2016-04-08 22:53:11.035004 > UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08 > 22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG1","0","Deleted > entry for query (sessionid=73, commandcnt=43)",,"INSERT INTO > writable_table SELECT * FROM readable_table;",0,,"workfile_queryspace.c",329, > 2016-04-08 22:53:11.066243 > UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08 > 22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"ERROR","XX000","fail to > parse uri: 0^N\ (cdbfilesystemcredential.c:529)","External table > readable_table, line 1 of > pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: > ","INSERT INTO writable_table SELECT * FROM > readable_table;",0,,"cdbfilesystemcredential.c",529,"Stack trace: > 10x871f8f postgres + 0x871f8f > 20x872679 postgres elog_finish + 0xa9 > 30x99d7b8 postgres find_filesystem_credential_with_uri + 0x78 > 40x7fc935c8f5a2 pxf.so add_delegation_token + 0xa2 > 50x7fc935c8f9b2 pxf.so get_pxf_server + 0xe2 > 60x7fc935c8ff02 pxf.so gpbridge_export_start + 0x42 > 70x7fc935c90170 pxf.so gpbridge_export + 0x50 > 80x507eb8 postgres + 0x507eb8 > 90x5083af postgres url_fwrite + 0x9f > 10 0x5042b4 postgres external_insert + 
0x184 > 11 0x69dcca postgres ExecInsert + 0x1fa > 12 0x69d41c postgres ExecDML + 0x1ec > 13 0x65e185 postgres ExecProcNode + 0x3c5 > 14 0x659f4a postgres + 0x659f4a > 15 0x65a8d3 postgres ExecutorRun + 0x4a3 > 16 0x7b550a postgres + 0x7b550a > 17 0x7b5baf postgres + 0x7b5baf > 18 0x7b6142 postgres PortalRun + 0x342 > 19 0x7b2c21 postgres PostgresMain + 0x3861 > 20 0x763ce3 postgres + 0x763ce3 > 21 0x76443d postgres + 0x76443d > 22 0x76626e postgres PostmasterMain + 0xc7e > 23 0x6c04ea postgres main + 0x48a > 24 0x7fc95f276d5d libc.so.6 __libc_start_main + 0xfd > 25 0x4a1489 postgres + 0x4a1489 > {code} --
[jira] [Commented] (HAWQ-644) Failure in security ha environment with certain writable external tables
[ https://issues.apache.org/jira/browse/HAWQ-644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318734#comment-15318734 ] ASF GitHub Bot commented on HAWQ-644: - Github user kavinderd closed the pull request at: https://github.com/apache/incubator-hawq/pull/604 > Failure in security ha environment with certain writable external tables > > > Key: HAWQ-644 > URL: https://issues.apache.org/jira/browse/HAWQ-644 > Project: Apache HAWQ > Issue Type: Bug > Components: PXF, Security >Reporter: Goden Yao >Assignee: Goden Yao > > In a Secure HA environment: > Few tests which tests writable table fail due to empty dfs_address prior to > getting the delegation token in the segment. > On an initial investigation, the shared_path seems to not be set by the hawq > master. > Log from the specific segment. The hdfs path available in the segment is > empty and hence the failure. > {code} > 2016-04-08 22:53:11.034661 > UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08 > 22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","PXF > received from configuration HA Namenode-1 having rpc-address > and rest-address > ","External table > readable_table, line 1 of > pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: > ","INSERT INTO writable_table SELECT * FROM > readable_table;",0,,"ha_config.c",157, > 2016-04-08 22:53:11.034699 > UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08 > 22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","PXF > received from configuration HA Namenode-2 having rpc-address > and rest-address > ","External table > readable_table, line 1 of > pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: > ","INSERT INTO writable_table SELECT * FROM > readable_table;",0,,"ha_config.c",157, > 2016-04-08 22:53:11.034785 > UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08 > 22:52:25 
UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","locating > token for 0^N\","External table readable_table, line 1 of > pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: > ","INSERT INTO writable_table SELECT * FROM > readable_table;",0,,"gpbridgeapi.c",521, > 2016-04-08 22:53:11.034871 > UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08 > 22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"WARNING","01000","internal > error HdfsParsePath: no filesystem protocol found in path > ""0^N\^A""","External table readable_table, line 1 of > pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: > ","INSERT INTO writable_table SELECT * FROM > readable_table;",0,,"fd.c",2501, > 2016-04-08 22:53:11.035004 > UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08 > 22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG1","0","Deleted > entry for query (sessionid=73, commandcnt=43)",,"INSERT INTO > writable_table SELECT * FROM readable_table;",0,,"workfile_queryspace.c",329, > 2016-04-08 22:53:11.066243 > UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08 > 22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"ERROR","XX000","fail to > parse uri: 0^N\ (cdbfilesystemcredential.c:529)","External table > readable_table, line 1 of > pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: > ","INSERT INTO writable_table SELECT * FROM > readable_table;",0,,"cdbfilesystemcredential.c",529,"Stack trace: > 10x871f8f postgres + 0x871f8f > 20x872679 postgres elog_finish + 0xa9 > 30x99d7b8 postgres find_filesystem_credential_with_uri + 0x78 > 40x7fc935c8f5a2 pxf.so add_delegation_token + 0xa2 > 50x7fc935c8f9b2 pxf.so get_pxf_server + 0xe2 > 60x7fc935c8ff02 pxf.so gpbridge_export_start + 0x42 > 70x7fc935c90170 pxf.so gpbridge_export + 0x50 > 80x507eb8 postgres + 0x507eb8 > 90x5083af postgres url_fwrite + 0x9f > 10 0x5042b4 postgres external_insert + 
0x184 > 11 0x69dcca postgres ExecInsert + 0x1fa > 12 0x69d41c postgres ExecDML + 0x1ec > 13 0x65e185 postgres ExecProcNode + 0x3c5 > 14 0x659f4a postgres + 0x659f4a > 15 0x65a8d3 postgres ExecutorRun + 0x4a3 > 16 0x7b550a postgres + 0x7b550a > 17 0x7b5baf postgres + 0x7b5baf > 18 0x7b6142 postgres PortalRun + 0x342 > 19 0x7b2c21 postgres PostgresMain + 0x3861 > 20 0x763ce3 postgres + 0x763ce3 > 21 0x76443d postgres + 0x76443d > 22 0x76626e postgres PostmasterMain + 0xc7e > 23 0x6c04ea postgres main + 0x48a > 24 0x7fc95f276d5d libc.so.6 __libc_start_main + 0xfd > 25 0x4a1489 postgres + 0x4a1489 > {code} -- This message
[jira] [Commented] (HAWQ-780) Remove quicklz compression related code but keep related meta data in short term.
[ https://issues.apache.org/jira/browse/HAWQ-780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318235#comment-15318235 ] ASF GitHub Bot commented on HAWQ-780: - Github user ictmalili commented on the issue: https://github.com/apache/incubator-hawq/pull/692 LGTM. +1 > Remove quicklz compression related code but keep related meta data in short > term. > - > > Key: HAWQ-780 > URL: https://issues.apache.org/jira/browse/HAWQ-780 > Project: Apache HAWQ > Issue Type: Bug > Components: Storage >Reporter: Paul Guo >Assignee: Paul Guo > > To avoid potential license issue, we'd better remove it. Given we have snappy > support now, there is no problem to do this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
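[Editor's note] The split HAWQ-780 describes (remove the quicklz implementation but keep its metadata recognizable for now) can be pictured as a codec table that still knows the name but refuses to use it. This is a toy sketch of that idea only, with zlib standing in for snappy since the real HAWQ codec table lives in C:

```python
import zlib

# Codec table: quicklz's entry is kept so that existing table metadata
# still resolves to a known name, but its implementation has been removed.
CODECS = {
    "zlib": (zlib.compress, zlib.decompress),
    "quicklz": None,  # metadata kept, code removed
}

def decompress(codec_name, payload):
    """Decompress a payload, distinguishing removed codecs from unknown ones."""
    entry = CODECS.get(codec_name)
    if entry is None:
        if codec_name in CODECS:
            raise NotImplementedError(
                f"codec {codec_name!r} was removed; reload the data "
                f"with a supported codec")
        raise KeyError(f"unknown codec {codec_name!r}")
    return entry[1](payload)

print(decompress("zlib", zlib.compress(b"hawq")))  # b'hawq'
```

Keeping the entry makes the failure mode a clear "codec removed" error rather than a baffling "unknown codec", which matches the short-term plan in the ticket before HAWQ-783 drops the metadata as well.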
[jira] [Updated] (HAWQ-784) Refine hawq register document and tests
[ https://issues.apache.org/jira/browse/HAWQ-784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yangcheng Luo updated HAWQ-784: --- Description: Refine the document of "hawq register" to give users more details about the data type mapping between Hive and HAWQ. Refine the tests to check the conversion of data types from Hive to HAWQ. was: Refine the document of "hawq register" to give user more details about the data types information in HIVE and HAWQ. > Refine hawq register document and tests > --- > > Key: HAWQ-784 > URL: https://issues.apache.org/jira/browse/HAWQ-784 > Project: Apache HAWQ > Issue Type: Sub-task > Components: Command Line Tools > Reporter: Yangcheng Luo > Assignee: Lei Chang > > Refine the document of "hawq register" to give users more details about the data type mapping between Hive and HAWQ. > Refine the tests to check the conversion of data types from Hive to HAWQ.
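[Editor's note] Documentation of this kind usually centers on a type-mapping table. The mapping below is a hypothetical sketch of the sort of Hive-to-HAWQ conversion table the refined "hawq register" document would enumerate; the actual supported mappings are whatever that document specifies, not this list:

```python
# Hypothetical Hive -> HAWQ type mapping of the kind the refined
# "hawq register" documentation would spell out (illustrative only).
HIVE_TO_HAWQ = {
    "tinyint": "int2",
    "smallint": "int2",
    "int": "int4",
    "bigint": "int8",
    "float": "float4",
    "double": "float8",
    "string": "varchar",
    "boolean": "bool",
}

def map_hive_type(hive_type):
    """Resolve a Hive type name to its HAWQ counterpart, or fail clearly."""
    try:
        return HIVE_TO_HAWQ[hive_type.lower()]
    except KeyError:
        raise ValueError(f"no HAWQ mapping for Hive type {hive_type!r}")

print(map_hive_type("BIGINT"))  # int8
```

Making the unsupported cases raise explicitly is exactly what the companion tests in this ticket would exercise: each mapped type converts, and each unmapped type fails with a clear message.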
[jira] [Updated] (HAWQ-784) Refine hawq register document and tests
[ https://issues.apache.org/jira/browse/HAWQ-784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yangcheng Luo updated HAWQ-784: --- Summary: Refine hawq register document and tests (was: Refine hawq register document) > Refine hawq register document and tests > --- > > Key: HAWQ-784 > URL: https://issues.apache.org/jira/browse/HAWQ-784 > Project: Apache HAWQ > Issue Type: Sub-task > Components: Command Line Tools >Reporter: Yangcheng Luo >Assignee: Lei Chang > > Refine the document of "hawq register" to give user more details about the > data types information in HIVE and HAWQ. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-781) Move src/postgres to depends/thirdparty/postgres
[ https://issues.apache.org/jira/browse/HAWQ-781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318219#comment-15318219 ] ASF GitHub Bot commented on HAWQ-781: - GitHub user paul-guo- opened a pull request: https://github.com/apache/incubator-hawq/pull/694 HAWQ-781. Move src/postgres to depends/thirdparty/postgres You can merge this pull request into a Git repository by running: $ git pull https://github.com/paul-guo-/incubator-hawq move Alternatively you can review and apply these changes as the patch at: https://github.com/apache/incubator-hawq/pull/694.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #694 commit a349a921b4543d4f905cc0ae9ae013a1e3ee2faf Author: Paul Guo Date: 2016-06-07T09:27:20Z HAWQ-781. Move src/postgres to depends/thirdparty/postgres > Move src/postgres to depends/thirdparty/postgres > > > Key: HAWQ-781 > URL: https://issues.apache.org/jira/browse/HAWQ-781 > Project: Apache HAWQ > Issue Type: Improvement > Components: Build > Reporter: Paul Guo > Assignee: Paul Guo > > Discussed offline about this. We git-submoduled src/postgres. The purpose is > to support the pgcrypto functionality only. It looks a bit ugly to store the > upstream postgres code under src/. It seems that we'd better put it under > /depends.
[jira] [Updated] (HAWQ-781) Move src/postgres to depends/thirdparty/postgres
[ https://issues.apache.org/jira/browse/HAWQ-781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paul Guo updated HAWQ-781: -- Summary: Move src/postgres to depends/thirdparty/postgres (was: Move src/postgres to depends/) > Move src/postgres to depends/thirdparty/postgres > > > Key: HAWQ-781 > URL: https://issues.apache.org/jira/browse/HAWQ-781 > Project: Apache HAWQ > Issue Type: Improvement > Components: Build >Reporter: Paul Guo >Assignee: Paul Guo > > Discussed offline about this. We git-submoduled src/postgres. The purpose is > to support the pgcrypto functionality only. It looks a bit ugly to store the > upstream postgres code under src/. It seems that we'd better put it under > /depends. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HAWQ-781) Move src/postgres to depends/
[ https://issues.apache.org/jira/browse/HAWQ-781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paul Guo reassigned HAWQ-781: - Assignee: Paul Guo (was: Lei Chang) > Move src/postgres to depends/ > - > > Key: HAWQ-781 > URL: https://issues.apache.org/jira/browse/HAWQ-781 > Project: Apache HAWQ > Issue Type: Improvement > Components: Build >Reporter: Paul Guo >Assignee: Paul Guo > > Discussed offline about this. We git-submoduled src/postgres. The purpose is > to support the pgcrypto functionality only. It looks a bit ugly to store the > upstream postgres code under src/. It seems that we'd better put it under > /depends. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-784) Refine hawq register document
[ https://issues.apache.org/jira/browse/HAWQ-784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318197#comment-15318197 ] ASF GitHub Bot commented on HAWQ-784: - GitHub user Oliver-Luo opened a pull request: https://github.com/apache/incubator-hawq/pull/693 HAWQ-784. Refine the document of 'hawq register' Refine the document of 'hawq register' to give users information about the data type mapping between Hive and HAWQ. You can merge this pull request into a Git repository by running: $ git pull https://github.com/Oliver-Luo/incubator-hawq HAWQ-784 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/incubator-hawq/pull/693.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #693 commit 6023231ef272a9ce954b0069dd5173901af1683a Author: Yancheng Luo Date: 2016-06-07T09:17:07Z HAWQ-784. Refine the document of 'hawq register' to give users information about the data types in Hive and HAWQ. > Refine hawq register document > - > > Key: HAWQ-784 > URL: https://issues.apache.org/jira/browse/HAWQ-784 > Project: Apache HAWQ > Issue Type: Sub-task > Components: Command Line Tools > Reporter: Yangcheng Luo > Assignee: Lei Chang > > Refine the document of "hawq register" to give users more details about the data type mapping between Hive and HAWQ.
[jira] [Created] (HAWQ-784) Refine hawq register document
Yangcheng Luo created HAWQ-784:
----------------------------------

             Summary: Refine hawq register document
                 Key: HAWQ-784
                 URL: https://issues.apache.org/jira/browse/HAWQ-784
             Project: Apache HAWQ
          Issue Type: Sub-task
          Components: Command Line Tools
            Reporter: Yangcheng Luo
            Assignee: Lei Chang

Refine the document of "hawq register" to give user more details about the data types information in HIVE and HAWQ.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Assigned] (HAWQ-780) Remove quicklz compression related code but keep related meta data in short term.
[ https://issues.apache.org/jira/browse/HAWQ-780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Paul Guo reassigned HAWQ-780:
-----------------------------

    Assignee: Paul Guo  (was: Lei Chang)

> Remove quicklz compression related code but keep related meta data in short
> term.
> ---------------------------------------------------------------------------
>
>                 Key: HAWQ-780
>                 URL: https://issues.apache.org/jira/browse/HAWQ-780
>             Project: Apache HAWQ
>          Issue Type: Bug
>          Components: Storage
>            Reporter: Paul Guo
>            Assignee: Paul Guo
>
> To avoid potential license issue, we'd better remove it. Given we have snappy
> support now, there is no problem to do this.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Created] (HAWQ-783) Remove quicklz in metadata
Paul Guo created HAWQ-783:
-----------------------------

             Summary: Remove quicklz in metadata
                 Key: HAWQ-783
                 URL: https://issues.apache.org/jira/browse/HAWQ-783
             Project: Apache HAWQ
          Issue Type: Bug
          Components: Storage
            Reporter: Paul Guo
            Assignee: Lei Chang

This is the remaining work to complete the quicklz removal, besides HAWQ-780 (Remove quicklz compression related code but keep related meta data in short term).

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (HAWQ-780) Remove quicklz compression related code but keep related meta data in short term.
[ https://issues.apache.org/jira/browse/HAWQ-780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15318046#comment-15318046 ]

ASF GitHub Bot commented on HAWQ-780:
-------------------------------------

Github user ztao1987 commented on the issue:

    https://github.com/apache/incubator-hawq/pull/692

    +1

> Remove quicklz compression related code but keep related meta data in short
> term.
> ---------------------------------------------------------------------------
>
>                 Key: HAWQ-780
>                 URL: https://issues.apache.org/jira/browse/HAWQ-780
>             Project: Apache HAWQ
>          Issue Type: Bug
>          Components: Storage
>            Reporter: Paul Guo
>            Assignee: Lei Chang
>
> To avoid potential license issue, we'd better remove it. Given we have snappy
> support now, there is no problem to do this.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (HAWQ-780) Remove quicklz compression related code but keep related meta data in short term.
[ https://issues.apache.org/jira/browse/HAWQ-780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15318037#comment-15318037 ]

ASF GitHub Bot commented on HAWQ-780:
-------------------------------------

GitHub user paul-guo- opened a pull request:

    https://github.com/apache/incubator-hawq/pull/692

    HAWQ-780. Remove quicklz compression related code but keep related me…

…ta data in short term.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/paul-guo-/incubator-hawq compress

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-hawq/pull/692.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #692

----
commit 4e7f193c2375bd199bf28bec61581e656cb12c3c
Author: Paul Guo
Date:   2016-06-07T07:25:48Z

    HAWQ-780. Remove quicklz compression related code but keep related meta data in short term.

> Remove quicklz compression related code but keep related meta data in short
> term.
> ---------------------------------------------------------------------------
>
>                 Key: HAWQ-780
>                 URL: https://issues.apache.org/jira/browse/HAWQ-780
>             Project: Apache HAWQ
>          Issue Type: Bug
>          Components: Storage
>            Reporter: Paul Guo
>            Assignee: Lei Chang
>
> To avoid potential license issue, we'd better remove it. Given we have snappy
> support now, there is no problem to do this.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (HAWQ-780) Remove quicklz compression related code but keep related meta data in short term.
[ https://issues.apache.org/jira/browse/HAWQ-780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15318031#comment-15318031 ]

Paul Guo commented on HAWQ-780:
-------------------------------

We do not delete the related entries in the pg_compression and pg_proc tables, to be more upgrade-friendly. This is the short-term plan; in the long run, we will remove them from the tables as well.

> Remove quicklz compression related code but keep related meta data in short
> term.
> ---------------------------------------------------------------------------
>
>                 Key: HAWQ-780
>                 URL: https://issues.apache.org/jira/browse/HAWQ-780
>             Project: Apache HAWQ
>          Issue Type: Bug
>          Components: Storage
>            Reporter: Paul Guo
>            Assignee: Lei Chang
>
> To avoid potential license issue, we'd better remove it. Given we have snappy
> support now, there is no problem to do this.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (HAWQ-780) Remove quicklz compression related code but keep related meta data in short term.
[ https://issues.apache.org/jira/browse/HAWQ-780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Paul Guo updated HAWQ-780:
--------------------------
    Summary: Remove quicklz compression related code but keep related meta data in short term.  (was: Remove quicklz compress type without modifying meta data.)

> Remove quicklz compression related code but keep related meta data in short
> term.
> ---------------------------------------------------------------------------
>
>                 Key: HAWQ-780
>                 URL: https://issues.apache.org/jira/browse/HAWQ-780
>             Project: Apache HAWQ
>          Issue Type: Bug
>          Components: Storage
>            Reporter: Paul Guo
>            Assignee: Lei Chang
>
> To avoid potential license issue, we'd better remove it. Given we have snappy
> support now, there is no problem to do this.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (HAWQ-780) Remove quicklz compress type without modifying meta data.
[ https://issues.apache.org/jira/browse/HAWQ-780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Paul Guo updated HAWQ-780:
--------------------------
    Summary: Remove quicklz compress type without modifying meta data.  (was: Remove quicklz)

> Remove quicklz compress type without modifying meta data.
> ---------------------------------------------------------
>
>                 Key: HAWQ-780
>                 URL: https://issues.apache.org/jira/browse/HAWQ-780
>             Project: Apache HAWQ
>          Issue Type: Bug
>          Components: Storage
>            Reporter: Paul Guo
>            Assignee: Lei Chang
>
> To avoid potential license issue, we'd better remove it. Given we have snappy
> support now, there is no problem to do this.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Created] (HAWQ-782) Libyarn Failover to Standby RM More Seamlessly
Lin Wen created HAWQ-782:
----------------------------

             Summary: Libyarn Failover to Standby RM More Seamlessly
                 Key: HAWQ-782
                 URL: https://issues.apache.org/jira/browse/HAWQ-782
             Project: Apache HAWQ
          Issue Type: Bug
          Components: libyarn
            Reporter: Lin Wen
            Assignee: Lei Chang

Segments are temporarily unavailable during YARN RM failover.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (HAWQ-644) Failure in security ha environment with certain writable external tables
[ https://issues.apache.org/jira/browse/HAWQ-644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15317993#comment-15317993 ]

ASF GitHub Bot commented on HAWQ-644:
-------------------------------------

Github user changleicn commented on the issue:

    https://github.com/apache/incubator-hawq/pull/604

    @kavinderd the code is in, can you close this PR? thanks!

> Failure in security ha environment with certain writable external tables
> ------------------------------------------------------------------------
>
>                 Key: HAWQ-644
>                 URL: https://issues.apache.org/jira/browse/HAWQ-644
>             Project: Apache HAWQ
>          Issue Type: Bug
>          Components: PXF, Security
>            Reporter: Goden Yao
>            Assignee: Goden Yao
>
> In a Secure HA environment:
> A few tests which test writable tables fail due to an empty dfs_address prior to
> getting the delegation token in the segment.
> On an initial investigation, the shared_path seems to not be set by the hawq
> master.
> Log from the specific segment. The hdfs path available in the segment is
> empty and hence the failure.
> {code}
> 2016-04-08 22:53:11.034661 UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08 22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","PXF received from configuration HA Namenode-1 having rpc-address and rest-address ","External table readable_table, line 1 of pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: ","INSERT INTO writable_table SELECT * FROM readable_table;",0,,"ha_config.c",157,
> 2016-04-08 22:53:11.034699 UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08 22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","PXF received from configuration HA Namenode-2 having rpc-address and rest-address ","External table readable_table, line 1 of pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: ","INSERT INTO writable_table SELECT * FROM readable_table;",0,,"ha_config.c",157,
> 2016-04-08 22:53:11.034785 UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08 22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG2","0","locating token for 0^N\","External table readable_table, line 1 of pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: ","INSERT INTO writable_table SELECT * FROM readable_table;",0,,"gpbridgeapi.c",521,
> 2016-04-08 22:53:11.034871 UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08 22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"WARNING","01000","internal error HdfsParsePath: no filesystem protocol found in path ""0^N\^A""","External table readable_table, line 1 of pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: ","INSERT INTO writable_table SELECT * FROM readable_table;",0,,"fd.c",2501,
> 2016-04-08 22:53:11.035004 UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08 22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"DEBUG1","0","Deleted entry for query (sessionid=73, commandcnt=43)",,"INSERT INTO writable_table SELECT * FROM readable_table;",0,,"workfile_queryspace.c",329,
> 2016-04-08 22:53:11.066243 UTC,"gpadmin","pxfautomation",p337499,th1668020416,"10.32.37.27","34072",2016-04-08 22:52:25 UTC,37729,con73,cmd43,seg9,,,x37729,sx1,"ERROR","XX000","fail to parse uri: 0^N\ (cdbfilesystemcredential.c:529)","External table readable_table, line 1 of pxf://mycluster/tmp/pxf_automation_data/data?PROFILE=HdfsTextSimple: ","INSERT INTO writable_table SELECT * FROM readable_table;",0,,"cdbfilesystemcredential.c",529,"Stack trace:
> 1    0x871f8f postgres + 0x871f8f
> 2    0x872679 postgres elog_finish + 0xa9
> 3    0x99d7b8 postgres find_filesystem_credential_with_uri + 0x78
> 4    0x7fc935c8f5a2 pxf.so add_delegation_token + 0xa2
> 5    0x7fc935c8f9b2 pxf.so get_pxf_server + 0xe2
> 6    0x7fc935c8ff02 pxf.so gpbridge_export_start + 0x42
> 7    0x7fc935c90170 pxf.so gpbridge_export + 0x50
> 8    0x507eb8 postgres + 0x507eb8
> 9    0x5083af postgres url_fwrite + 0x9f
> 10   0x5042b4 postgres external_insert + 0x184
> 11   0x69dcca postgres ExecInsert + 0x1fa
> 12   0x69d41c postgres ExecDML + 0x1ec
> 13   0x65e185 postgres ExecProcNode + 0x3c5
> 14   0x659f4a postgres + 0x659f4a
> 15   0x65a8d3 postgres ExecutorRun + 0x4a3
> 16   0x7b550a postgres + 0x7b550a
> 17   0x7b5baf postgres + 0x7b5baf
> 18   0x7b6142 postgres PortalRun + 0x342
> 19   0x7b2c21 postgres PostgresMain + 0x3861
> 20   0x763ce3 postgres + 0x763ce3
> 21   0x76443d postgres + 0x76443d
> 22   0x76626e postgres PostmasterMain + 0xc7e
> 23   0x6c04ea postgres main + 0x48a
> 24   0x7fc95f276d5d libc.so.6 __libc_start_main + 0xfd
>
[jira] [Commented] (HAWQ-521) external table test failure in installcheck-good with orca disabled
[ https://issues.apache.org/jira/browse/HAWQ-521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15317989#comment-15317989 ]

ASF GitHub Bot commented on HAWQ-521:
-------------------------------------

Github user changleicn commented on the issue:

    https://github.com/apache/incubator-hawq/pull/449

    @tom-meyer the code is in, can you close this pull request?

> external table test failure in installcheck-good with orca disabled
> -------------------------------------------------------------------
>
>                 Key: HAWQ-521
>                 URL: https://issues.apache.org/jira/browse/HAWQ-521
>             Project: Apache HAWQ
>          Issue Type: Bug
>            Reporter: Tom Meyer
>            Assignee: Lei Chang
>
> We are running installcheck-good against a hawq built without --enable-orca
> {noformat}
> *** ./expected/exttab1.out	2016-02-26 12:54:17.786833482 +
> --- ./results/exttab1.out	2016-02-26 12:54:17.866833482 +
> ***
> *** 673,681
>   -- positive
>   ---
>   --
> - ERROR:  ON clause may not be used with a writable external table
>   ERROR:  it is not possible to read from a WRITABLE external table.
>   ERROR:  location uri "gpfdist://localhost:7070/wet.out" appears more than once
>   ERROR:  the file protocol for external tables is deprecated
>   HINT:  Create the table as READABLE instead
>   HINT:  use the gpfdist protocol or COPY FROM instead
> --- 670,678
>   -- positive
>   ---
>   --
>   ERROR:  it is not possible to read from a WRITABLE external table.
>   ERROR:  location uri "gpfdist://localhost:7070/wet.out" appears more than once
> + ERROR:  the ON segment syntax for writable external tables is deprecated
>   ERROR:  the file protocol for external tables is deprecated
>   HINT:  Create the table as READABLE instead
>   HINT:  use the gpfdist protocol or COPY FROM instead
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (HAWQ-178) Add JSON plugin support in code base
[ https://issues.apache.org/jira/browse/HAWQ-178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15317980#comment-15317980 ]

ASF GitHub Bot commented on HAWQ-178:
-------------------------------------

Github user changleicn commented on the issue:

    https://github.com/apache/incubator-hawq/pull/302

    @tzolov the code is in, can you please close this PR?

> Add JSON plugin support in code base
> ------------------------------------
>
>                 Key: HAWQ-178
>                 URL: https://issues.apache.org/jira/browse/HAWQ-178
>             Project: Apache HAWQ
>          Issue Type: New Feature
>          Components: PXF
>            Reporter: Goden Yao
>            Assignee: Christian Tzolov
>             Fix For: 2.0.0-beta-incubating
>
>         Attachments: PXFJSONPluginforHAWQ2.0andPXF3.0.0.pdf, PXFJSONPluginforHAWQ2.0andPXF3.0.0v.2.pdf, PXFJSONPluginforHAWQ2.0andPXF3.0.0v.3.pdf
>
> JSON has been a popular format used in HDFS as well as in the community,
> there has been a few JSON PXF plugins developed by the community and we'd
> like to see it being incorporated into the code base as an optional package.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)