[jira] [Created] (HAWQ-1647) Update HAWQ version from 2.3.0.0 to 2.4.0.0

2018-08-07 Thread Radar Lei (JIRA)
Radar Lei created HAWQ-1647:
---

 Summary: Update HAWQ version from 2.3.0.0 to 2.4.0.0
 Key: HAWQ-1647
 URL: https://issues.apache.org/jira/browse/HAWQ-1647
 Project: Apache HAWQ
  Issue Type: Task
  Components: Build
Reporter: Radar Lei
Assignee: Radar Lei


Update version number from 2.3.0.0 to 2.4.0.0 for release of HAWQ 
2.4.0.0-incubating.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1494) The bug can appear every time when I execute a specific sql: Unexpect internal error (setref.c:298), server closed the connection unexpectedly

2018-07-30 Thread Radar Lei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei updated HAWQ-1494:

Fix Version/s: (was: 2.4.0.0-incubating)
   backlog

> The bug can appear every time when I execute a specific sql:  Unexpect 
> internal error (setref.c:298), server closed the connection unexpectedly
> ---
>
> Key: HAWQ-1494
> URL: https://issues.apache.org/jira/browse/HAWQ-1494
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: fangpei
>Assignee: Yi Jin
>Priority: Major
> Fix For: backlog
>
>
> When I execute a specific sql, a serious bug can happen every time. (Hawq 
> version is 2.2.0.0)
> BUG information:
> FATAL: Unexpect internal error (setref.c:298)
> DETAIL: AssertImply failed("!(!var->varattno >= 0) || (var->varattno <= 
> list_length(colNames) + list_length(rte->pseudocols)))", File: "setrefs.c", 
> Line: 298)
> HINT:  Process 239600 will wait for gp_debug_linger=120 seconds before 
> termination.
> Note that its locks and other resources will not be released  until then.
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
>  before or while processing the request.
> The connection to the server was lost. Attempting reset: Succeeded.
> I use GDB to debug, the GDB information is the same every time. The 
> information is: 
> Loaded symbols for /lib64/libnss_files.so.2
> 0x0032dd40eb5c in recv () from /lib64/libpthread.so.0
> (gdb) b setrefs.c:298
> Breakpoint 1 at 0x846063: file setrefs.c, line 298.
> (gdb) c 
> Continuing.
> Breakpoint 1, set_plan_references_output_asserts (glob=0x7fe96fbccab0, 
> plan=0x7fe8e930adb8) at setrefs.c:298
> 298 setrefs.c: No such file or directory.
> (gdb) c 1923
> Will ignore next 1922 crossings of breakpoint 1. Continuing.
> Breakpoint 1, set_plan_references_output_asserts (glob=0x7fe96fbccab0, 
> plan=0x7fe869c70340) at setrefs.c:298
> 298 in setrefs.c
> (gdb) p list_length(allVars) 
> $1 = 1422
> (gdb) p var->varno 
> $2 = 65001
> (gdb) p list_length(glob->finalrtable) 
> $3 = 66515
> (gdb) p var->varattno 
> $4 = 31
> (gdb) p list_length(colNames) 
> $5 = 30
> (gdb) p list_length(rte->pseudocols) 
> $6 = 0
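The failing implication can be reconstructed from the values printed above: varattno = 31 exceeds list_length(colNames) + list_length(rte->pseudocols) = 30 + 0. A minimal sketch of the AssertImply check (function and variable names are illustrative, not the actual HAWQ code):

```python
def assert_imply(premise: bool, conclusion: bool) -> bool:
    # AssertImply(a, b) holds unless a is true while b is false.
    return (not premise) or conclusion

varattno = 31       # observed in gdb: $4
n_colnames = 30     # observed in gdb: $5
n_pseudocols = 0    # observed in gdb: $6

# The check at setrefs.c:298: a non-system attribute number must not
# exceed the number of known column names plus pseudo-columns.
ok = assert_imply(varattno >= 0, varattno <= n_colnames + n_pseudocols)
print(ok)  # False -> the assertion fires and the backend aborts
```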
> the SQL statement is like:
> SELECT *
> FROM (select t.*,1001 as ttt from AAA t where  ( aaa = '3201066235'  
> or aaa = '3201066236'  or aaa = '3201026292'  or aaa = 
> '3201066293'  or aaa = '3201060006393' ) and (  bbb between 
> '20170601065900' and '20170601175100'  and (ccc = '2017-06-01' ))  union all  
> select t.*,1002 as ttt from AAA t where  ( aaa = '3201066007'  or aaa 
> = '3201066006' ) and (  bbb between '20170601072900' and 
> '20170601210100'  and ( ccc = '2017-06-01' ))  union all  select t.*,1003 as 
> ttt from AAA t where  ( aaa = '3201062772' ) and (  bbb between 
> '20170601072900' and '20170601170100'  and ( ccc = '2017-06-01' ))  union all 
>  select t.*,1004 as ttt from AAA t where  (aaa = '3201066115'  or aaa 
> = '3201066116'  or aaa = '3201066318'  or aaa = 
> '3201066319' ) and (  bbb between '20170601085900' and 
> '20170601163100' and ( ccc = '2017-06-01' ))  union all  select t.*,1005 as 
> ttt from AAA t where  ( aaa = '3201066180' or aaa = 
> '3201046385' ) and (  bbb between '20170601205900' and 
> '20170601230100'  and ( ccc = '2017-06-01' )) union all  select t.*,1006 as 
> ttt from AAA t where  ( aaa = '3201026423'  or aaa = 
> '3201026255'  or aaa = '3201066258'  or aaa = 
> '3201066259' ) and (  bbb between '20170601215900' and 
> '20170602004900'  and ( ccc = '2017-06-01'  or ccc = '2017-06-02' ))  union 
> all select t.*,1007 as ttt from AAA t where  ( aaa = '3201066175' or 
> aaa = '3201066004' ) and (  bbb between '20170602074900' and 
> '20170602182100'  and ( ccc = '2017-06-02'  )) union all select t.*,1008 as 
> ttt from AAA t where  ( aaa = '3201026648' ) and (  bbb between 
> '20170602132900' and '20170602134600'  and ( ccc = '2017-06-02' ))  union all 
>  select t.*,1009 as ttt from AAA t where  ( aaa = '3201062765'  or 
> aaa = '3201006282' ) and (  bbb between '20170602142900' and 
> '20170603175100'  and ( ccc = '2017-06-02'  or ccc = '2017-06-03' ))  union 
> all  select t.*,1010 as ttt from AAA t where  (aaa = '3201066060' ) 
> and (  bbb between '20170602165900' and '20170603034100'  and ( ccc = 
> '2017-06-02'  or ccc = '2017-06-03' ))  union all select t.*,1011 as ttt from 
> AAA t where  ( aaa = '32010662229'  or aaa = '3201066230'  or 
> aaa = '3201022783'  or 

[jira] [Updated] (HAWQ-127) Create CI projects for HAWQ releases

2018-07-26 Thread Radar Lei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei updated HAWQ-127:
---
Fix Version/s: (was: 2.4.0.0-incubating)
   backlog

> Create CI projects for HAWQ releases
> 
>
> Key: HAWQ-127
> URL: https://issues.apache.org/jira/browse/HAWQ-127
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.1.0.0-incubating
>Reporter: Lei Chang
>Assignee: Jiali Yao
>Priority: Major
> Fix For: backlog
>
>
> Create Jenkins projects that build HAWQ binary, source tarballs and docker 
> images, and run sanity tests including at least installcheck-good tests for 
> each commit.





[jira] [Closed] (HAWQ-1483) cache lookup failure

2018-07-25 Thread Radar Lei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei closed HAWQ-1483.
---
Resolution: Cannot Reproduce

> cache lookup failure
> 
>
> Key: HAWQ-1483
> URL: https://issues.apache.org/jira/browse/HAWQ-1483
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Rahul Iyer
>Assignee: Radar Lei
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> I'm getting a failure when performing a distinct count with another immutable 
> aggregate. We found this issue when running MADlib on HAWQ 2.0.0. Please find 
> below a simple repro. 
> Setup: 
> {code}
> CREATE TABLE example_data(
> id SERIAL,
> outlook text,
> temperature float8,
> humidity float8,
> windy text,
> class text) ;
> COPY example_data (outlook, temperature, humidity, windy, class) FROM stdin 
> DELIMITER ',' NULL '?' ;
> sunny, 85, 85, false, Don't Play
> sunny, 80, 90, true, Don't Play
> overcast, 83, 78, false, Play
> rain, 70, 96, false, Play
> rain, 68, 80, false, Play
> rain, 65, 70, true, Don't Play
> overcast, 64, 65, true, Play
> sunny, 72, 95, false, Don't Play
> sunny, 69, 70, false, Play
> rain, 75, 80, false, Play
> sunny, 75, 70, true, Play
> overcast, 72, 90, true, Play
> overcast, 81, 75, false, Play
> rain, 71, 80, true, Don't Play
> \.
> create function grt_sfunc(agg_state point, el float8)
> returns point
> immutable
> language plpgsql
> as $$
> declare
>   greatest_sum float8;
>   current_sum float8;
> begin
>   current_sum := agg_state[0] + el;
>   if agg_state[1] < current_sum then
> greatest_sum := current_sum;
>   else
> greatest_sum := agg_state[1];
>   end if;
>   return point(current_sum, greatest_sum);
> end;
> $$;
> create function grt_finalfunc(agg_state point)
> returns float8
> immutable
> strict
> language plpgsql
> as $$
> begin
>   return agg_state[1];
> end;
> $$;
> create aggregate greatest_running_total (float8)
> (
> sfunc = grt_sfunc,
> stype = point,
> finalfunc = grt_finalfunc
> );
> {code}
> Error: 
> {code}
> select count(distinct outlook), greatest_running_total(humidity::integer) 
> from example_data;
> {code} 
> {code}
> ERROR:  cache lookup failed for function 0 (fmgr.c:223)
> {code}
> Execution goes through if I remove the {{distinct}} or if I add another 
> column for the {{count(distinct)}}. 
> {code:sql}
> select count(distinct outlook) as c1, count(distinct windy) as c2, 
> greatest_running_total(humidity) from example_data;
> {code}
> {code}
>  c1 | c2 | greatest_running_total
> ----+----+------------------------
>   3 |  2 |
> (1 row)
> {code}
> {code:sql}
> select count(outlook) as c1, greatest_running_total(humidity) from 
> example_data;
> {code}
> {code}
>  count | greatest_running_total
> -------+------------------------
> 14 |
> (1 row)
> {code}
> It's an older build - I don't have the resources at present to test this on 
> the latest HAWQ. 
> {code}
> select version();
>   
>   version
> ---
>  PostgreSQL 8.2.15 (Greenplum Database 4.2.0 build 1) (HAWQ 2.0.0.0 build 
> 22126) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled 
> on Apr 25 2016 09:52:54
> (1 row)
> {code}





[jira] [Commented] (HAWQ-1483) cache lookup failure

2018-07-25 Thread Radar Lei (JIRA)


[ 
https://issues.apache.org/jira/browse/HAWQ-1483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16555476#comment-16555476
 ] 

Radar Lei commented on HAWQ-1483:
-

Cannot reproduce in the latest 2.3.0.0 version; this should already have been fixed.

> cache lookup failure
> 
>
> Key: HAWQ-1483
> URL: https://issues.apache.org/jira/browse/HAWQ-1483
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Rahul Iyer
>Assignee: Radar Lei
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> I'm getting a failure when performing a distinct count with another immutable 
> aggregate. We found this issue when running MADlib on HAWQ 2.0.0. Please find 
> below a simple repro. 
> Setup: 
> {code}
> CREATE TABLE example_data(
> id SERIAL,
> outlook text,
> temperature float8,
> humidity float8,
> windy text,
> class text) ;
> COPY example_data (outlook, temperature, humidity, windy, class) FROM stdin 
> DELIMITER ',' NULL '?' ;
> sunny, 85, 85, false, Don't Play
> sunny, 80, 90, true, Don't Play
> overcast, 83, 78, false, Play
> rain, 70, 96, false, Play
> rain, 68, 80, false, Play
> rain, 65, 70, true, Don't Play
> overcast, 64, 65, true, Play
> sunny, 72, 95, false, Don't Play
> sunny, 69, 70, false, Play
> rain, 75, 80, false, Play
> sunny, 75, 70, true, Play
> overcast, 72, 90, true, Play
> overcast, 81, 75, false, Play
> rain, 71, 80, true, Don't Play
> \.
> create function grt_sfunc(agg_state point, el float8)
> returns point
> immutable
> language plpgsql
> as $$
> declare
>   greatest_sum float8;
>   current_sum float8;
> begin
>   current_sum := agg_state[0] + el;
>   if agg_state[1] < current_sum then
> greatest_sum := current_sum;
>   else
> greatest_sum := agg_state[1];
>   end if;
>   return point(current_sum, greatest_sum);
> end;
> $$;
> create function grt_finalfunc(agg_state point)
> returns float8
> immutable
> strict
> language plpgsql
> as $$
> begin
>   return agg_state[1];
> end;
> $$;
> create aggregate greatest_running_total (float8)
> (
> sfunc = grt_sfunc,
> stype = point,
> finalfunc = grt_finalfunc
> );
> {code}
> Error: 
> {code}
> select count(distinct outlook), greatest_running_total(humidity::integer) 
> from example_data;
> {code} 
> {code}
> ERROR:  cache lookup failed for function 0 (fmgr.c:223)
> {code}
> Execution goes through if I remove the {{distinct}} or if I add another 
> column for the {{count(distinct)}}. 
> {code:sql}
> select count(distinct outlook) as c1, count(distinct windy) as c2, 
> greatest_running_total(humidity) from example_data;
> {code}
> {code}
>  c1 | c2 | greatest_running_total
> ----+----+------------------------
>   3 |  2 |
> (1 row)
> {code}
> {code:sql}
> select count(outlook) as c1, greatest_running_total(humidity) from 
> example_data;
> {code}
> {code}
>  count | greatest_running_total
> -------+------------------------
> 14 |
> (1 row)
> {code}
> It's an older build - I don't have the resources at present to test this on 
> the latest HAWQ. 
> {code}
> select version();
>   
>   version
> ---
>  PostgreSQL 8.2.15 (Greenplum Database 4.2.0 build 1) (HAWQ 2.0.0.0 build 
> 22126) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled 
> on Apr 25 2016 09:52:54
> (1 row)
> {code}





[jira] [Commented] (HAWQ-1639) Unexpected internal error when truncate and alter in a transaction

2018-07-25 Thread Radar Lei (JIRA)


[ 
https://issues.apache.org/jira/browse/HAWQ-1639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16555442#comment-16555442
 ] 

Radar Lei commented on HAWQ-1639:
-

Please check whether you have hit this limitation.

[https://hawq.incubator.apache.org/docs/userguide/2.1.0.0-incubating/reference/sql/ALTER-TABLE.html]
h2. Limitations

HAWQ does not support using {{ALTER TABLE}} to {{ADD}} or {{DROP}} a column in 
an existing Parquet table.

> Unexpected internal error when truncate and alter in a transaction
> --
>
> Key: HAWQ-1639
> URL: https://issues.apache.org/jira/browse/HAWQ-1639
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 2.3.0.0-incubating
>Reporter: TaoJIn
>Assignee: Radar Lei
>Priority: Major
> Fix For: backlog
>
>
> hdb=# select version();
>   
>  version  
> ---
>   PostgreSQL 8.2.15 (Greenplum Database 4.2.0 build 1) (HAWQ 
> 2.3.0.0-incubating build dev) on x86_64-unknown-linux-gnu, compiled by GCC 
> gcc (GCC) 4.8.5 20150623 (R
>  ed Hat 4.8.5-16) compiled on May  4 2018 06:27:27
>  (1 row)
>  
>  hdb=# begin;
>  BEGIN
>  hdb=# select * from test limit 2;
>   a  
>  
>   asdfsdgrtecvxbfgdh
>   asdfsdgrtecvxbfgdh
>  (2 rows)
>  
>  hdb=# truncate table test;
>  TRUNCATE TABLE
>  hdb=# select * from test limit 2;
>   a 
>  ---
>  (0 rows)
>  
>  hdb=# alter table test add column b varchar(20) default '';
>  ALTER TABLE
>  hdb=# commit;
>  ERROR:  Unexpected internal error (appendonlywriter.c:525)
>  hdb=# rollback;
>  WARNING:  there is no transaction in progress
>  ROLLBACK
>  hdb=#





[jira] [Commented] (HAWQ-1643) How to solve this problem in HAWQ install on centos 7 with version 2.1.0 ?

2018-07-25 Thread Radar Lei (JIRA)


[ 
https://issues.apache.org/jira/browse/HAWQ-1643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16555432#comment-16555432
 ] 

Radar Lei commented on HAWQ-1643:
-

I have not seen this error before.

Please make sure all the dependencies are installed.

You can refer to: 
https://cwiki.apache.org/confluence/display/HAWQ/Build+and+Install#idq3fsX3CO

> How to solve this problem in HAWQ install on centos 7 with version 2.1.0 ?
> --
>
> Key: HAWQ-1643
> URL: https://issues.apache.org/jira/browse/HAWQ-1643
> Project: Apache HAWQ
>  Issue Type: Task
>Reporter: ercengsha
>Assignee: Radar Lei
>Priority: Major
>
> Hello managers, here is the problem description:
> The problem arises during the make step:
>  
> gcc -O3 -std=gnu99  -Wall -Wmissing-prototypes -Wpointer-arith  
> -Wendif-labels -Wformat-security -fno-strict-aliasing -fwrapv 
> -fno-aggressive-loop-optimizations  -I/usr/include/libxml2 -fpic -I. 
> -I../../src/include -D_GNU_SOURCE  -I***- 
> incubating/depends/libhdfs3/build/install/usr/local/hawq/include -I ***-  -c 
> -o sqlparse.o sqlparse.c
 sqlparse.y: In function ‘orafce_sql_yyparse’:
 sqlparse.y:88:17: error: ‘result’ undeclared (first use in this function)
   elements { *((void*)result) = $1; }
                        ^
 sqlparse.y:88:17: note: each undeclared identifier is reported only once 
> for each function it appears in
>  make[2]: *** [sqlparse.o] Error 1
>  make[2]: *** Waiting for unfinished jobs
>  make[2]: Leaving directory `***`
>  make[1]: *** [all] Error 2
>  make[1]: Leaving directory `***`
>  make: *** [all] Error 2





[jira] [Resolved] (HAWQ-1593) Vectorized execution condition check in plan tree

2018-07-25 Thread Radar Lei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei resolved HAWQ-1593.
-
Resolution: Fixed

Resolve this issue since the fix is merged.

> Vectorized execution condition check in plan tree 
> --
>
> Key: HAWQ-1593
> URL: https://issues.apache.org/jira/browse/HAWQ-1593
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Query Execution
>Reporter: WANG Weinan
>Assignee: zhangshujie
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> Check and assign a "v" tag to each node in the plan tree:
> if a node is a leaf node and all of its expressions can be executed in a 
> vectorized way, assign it a "v" tag;
> if all of a node's child nodes carry the "v" tag and all of its expressions 
> can be executed in a vectorized way, assign it a "v" tag.
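The two rules above amount to a bottom-up traversal of the plan tree. A minimal sketch (class and function names are invented for illustration; this is not the actual HAWQ implementation):

```python
class PlanNode:
    def __init__(self, exprs_vectorizable, children=None):
        self.exprs_vectorizable = exprs_vectorizable
        self.children = children or []
        self.vtag = False

def assign_vtags(node):
    # Tag children first, then decide for this node: an inner node needs
    # all children tagged; a leaf only needs vectorizable expressions.
    for child in node.children:
        assign_vtags(child)
    kids_ok = all(c.vtag for c in node.children)  # trivially True for a leaf
    node.vtag = kids_ok and node.exprs_vectorizable
    return node.vtag

leaf = PlanNode(exprs_vectorizable=True)
root = PlanNode(exprs_vectorizable=True, children=[leaf])
print(assign_vtags(root))  # True: the leaf is tagged, so the root is too
```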





[jira] [Resolved] (HAWQ-1592) vectorized data types initialization and relevant function definition

2018-07-25 Thread Radar Lei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei resolved HAWQ-1592.
-
Resolution: Fixed

Resolve this issue since the fix is merged.

> vectorized data types initialization and relevant function definition
> -
>
> Key: HAWQ-1592
> URL: https://issues.apache.org/jira/browse/HAWQ-1592
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Query Execution
>Reporter: WANG Weinan
>Assignee: zhangshujie
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> * vectorized data type initialization
>  * declaration of type-relevant operations
>  * expose these types in the catalog table
>  





[jira] [Assigned] (HAWQ-1642) How to use Ranger to control access to HAWQ row and column read?

2018-07-23 Thread Radar Lei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei reassigned HAWQ-1642:
---

Assignee: Hongxu Ma  (was: Radar Lei)

> How to use Ranger to control access to HAWQ row and column read?
> 
>
> Key: HAWQ-1642
> URL: https://issues.apache.org/jira/browse/HAWQ-1642
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Storage
>Reporter: ercengsha
>Assignee: Hongxu Ma
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> How to use Ranger to control access to HAWQ row and column read?





[jira] [Commented] (HAWQ-1641) How to install a low version of HAWQ?

2018-07-23 Thread Radar Lei (JIRA)


[ 
https://issues.apache.org/jira/browse/HAWQ-1641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553733#comment-16553733
 ] 

Radar Lei commented on HAWQ-1641:
-

Hi [~ercengsha], we have delivered RPM builds since HAWQ 2.2.0.0-incubating; for 
earlier versions you can compile HAWQ by following this doc:

[https://cwiki.apache.org/confluence/display/HAWQ/Build+and+Install]

 

BTW, why do you want to install an older version of HAWQ? Thanks.

> How to install a low version of HAWQ?
> -
>
> Key: HAWQ-1641
> URL: https://issues.apache.org/jira/browse/HAWQ-1641
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Reporter: ercengsha
>Assignee: Radar Lei
>Priority: Major
>
> How can I install an older version of HAWQ, or acquire or build an RPM 
> installation package for an older HAWQ version (e.g. 2.1.0)?





[jira] [Resolved] (HAWQ-1591) Common tuple batch structure for vectorized execution

2018-07-20 Thread Radar Lei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei resolved HAWQ-1591.
-
Resolution: Fixed

Set to Fixed as the PR is merged.

> Common tuple batch structure for vectorized execution
> -
>
> Key: HAWQ-1591
> URL: https://issues.apache.org/jira/browse/HAWQ-1591
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Query Execution
>Reporter: Hongxu Ma
>Assignee: WANG Weinan
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> A common tuple batch structure for vectorized execution; it holds the tuples 
> that are transferred between vectorized operators.
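As an illustration only (field names here are invented, not HAWQ's actual struct), such a tuple batch can be modeled as a column-oriented container:

```python
from dataclasses import dataclass, field

@dataclass
class TupleBatch:
    # Column-oriented storage: one list per attribute plus a row count,
    # so a vectorized operator can process a whole column at a time.
    ncols: int
    columns: list = field(default_factory=list)
    nrows: int = 0

    def append_row(self, values):
        if not self.columns:
            self.columns = [[] for _ in range(self.ncols)]
        for col, v in zip(self.columns, values):
            col.append(v)
        self.nrows += 1

batch = TupleBatch(ncols=2)
batch.append_row((1, "a"))
batch.append_row((2, "b"))
print(batch.nrows, batch.columns[0])  # 2 [1, 2]
```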
>  





[jira] [Resolved] (HAWQ-1583) Add vectorized executor extension and GUC

2018-07-20 Thread Radar Lei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei resolved HAWQ-1583.
-
Resolution: Fixed

> Add vectorized executor extension and GUC
> -
>
> Key: HAWQ-1583
> URL: https://issues.apache.org/jira/browse/HAWQ-1583
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Query Execution
>Reporter: Hongxu Ma
>Assignee: WANG Weinan
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> The vectorized executor will be implemented as an extension (located in the 
> contrib directory).
> A GUC will be used to enable the vectorized executor, e.g.:
> {code:java}
> postgres=# set vectorized_executor_enable to on;
> // run the new vectorized executor
> postgres=# set vectorized_executor_enable to off;
> // run the original HAWQ executor
> {code}
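The GUC-controlled dispatch described above can be sketched as a simple switch; the function name is hypothetical and only illustrates the on/off routing:

```python
def run_query(plan: str, vectorized_executor_enable: bool) -> str:
    # Route the plan to the new vectorized executor when the GUC is on,
    # otherwise fall back to the original row-at-a-time HAWQ executor.
    if vectorized_executor_enable:
        return f"vectorized: {plan}"
    return f"row-at-a-time: {plan}"

print(run_query("SELECT 1", True))
print(run_query("SELECT 1", False))
```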





[jira] [Resolved] (HAWQ-1603) add new hook api for expressions

2018-07-20 Thread Radar Lei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei resolved HAWQ-1603.
-
Resolution: Fixed

Set to Fixed as the PR has already been merged.

> add new hook api for expressions
> 
>
> Key: HAWQ-1603
> URL: https://issues.apache.org/jira/browse/HAWQ-1603
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Query Execution
>Reporter: zhangshujie
>Assignee: zhangshujie
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> 1. Add a new hook API for expressions
> 2. Add a new hook API for refactoring the plan tree





[jira] [Resolved] (HAWQ-1597) Implement Runtime Filter for Hash Join

2018-07-20 Thread Radar Lei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei resolved HAWQ-1597.
-
Resolution: Fixed

> Implement Runtime Filter for Hash Join
> --
>
> Key: HAWQ-1597
> URL: https://issues.apache.org/jira/browse/HAWQ-1597
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Query Execution
>Reporter: Lin Wen
>Assignee: Lin Wen
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
> Attachments: 111BA854-7318-46A7-8338-5F2993D60FA3.png, HAWQ Runtime 
> Filter Design.pdf, HAWQ Runtime Filter Design.pdf, q17_modified_hawq.gif
>
>
> Bloom filter is a space-efficient probabilistic data structure invented in 
> 1970, which is used to test whether an element is a member of a set.
> Nowadays, bloom filters are widely used in OLAP and data-intensive 
> applications to quickly filter data, and they are usually implemented in OLAP 
> systems for hash join. The basic idea is: when hash joining two tables, build 
> bloom filter information for the inner table during the build phase, then push 
> this bloom filter down to the scan of the outer table, so that fewer tuples 
> from the outer table are returned to the hash join node and joined with the 
> hash table. This can greatly improve hash join performance if the selectivity 
> is high.
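The build/probe idea described above can be sketched with a toy bloom filter; this is an illustration of the technique, not HAWQ's implementation:

```python
import hashlib

class BloomFilter:
    # Minimal bloom filter: k hash functions over an m-bit set (stored
    # here as a Python int used as a bit array).
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _hashes(self, key):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for h in self._hashes(key):
            self.bits |= 1 << h

    def might_contain(self, key):
        # May return a false positive, but never a false negative.
        return all(self.bits >> h & 1 for h in self._hashes(key))

# Build phase: add the inner table's join keys; scan phase: drop outer
# tuples that cannot possibly match before they reach the join node.
bf = BloomFilter()
for key in (1, 2, 3):
    bf.add(key)
outer = [1, 4, 2, 5]
survivors = [t for t in outer if bf.might_contain(t)]
print(survivors)  # always keeps 1 and 2; 4 and 5 are usually pruned
```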





[jira] [Reopened] (HAWQ-1618) Segment panic at workfile_mgr_close_file() when transaction ROLLBACK

2018-07-20 Thread Radar Lei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei reopened HAWQ-1618:
-

Reopen to change the resolution type.

> Segment panic at workfile_mgr_close_file() when transaction ROLLBACK
> 
>
> Key: HAWQ-1618
> URL: https://issues.apache.org/jira/browse/HAWQ-1618
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Reporter: Hongxu Ma
>Assignee: Hongxu Ma
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> Log:
> {code}
> 2018-05-23 15:49:14.843058 
> UTC,"user","db",p179799,th401824032,"172.31.6.17","6935",2018-05-23 15:47:39 
> UTC,1260445558,con25148,cmd7,seg21,slice82,,x1260445558,sx1,"ERROR","25M01","*canceling
>  MPP operation*",,"INSERT INTO ...
> 2018-05-23 15:49:15.253671 UTC,,,p179799,th0,,,2018-05-23 15:47:39 
> UTC,0,con25148,cmd7,seg21,slice82"PANIC","XX000","Unexpected internal 
> error: Segment process r
> eceived signal SIGSEGV",,,0"1    0x8ce2a3 postgres gp_backtrace + 0xa3
> 2    0x8ce491 postgres  + 0x8ce491
> 3    0x7f2d147ae7e0 libpthread.so.0  + 0x147ae7e0
> 4    0x91f4ad postgres workfile_mgr_close_file + 0xd
> 5    0x90bc84 postgres  + 0x90bc84
> 6    0x4e6b60 postgres AbortTransaction + 0x240
> 7    0x4e75c5 postgres AbortCurrentTransaction + 0x25
> 8    0x7ed81a postgres PostgresMain + 0x6ea
> 9    0x7a0c50 postgres  + 0x7a0c50
> 10   0x7a3a19 postgres PostmasterMain + 0x759
> 11   0x4a5309 postgres main + 0x519
> 12   0x7f2d13cead1d libc.so.6 __libc_start_main + 0xfd
> 13   0x4a5389 postgres  + 0x4a5389"
> {code}
>  
> Core stack:
> {code}
> (gdb) bt
> #0  0x7f2d147ae6ab in raise () from libpthread.so.0
> #1  0x008ce552 in SafeHandlerForSegvBusIll (postgres_signal_arg=11, 
> processName=) at elog.c:4573
> #2  
> #3  *workfile_mgr_close_file* (work_set=0x0, file=0x7f2ce96d2de0, 
> canReportError=canReportError@entry=0 '\000') at workfile_file.c:129
> #4  0x0090bc84 in *ntuplestore_cleanup* (fNormal=0 '\000', 
> canReportError=0 '\000', ts=0x21f4810) at tuplestorenew.c:654
> #5  XCallBack_NTS (event=event@entry=XACT_EVENT_ABORT, 
> nts=nts@entry=0x21f4810) at tuplestorenew.c:674
> #6  0x004e6b60 in CallXactCallbacksOnce (event=) at 
> xact.c:3660
> #7  AbortTransaction () at xact.c:2871
> #8  0x004e75c5 in AbortCurrentTransaction () at xact.c:3377
> #9  0x007ed81a in PostgresMain (argc=, argv= out>, argv@entry=0x182c900, username=0x17ddcd0 "user") at postgres.c:4648
> #10 0x007a0c50 in BackendRun (port=0x17cfb10) at postmaster.c:5915
> #11 BackendStartup (port=0x17cfb10) at postmaster.c:5484
> #12 ServerLoop () at postmaster.c:2163
> #13 0x007a3a19 in PostmasterMain (argc=, 
> argv=) at postmaster.c:1454
> #14 0x004a5309 in main (argc=9, argv=0x1785d10) at main.c:226
> {code}
>  
> Repro:
> {code}
> # create test table
> drop table if exists testsisc; 
> create table testsisc (i1 int, i2 int, i3 int, i4 int); 
> insert into testsisc select i, i % 1000, i % 10, i % 75 from 
> generate_series(0,1) i;
> drop table if exists to_insert_into; 
> create table to_insert_into as 
> with ctesisc as 
>  (select count(i1) as c1,i3 as c2 from testsisc group by i3)
> select t1.c1 as c11, t1.c2 as c12, t2.c1 as c21, t2.c2 as c22
> from ctesisc as t1, ctesisc as t2
> where t1.c1 = t2.c2
> limit 10;
> # run a long time query
> begin;
> set gp_simex_run=on;
> set gp_cte_sharing=on;
> insert into to_insert_into
> with ctesisc as 
>  (select count(i1) as c1,i3 as c2 from testsisc group by i3)
> select *
> from ctesisc as t1, ctesisc as t2
> where t1.c1 = t2.c2;
> commit;
> {code}
> Kill one segment process while the second query is running; you will then 
> find the panic in the segment log.
>  





[jira] [Resolved] (HAWQ-1618) Segment panic at workfile_mgr_close_file() when transaction ROLLBACK

2018-07-20 Thread Radar Lei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei resolved HAWQ-1618.
-
Resolution: Fixed

> Segment panic at workfile_mgr_close_file() when transaction ROLLBACK
> 
>
> Key: HAWQ-1618
> URL: https://issues.apache.org/jira/browse/HAWQ-1618
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Reporter: Hongxu Ma
>Assignee: Hongxu Ma
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> Log:
> {code}
> 2018-05-23 15:49:14.843058 
> UTC,"user","db",p179799,th401824032,"172.31.6.17","6935",2018-05-23 15:47:39 
> UTC,1260445558,con25148,cmd7,seg21,slice82,,x1260445558,sx1,"ERROR","25M01","*canceling
>  MPP operation*",,"INSERT INTO ...
> 2018-05-23 15:49:15.253671 UTC,,,p179799,th0,,,2018-05-23 15:47:39 
> UTC,0,con25148,cmd7,seg21,slice82"PANIC","XX000","Unexpected internal 
> error: Segment process r
> eceived signal SIGSEGV",,,0"1    0x8ce2a3 postgres gp_backtrace + 0xa3
> 2    0x8ce491 postgres  + 0x8ce491
> 3    0x7f2d147ae7e0 libpthread.so.0  + 0x147ae7e0
> 4    0x91f4ad postgres workfile_mgr_close_file + 0xd
> 5    0x90bc84 postgres  + 0x90bc84
> 6    0x4e6b60 postgres AbortTransaction + 0x240
> 7    0x4e75c5 postgres AbortCurrentTransaction + 0x25
> 8    0x7ed81a postgres PostgresMain + 0x6ea
> 9    0x7a0c50 postgres  + 0x7a0c50
> 10   0x7a3a19 postgres PostmasterMain + 0x759
> 11   0x4a5309 postgres main + 0x519
> 12   0x7f2d13cead1d libc.so.6 __libc_start_main + 0xfd
> 13   0x4a5389 postgres  + 0x4a5389"
> {code}
>  
> Core stack:
> {code}
> (gdb) bt
> #0  0x7f2d147ae6ab in raise () from libpthread.so.0
> #1  0x008ce552 in SafeHandlerForSegvBusIll (postgres_signal_arg=11, 
> processName=) at elog.c:4573
> #2  
> #3  *workfile_mgr_close_file* (work_set=0x0, file=0x7f2ce96d2de0, 
> canReportError=canReportError@entry=0 '\000') at workfile_file.c:129
> #4  0x0090bc84 in *ntuplestore_cleanup* (fNormal=0 '\000', 
> canReportError=0 '\000', ts=0x21f4810) at tuplestorenew.c:654
> #5  XCallBack_NTS (event=event@entry=XACT_EVENT_ABORT, 
> nts=nts@entry=0x21f4810) at tuplestorenew.c:674
> #6  0x004e6b60 in CallXactCallbacksOnce (event=) at 
> xact.c:3660
> #7  AbortTransaction () at xact.c:2871
> #8  0x004e75c5 in AbortCurrentTransaction () at xact.c:3377
> #9  0x007ed81a in PostgresMain (argc=, argv= out>, argv@entry=0x182c900, username=0x17ddcd0 "user") at postgres.c:4648
> #10 0x007a0c50 in BackendRun (port=0x17cfb10) at postmaster.c:5915
> #11 BackendStartup (port=0x17cfb10) at postmaster.c:5484
> #12 ServerLoop () at postmaster.c:2163
> #13 0x007a3a19 in PostmasterMain (argc=, 
> argv=) at postmaster.c:1454
> #14 0x004a5309 in main (argc=9, argv=0x1785d10) at main.c:226
> {code}
>  
> Repro:
> {code}
> # create test table
> drop table if exists testsisc; 
> create table testsisc (i1 int, i2 int, i3 int, i4 int); 
> insert into testsisc select i, i % 1000, i % 10, i % 75 from 
> generate_series(0,1) i;
> drop table if exists to_insert_into; 
> create table to_insert_into as 
> with ctesisc as 
>  (select count(i1) as c1,i3 as c2 from testsisc group by i3)
> select t1.c1 as c11, t1.c2 as c12, t2.c1 as c21, t2.c2 as c22
> from ctesisc as t1, ctesisc as t2
> where t1.c1 = t2.c2
> limit 10;
> # run a long time query
> begin;
> set gp_simex_run=on;
> set gp_cte_sharing=on;
> insert into to_insert_into
> with ctesisc as 
>  (select count(i1) as c1,i3 as c2 from testsisc group by i3)
> select *
> from ctesisc as t1, ctesisc as t2
> where t1.c1 = t2.c2;
> commit;
> {code}
> Kill one segment process while the second query is running; you will then 
> find the panic in the segment log.
>  





[jira] [Resolved] (HAWQ-1450) New HAWQ executor with vectorization & possible code generation

2018-07-20 Thread Radar Lei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei resolved HAWQ-1450.
-
Resolution: Fixed

> New HAWQ executor with vectorization & possible code generation
> ---
>
> Key: HAWQ-1450
> URL: https://issues.apache.org/jira/browse/HAWQ-1450
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Query Execution
>Reporter: Lei Chang
>Assignee: Hongxu Ma
>Priority: Major
> Fix For: backlog, 2.4.0.0-incubating
>
> Attachments: hawq_vectorized_execution_design_v0.1.pdf
>
>
> Most HAWQ executor code is inherited from postgres & gpdb. Let's discuss how 
> to build a new hawq executor with vectorization and possibly code generation. 
> These optimizations may substantially improve query performance.





[jira] [Resolved] (HAWQ-1633) Add parameter for maven package hawq-hadoop

2018-07-20 Thread Radar Lei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei resolved HAWQ-1633.
-
Resolution: Fixed

> Add parameter for maven package hawq-hadoop
> ---
>
> Key: HAWQ-1633
> URL: https://issues.apache.org/jira/browse/HAWQ-1633
> Project: Apache HAWQ
>  Issue Type: Improvement
>Reporter: WANG Weinan
>Assignee: WANG Weinan
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> Since the Maven server update, packaging "hawq-hadoop" has failed. Add a 
> parameter to adjust it.





[jira] [Assigned] (HAWQ-1633) Add parameter for maven package hawq-hadoop

2018-07-20 Thread Radar Lei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei reassigned HAWQ-1633:
---

Assignee: WANG Weinan  (was: Radar Lei)

> Add parameter for maven package hawq-hadoop
> ---
>
> Key: HAWQ-1633
> URL: https://issues.apache.org/jira/browse/HAWQ-1633
> Project: Apache HAWQ
>  Issue Type: Improvement
>Reporter: WANG Weinan
>Assignee: WANG Weinan
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> Since the Maven server update, packaging "hawq-hadoop" has failed. Add a 
> parameter to adjust it.





[jira] [Closed] (HAWQ-1633) Add parameter for maven package hawq-hadoop

2018-07-20 Thread Radar Lei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei closed HAWQ-1633.
---

> Add parameter for maven package hawq-hadoop
> ---
>
> Key: HAWQ-1633
> URL: https://issues.apache.org/jira/browse/HAWQ-1633
> Project: Apache HAWQ
>  Issue Type: Improvement
>Reporter: WANG Weinan
>Assignee: WANG Weinan
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> Since the Maven server update, packaging "hawq-hadoop" has failed. Add a 
> parameter to adjust it.





[jira] [Commented] (HAWQ-1638) Issues with website

2018-07-13 Thread Radar Lei (JIRA)


[ 
https://issues.apache.org/jira/browse/HAWQ-1638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542774#comment-16542774
 ] 

Radar Lei commented on HAWQ-1638:
-

We don't encourage users to use 2.2.0.0 rather than the latest 2.3.0.0, so I 
moved this release to the archive.

BTW, the release notes are added.

Thanks for pointing those out.

> Issues with website
> ---
>
> Key: HAWQ-1638
> URL: https://issues.apache.org/jira/browse/HAWQ-1638
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Sebb
>Assignee: Radar Lei
>Priority: Major
>
> The HAWQ page looks nice, however there are a few problems with it.
> The phrase
> "Plus, HAWQ® works Apache MADlib) machine learning libraries"
> does not read well. Something missing?
> The first/main references to ASF products such as Hadoop, YARN etc must use 
> the full name, i.e. Apache Hadoop etc.
> The download section does not have any link to the KEYS file, nor any 
> instructions on how to use the KEYS+sig or hashes to validate downloads.
> The download section still includes references to MD5 hashes.
> These are deprecated and can be removed for older releases that have other 
> hashes





[jira] [Commented] (HAWQ-1638) Issues with website

2018-07-13 Thread Radar Lei (JIRA)


[ 
https://issues.apache.org/jira/browse/HAWQ-1638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542722#comment-16542722
 ] 

Radar Lei commented on HAWQ-1638:
-

[~s...@apache.org] I fixed the issues, would you help to refresh and review 
again? Thanks.

[http://hawq.incubator.apache.org/]

> Issues with website
> ---
>
> Key: HAWQ-1638
> URL: https://issues.apache.org/jira/browse/HAWQ-1638
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Sebb
>Assignee: Radar Lei
>Priority: Major
>
> The HAWQ page looks nice, however there are a few problems with it.
> The phrase
> "Plus, HAWQ® works Apache MADlib) machine learning libraries"
> does not read well. Something missing?
> The first/main references to ASF products such as Hadoop, YARN etc must use 
> the full name, i.e. Apache Hadoop etc.
> The download section does not have any link to the KEYS file, nor any 
> instructions on how to use the KEYS+sig or hashes to validate downloads.
> The download section still includes references to MD5 hashes.
> These are deprecated and can be removed for older releases that have other 
> hashes





[jira] [Updated] (HAWQ-1639) Unexpected internal error when truncate and alter in a transaction

2018-07-13 Thread Radar Lei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei updated HAWQ-1639:

Affects Version/s: 2.3.0.0-incubating
Fix Version/s: (was: 2.3.0.0-incubating)
   backlog
  Description: 
hdb=# select version();
                                 version
------------------------------------------------------------------------
 PostgreSQL 8.2.15 (Greenplum Database 4.2.0 build 1) (HAWQ 2.3.0.0-incubating build dev) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16) compiled on May  4 2018 06:27:27
 (1 row)
 
 hdb=# begin;
 BEGIN
 hdb=# select * from test limit 2;
  a  
 
  asdfsdgrtecvxbfgdh
  asdfsdgrtecvxbfgdh
 (2 rows)
 
 hdb=# truncate table test;
 TRUNCATE TABLE
 hdb=# select * from test limit 2;
  a 
 ---
 (0 rows)
 
 hdb=# alter table test add column b varchar(20) default '';
 ALTER TABLE
 hdb=# commit;
 ERROR:  Unexpected internal error (appendonlywriter.c:525)
 hdb=# rollback;
 WARNING:  there is no transaction in progress
 ROLLBACK
 hdb=#


  was:

hdb=# select version();
                                 version
------------------------------------------------------------------------
 PostgreSQL 8.2.15 (Greenplum Database 4.2.0 build 1) (HAWQ 2.3.0.0-incubating build dev) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16) compiled on May  4 2018 06:27:27
 (1 row)
 
 hdb=# begin;
 BEGIN
 hdb=# select * from test limit 2;
  a  
 
  asdfsdgrtecvxbfgdh
  asdfsdgrtecvxbfgdh
 (2 rows)
 
 hdb=# truncate table test;
 TRUNCATE TABLE
 hdb=# select * from test limit 2;
  a 
 ---
 (0 rows)
 
 hdb=# alter table test add column b varchar(20) default '';
 ALTER TABLE
 hdb=# commit;
 ERROR:  Unexpected internal error (appendonlywriter.c:525)
 hdb=# rollback;
 WARNING:  there is no transaction in progress
 ROLLBACK
 hdb=#



> Unexpected internal error when truncate and alter in a transaction
> --
>
> Key: HAWQ-1639
> URL: https://issues.apache.org/jira/browse/HAWQ-1639
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 2.3.0.0-incubating
>Reporter: TaoJIn
>Assignee: Radar Lei
>Priority: Major
> Fix For: backlog
>
>
> hdb=# select version();
>                                  version
> ------------------------------------------------------------------------
>  PostgreSQL 8.2.15 (Greenplum Database 4.2.0 build 1) (HAWQ 2.3.0.0-incubating build dev) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16) compiled on May  4 2018 06:27:27
>  (1 row)
>  
>  hdb=# begin;
>  BEGIN
>  hdb=# select * from test limit 2;
>   a  
>  
>   asdfsdgrtecvxbfgdh
>   asdfsdgrtecvxbfgdh
>  (2 rows)
>  
>  hdb=# truncate table test;
>  TRUNCATE TABLE
>  hdb=# select * from test limit 2;
>   a 
>  ---
>  (0 rows)
>  
>  hdb=# alter table test add column b varchar(20) default '';
>  ALTER TABLE
>  hdb=# commit;
>  ERROR:  Unexpected internal error (appendonlywriter.c:525)
>  hdb=# rollback;
>  WARNING:  there is no transaction in progress
>  ROLLBACK
>  hdb=#





[jira] [Updated] (HAWQ-1640) process not exit after query finished immediately while client connection lost

2018-07-13 Thread Radar Lei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei updated HAWQ-1640:

Affects Version/s: 2.3.0.0-incubating
Fix Version/s: (was: 2.3.0.0-incubating)
   backlog

> process not exit after query finished immediately while client connection lost
> --
>
> Key: HAWQ-1640
> URL: https://issues.apache.org/jira/browse/HAWQ-1640
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 2.3.0.0-incubating
>Reporter: TaoJIn
>Assignee: Radar Lei
>Priority: Major
> Fix For: backlog
>
> Attachments: gdb backtrace.jpeg
>
>
> When a client (such as pgbouncer, JDBC, or Zeppelin) connects to HAWQ and 
> executes a long query, and the client connection is interrupted before the 
> query finishes, the server process does not exit until an hour later.
> This issue happened in HAWQ 2.3.0.0-incubating. Setting the parameter 
> gp_interconnect_transmit_timeout to 600 (default 3600) reduces the wait to 
> 10 minutes.
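> The mitigation described above can be sketched as follows. This is a hedged 
> example: the `hawq config` invocation follows the standard HAWQ management 
> tooling, but verify the flags against your installation before use.
> {code}
> # Lower the interconnect transmit timeout from the default 3600s to 600s,
> # so an orphaned server process exits after ~10 minutes instead of an hour.
> hawq config -c gp_interconnect_transmit_timeout -v 600
>
> # Restart the cluster so the new value takes effect.
> hawq restart cluster -a
> {code}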
> While the query was running, we could see its status in pg_stat_activity, 
> but after it finished we could only see the process id in pg_locks and in 
> the OS process list.
> We saw some error logs as below:
> $ tailf hawq-2018-07-04_063514.csv|grep p294
> 2018-07-04 08:13:29.595365 
> UTC,"dev","hdb",p294,th1628359104,"172.17.10.148","63974",2018-07-04 
> 06:37:28 UTC,58896,con19,cmd32,seg-1,,,x58896,sx1,"LOG","0","ConnID 
> 5. Returned resource to resource manager.",,,0,,"rmcomm_QD2RM.c",951,
> 2018-07-04 08:13:29.59 
> UTC,"dev","hdb",p294,th1628359104,"172.17.10.148","63974",2018-07-04 
> 06:37:28 UTC,58896,con19,cmd32,seg-1,,,x58896,sx1,"LOG","0","ConnID 
> 5. Unregistered from HAWQ resource manager.",,,0,,"rmcomm_QD2RM.c",661,
> 2018-07-04 08:15:58.706458 
> UTC,"dev","hdb",p294,th1628359104,"172.17.10.148","63974",2018-07-04 
> 06:37:28 UTC,58903,con19,cmd34,seg-1,,,x58903,sx1,"LOG","0","ConnID 
> 6. Registered in HAWQ resource manager (By OID)",,"select * from 
> cppayorderproduct",0,,"rmcomm_QD2RM.c",609,
> 2018-07-04 08:15:58.706640 
> UTC,"dev","hdb",p294,th1628359104,"172.17.10.148","63974",2018-07-04 
> 06:37:28 UTC,58903,con19,cmd34,seg-1,,,x58903,sx1,"LOG","0","ConnID 
> 6. Acquired resource from resource manager, (256 MB, 0.062500 CORE) x 
> 18.",,"select * from cppayorderproduct",0,,"rmcomm_QD2RM.c",868,
> 2018-07-04 09:04:56.190873 
> UTC,"dev","hdb",p294,th1628359104,"172.17.10.148","63974",2018-07-04 
> 06:37:28 UTC,58903,con19,cmd35,seg-1,,,x58903,sx1,"LOG","08006","could 
> not send data to client: Connection reset by peer",,"select * from 
> cppayorderproduct",0,,"pqcomm.c",1413,
> 2018-07-04 09:04:56.192347 
> UTC,"dev","hdb",p294,th1628359104,"172.17.10.148","63974",2018-07-04 
> 06:37:28 
> UTC,58903,con19,cmd35,seg-1,,,x58903,sx1,"FATAL","08006","connection to 
> client lost",,"select * from cppayorderproduct",0,,"postgres.c",3606,
> 2018-07-04 10:04:56.306412 
> UTC,"dev","hdb",p294,th1627412224,"172.17.10.148","63974",2018-07-04 
> 06:37:28 
> UTC,58903,con19,cmd35,seg-1,,,x58903,sx1,"LOG","0","function 
> executormgr_consume meets error, connection is bad.",,,0
> 2018-07-04 10:04:56.306535 
> UTC,"dev","hdb",p294,th1627412224,"172.17.10.148","63974",2018-07-04 
> 06:37:28 
> UTC,58903,con19,cmd35,seg-1,,,x58903,sx1,"LOG","0","dispmgt_thread_func_run():
>  
> fail to consume data. Will exit and clean up.",,,0
> 2018-07-04 10:04:56.309663 
> UTC,"dev","hdb",p294,th1627412224,"172.17.10.148","63974",2018-07-04 
> 06:37:28 
> UTC,58903,con19,cmd35,seg-1,,,x58903,sx1,"LOG","0","function 
> executormgr_cancel calling executormgr_catch_error",,,0
> 2018-07-04 10:04:56.312741 
> UTC,"dev","hdb",p294,th1627412224,"172.17.10.148","63974",2018-07-04 
> 06:37:28 
> UTC,58903,con19,cmd35,seg-1,,,x58903,sx1,"LOG","0","function 
> executormgr_cancel calling executormgr_catch_error",,,0
> 2018-07-04 10:04:56.315364 
> UTC,"dev","hdb",p294,th1627412224,"172.17.10.148","63974",2018-07-04 
> 06:37:28 
> UTC,58903,con19,cmd35,seg-1,,,x58903,sx1,"LOG","0","function 
> executormgr_cancel calling executormgr_catch_error",,,0
> 2018-07-04 10:04:56.317885 
> UTC,"dev","hdb",p294,th1627412224,"172.17.10.148","63974",2018-07-04 
> 06:37:28 
> UTC,58903,con19,cmd35,seg-1,,,x58903,sx1,"LOG","0","function 
> executormgr_cancel calling executormgr_catch_error",,,0
> 2018-07-04 10:04:56.320411 
> UTC,"dev","hdb",p294,th1627412224,"172.17.10.148","63974",2018-07-04 
> 06:37:28 
> UTC,58903,con19,cmd35,seg-1,,,x58903,sx1,"LOG","0","function 
> executormgr_cancel calling executormgr_catch_error",,,0
> 2018-07-04 10:04:56.322998 
> 

[jira] [Commented] (HAWQ-1638) Issues with website

2018-07-11 Thread Radar Lei (JIRA)


[ 
https://issues.apache.org/jira/browse/HAWQ-1638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16539794#comment-16539794
 ] 

Radar Lei commented on HAWQ-1638:
-

Thanks [~s...@apache.org], we will continue to fix the issues you mentioned.

> Issues with website
> ---
>
> Key: HAWQ-1638
> URL: https://issues.apache.org/jira/browse/HAWQ-1638
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Sebb
>Assignee: Radar Lei
>Priority: Major
>
> The HAWQ page looks nice, however there are a few problems with it.
> The phrase
> "Plus, HAWQ® works Apache MADlib) machine learning libraries"
> does not read well. Something missing?
> The first/main references to ASF products such as Hadoop, YARN etc must use 
> the full name, i.e. Apache Hadoop etc.
> The download section does not have any link to the KEYS file, nor any 
> instructions on how to use the KEYS+sig or hashes to validate downloads.
> The download section still includes references to MD5 hashes.
> These are deprecated and can be removed for older releases that have other 
> hashes





[jira] [Commented] (HAWQ-1638) Issues with website

2018-07-11 Thread Radar Lei (JIRA)


[ 
https://issues.apache.org/jira/browse/HAWQ-1638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16539737#comment-16539737
 ] 

Radar Lei commented on HAWQ-1638:
-

[~s...@apache.org], I have fixed the webpage as this ticket describes, please 
help to review, thanks.

[http://hawq.incubator.apache.org/]

[https://github.com/apache/incubator-hawq-site/pull/16]

[https://github.com/apache/incubator-hawq-site/pull/17]

 

> Issues with website
> ---
>
> Key: HAWQ-1638
> URL: https://issues.apache.org/jira/browse/HAWQ-1638
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Sebb
>Assignee: Radar Lei
>Priority: Major
>
> The HAWQ page looks nice, however there are a few problems with it.
> The phrase
> "Plus, HAWQ® works Apache MADlib) machine learning libraries"
> does not read well. Something missing?
> The first/main references to ASF products such as Hadoop, YARN etc must use 
> the full name, i.e. Apache Hadoop etc.
> The download section does not have any link to the KEYS file, nor any 
> instructions on how to use the KEYS+sig or hashes to validate downloads.
> The download section still includes references to MD5 hashes.
> These are deprecated and can be removed for older releases that have other 
> hashes





[jira] [Commented] (HAWQ-1638) Issues with website

2018-07-07 Thread Radar Lei (JIRA)


[ 
https://issues.apache.org/jira/browse/HAWQ-1638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535667#comment-16535667
 ] 

Radar Lei commented on HAWQ-1638:
-

Thanks, we will correct it based on the comments soon.

> Issues with website
> ---
>
> Key: HAWQ-1638
> URL: https://issues.apache.org/jira/browse/HAWQ-1638
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Sebb
>Assignee: Radar Lei
>Priority: Major
>
> The HAWQ page looks nice, however there are a few problems with it.
> The phrase
> "Plus, HAWQ® works Apache MADlib) machine learning libraries"
> does not read well. Something missing?
> The first/main references to ASF products such as Hadoop, YARN etc must use 
> the full name, i.e. Apache Hadoop etc.
> The download section does not have any link to the KEYS file, nor any 
> instructions on how to use the KEYS+sig or hashes to validate downloads.
> The download section still includes references to MD5 hashes.
> These are deprecated and can be removed for older releases that have other 
> hashes





[jira] [Assigned] (HAWQ-1549) Re-syncing standby fails even when stop mode is fast

2018-04-09 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei reassigned HAWQ-1549:
---

Assignee: Shubham Sharma  (was: Radar Lei)

>  Re-syncing standby fails even when stop mode is fast
> -
>
> Key: HAWQ-1549
> URL: https://issues.apache.org/jira/browse/HAWQ-1549
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools, Standby master
>Reporter: Shubham Sharma
>Assignee: Shubham Sharma
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> Recently observed a behaviour while re-syncing standby from hawq command line.
> Here are the reproduction steps -
> 1 - Open a client connection to hawq using psql
> 2 - From a different terminal run command - hawq init standby -n -v -M fast
> 3 - Standby resync fails with error
> {code}
> 20171113:03:49:21:158354 hawq_stop:hdp3:gpadmin-[WARNING]:-There are other 
> connections to this instance, shutdown mode smart aborted
> 20171113:03:49:21:158354 hawq_stop:hdp3:gpadmin-[WARNING]:-Either remove 
> connections, or use 'hawq stop master -M fast' or 'hawq stop master -M 
> immediate'
> 20171113:03:49:21:158354 hawq_stop:hdp3:gpadmin-[WARNING]:-See hawq stop 
> --help for all options
> 20171113:03:49:21:158354 hawq_stop:hdp3:gpadmin-[ERROR]:-Active connections. 
> Aborting shutdown...
> 20171113:03:49:21:158143 hawq_init:hdp3:gpadmin-[ERROR]:-Stop hawq cluster 
> failed, exit
> {code}
> 4 - When -M (stop mode) is passed it should terminate existing client 
> connections. 
> The source of this issue appears to be the tools/bin/hawq_ctl method 
> _resync_standby. When it is called, the command it builds does not include 
> the stop_mode option passed in the arguments.
> {code}
>  def _resync_standby(self):
> logger.info("Re-sync standby")
> cmd = "%s; hawq stop master -a;" % source_hawq_env
> check_return_code(local_ssh(cmd, logger), logger, "Stop hawq cluster 
> failed, exit")
> ..
> ..
> {code}
> I can start this and submit a PR when changes are done.
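> A minimal sketch of the fix suggested above: forward the user-supplied stop 
> mode instead of hardcoding a plain "hawq stop master -a". The helper name 
> below is illustrative, not the actual hawq_ctl code.
> {code}
> # Hypothetical sketch, assuming stop_mode carries the value of -M
> # (smart/fast/immediate) as parsed from the command line.
>
> def build_resync_stop_command(source_hawq_env, stop_mode):
>     # Include "-M <mode>" so active client connections do not abort the
>     # shutdown when the user asked for fast or immediate mode.
>     return "%s; hawq stop master -a -M %s;" % (source_hawq_env, stop_mode)
>
> print(build_resync_stop_command("source /usr/local/hawq/greenplum_path.sh",
>                                 "fast"))
> {code}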





[jira] [Closed] (HAWQ-1548) Ambiguous message while logging hawq utilization

2018-04-09 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei closed HAWQ-1548.
---
   Resolution: Not A Problem
Fix Version/s: backlog

> Ambiguous message while logging hawq utilization
> 
>
> Key: HAWQ-1548
> URL: https://issues.apache.org/jira/browse/HAWQ-1548
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: libyarn
>Reporter: Shubham Sharma
>Assignee: Lin Wen
>Priority: Major
> Fix For: backlog
>
>
> While YARN mode is enabled, the resource broker logs two things:
> - YARN cluster total resource
> - HAWQ's total resource per node
> The following messages are logged:
> {code}
> 2017-11-11 23:21:40.944904 
> UTC,,,p549330,th9000778560,con4,,seg-1,"LOG","0","Resource 
> manager YARN resource broker counted YARN cluster having total resource 
> (1376256 MB, 168.00 CORE).",,,0,,"resourcebroker_LIBYARN.c",776,
> 2017-11-11 23:21:40.944921 
> UTC,,,p549330,th9000778560,con4,,seg-1,"LOG","0","Resource 
> manager YARN resource broker counted HAWQ cluster now having (98304 MB, 
> 12.00 CORE) in a YARN cluster of total resource (1376256 MB, 168.00 
> CORE).",,,0,,"resourcebroker_LIBYARN.c",785,
> {code}
> The second message shown above is ambiguous: it reads as though the complete 
> HAWQ cluster has only 98304 MB and 12 cores in total. However, according to 
> the configuration it should be 98304 MB and 12 cores per segment server.
> {code}
> Resource manager YARN resource broker counted HAWQ cluster now having (98304 
> MB, 12.00 CORE) in a YARN cluster of total resource (1376256 MB, 
> 168.00 CORE).
> {code}
> Either the wrong variables are printed, or we should correct the message to 
> state that the logged resources are per node, as this can confuse the user 
> into thinking the HAWQ cluster does not have enough resources.





[jira] [Resolved] (HAWQ-1515) how to build and compile hawq based on suse11

2018-04-09 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei resolved HAWQ-1515.
-
   Resolution: Fixed
Fix Version/s: backlog

> how to build and compile hawq based on suse11
> -
>
> Key: HAWQ-1515
> URL: https://issues.apache.org/jira/browse/HAWQ-1515
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Reporter: FengHuang
>Assignee: Radar Lei
>Priority: Major
> Fix For: backlog
>
>
> There are only a few zypper repos providing the dependencies needed to build 
> HAWQ on SUSE 11. Can you recommend an available and comprehensive zypper repo?





[jira] [Resolved] (HAWQ-1542) PXF Demo profile should support write use case.

2018-04-09 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei resolved HAWQ-1542.
-
   Resolution: Fixed
Fix Version/s: 2.3.0.0-incubating

> PXF Demo profile should support write use case.
> ---
>
> Key: HAWQ-1542
> URL: https://issues.apache.org/jira/browse/HAWQ-1542
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Alexander Denissov
>Assignee: Alexander Denissov
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> The Demo PXF accessors / resolvers should support a use case for defining 
> writable external table that saves data to a file on a local file system.





[jira] [Resolved] (HAWQ-1549) Re-syncing standby fails even when stop mode is fast

2018-04-09 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei resolved HAWQ-1549.
-
   Resolution: Fixed
Fix Version/s: 2.3.0.0-incubating

>  Re-syncing standby fails even when stop mode is fast
> -
>
> Key: HAWQ-1549
> URL: https://issues.apache.org/jira/browse/HAWQ-1549
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools, Standby master
>Reporter: Shubham Sharma
>Assignee: Shubham Sharma
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> Recently observed a behaviour while re-syncing standby from hawq command line.
> Here are the reproduction steps -
> 1 - Open a client connection to hawq using psql
> 2 - From a different terminal run command - hawq init standby -n -v -M fast
> 3 - Standby resync fails with error
> {code}
> 20171113:03:49:21:158354 hawq_stop:hdp3:gpadmin-[WARNING]:-There are other 
> connections to this instance, shutdown mode smart aborted
> 20171113:03:49:21:158354 hawq_stop:hdp3:gpadmin-[WARNING]:-Either remove 
> connections, or use 'hawq stop master -M fast' or 'hawq stop master -M 
> immediate'
> 20171113:03:49:21:158354 hawq_stop:hdp3:gpadmin-[WARNING]:-See hawq stop 
> --help for all options
> 20171113:03:49:21:158354 hawq_stop:hdp3:gpadmin-[ERROR]:-Active connections. 
> Aborting shutdown...
> 20171113:03:49:21:158143 hawq_init:hdp3:gpadmin-[ERROR]:-Stop hawq cluster 
> failed, exit
> {code}
> 4 - When -M (stop mode) is passed it should terminate existing client 
> connections. 
> The source of this issue appears to be the tools/bin/hawq_ctl method 
> _resync_standby. When it is called, the command it builds does not include 
> the stop_mode option passed in the arguments.
> {code}
>  def _resync_standby(self):
> logger.info("Re-sync standby")
> cmd = "%s; hawq stop master -a;" % source_hawq_env
> check_return_code(local_ssh(cmd, logger), logger, "Stop hawq cluster 
> failed, exit")
> ..
> ..
> {code}
> I can start this and submit a PR when changes are done.





[jira] [Resolved] (HAWQ-1572) Travis CI build failure on master. Thrift/boost incompatibility

2018-04-09 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei resolved HAWQ-1572.
-
   Resolution: Fixed
Fix Version/s: 2.3.0.0-incubating

> Travis CI build failure on master. Thrift/boost incompatibility
> ---
>
> Key: HAWQ-1572
> URL: https://issues.apache.org/jira/browse/HAWQ-1572
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Shubham Sharma
>Assignee: Shubham Sharma
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> Hi,
> The travis CI build is failing for master and new commits. The CI is erroring 
> out with
> {code}
> configure: error: thrift is required
> The command “./configure” failed and exited with 1 during .
> {code}
> I was able to reproduce this issue and looking at the config.log it looks 
> like it is failing at the line below while running a conftest.cpp -
> {code}
> /usr/local/include/thrift/stdcxx.h:32:10: fatal error: 
> 'boost/tr1/functional.hpp' file not found
> {code}
> The root cause of the problem is the compatibility of thrift 0.11 with boost 
> 1.65.1. Travis recently upgraded their Xcode image to 9.2, and the list of 
> default packages now contains boost 1.65.1 and thrift 0.11.
> Thrift uses 
> [stdcxx.h|https://github.com/apache/thrift/blob/master/lib/cpp/src/thrift/stdcxx.h]
>  which includes boost/tr1/functional.hpp library. The support for tr1 has 
> been removed in boost 1.65, see 
> [here|http://www.boost.org/users/history/version_1_65_1.html] under topic 
> “Removed Libraries”.
> Since the tr1 library is no longer present in boost 1.65, thrift fails and 
> eventually ./configure fails.
> Solution
> As a solution I recommend that we uninstall boost 1.65 and install boost 
> 1.60 (the last version compatible with thrift).
> I am not sure whether this is a problem with thrift (not yet compatible with 
> boost 1.65) or with Travis CI (having included two incompatible versions). 
> I would love to hear the community's thoughts on it.
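> The proposed workaround could look like the following .travis.yml fragment. 
> This is only a sketch: it assumes the macOS image uses Homebrew, and the 
> versioned formula name boost@1.60 is an assumption to verify.
> {code}
> # .travis.yml fragment (illustrative): pin boost to a thrift-compatible
> # version before building, since boost 1.65 dropped the tr1 headers that
> # thrift 0.11's stdcxx.h still includes.
> before_install:
>   - brew uninstall --ignore-dependencies boost
>   - brew install boost@1.60
>   - brew link --force boost@1.60
> {code}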





[jira] [Assigned] (HAWQ-1572) Travis CI build failure on master. Thrift/boost incompatibility

2018-04-09 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei reassigned HAWQ-1572:
---

Assignee: Shubham Sharma  (was: Radar Lei)

> Travis CI build failure on master. Thrift/boost incompatibility
> ---
>
> Key: HAWQ-1572
> URL: https://issues.apache.org/jira/browse/HAWQ-1572
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Shubham Sharma
>Assignee: Shubham Sharma
>Priority: Major
>
> Hi,
> The travis CI build is failing for master and new commits. The CI is erroring 
> out with
> {code}
> configure: error: thrift is required
> The command “./configure” failed and exited with 1 during .
> {code}
> I was able to reproduce this issue and looking at the config.log it looks 
> like it is failing at the line below while running a conftest.cpp -
> {code}
> /usr/local/include/thrift/stdcxx.h:32:10: fatal error: 
> 'boost/tr1/functional.hpp' file not found
> {code}
> The root cause of the problem is the compatibility of thrift 0.11 with boost 
> 1.65.1. Travis recently upgraded their Xcode image to 9.2, and the list of 
> default packages now contains boost 1.65.1 and thrift 0.11.
> Thrift uses 
> [stdcxx.h|https://github.com/apache/thrift/blob/master/lib/cpp/src/thrift/stdcxx.h]
>  which includes boost/tr1/functional.hpp library. The support for tr1 has 
> been removed in boost 1.65, see 
> [here|http://www.boost.org/users/history/version_1_65_1.html] under topic 
> “Removed Libraries”.
> Since the tr1 library is no longer present in boost 1.65, thrift fails and 
> eventually ./configure fails.
> Solution
> As a solution I recommend that we uninstall boost 1.65 and install boost 
> 1.60 (the last version compatible with thrift).
> I am not sure whether this is a problem with thrift (not yet compatible with 
> boost 1.65) or with Travis CI (having included two incompatible versions). 
> I would love to hear the community's thoughts on it.





[jira] [Updated] (HAWQ-1574) libhdfs fails silently when hdfs extended acls are in use

2018-04-09 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei updated HAWQ-1574:

Fix Version/s: backlog

> libhdfs fails silently when hdfs extended acls are in use
> -
>
> Key: HAWQ-1574
> URL: https://issues.apache.org/jira/browse/HAWQ-1574
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: libhdfs
>Reporter: Peter Parente
>Assignee: Radar Lei
>Priority: Major
> Fix For: backlog
>
>
> {code}
> # list files in a folder
> hdfs.ls('/user/p-pparente/example')
> ['/user/p-pparente/example/1',
>  '/user/p-pparente/example/2',
>  '/user/p-pparente/example/3']
> # using the standard hdfs CLI, set some extended acls
> # hdfs dfs -setfacl -m user:analytics:rwx /user/p-pparente/example/1
> # try to list files again, nothing shows!
> hdfs.ls('/user/p-pparente/example')
> []
> # remove the extended acl using the hdfs CLI
> # hdfs dfs -setfacl -x user:analytics /user/p-pparente/example/1
> # list again, and still nothing there because the extended ACLs have been set 
> at least once
> hdfs.ls('/user/p-pparente/example')
> []
> # Remove the file from the directory entirely
> # hdfs dfs -rm /user/p-pparente/example/1
> # list again, and now everything is fine once more
> hdfs.ls('/user/p-pparente/example')
> ['/user/p-pparente/example/1', '/user/p-pparente/example/2']
> {code}





[jira] [Resolved] (HAWQ-1578) Regression Test (Feature->Ranger)Failed because pxfwritable_import_beginscan function was not found

2018-04-09 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei resolved HAWQ-1578.
-
   Resolution: Fixed
Fix Version/s: 2.3.0.0-incubating

> Regression Test (Feature->Ranger)Failed because pxfwritable_import_beginscan 
> function was not found 
> 
>
> Key: HAWQ-1578
> URL: https://issues.apache.org/jira/browse/HAWQ-1578
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF, Tests
>Reporter: WANG Weinan
>Assignee: Chiyang Wan
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
> Attachments: HAWQ-1578.patch
>
>
> TestHawqRanger failed when running the PXFHiveTest and PXFHBaseTest cases; 
> the test log is shown as follows:
> Note: Google Test filter = TestHawqRanger.PXFHiveTest
> [==] Running 1 test from 1 test case.
> [--] Global test environment set-up.
> [--] 1 test from TestHawqRanger
> [ RUN  ] TestHawqRanger.PXFHiveTest
> lib/sql_util.cpp:197: Failure
> Value of: is_sql_ans_diff
>   Actual: true
> Expected: false
> lib/sql_util.cpp:203: Failure
> Value of: true
>   Actual: true
> Expected: false
> [  FAILED  ] TestHawqRanger.PXFHiveTest (89777 ms)
> [--] 1 test from TestHawqRanger (89777 ms total)
> [--] Global test environment tear-down
> [==] 1 test from 1 test case ran. (89777 ms total)
> [  PASSED  ] 0 tests.
> [  FAILED  ] 1 test, listed below:
> [  FAILED  ] TestHawqRanger.PXFHiveTest
>  1 FAILED TEST
> [125/133] TestHawqRanger.PXFHiveTest returned/aborted with exit code 1 (89787 
> ms)
> [128/133] TestHawqRanger.PXFHBaseTest (87121 ms)  
>   
> Note: Google Test filter = TestHawqRanger.PXFHBaseTest
> [==] Running 1 test from 1 test case.
> [--] Global test environment set-up.
> [--] 1 test from TestHawqRanger
> [ RUN  ] TestHawqRanger.PXFHBaseTest
> lib/sql_util.cpp:197: Failure
> Value of: is_sql_ans_diff
>   Actual: true
> Expected: false
> lib/sql_util.cpp:203: Failure
> Value of: true
>   Actual: true
> Expected: false
> [  FAILED  ] TestHawqRanger.PXFHBaseTest (87098 ms)
> [--] 1 test from TestHawqRanger (87098 ms total)
> [--] Global test environment tear-down
> [==] 1 test from 1 test case ran. (87099 ms total)
> [  PASSED  ] 0 tests.
> [  FAILED  ] 1 test, listed below:
> [  FAILED  ] TestHawqRanger.PXFHBaseTest
> We can find some suspicious log entries in the master segment log file:
> 2018-01-03 05:21:30.170970 
> UTC,"gpadmin","hawq_feature_test_db",p109703,th-290256608,"127.0.0.1","56288",2018-01-03
>  05:21:29 
> UTC,14669,con2342,cmd4,seg-1,,,x14669,sx1,"ERROR","XX000","pxfwritable_import_beginscan
>  function was not found (nodeExternalscan.c:310)",,"select * from 
> test_hbase;",0,,"nodeExternalscan.c",310,"Stack trace:
> 10x8cf31e postgres errstart (elog.c:505)
> 20x8d11bb postgres elog_finish (elog.c:1459)
> 30x69134a postgres ExecInitExternalScan (nodeExternalscan.c:215)
> 40x670b9d postgres ExecInitNode (execProcnode.c:371)
> 50x69b7d1 postgres ExecInitMotion (nodeMotion.c:1096)
> 60x670064 postgres ExecInitNode (execProcnode.c:629)
> 70x66a407 postgres ExecutorStart (execMain.c:2048)
> 80x7f8fcd postgres PortalStart (pquery.c:1308)
> 90x7f0628 postgres  (postgres.c:1795)
> 10   0x7f1cb0 postgres PostgresMain (postgres.c:4897)
> 11   0x7a40c0 postgres  (postmaster.c:5486)
> 12   0x7a6e89 postgres PostmasterMain (postmaster.c:1459)
> 13   0x4a5a59 postgres main (main.c:226)
> 14   0x7fceea8a1d1d libc.so.6 __libc_start_main (??:0)
> 15   0x4a5ad9 postgres  (??:0)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1583) Add vectorized executor extension and GUC

2018-04-09 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei updated HAWQ-1583:

Fix Version/s: (was: backlog)
   2.4.0.0-incubating

> Add vectorized executor extension and GUC
> -
>
> Key: HAWQ-1583
> URL: https://issues.apache.org/jira/browse/HAWQ-1583
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Query Execution
>Reporter: Hongxu Ma
>Assignee: WANG Weinan
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> The vectorized executor will be implemented as an extension (located in the 
> contrib directory).
> A GUC will be used to enable or disable the vectorized executor, e.g.:
> {code:java}
> postgres=# set vectorized_executor_enable to on;
> // run the new vectorized executor
> postgres=# set vectorized_executor_enable to off;
> // run the original HAWQ executor
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1591) Common tuple batch structure for vectorized execution

2018-04-09 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei updated HAWQ-1591:

Fix Version/s: (was: backlog)
   2.4.0.0-incubating

> Common tuple batch structure for vectorized execution
> -
>
> Key: HAWQ-1591
> URL: https://issues.apache.org/jira/browse/HAWQ-1591
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Query Execution
>Reporter: Hongxu Ma
>Assignee: WANG Weinan
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> A common tuple batch structure for vectorized execution; it holds the tuples 
> that are transferred between vectorized operators.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1592) vectorized data types initialization and relevant function definition

2018-04-09 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei updated HAWQ-1592:

Fix Version/s: (was: backlog)
   2.4.0.0-incubating

> vectorized data types initialization and relevant function definition
> -
>
> Key: HAWQ-1592
> URL: https://issues.apache.org/jira/browse/HAWQ-1592
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Query Execution
>Reporter: WANG Weinan
>Assignee: zhangshujie
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> * vectorized data type initialization
>  * declaration of type-relevant operations
>  * expose these types in the catalog table
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1593) Vectorized execution condition check in plan tree

2018-04-09 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei updated HAWQ-1593:

Fix Version/s: (was: backlog)
   2.4.0.0-incubating

> Vectorized execution condition check in plan tree 
> --
>
> Key: HAWQ-1593
> URL: https://issues.apache.org/jira/browse/HAWQ-1593
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Query Execution
>Reporter: WANG Weinan
>Assignee: zhangshujie
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> Check each node in the plan tree and assign a "v" tag:
> if a node is a leaf node and all its expressions can be executed vectorized, 
> assign a "v" tag; 
> if all of a node's child nodes are assigned the "v" tag and all its 
> expressions can be executed vectorized, assign a "v" tag.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1597) Implement Runtime Filter for Hash Join

2018-04-09 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei updated HAWQ-1597:

Fix Version/s: 2.4.0.0-incubating

> Implement Runtime Filter for Hash Join
> --
>
> Key: HAWQ-1597
> URL: https://issues.apache.org/jira/browse/HAWQ-1597
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Query Execution
>Reporter: Lin Wen
>Assignee: Lin Wen
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
> Attachments: HAWQ Runtime Filter Design.pdf
>
>
> A Bloom filter is a space-efficient probabilistic data structure invented in 
> 1970, used to test whether an element is a member of a set.
> Nowadays, Bloom filters are widely used in OLAP and data-intensive 
> applications to quickly filter data, and they are commonly implemented in 
> OLAP systems for hash join. The basic idea is: when hash joining two tables, 
> during the build phase, build a Bloom filter over the inner table, then push 
> this filter down to the scan of the outer table, so that fewer tuples from 
> the outer table are returned to the hash join node and joined with the hash 
> table. This can greatly improve hash join performance if the selectivity is 
> high.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1601) Vectorized Scan qualification supported

2018-04-09 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei updated HAWQ-1601:

Fix Version/s: (was: backlog)
   2.4.0.0-incubating

> Vectorized Scan qualification supported
> ---
>
> Key: HAWQ-1601
> URL: https://issues.apache.org/jira/browse/HAWQ-1601
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Query Execution
>Reporter: WANG Weinan
>Assignee: WANG Weinan
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1598) Vectorized Scan Node Framework initialization

2018-04-09 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei updated HAWQ-1598:

Fix Version/s: (was: backlog)
   2.4.0.0-incubating

> Vectorized Scan Node Framework initialization
> -
>
> Key: HAWQ-1598
> URL: https://issues.apache.org/jira/browse/HAWQ-1598
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Query Execution
>Reporter: WANG Weinan
>Assignee: WANG Weinan
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> Use the hook functions defined in the previous task to initialize, process, 
> and recycle the vectorized "TableScanState" node.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1600) Parquet table data vectorized scan

2018-04-09 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei updated HAWQ-1600:

Fix Version/s: (was: backlog)
   2.4.0.0-incubating

> Parquet table data vectorized scan
> --
>
> Key: HAWQ-1600
> URL: https://issues.apache.org/jira/browse/HAWQ-1600
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Query Execution
>Reporter: WANG Weinan
>Assignee: WANG Weinan
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> Simple queries (e.g. "select col1 from tab1;") can be supported by 
> vectorized execution.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1602) AO table data vectorized scan

2018-04-09 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei updated HAWQ-1602:

Fix Version/s: (was: backlog)
   2.4.0.0-incubating

> AO table data vectorized scan 
> --
>
> Key: HAWQ-1602
> URL: https://issues.apache.org/jira/browse/HAWQ-1602
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Query Execution
>Reporter: WANG Weinan
>Assignee: WANG Weinan
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1603) add new hook api for expressions

2018-04-09 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei updated HAWQ-1603:

Fix Version/s: (was: backlog)
   2.4.0.0-incubating

> add new hook api for expressions
> 
>
> Key: HAWQ-1603
> URL: https://issues.apache.org/jira/browse/HAWQ-1603
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Query Execution
>Reporter: zhangshujie
>Assignee: zhangshujie
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> 1. Add a new hook API for expressions.
> 2. Add a new hook API for refactoring the plan tree.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1581) Separate PXF system parameters from user configurable visible parameters

2018-04-09 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei updated HAWQ-1581:

Fix Version/s: (was: 2.4.0.0-incubating)
   2.3.0.0-incubating

> Separate PXF system parameters from user configurable visible parameters
> 
>
> Key: HAWQ-1581
> URL: https://issues.apache.org/jira/browse/HAWQ-1581
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> We need to modify our system so that user-configurable options are kept 
> distinct from the internal parameters. The custom parameters are configured 
> in the {{LOCATION}} section of the external table DDL and are exposed to the 
> PXF server as {{X-GP-}} headers.
> {{X-GP-USER}} is an internal parameter used to set the user information. When 
> the DDL has a custom parameter named {{user}}, it ends up updating X-GP-USER 
> to also include the user configured in the DDL LOCATION. This causes the JDBC 
> connector to fail.
> We will instead use {{X-GP-OPTIONS-}} as the prefix for all user-configurable 
> parameters to keep them isolated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1579) x When you enable pxf DEBUG logging you might get annoying exceptions

2018-04-09 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei updated HAWQ-1579:

Fix Version/s: (was: 2.4.0.0-incubating)
   2.3.0.0-incubating

> x When you enable pxf DEBUG logging you might get annoying exceptions
> -
>
> Key: HAWQ-1579
> URL: https://issues.apache.org/jira/browse/HAWQ-1579
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: External Tables
>Reporter: Dmitriy Dorofeev
>Assignee: Shivram Mani
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> When you enable DEBUG logging you might get annoying exceptions:
>  
> {{SEVERE: The RuntimeException could not be mapped to a response, re-throwing 
> to the HTTP container java.lang.NullPointerException at 
> java.lang.String.(String.java:566) at 
> org.apache.hawq.pxf.service.FragmentsResponseFormatter.printList(FragmentsResponseFormatter.java:147)
>  at 
> org.apache.hawq.pxf.service.FragmentsResponseFormatter.formatResponse(FragmentsResponseFormatter.java:54)
>  at 
> org.apache.hawq.pxf.service.rest.FragmenterResource.getFragments(FragmenterResource.java:88)
>  }}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1566) Include Pluggable Storage Format Framework in External Table Insert

2018-04-09 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei updated HAWQ-1566:

Fix Version/s: (was: 2.4.0.0-incubating)
   2.3.0.0-incubating

> Include Pluggable Storage Format Framework in External Table Insert
> ---
>
> Key: HAWQ-1566
> URL: https://issues.apache.org/jira/browse/HAWQ-1566
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: External Tables, Storage
>Reporter: Chiyang Wan
>Assignee: Chiyang Wan
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> There are two types of operations related to external tables: scan and 
> insert. Including the pluggable storage framework in both operations is 
> necessary. 
> This task adds the external table insert and COPY FROM (write into external 
> table) features.
> In the following steps, we still need to specify some of the critical info 
> that comes from the planner, as well as the file split info in the pluggable 
> filesystem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1036) Support user impersonation in PXF for external tables

2018-04-09 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei updated HAWQ-1036:

Fix Version/s: (was: 2.4.0.0-incubating)
   2.3.0.0-incubating

> Support user impersonation in PXF for external tables
> -
>
> Key: HAWQ-1036
> URL: https://issues.apache.org/jira/browse/HAWQ-1036
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF, Security
>Reporter: Alastair "Bell" Turner
>Assignee: Alexander Denissov
>Priority: Critical
> Fix For: 2.3.0.0-incubating
>
> Attachments: HAWQ_Impersonation_rationale.txt
>
>
> Currently HAWQ executes all queries as the user running the HAWQ process or 
> the user running the PXF process, not as the user who issued the query via 
> ODBC/JDBC/... This restricts the options available for integrating with 
> existing security defined in HDFS, Hive, etc.
> Impersonation provides an alternative Ranger integration (as discussed in 
> HAWQ-256 ) for consistent security across HAWQ, HDFS, Hive...
> High-level implementation steps:
> 1) HAWQ needs to integrate with existing authentication components for the 
> user who invokes the query.
> 2) HAWQ needs to pass the user id down to PXF after authorization succeeds. 
> 3) PXF needs to "run as" that user id when executing APIs that access 
> Hive/HDFS. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1058) Create a separated tarball for libhdfs3

2018-04-09 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei updated HAWQ-1058:

Fix Version/s: (was: 2.4.0.0-incubating)
   backlog

> Create a separated tarball for libhdfs3
> ---
>
> Key: HAWQ-1058
> URL: https://issues.apache.org/jira/browse/HAWQ-1058
> Project: Apache HAWQ
>  Issue Type: Test
>  Components: libhdfs
>Affects Versions: 2.0.0.0-incubating
>Reporter: Zhanwei Wang
>Assignee: Lei Chang
>Priority: Major
> Fix For: backlog
>
>
> As discussed on the dev mailing list, Ramon proposed creating a separate 
> tarball for libhdfs3 for the HAWQ release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1530) Illegally killing a JDBC select query causes locking problems

2018-03-12 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei updated HAWQ-1530:

Fix Version/s: (was: 2.3.0.0-incubating)
   backlog

> Illegally killing a JDBC select query causes locking problems
> -
>
> Key: HAWQ-1530
> URL: https://issues.apache.org/jira/browse/HAWQ-1530
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Transaction
>Reporter: Grant Krieger
>Assignee: Yi Jin
>Priority: Major
> Fix For: backlog
>
>
> Hi,
> When you perform a long-running select statement on two HAWQ tables (a join) 
> from JDBC and illegally kill the JDBC client (CTRL ALT DEL) before the query 
> completes, the two tables remain locked even after the query finishes on the 
> server. 
> The lock is visible via pg_locks. One cannot kill the query via SELECT 
> pg_terminate_backend(393937). The only way to get rid of it is kill -9 from 
> Linux or restarting HAWQ, but this can kill other things as well.
> The JDBC client I am using is Aqua Data Studio.
> I can provide exact steps to reproduce if required
> Thank you
> Grant 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1512) Check Apache HAWQ mandatory libraries to match LC20, LC30 license criteria

2018-02-06 Thread Radar Lei (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354901#comment-16354901
 ] 

Radar Lei commented on HAWQ-1512:
-

The latest third party components page is:

https://cwiki.apache.org/confluence/display/HAWQ/Third+Party+Components

> Check Apache HAWQ mandatory libraries to match LC20, LC30 license criteria
> --
>
> Key: HAWQ-1512
> URL: https://issues.apache.org/jira/browse/HAWQ-1512
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Reporter: Yi Jin
>Assignee: Radar Lei
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
> Attachments: HAWQ Ranger Pluggin Service Dependencies.xlsx
>
>
> Check Apache HAWQ mandatory libraries to match LC20, LC30 license criteria
> Check the following page for the criteria
> https://cwiki.apache.org/confluence/display/HAWQ/ASF+Maturity+Evaluation



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HAWQ-1512) Check Apache HAWQ mandatory libraries to match LC20, LC30 license criteria

2018-02-06 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei resolved HAWQ-1512.
-
Resolution: Fixed

> Check Apache HAWQ mandatory libraries to match LC20, LC30 license criteria
> --
>
> Key: HAWQ-1512
> URL: https://issues.apache.org/jira/browse/HAWQ-1512
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Reporter: Yi Jin
>Assignee: Radar Lei
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
> Attachments: HAWQ Ranger Pluggin Service Dependencies.xlsx
>
>
> Check Apache HAWQ mandatory libraries to match LC20, LC30 license criteria
> Check the following page for the criteria
> https://cwiki.apache.org/confluence/display/HAWQ/ASF+Maturity+Evaluation



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1512) Check Apache HAWQ mandatory libraries to match LC20, LC30 license criteria

2018-02-05 Thread Radar Lei (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16352435#comment-16352435
 ] 

Radar Lei commented on HAWQ-1512:
-

Thanks to [~yjin] and [~huor] for the review.

Already added googlemock.

For the Ranger dependencies, per Ruilong's comment, they are not mandatory, so 
they will not be added.

For libhdfs3, I think the previous open-source location is retired and the 
latest code only exists in HAWQ, so we do not need to add it. Libyarn is a 
similar case.

Another one I want to mention here is "libgsasl", which uses the LGPL. I did 
not add it because it can be treated as a system dependency. See the previous 
discussion email: 
https://lists.apache.org/thread.html/5ae122b59529de58c5c668fa0e703a53ad9efb0fddb0fb26ecbcace8@%3Cdev.hawq.apache.org%3E

 

> Check Apache HAWQ mandatory libraries to match LC20, LC30 license criteria
> --
>
> Key: HAWQ-1512
> URL: https://issues.apache.org/jira/browse/HAWQ-1512
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Reporter: Yi Jin
>Assignee: Radar Lei
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
> Attachments: HAWQ Ranger Pluggin Service Dependencies.xlsx
>
>
> Check Apache HAWQ mandatory libraries to match LC20, LC30 license criteria
> Check the following page for the criteria
> https://cwiki.apache.org/confluence/display/HAWQ/ASF+Maturity+Evaluation



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1575) Implement readable Parquet profile

2018-01-30 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei updated HAWQ-1575:

Fix Version/s: (was: 2.3.0.0-incubating)
   2.4.0.0-incubating

> Implement readable Parquet profile
> --
>
> Key: HAWQ-1575
> URL: https://issues.apache.org/jira/browse/HAWQ-1575
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Ed Espino
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> PXF should be able to read data from Parquet files stored in HDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HAWQ-1514) TDE feature makes libhdfs3 require openssl1.1

2018-01-29 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei reassigned HAWQ-1514:
---

Assignee: WANG Weinan  (was: Radar Lei)

> TDE feature makes libhdfs3 require openssl1.1
> -
>
> Key: HAWQ-1514
> URL: https://issues.apache.org/jira/browse/HAWQ-1514
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: libhdfs
>Reporter: Yi Jin
>Assignee: WANG Weinan
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> The new TDE feature delivered in libhdfs3 requires a specific version of 
> openssl; at least per my test, 1.0.21 does not work, while a library built 
> from 1.1 source code passed.
> So maybe we need some build and installation instruction improvements. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (HAWQ-1416) hawq_toolkit administrative schema missing in HAWQ installation

2018-01-29 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei closed HAWQ-1416.
---
Resolution: Not A Problem

> hawq_toolkit administrative schema missing in HAWQ installation
> ---
>
> Key: HAWQ-1416
> URL: https://issues.apache.org/jira/browse/HAWQ-1416
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools, DDL
>Reporter: Vineet Goel
>Assignee: Chunling Wang
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> hawq_toolkit administrative schema is not pre-installed with HAWQ, but should 
> actually be available once HAWQ is installed and initialized.
> Current workaround seems to be a manual command to install it:
> psql -f /usr/local/hawq/share/postgresql/gp_toolkit.sql



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1578) Regression Test (Feature->Ranger)Failed because pxfwritable_import_beginscan function was not found

2018-01-02 Thread Radar Lei (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16309224#comment-16309224
 ] 

Radar Lei commented on HAWQ-1578:
-

Seems this is related to this commit:
commit: 76e38c53b9377a055e6a2db6f63dc2e984c25025
message: HAWQ-1565. Include Pluggable Storage Format Framework in External Table Scan

Assigning to [~chiyang1] for further checking.

> Regression Test (Feature->Ranger)Failed because pxfwritable_import_beginscan 
> function was not found 
> 
>
> Key: HAWQ-1578
> URL: https://issues.apache.org/jira/browse/HAWQ-1578
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF, Tests
>Reporter: WANG Weinan
>Assignee: Chiyang Wan
>
> TestHawqRanger failed when running PXFHiveTest and PXFHBaseTest; the test 
> log is shown as follows:
> Note: Google Test filter = TestHawqRanger.PXFHiveTest
> [==] Running 1 test from 1 test case.
> [--] Global test environment set-up.
> [--] 1 test from TestHawqRanger
> [ RUN  ] TestHawqRanger.PXFHiveTest
> lib/sql_util.cpp:197: Failure
> Value of: is_sql_ans_diff
>   Actual: true
> Expected: false
> lib/sql_util.cpp:203: Failure
> Value of: true
>   Actual: true
> Expected: false
> [  FAILED  ] TestHawqRanger.PXFHiveTest (89777 ms)
> [--] 1 test from TestHawqRanger (89777 ms total)
> [--] Global test environment tear-down
> [==] 1 test from 1 test case ran. (89777 ms total)
> [  PASSED  ] 0 tests.
> [  FAILED  ] 1 test, listed below:
> [  FAILED  ] TestHawqRanger.PXFHiveTest
>  1 FAILED TEST
> [125/133] TestHawqRanger.PXFHiveTest returned/aborted with exit code 1 (89787 
> ms)
> [128/133] TestHawqRanger.PXFHBaseTest (87121 ms)  
>   
> Note: Google Test filter = TestHawqRanger.PXFHBaseTest
> [==] Running 1 test from 1 test case.
> [--] Global test environment set-up.
> [--] 1 test from TestHawqRanger
> [ RUN  ] TestHawqRanger.PXFHBaseTest
> lib/sql_util.cpp:197: Failure
> Value of: is_sql_ans_diff
>   Actual: true
> Expected: false
> lib/sql_util.cpp:203: Failure
> Value of: true
>   Actual: true
> Expected: false
> [  FAILED  ] TestHawqRanger.PXFHBaseTest (87098 ms)
> [--] 1 test from TestHawqRanger (87098 ms total)
> [--] Global test environment tear-down
> [==] 1 test from 1 test case ran. (87099 ms total)
> [  PASSED  ] 0 tests.
> [  FAILED  ] 1 test, listed below:
> [  FAILED  ] TestHawqRanger.PXFHBaseTest
> We can find some suspicious log entries in the master segment log file:
> 2018-01-03 05:21:30.170970 
> UTC,"gpadmin","hawq_feature_test_db",p109703,th-290256608,"127.0.0.1","56288",2018-01-03
>  05:21:29 
> UTC,14669,con2342,cmd4,seg-1,,,x14669,sx1,"ERROR","XX000","pxfwritable_import_beginscan
>  function was not found (nodeExternalscan.c:310)",,"select * from 
> test_hbase;",0,,"nodeExternalscan.c",310,"Stack trace:
> 10x8cf31e postgres errstart (elog.c:505)
> 20x8d11bb postgres elog_finish (elog.c:1459)
> 30x69134a postgres ExecInitExternalScan (nodeExternalscan.c:215)
> 40x670b9d postgres ExecInitNode (execProcnode.c:371)
> 50x69b7d1 postgres ExecInitMotion (nodeMotion.c:1096)
> 60x670064 postgres ExecInitNode (execProcnode.c:629)
> 70x66a407 postgres ExecutorStart (execMain.c:2048)
> 80x7f8fcd postgres PortalStart (pquery.c:1308)
> 90x7f0628 postgres  (postgres.c:1795)
> 10   0x7f1cb0 postgres PostgresMain (postgres.c:4897)
> 11   0x7a40c0 postgres  (postmaster.c:5486)
> 12   0x7a6e89 postgres PostmasterMain (postmaster.c:1459)
> 13   0x4a5a59 postgres main (main.c:226)
> 14   0x7fceea8a1d1d libc.so.6 __libc_start_main (??:0)
> 15   0x4a5ad9 postgres  (??:0)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HAWQ-1578) Regression Test (Feature->Ranger)Failed because pxfwritable_import_beginscan function was not found

2018-01-02 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei reassigned HAWQ-1578:
---

Assignee: Chiyang Wan  (was: Ed Espino)

> Regression Test (Feature->Ranger)Failed because pxfwritable_import_beginscan 
> function was not found 
> 
>
> Key: HAWQ-1578
> URL: https://issues.apache.org/jira/browse/HAWQ-1578
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF, Tests
>Reporter: WANG Weinan
>Assignee: Chiyang Wan
>
> TestHawqRanger failed when running PXFHiveTest and PXFHBaseTest; the test 
> log is shown as follows:
> Note: Google Test filter = TestHawqRanger.PXFHiveTest
> [==] Running 1 test from 1 test case.
> [--] Global test environment set-up.
> [--] 1 test from TestHawqRanger
> [ RUN  ] TestHawqRanger.PXFHiveTest
> lib/sql_util.cpp:197: Failure
> Value of: is_sql_ans_diff
>   Actual: true
> Expected: false
> lib/sql_util.cpp:203: Failure
> Value of: true
>   Actual: true
> Expected: false
> [  FAILED  ] TestHawqRanger.PXFHiveTest (89777 ms)
> [--] 1 test from TestHawqRanger (89777 ms total)
> [--] Global test environment tear-down
> [==] 1 test from 1 test case ran. (89777 ms total)
> [  PASSED  ] 0 tests.
> [  FAILED  ] 1 test, listed below:
> [  FAILED  ] TestHawqRanger.PXFHiveTest
>  1 FAILED TEST
> [125/133] TestHawqRanger.PXFHiveTest returned/aborted with exit code 1 (89787 
> ms)
> [128/133] TestHawqRanger.PXFHBaseTest (87121 ms)  
>   
> Note: Google Test filter = TestHawqRanger.PXFHBaseTest
> [==] Running 1 test from 1 test case.
> [--] Global test environment set-up.
> [--] 1 test from TestHawqRanger
> [ RUN  ] TestHawqRanger.PXFHBaseTest
> lib/sql_util.cpp:197: Failure
> Value of: is_sql_ans_diff
>   Actual: true
> Expected: false
> lib/sql_util.cpp:203: Failure
> Value of: true
>   Actual: true
> Expected: false
> [  FAILED  ] TestHawqRanger.PXFHBaseTest (87098 ms)
> [--] 1 test from TestHawqRanger (87098 ms total)
> [--] Global test environment tear-down
> [==] 1 test from 1 test case ran. (87099 ms total)
> [  PASSED  ] 0 tests.
> [  FAILED  ] 1 test, listed below:
> [  FAILED  ] TestHawqRanger.PXFHBaseTest
> We can find some suspicious log entries in the master segment log file:
> 2018-01-03 05:21:30.170970 
> UTC,"gpadmin","hawq_feature_test_db",p109703,th-290256608,"127.0.0.1","56288",2018-01-03
>  05:21:29 
> UTC,14669,con2342,cmd4,seg-1,,,x14669,sx1,"ERROR","XX000","pxfwritable_import_beginscan
>  function was not found (nodeExternalscan.c:310)",,"select * from 
> test_hbase;",0,,"nodeExternalscan.c",310,"Stack trace:
> 10x8cf31e postgres errstart (elog.c:505)
> 20x8d11bb postgres elog_finish (elog.c:1459)
> 30x69134a postgres ExecInitExternalScan (nodeExternalscan.c:215)
> 40x670b9d postgres ExecInitNode (execProcnode.c:371)
> 50x69b7d1 postgres ExecInitMotion (nodeMotion.c:1096)
> 60x670064 postgres ExecInitNode (execProcnode.c:629)
> 70x66a407 postgres ExecutorStart (execMain.c:2048)
> 80x7f8fcd postgres PortalStart (pquery.c:1308)
> 90x7f0628 postgres  (postgres.c:1795)
> 10   0x7f1cb0 postgres PostgresMain (postgres.c:4897)
> 11   0x7a40c0 postgres  (postmaster.c:5486)
> 12   0x7a6e89 postgres PostmasterMain (postmaster.c:1459)
> 13   0x4a5a59 postgres main (main.c:226)
> 14   0x7fceea8a1d1d libc.so.6 __libc_start_main (??:0)
> 15   0x4a5ad9 postgres  (??:0)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HAWQ-1559) Travis CI failing for hawq after travis ci default image upgraded xcode to 8.3

2017-12-19 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei resolved HAWQ-1559.
-
   Resolution: Fixed
Fix Version/s: 2.3.0.0-incubating

Fixed by [~outofmemory]

> Travis CI failing for hawq after travis ci default image upgraded xcode to 8.3
> --
>
> Key: HAWQ-1559
> URL: https://issues.apache.org/jira/browse/HAWQ-1559
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Shubham Sharma
>Assignee: Radar Lei
> Fix For: 2.3.0.0-incubating
>
>
> It looks like our Travis build is broken. I first noticed this for my own 
> fork's build and saw the same behavior in the apache github repo as well. It is 
> failing with the error below
> {code}
> configure: error: Please install apr from http://apr.apache.org/ and add dir 
> of 'apr-1-config' to env variable 
> '/Users/travis/.rvm/gems/ruby-2.4.2/bin:/Users/travis/.rvm/gems/ruby-2.4.2@global/bin:/Users/travis/.rvm/rubies/ruby-2.4.2/bin:/Users/travis/.rvm/bin:/Users/travis/bin:/Users/travis/.local/bin:/Users/travis/.nvm/versions/node/v6.11.4/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin'.
> The command "./configure" failed and exited with 1 during .
> Your build has been stopped.
> /Users/travis/.travis/job_stages: line 166: shell_session_update: command not 
> found
> {code}
> I looked into it; the builds started failing on November 28th. This is around 
> the same time Travis CI upgraded their default xcode version to 8.3. Here is 
> the notification .
> I have identified a potential fix and tested it on my fork; the build 
> completes successfully. Currently we don't install apr using brew install, 
> which is one of the pre-requisites mentioned in the hawq incubator wiki. 
> The fix is to "brew install apr" and then force-link it into the path using 
> "brew link apr --force". This resolves the problem.
> But I have a couple of additional questions - 
> 1. How did apr get installed before? Was it installed with some other 
> package? Asking this as a few packages have been removed from the default image 
> in xcode 8.3.
> 2. Though the build for branches is failing continuously, why is the build 
> status for master still green? 
> Anyhow, since apr is a dependency for our project, my proposal is to add a 
> brew install to travis.yml to avoid failures from such upgrades in the future. 
> Let me know your thoughts, I have a PR ready.
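The proposed change can be sketched as the following command sequence for the Travis macOS image (a sketch only; Homebrew availability and the exact placement in .travis.yml are assumptions):

```shell
# Sketch of the proposed fix: apr is keg-only in Homebrew, so it must be
# force-linked for `apr-1-config` to appear on PATH where ./configure looks.
brew install apr
brew link apr --force
./configure   # should now locate apr-1-config
```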



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HAWQ-1553) User who doesn't have home directory can not run hawq extract command

2017-12-19 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei resolved HAWQ-1553.
-
   Resolution: Fixed
Fix Version/s: 2.3.0.0-incubating

Fixed by [~outofmemory]

> User who doesn't have home directory can not run hawq extract command
> -
>
> Key: HAWQ-1553
> URL: https://issues.apache.org/jira/browse/HAWQ-1553
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Shubham Sharma
>Assignee: Radar Lei
> Fix For: 2.3.0.0-incubating
>
>
> HAWQ extract stores information in hawqextract_MMDD.log under directory 
> ~/hawqAdminLogs, and a user who doesn't have its own home directory 
> encounters failure when running hawq extract.
> We can add a -l option to set the target log directory for hawq 
> extract.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HAWQ-1368) normal user who doesn't have home directory may have problem when running hawq register

2017-12-19 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei resolved HAWQ-1368.
-
Resolution: Fixed

Fixed by [~outofmemory]

> normal user who doesn't have home directory may have problem when running 
> hawq register
> ---
>
> Key: HAWQ-1368
> URL: https://issues.apache.org/jira/browse/HAWQ-1368
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Lili Ma
>Assignee: Radar Lei
> Fix For: backlog
>
>
> HAWQ register stores information in hawqregister_MMDD.log under directory 
> ~/hawqAdminLogs, and a normal user who doesn't have a home directory may 
> encounter failures when running hawq register.
> We can add a -l option to set the target log directory and file name 
> of hawq register.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HAWQ-1532) Recognize CST incorrectly when query timestamp with time zone in China

2017-12-19 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei resolved HAWQ-1532.
-
   Resolution: Fixed
Fix Version/s: 2.3.0.0-incubating

Resolved by [~kuien]

> Recognize CST incorrectly when query timestamp with time zone in China
> -
>
> Key: HAWQ-1532
> URL: https://issues.apache.org/jira/browse/HAWQ-1532
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Core
>Reporter: Kuien Liu
>Assignee: Radar Lei
> Fix For: 2.3.0.0-incubating
>
>
> On some platforms, CST (China Standard Time) is used as the time-zone suffix 
> for GMT+8, especially by users in China. Then we suffer from the following issue:
> postgres=# show log_timezone;
>  log_timezone
> --
>  PRC
> (1 row)
> postgres=# select '2017-09-28 18:26:27.950106 CST'::timestamp with time zone;
>   timestamptz
> ---
>  2017-09-29 08:26:27.950106+08
> (1 row)
> And the 'logtime' in view 'hawq_toolkit.hawq_log_master_concise' is not 
> correct either.
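Until a fix lands, a session sketch of the workaround is to spell the zone unambiguously rather than using the 'CST' abbreviation (a sketch only; it assumes a running HAWQ master and psql on PATH, so it is not runnable standalone):

```shell
# Workaround sketch: use an explicit UTC offset or a full zone name instead
# of the ambiguous 'CST' abbreviation (requires a running cluster).
psql -c "select '2017-09-28 18:26:27.950106+08'::timestamp with time zone;"
psql -c "select '2017-09-28 18:26:27.950106 Asia/Shanghai'::timestamp with time zone;"
```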



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1568) install " pxf-hdfs-3.2.1.0-1.el6.noarch.rpm" dependencies pro?

2017-12-06 Thread Radar Lei (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279891#comment-16279891
 ] 

Radar Lei commented on HAWQ-1568:
-

[~zhangxin0112zx] You are correct.

> install " pxf-hdfs-3.2.1.0-1.el6.noarch.rpm" dependencies pro?
> --
>
> Key: HAWQ-1568
> URL: https://issues.apache.org/jira/browse/HAWQ-1568
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: xinzhang
>Assignee: Radar Lei
>
> Hi. I installed hawq from the 2.2.0 rpm (apache-hawq-2.2.0.0-el7.x86_64.rpm), 
> but when I install another rpm (like pxf-hdfs-3.2.1.0-1.el6.noarch.rpm) I get:
> 
> {code:actionscript}
> [gpadmin@gpmaster hawq_rpm_packages]$ rpm -ivh 
> pxf-hdfs-3.2.1.0-1.el6.noarch.rpm
> error: Failed dependencies:
>   pxf-service >= 3.2.1.0 is needed by pxf-hdfs-0:3.2.1.0-1.el6.noarch
>   hadoop >= 2.7.1 is needed by pxf-hdfs-0:3.2.1.0-1.el6.noarch
>   hadoop-mapreduce >= 2.7.1 is needed by pxf-hdfs-0:3.2.1.0-1.el6.noarch
> {code}
> I installed hadoop from a .tar archive: 
> 
> {code:actionscript}
> [gpadmin@gpmaster hawq_rpm_packages]$ which hadoop 
> /opt/hadoop/hadoop-2.9.0/bin/hadoop
> [gpadmin@gpmaster hawq_rpm_packages]$ echo $HADOOP_HOME
> /opt/hadoop/hadoop-2.9.0
> {code}
> How can I let rpm know that my Hadoop is already installed?
> 
> btw:
> Were these rpms built for CentOS 7 and CentOS 6 
> (el7=CentOS7, el6=CentOS6)? 
> If so, the mix is inconsistent.
> {code:actionscript}
> [root@gpmaster hawq_rpm_packages]# ll
> total 91304
> -rw-r--r--. 1 root root 84103760 Jun 23 01:54 
> apache-hawq-2.2.0.0-el7.x86_64.rpm
> -rw-r--r--. 1 root root  8892454 Jun 23 01:54 
> apache-tomcat-7.0.62-el6.noarch.rpm
> -rw-r--r--. 1 root root 5668 Jun 23 01:54 pxf-3.2.1.0-1.el6.noarch.rpm
> -rw-r--r--. 1 root root33188 Jun 23 01:54 
> pxf-hbase-3.2.1.0-1.el6.noarch.rpm
> -rw-r--r--. 1 root root57270 Jun 23 01:54 
> pxf-hdfs-3.2.1.0-1.el6.noarch.rpm
> -rw-r--r--. 1 root root74971 Jun 23 01:54 
> pxf-hive-3.2.1.0-1.el6.noarch.rpm
> -rw-r--r--. 1 root root30847 Jun 23 01:54 
> pxf-jdbc-3.2.1.0-1.el6.noarch.rpm
> -rw-r--r--. 1 root root24833 Jun 23 01:54 
> pxf-json-3.2.1.0-1.el6.noarch.rpm
> -rw-r--r--. 1 root root   246454 Jun 23 01:54 
> pxf-service-3.2.1.0-1.el6.noarch.rpm
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1568) install " pxf-hdfs-3.2.1.0-1.el6.noarch.rpm" dependencies pro?

2017-12-06 Thread Radar Lei (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279879#comment-16279879
 ] 

Radar Lei commented on HAWQ-1568:
-

If you are installing HAWQ binary rpm, please refer to:
https://cwiki.apache.org/confluence/display/HAWQ/Build+Package+and+Install+with+RPM

The wiki page has very detailed steps for binary installation of HAWQ and Hadoop.

For the issues you hit:
1. You need to install the rpm packages in the sequence given in the wiki page.
2. You need to install the hadoop dependencies as rpm packages too; the wiki 
page describes how to install hadoop from Bigtop.
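As a hedged sketch only (the authoritative package order and repo setup are in the wiki page; the package names below are the ones listed in the issue, and the Bigtop repo configuration is assumed), the shape of an rpm-based install on the reporter's host would be:

```shell
# Sketch: rpm resolves dependencies against its own database, so a tarball
# hadoop under /opt/hadoop is invisible to it. Install hadoop from rpm
# (e.g. a Bigtop repo) first, then the pxf packages in dependency order.
sudo yum install -y hadoop hadoop-mapreduce        # from a Bigtop repo (assumed)
sudo rpm -ivh apache-tomcat-7.0.62-el6.noarch.rpm
sudo rpm -ivh pxf-service-3.2.1.0-1.el6.noarch.rpm
sudo rpm -ivh pxf-hdfs-3.2.1.0-1.el6.noarch.rpm    # its deps are now satisfied
```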

> install " pxf-hdfs-3.2.1.0-1.el6.noarch.rpm" dependencies pro?
> --
>
> Key: HAWQ-1568
> URL: https://issues.apache.org/jira/browse/HAWQ-1568
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: xinzhang
>Assignee: Radar Lei
>
> Hi. I installed hawq from the 2.2.0 rpm (apache-hawq-2.2.0.0-el7.x86_64.rpm), 
> but when I install another rpm (like pxf-hdfs-3.2.1.0-1.el6.noarch.rpm) I get:
> 
> {code:actionscript}
> [gpadmin@gpmaster hawq_rpm_packages]$ rpm -ivh 
> pxf-hdfs-3.2.1.0-1.el6.noarch.rpm
> error: Failed dependencies:
>   pxf-service >= 3.2.1.0 is needed by pxf-hdfs-0:3.2.1.0-1.el6.noarch
>   hadoop >= 2.7.1 is needed by pxf-hdfs-0:3.2.1.0-1.el6.noarch
>   hadoop-mapreduce >= 2.7.1 is needed by pxf-hdfs-0:3.2.1.0-1.el6.noarch
> {code}
> I installed hadoop from a .tar archive: 
> 
> {code:actionscript}
> [gpadmin@gpmaster hawq_rpm_packages]$ which hadoop 
> /opt/hadoop/hadoop-2.9.0/bin/hadoop
> [gpadmin@gpmaster hawq_rpm_packages]$ echo $HADOOP_HOME
> /opt/hadoop/hadoop-2.9.0
> {code}
> How can I let rpm know that my Hadoop is already installed?
> 
> btw:
> Were these rpms built for CentOS 7 and CentOS 6 
> (el7=CentOS7, el6=CentOS6)? 
> If so, the mix is inconsistent.
> {code:actionscript}
> [root@gpmaster hawq_rpm_packages]# ll
> total 91304
> -rw-r--r--. 1 root root 84103760 Jun 23 01:54 
> apache-hawq-2.2.0.0-el7.x86_64.rpm
> -rw-r--r--. 1 root root  8892454 Jun 23 01:54 
> apache-tomcat-7.0.62-el6.noarch.rpm
> -rw-r--r--. 1 root root 5668 Jun 23 01:54 pxf-3.2.1.0-1.el6.noarch.rpm
> -rw-r--r--. 1 root root33188 Jun 23 01:54 
> pxf-hbase-3.2.1.0-1.el6.noarch.rpm
> -rw-r--r--. 1 root root57270 Jun 23 01:54 
> pxf-hdfs-3.2.1.0-1.el6.noarch.rpm
> -rw-r--r--. 1 root root74971 Jun 23 01:54 
> pxf-hive-3.2.1.0-1.el6.noarch.rpm
> -rw-r--r--. 1 root root30847 Jun 23 01:54 
> pxf-jdbc-3.2.1.0-1.el6.noarch.rpm
> -rw-r--r--. 1 root root24833 Jun 23 01:54 
> pxf-json-3.2.1.0-1.el6.noarch.rpm
> -rw-r--r--. 1 root root   246454 Jun 23 01:54 
> pxf-service-3.2.1.0-1.el6.noarch.rpm
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1561) build failed on centos 6.8 bzip2

2017-12-05 Thread Radar Lei (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16278599#comment-16278599
 ] 

Radar Lei commented on HAWQ-1561:
-

I can't find useful information in the log, but I did get the HAWQ build 
working on CentOS 6.8.

Please check your environment to make sure bzip2's header file and libraries 
are installed in the right directories so configure can find them.

Below are the bzip2 packages that work in my CentOS 6.8 environment.

bzip2-libs-1.0.5-7.el6_0.x86_64
bzip2-1.0.5-7.el6_0.x86_64
bzip2-devel-1.0.5-7.el6_0.x86_64
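On a CentOS 6 host, the usual way to get those packages into the paths configure searches can be sketched as follows (a sketch only; yum and the base repos are assumed):

```shell
# Sketch: install the bzip2 runtime and development packages so that
# bzlib.h and libbz2 land where configure's BZ2_bzDecompress check looks.
sudo yum install -y bzip2 bzip2-devel
ls /usr/include/bzlib.h /usr/lib64/libbz2.so*   # sanity-check the locations
```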


> build failed on centos 6.8 bzip2
> ---
>
> Key: HAWQ-1561
> URL: https://issues.apache.org/jira/browse/HAWQ-1561
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: xinzhang
>Assignee: Radar Lei
>
> Hi. My env is CentOS release 6.8 .
> env:
>  # bzip2 --version
> bzip2, a block-sorting file compressor.  Version 1.0.6, 6-Sept-2010.
> fail log:
>...
>  checking for library containing BZ2_bzDecompress... no
> configure: error: library 'bzip2' is required.
> 'bzip2' is used for table compression.  Check config.log for details.
> It is possible the compiler isn't looking in the proper directory.
> q:
>   CentOS 6.x uses bzip2 1.0.5 by default. The dependency libs are the biggest 
> problem.
>   What should I do? bzip2 1.0.6 is already installed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1561) build failed on centos 6.8 bzip2

2017-12-04 Thread Radar Lei (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16276752#comment-16276752
 ] 

Radar Lei commented on HAWQ-1561:
-

Hi [~zhangxin0112zx], please provide more error log inside config.log.

configure: error: library 'bzip2' is required.
'bzip2' is used for table compression. Check config.log for details.

> build failed on centos 6.8 bzip2
> ---
>
> Key: HAWQ-1561
> URL: https://issues.apache.org/jira/browse/HAWQ-1561
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: xinzhang
>Assignee: Radar Lei
>
> Hi. My env is CentOS release 6.8 .
> env:
>  # bzip2 --version
> bzip2, a block-sorting file compressor.  Version 1.0.6, 6-Sept-2010.
> fail log:
>...
>  checking for library containing BZ2_bzDecompress... no
> configure: error: library 'bzip2' is required.
> 'bzip2' is used for table compression.  Check config.log for details.
> It is possible the compiler isn't looking in the proper directory.
> q:
>   CentOS 6.x uses bzip2 1.0.5 by default. The dependency libs are the biggest 
> problem.
>   What should I do? bzip2 1.0.6 is already installed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HAWQ-1530) Illegally killing a JDBC select query causes locking problems

2017-11-07 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei reassigned HAWQ-1530:
---

Assignee: Yi Jin  (was: Radar Lei)

> Illegally killing a JDBC select query causes locking problems
> -
>
> Key: HAWQ-1530
> URL: https://issues.apache.org/jira/browse/HAWQ-1530
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Transaction
>Reporter: Grant Krieger
>Assignee: Yi Jin
>
> Hi,
> When you perform a long-running select statement on 2 hawq tables (a join) 
> from JDBC and illegally kill the JDBC client (CTRL ALT DEL) before the 
> query completes, the 2 tables remain locked even after the query completes 
> on the server.
> The lock is visible via PG_locks. One cannot kill the query via SELECT 
> pg_terminate_backend(393937). The only way to get rid of it is to kill -9 
> from linux or restart hawq but this can kill other things as well.
> The JDBC client I am using is Aqua Data Studio.
> I can provide exact steps to reproduce if required
> Thank you
> Grant 
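The pg_locks check mentioned above can be sketched like this (a sketch only; 'table_a' and 'table_b' are hypothetical placeholders for the real table names, and a superuser psql session on the master is assumed):

```shell
# Sketch: list the lock entries still held on the two joined tables,
# to identify the pid that survived the killed JDBC client.
# 'table_a' / 'table_b' are placeholders, not names from the report.
psql -c "
select pid, locktype, relation::regclass, mode, granted
from pg_locks
where relation in ('table_a'::regclass, 'table_b'::regclass);"
```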



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (HAWQ-1324) Query cancel cause segment to go into Crash recovery

2017-11-02 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei closed HAWQ-1324.
---
Resolution: Fixed

> Query cancel cause segment to go into Crash recovery
> 
>
> Key: HAWQ-1324
> URL: https://issues.apache.org/jira/browse/HAWQ-1324
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Affects Versions: 2.0.0.0-incubating
>Reporter: Ming LI
>Assignee: Ming LI
>Priority: Major
>
> A query was cancelled due to a connection issue to HDFS on Isilon. Seg26 
> then went into crash recovery due to an INSERT query being cancelled. What 
> should the expected behaviour be when HDFS becomes unavailable and a query 
> fails due to HDFS unavailability?
> There was a core file generated at the time of the Crash recovery. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1324) Query cancel cause segment to go into Crash recovery

2017-11-02 Thread Radar Lei (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16235239#comment-16235239
 ] 

Radar Lei commented on HAWQ-1324:
-

Reopen to update the jira information.

> Query cancel cause segment to go into Crash recovery
> 
>
> Key: HAWQ-1324
> URL: https://issues.apache.org/jira/browse/HAWQ-1324
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Affects Versions: 2.0.0.0-incubating
>Reporter: Ming LI
>Assignee: Ming LI
>Priority: Major
> Fix For: 2.1.0.0-incubating
>
>
> A query was cancelled due to a connection issue to HDFS on Isilon. Seg26 
> then went into crash recovery due to an INSERT query being cancelled. What 
> should the expected behaviour be when HDFS becomes unavailable and a query 
> fails due to HDFS unavailability?
> There was a core file generated at the time of the Crash recovery. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HAWQ-1324) Query cancel cause segment to go into Crash recovery

2017-11-02 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei reassigned HAWQ-1324:
---

Assignee: Ming LI  (was: Radar Lei)

> Query cancel cause segment to go into Crash recovery
> 
>
> Key: HAWQ-1324
> URL: https://issues.apache.org/jira/browse/HAWQ-1324
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Affects Versions: 2.0.0.0-incubating
>Reporter: Ming LI
>Assignee: Ming LI
>Priority: Major
> Fix For: 2.1.0.0-incubating
>
>
> A query was cancelled due to a connection issue to HDFS on Isilon. Seg26 
> then went into crash recovery due to an INSERT query being cancelled. What 
> should the expected behaviour be when HDFS becomes unavailable and a query 
> fails due to HDFS unavailability?
> There was a core file generated at the time of the Crash recovery. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1324) Query cancel cause segment to go into Crash recovery

2017-11-02 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei updated HAWQ-1324:

Description: 
A query was cancelled due to a connection issue to HDFS on Isilon. Seg26 
then went into crash recovery due to an INSERT query being cancelled. What 
should the expected behaviour be when HDFS becomes unavailable and a query 
fails due to HDFS unavailability?

There was a core file generated at the time of the Crash recovery. 




  was:
A query was cancelled due to this connection issue to HDFS on Isilon. Seg26 
then went into crash recovery due to a INSERT query being cancelled. What 
should be the expected behaviour when HDFS becomes unavailable and a Query 
fails due to HDFS unavailability.
Below is the HDFS error
{code}
2017-01-04 03:04:08.382615 
JST,"carund","dwhrun",p574246,th1862944896,"192.168.10.12","47554",2017-01-04 
03:03:08 JST,0,con198952,,seg29,"FATAL","08006","connection to client 
lost",,,0,,"postgres.c",3518,
2017-01-04 03:04:08.420099 
JST,,,p755778,th18629448960,,,seg-1,"LOG","0","3rd party error 
log:
2017-01-04 03:04:08.419969, p574222, th140507423066240, ERROR Handle Exception: 
NamenodeImpl.cpp: 670: Unexpected error: status: STATUS_FILE_NOT_AVAILABLE = 
0xC467 Path: hawq_default/16385/16563/802748/26 with path=
""/hawq_default/16385/16563/802748/26"", 
clientname=libhdfs3_client_random_866998528_count_1_pid_574222_tid_140507423066240
@ Hdfs::Internal::UnWrapper::unwrap(char const, int)
@ Hdfs::Internal::UnWrapper::unwrap(char const, int)
@ Hdfs::Internal::NamenodeImpl::fsync(std::string const&, std::string const&)
@ Hdfs::Internal::NamenodeProxy::fsync(std::string const&, std::string const&)
@ Hdfs::Internal::OutputStreamImpl::closePipeline()
@ Hdfs::Internal::OutputStreamImpl::close()
@ hdfsCloseFile
@ gpfs_hdfs_closefile
@ HdfsCloseFile
@ HdfsFileClose
@ CleanupTempFiles
@ AbortTransaction
@ AbortCurrentTransaction
@ PostgresMain
@ BackendStartup
@ ServerLoop
@ PostmasterMain
@ main
@ Unknown
@ Unknown""SysLoggerMain","syslogger.c",518,
2017-01-04 03:04:08.420272 
JST,"carund","dwhrun",p574222,th1862944896,"192.168.10.12","47550",2017-01-04 
03:03:08 
JST,40678725,con198952,cmd4,seg25,,,x40678725,sx1,"WARNING","58030","could not 
close file 7 : (hdfs://ffd
lakehd.ffwin.fujifilm.co.jp:8020/hawq_default/16385/16563/802748/26) errno 
5","Unexpected error: status: STATUS_FILE_NOT_AVAILABLE = 0xC467 Path: 
hawq_default/16385/16563/802748/26 with path=""/hawq_default/16385/16
563/802748/26"", 
clientname=libhdfs3_client_random_866998528_count_1_pid_574222_tid_140507423066240",,0,,"fd.c",2762,
{code}
Segment 26 going into Crash recovery - from seg26 log file
{code}
2017-01-04 03:04:08.420314 
JST,"carund","dwhrun",p574222,th1862944896,"192.168.10.12","47550",2017-01-04 
03:03:08 JST,40678725,con198952,cmd4,seg25,,,x40678725,sx1,"LOG","08006","could 
not send data to client: Connection reset by peer (接続が相手からリセットされました)",,,0,,"pqcomm.c",1292,
2017-01-04 03:04:08.420358 
JST,"carund","dwhrun",p574222,th1862944896,"192.168.10.12","47550",2017-01-04 
03:03:08 JST,0,con198952,,seg25,"LOG","08006","could not send data to 
client: Broken pipe (パイプが切断されました)",,,0,,"pqcomm.c",1292,
2017-01-04 03:04:08.420375 
JST,"carund","dwhrun",p574222,th1862944896,"192.168.10.12","47550",2017-01-04 
03:03:08 JST,0,con198952,,seg25,"FATAL","08006","connection to client 
lost",,,0,,"postgres.c",3518,
2017-01-04 03:04:08.950354 
JST,,,p755773,th18629448960,,,seg-1,"LOG","0","server process 
(PID 574240) was terminated by signal 11: Segmentation 
fault",,,0,,"postmaster.c",4748,
2017-01-04 03:04:08.950403 
JST,,,p755773,th18629448960,,,seg-1,"LOG","0","terminating any 
other active server processes",,,0,,"postmaster.c",4486,
2017-01-04 03:04:08.954044 
JST,,,p41605,th18629448960,,,seg-1,"LOG","0","Segment RM 
exits.",,,0,,"resourcemanager.c",340,
2017-01-04 03:04:08.954078 
JST,,,p41605,th18629448960,,,seg-1,"LOG","0","Clean up handler 
in message server is called.",,,0,,"rmcomm_MessageServer.c",105,
2017-01-04 03:04:08.972706 
JST,,,p574711,th1862944896,"192.168.10.12","48121",2017-01-04 03:04:08 
JST,0,,,seg-1,"LOG","0","PID 574308 in cancel request did not match 
any process",,,0,,"postmaster.c",3166
,
2017-01-04 03:04:08.976211 

[jira] [Reopened] (HAWQ-1324) Query cancel cause segment to go into Crash recovery

2017-11-02 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei reopened HAWQ-1324:
-
  Assignee: Radar Lei  (was: Ming LI)

> Query cancel cause segment to go into Crash recovery
> 
>
> Key: HAWQ-1324
> URL: https://issues.apache.org/jira/browse/HAWQ-1324
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Affects Versions: 2.0.0.0-incubating
>Reporter: Ming LI
>Assignee: Radar Lei
>Priority: Major
> Fix For: 2.1.0.0-incubating
>
>
> A query was cancelled due to a connection issue to HDFS on Isilon. Seg26 
> then went into crash recovery due to an INSERT query being cancelled. What 
> should the expected behaviour be when HDFS becomes unavailable and a query 
> fails due to HDFS unavailability?
> Below is the HDFS error
> {code}
> 2017-01-04 03:04:08.382615 
> JST,"carund","dwhrun",p574246,th1862944896,"192.168.10.12","47554",2017-01-04 
> 03:03:08 JST,0,con198952,,seg29,"FATAL","08006","connection to client 
> lost",,,0,,"postgres.c",3518,
> 2017-01-04 03:04:08.420099 
> JST,,,p755778,th18629448960,,,seg-1,"LOG","0","3rd party 
> error log:
> 2017-01-04 03:04:08.419969, p574222, th140507423066240, ERROR Handle 
> Exception: NamenodeImpl.cpp: 670: Unexpected error: status: 
> STATUS_FILE_NOT_AVAILABLE = 0xC467 Path: 
> hawq_default/16385/16563/802748/26 with path=
> ""/hawq_default/16385/16563/802748/26"", 
> clientname=libhdfs3_client_random_866998528_count_1_pid_574222_tid_140507423066240
> @ Hdfs::Internal::UnWrapper Hdfs::HdfsIOException, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, 
> Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing , 
> Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, 
> Hdfs::Internal::Nothing>::unwrap(char const, int)
> @ Hdfs::Internal::UnWrapper Hdfs::UnresolvedLinkException, Hdfs::HdfsIOException, 
> Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, 
> Hdfs::Internal::Not hing, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, 
> Hdfs::Internal::Nothing, Hdfs::Internal::Nothing>::unwrap(char const, int)
> @ Hdfs::Internal::NamenodeImpl::fsync(std::string const&, std::string const&)
> @ Hdfs::Internal::NamenodeProxy::fsync(std::string const&, std::string const&)
> @ Hdfs::Internal::OutputStreamImpl::closePipeline()
> @ Hdfs::Internal::OutputStreamImpl::close()
> @ hdfsCloseFile
> @ gpfs_hdfs_closefile
> @ HdfsCloseFile
> @ HdfsFileClose
> @ CleanupTempFiles
> @ AbortTransaction
> @ AbortCurrentTransaction
> @ PostgresMain
> @ BackendStartup
> @ ServerLoop
> @ PostmasterMain
> @ main
> @ Unknown
> @ Unknown""SysLoggerMain","syslogger.c",518,
> 2017-01-04 03:04:08.420272 
> JST,"carund","dwhrun",p574222,th1862944896,"192.168.10.12","47550",2017-01-04 
> 03:03:08 
> JST,40678725,con198952,cmd4,seg25,,,x40678725,sx1,"WARNING","58030","could 
> not close file 7 : (hdfs://ffd
> lakehd.ffwin.fujifilm.co.jp:8020/hawq_default/16385/16563/802748/26) errno 
> 5","Unexpected error: status: STATUS_FILE_NOT_AVAILABLE = 0xC467 Path: 
> hawq_default/16385/16563/802748/26 with path=""/hawq_default/16385/16
> 563/802748/26"", 
> clientname=libhdfs3_client_random_866998528_count_1_pid_574222_tid_140507423066240",,0,,"fd.c",2762,
> {code}
> Segment 26 going into Crash recovery - from seg26 log file
> {code}
> 2017-01-04 03:04:08.420314 
> JST,"carund","dwhrun",p574222,th1862944896,"192.168.10.12","47550",2017-01-04 
> 03:03:08 
> JST,40678725,con198952,cmd4,seg25,,,x40678725,sx1,"LOG","08006","could not 
> send data to client: Connection reset by peer (接続が相手からリセットされました)",,,0,,"pqcomm.c",1292,
> 2017-01-04 03:04:08.420358 
> JST,"carund","dwhrun",p574222,th1862944896,"192.168.10.12","47550",2017-01-04 
> 03:03:08 JST,0,con198952,,seg25,"LOG","08006","could not send data to 
> client: Broken pipe (パイプが切断されました)",,,0,,"pqcomm.c",1292,
> 2017-01-04 03:04:08.420375 
> JST,"carund","dwhrun",p574222,th1862944896,"192.168.10.12","47550",2017-01-04 
> 03:03:08 JST,0,con198952,,seg25,"FATAL","08006","connection to client 
> lost",,,0,,"postgres.c",3518,
> 2017-01-04 03:04:08.950354 
> JST,,,p755773,th18629448960,,,seg-1,"LOG","0","server process 
> (PID 574240) was terminated by signal 11: Segmentation 
> fault",,,0,,"postmaster.c",4748,
> 2017-01-04 03:04:08.950403 
> JST,,,p755773,th18629448960,,,seg-1,"LOG","0","terminating 
> any other active server processes",,,0,,"postmaster.c",4486,
> 2017-01-04 03:04:08.954044 
> JST,,,p41605,th18629448960,,,seg-1,"LOG","0","Segment RM 
> exits.",,,0,,"resourcemanager.c",340,
> 2017-01-04 03:04:08.954078 
> 

[jira] [Commented] (HAWQ-1416) hawq_toolkit administrative schema missing in HAWQ installation

2017-10-09 Thread Radar Lei (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16196653#comment-16196653
 ] 

Radar Lei commented on HAWQ-1416:
-

The hawq_toolkit administrative schema should be created during hawq master 
initialization. If the schema does not exist, it is because the initialization 
process failed to create hawq_toolkit but did not error out.

I verified that the toolkit views exist without running "psql -f 
/usr/local/hawq/share/postgresql/gp_toolkit.sql". So I think this issue is 
fixed; we can open a separate jira to track hawq_toolkit install failures if 
someone hits them.

> hawq_toolkit administrative schema missing in HAWQ installation
> ---
>
> Key: HAWQ-1416
> URL: https://issues.apache.org/jira/browse/HAWQ-1416
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools, DDL
>Reporter: Vineet Goel
>Assignee: Chunling Wang
> Fix For: 2.3.0.0-incubating
>
>
> hawq_toolkit administrative schema is not pre-installed with HAWQ, but should 
> actually be available once HAWQ is installed and initialized.
> Current workaround seems to be a manual command to install it:
> psql -f /usr/local/hawq/share/postgresql/gp_toolkit.sql
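A quick check of whether the schema was created at init time, plus the documented workaround, can be sketched as follows (a sketch only; it assumes psql connects to the HAWQ master):

```shell
# Sketch: verify hawq_toolkit exists; if it does not, apply the workaround
# from the issue description.
psql -c "select nspname from pg_namespace where nspname = 'hawq_toolkit';"
psql -f /usr/local/hawq/share/postgresql/gp_toolkit.sql   # workaround
```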



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HAWQ-1265) Support Complex Hive Schema's

2017-09-04 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei reassigned HAWQ-1265:
---

Assignee: Oleksandr Diachenko  (was: Radar Lei)

> Support Complex Hive Schema's
> -
>
> Key: HAWQ-1265
> URL: https://issues.apache.org/jira/browse/HAWQ-1265
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Hcatalog, PXF
>Reporter: Michael Andre Pearce
>Assignee: Oleksandr Diachenko
>
> Currently, if in hive you have Avro or other formats where the schema is 
> complex, you cannot query the fields in the complex object via 
> hcatalog/pxf integration.
> In terms of Avro schemas: records, enums and complex arrays do not work. 
> Hive fully supports this complex/object notation, as does many of the other 
> SQL tools in the Hadoop eco-system (Spark, Impala, Drill).
> These all seem to support the same style of solution, using dots for complex 
> object/path navigation:
> SELECT schema.table.fieldA.nestedRecordFieldS FROM myavrotable;



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HAWQ-1445) PXF JDBC Security, Extra Props, and Max Queries

2017-09-04 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei reassigned HAWQ-1445:
---

Assignee: Oleksandr Diachenko  (was: Radar Lei)

> PXF JDBC Security, Extra Props, and Max Queries
> ---
>
> Key: HAWQ-1445
> URL: https://issues.apache.org/jira/browse/HAWQ-1445
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Jon Roberts
>Assignee: Oleksandr Diachenko
>
> 1) Security
> The example has the username and password in the connection string.  
> LOCATION ('pxf://localhost:51200/demodb.myclass'
>   '?PROFILE=JDBC'
>   '_DRIVER=com.mysql.jdbc.Driver'
>   
> '_URL=jdbc:mysql://192.168.200.6:3306/demodb=root=root'
>   )
> This creates a security issue because anyone who can connect to the database 
> will be able to see the username and password of the JDBC connection.
> I suggest changing the URL to a connection profile that points to a file 
> outside of the database.  For Greenplum database and S3, the LOCATION syntax 
> includes "config=/path/to/config_file".  The config_file contains the S3 
> credentials.  This seems like a good pattern to follow here too.
> 2) Extra Properties
> Some JDBC drivers will need many additional properties beyond the URL and 
> this requires setting it with a put to a Properties variable.  An example of 
> this is Oracle's defaultRowPrefetch property that needs to be updated from 
> the default of 10 which is designed for OLTP to something larger like 2000 
> which is more ideal for data extracts.  
> Additionally, you will need the ability to set the isolation level which is 
> done with setTransactionIsolation on the Connection.  I don't believe you can 
> set this on the connection URL either.  Many SQL Server and DB2 databases 
> still don't use snapshot isolation and use dirty reads instead to prevent 
> blocking locks.  The configuration file I suggested above will need an "extra 
> properties" variable that is a delimited list of key/value pairs so you can 
> add multiple extra properties.
> 3) Max Queries
> The external table definition doesn't limit how many concurrent queries can 
> be executed on the remote server.  It would be pretty simple to create a 
> single external table using PXF JDBC that would issue thousands of concurrent 
> queries to a single source database when doing a single SELECT in HAWQ.
> Initially, we should add a max_queries variable to the configuration file 
> that I'm suggesting, which would reject queries when they request more 
> concurrent PXF instances than the max_queries variable allows.  
> Longer term, we should implement a queueing system so we can support external 
> tables that partition data from the source at a very small grain without 
> killing the source database.
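A minimal sketch of the connection-profile idea proposed above. The file format, the key names (driver, url, user, extra_props, max_queries), and the comma-delimited extra-properties list are all illustrative assumptions, not an actual PXF format:

```python
# Hypothetical parser for a PXF JDBC connection-profile file kept outside the
# database, as suggested above. All key names here are assumptions.

def parse_jdbc_profile(text):
    """Parse 'key=value' lines; extra_props is a comma-delimited k:v list."""
    profile = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        profile[key.strip()] = value.strip()
    # Expand the delimited extra-properties list into a dict, so a caller
    # could apply each pair to a java.util.Properties-style object.
    extras = {}
    for pair in profile.pop("extra_props", "").split(","):
        if pair:
            k, _, v = pair.partition(":")
            extras[k.strip()] = v.strip()
    profile["extra_props"] = extras
    profile["max_queries"] = int(profile.get("max_queries", 0))
    return profile

sample = """
driver=com.mysql.jdbc.Driver
url=jdbc:mysql://dbhost:3306/demodb
user=pxf_reader
extra_props=defaultRowPrefetch:2000,oracle.jdbc.ReadTimeout:30000
max_queries=8
"""
print(parse_jdbc_profile(sample)["extra_props"]["defaultRowPrefetch"])  # 2000
```

With this shape, credentials stay out of the LOCATION clause, and the server can compare the number of requested PXF instances against max_queries before letting a query proceed.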





[jira] [Assigned] (HAWQ-1301) Decomission existing regression tests related to PXF and move them to feature test framwork

2017-09-04 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei reassigned HAWQ-1301:
---

Assignee: Oleksandr Diachenko  (was: Radar Lei)

> Decomission existing regression tests related to PXF and move them to feature 
> test framwork
> ---
>
> Key: HAWQ-1301
> URL: https://issues.apache.org/jira/browse/HAWQ-1301
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
>






[jira] [Closed] (HAWQ-1300) hawq cannot compile with Bison 3.x.

2017-09-04 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei closed HAWQ-1300.
---
Resolution: Workaround

> hawq cannot compile with Bison 3.x.
> ---
>
> Key: HAWQ-1300
> URL: https://issues.apache.org/jira/browse/HAWQ-1300
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Lei Chang
>Assignee: Radar Lei
> Fix For: backlog
>
>
> Yes, I met similar issue, Bison 3.x does not work for HAWQ now.
> On Mon, Jan 30, 2017 at 12:37 PM, Dmitry Bouzolin <
> dbouzo...@yahoo.com.invalid> wrote:
> > Hi Lei,
> > I use Bison 3.0.2. And it actually looks like a bug in the gram.c source for this
> > Bison version. The function refers to yyscanner, which is not defined. I will
> > reach out to the Bison bug list. Thanks for the reply!
> >
> > On Sunday, January 29, 2017 8:09 PM, Lei Chang 
> > wrote:
> >
> >
> >  Hi Dmitry,
> >
> > Which bison version do you use? Looks like this is a known issue when compiling
> > hawq with the latest bison (3.x) version.  Bison 2.x versions should work.
> >
> > Thanks
> > Lei
> >
> >
> >
> >
> > On Mon, Jan 30, 2017 at 3:41 AM, Dmitry Bouzolin <
> > dbouzo...@yahoo.com.invalid> wrote:
> >
> > > Hi All,
> > > Yes, I know arch linux is not supported, however I appreciate any clues
> > on
> > > why the build would fail like so:
> > >
> > > make -C caql all
> > > make[4]: Entering directory '/data/src/incubator-hawq/src/backend/catalog/caql'
> > > gcc -O3 -std=gnu99  -Wall -Wmissing-prototypes -Wpointer-arith
> > > -Wendif-labels -Wformat-security -fno-strict-aliasing -fwrapv
> > > -fno-aggressive-loop-optimizations  -I/usr/include/libxml2
> > > -I../../../../src/include -D_GNU_SOURCE  -I/data/src/incubator-hawq/
> > > depends/libhdfs3/build/install/opt/hawq/include
> > > -I/data/src/incubator-hawq/depends/libyarn/build/install/
> > opt/hawq/include
> > > -c -o gram.o gram.c
> > > gram.c: In function ‘caql_yyparse’:
> > > gram.c:1368:41: error: ‘yyscanner’ undeclared (first use in this
> > function)
> > >yychar = yylex (&yylval, &yylloc, yyscanner);
> > >  ^
> > > gram.c:1368:41: note: each undeclared identifier is reported only once
> > for
> > > each function it appears in
> > > <builtin>: recipe for target 'gram.o' failed
> > >
> > > If I build on CentOS, I get a different make line for this target and the build
> > > succeeds:
> > > make -C caql all
> > > make[4]: Entering directory `/data/src/incubator-hawq/src/
> > > backend/catalog/caql'
> > > gcc -O3 -std=gnu99  -Wall -Wmissing-prototypes -Wpointer-arith
> > > -Wendif-labels -Wformat-security -fno-strict-aliasing -fwrapv
> > > -fno-aggressive-loop-optimizations  -I/usr/include/libxml2
> > > -I../../../../src/include -D_GNU_SOURCE  -I/data/src/incubator-hawq/
> > > depends/libhdfs3/build/install/opt/hawq/include
> > > -I/data/src/incubator-hawq/depends/libyarn/build/install/
> > opt/hawq/include
> > > -c -o caqlanalyze.o caqlanalyze.c
> > >
> > > The difference is in input and output file. The same line in Arch
> > > completes successfully. All dependencies are in place.
> > >
> > > Thanks, Dmitry.
> > >





[jira] [Updated] (HAWQ-127) Create CI projects for HAWQ releases

2017-09-04 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei updated HAWQ-127:
---
Fix Version/s: (was: 2.3.0.0-incubating)
   backlog

> Create CI projects for HAWQ releases
> 
>
> Key: HAWQ-127
> URL: https://issues.apache.org/jira/browse/HAWQ-127
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.1.0.0-incubating
>Reporter: Lei Chang
>Assignee: Radar Lei
> Fix For: backlog
>
>
> Create Jenkins projects that build HAWQ binary, source tarballs and docker 
> images, and run sanity tests including at least installcheck-good tests for 
> each commit.





[jira] [Assigned] (HAWQ-1488) PXF HiveVectorizedORC profile should support Timestamp

2017-09-04 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei reassigned HAWQ-1488:
---

Assignee: Oleksandr Diachenko  (was: Radar Lei)

> PXF HiveVectorizedORC profile should support Timestamp
> --
>
> Key: HAWQ-1488
> URL: https://issues.apache.org/jira/browse/HAWQ-1488
> Project: Apache HAWQ
>  Issue Type: Improvement
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
> Fix For: backlog
>
>
> As of now, the Timestamp datatype is not supported in the HiveVectorizedORC profile.





[jira] [Assigned] (HAWQ-1270) Plugged storage back-ends for HAWQ

2017-09-04 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei reassigned HAWQ-1270:
---

Assignee: Yi Jin  (was: Radar Lei)

> Plugged storage back-ends for HAWQ
> --
>
> Key: HAWQ-1270
> URL: https://issues.apache.org/jira/browse/HAWQ-1270
> Project: Apache HAWQ
>  Issue Type: Improvement
>Reporter: Dmitry Buzolin
>Assignee: Yi Jin
>
> Since HAWQ only depends on Hadoop and Parquet for columnar format support, I 
> would like to propose a pluggable storage backend design for HAWQ. Hadoop is 
> already supported, but there is Ceph - a distributed storage system which 
> offers a standard POSIX-compliant file system, object storage, and block 
> storage. Ceph is also data-location aware, written in C++, and is a more 
> sophisticated storage backend compared to Hadoop at this time. It provides 
> replicated and erasure-coded storage pools. Other great features of Ceph are 
> snapshots and an algorithmic approach to mapping data to nodes rather than 
> having centrally managed namenodes. I don't think HDFS offers any of these features. 
> In terms of performance, Ceph should be faster than HDFS since it is written 
> in C++ and because it doesn't have scalability limitations when mapping data 
> to storage pools, compared to Hadoop, where the name node is such a point of 
> contention.





[jira] [Assigned] (HAWQ-1303) Load each partition as separate table for heterogenous tables in HCatalog

2017-09-04 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei reassigned HAWQ-1303:
---

Assignee: Oleksandr Diachenko  (was: Radar Lei)

> Load each partition as separate table for heterogenous tables in HCatalog
> -
>
> Key: HAWQ-1303
> URL: https://issues.apache.org/jira/browse/HAWQ-1303
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Hcatalog, PXF
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
>
> Changes introduced in HAWQ-1228 made HAWQ use the optimal profile/format for Hive 
> tables. But there is a limitation: when HAWQ loads Hive tables into memory, it 
> loads them as one table even if a table has multiple partitions with 
> different output formats (GPDBWritable, TEXT). Thus it currently uses the 
> GPDBWritable format in that case. The idea is to load each partition set of 
> one output format as a separate table, so that the optimal output format can 
> be used even when the optimal profile cannot.
> Example: 
> We have a Hive table with four partitions of the following formats - Text, RC, ORC, 
> Sequence file.
> Currently, HAWQ will load it into memory with the GPDBWritable format.
> The GPDBWritable format is optimal for the HiveORC and Hive profiles but not 
> optimal for the HiveText and HiveRC profiles.
> With the proposed changes, HAWQ should load two tables with TEXT and GPDBWritable 
> formats and use the following pairs to read partitions - HiveText/TEXT, 
> HiveRC/TEXT, HiveORC/GPDBWritable, Hive/GPDBWritable.
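The proposed grouping can be sketched as bucketing partitions by the output format their profile deserializes to, so each bucket is loaded as a separate in-memory table. The profile-to-format mapping below is taken from the description; the partition-list shape is an illustrative assumption:

```python
# Profile -> optimal output format, per the description above.
PROFILE_FORMAT = {
    "HiveText": "TEXT",
    "HiveRC": "TEXT",
    "HiveORC": "GPDBWritable",
    "Hive": "GPDBWritable",
}

def group_partitions(partitions):
    """Group (partition_name, profile) pairs into one table per output format."""
    tables = {}
    for name, profile in partitions:
        tables.setdefault(PROFILE_FORMAT[profile], []).append(name)
    return tables

parts = [("p_text", "HiveText"), ("p_rc", "HiveRC"),
         ("p_orc", "HiveORC"), ("p_seq", "Hive")]
# Two tables instead of one: a TEXT table and a GPDBWritable table.
print(group_partitions(parts))
```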





[jira] [Assigned] (HAWQ-1288) Create a standalone PXF command line client

2017-09-04 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei reassigned HAWQ-1288:
---

Assignee: Oleksandr Diachenko  (was: Radar Lei)

> Create a standalone PXF command line client
> ---
>
> Key: HAWQ-1288
> URL: https://issues.apache.org/jira/browse/HAWQ-1288
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: PXF
>Reporter: Roman Shaposhnik
>Assignee: Oleksandr Diachenko
>
> In order for PXF to start feeling like a standalone component it would be 
> great if we could create a command line client along the lines of hadoop fs 
> ... CLI.
> A related benefit here is a much crisper articulation of PXF APIs in code 
> (something that is currently pretty difficult to untangle from the only PXF 
> client that exists -- HAWQ's own C stub)





[jira] [Closed] (HAWQ-1245) can HAWQ support alternate python module deployment directory?

2017-09-04 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei closed HAWQ-1245.
---
   Resolution: Won't Fix
Fix Version/s: backlog

> can HAWQ support alternate python module deployment directory?
> --
>
> Key: HAWQ-1245
> URL: https://issues.apache.org/jira/browse/HAWQ-1245
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Command Line Tools
>Reporter: Lisa Owen
>Assignee: Radar Lei
>Priority: Minor
> Fix For: backlog
>
>
> HAWQ no longer embeds python and is now using the system python installation. 
> With this change, installing a new python module now requires root/sudo 
> access to the system python directories.  Is there any reason why HAWQ would 
> not be able to support deploying python modules to an alternate directory 
> that is owned by gpadmin?  Or using a python virtual environment?
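One possible workaround, shown here as an illustration rather than an official HAWQ feature: deploy modules into a gpadmin-owned directory and add it to the module search path at startup (via `site.addsitedir`, a `.pth` file, or `PYTHONPATH`). The directory name is a stand-in:

```python
import os
import site
import sys
import tempfile

# Stand-in for e.g. /home/gpadmin/pymodules (hypothetical path).
alt_dir = tempfile.mkdtemp(prefix="gpadmin_pymodules_")

# Drop a trivial module into the alternate directory.
with open(os.path.join(alt_dir, "hello_hawq.py"), "w") as f:
    f.write("GREETING = 'hello from alternate dir'\n")

site.addsitedir(alt_dir)   # adds the dir to sys.path and honors .pth files
import hello_hawq

print(hello_hawq.GREETING)
```

A python virtual environment achieves the same effect by giving gpadmin a whole private `site-packages` tree.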





[jira] [Closed] (HAWQ-1516) add hawq start "userinput.ask_yesno"

2017-09-04 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei closed HAWQ-1516.
---
   Resolution: Not A Bug
Fix Version/s: backlog

This works as designed.

> add hawq start "userinput.ask_yesno" 
> -
>
> Key: HAWQ-1516
> URL: https://issues.apache.org/jira/browse/HAWQ-1516
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Command Line Tools
>Reporter: wangbincmss
>Assignee: Radar Lei
>Priority: Minor
> Fix For: backlog
>
>
> Both hawq init and hawq stop have "ask_yesno", but hawq start does not 
> have "ask_yesno".
> "ask_yesno" makes users think twice about whether there is a live cluster 
> and whether to proceed. I think hawq start should have "ask_yesno" like 
> hawq init and hawq stop.
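A minimal sketch of the kind of confirmation prompt being requested. The real implementation lives in HAWQ's Python management tools (`userinput.ask_yesno`), whose exact signature is not reproduced here; `prompt_fn` is added only to make the sketch testable:

```python
def ask_yesno(question, prompt_fn=input):
    """Return True for yes, False for no; reprompt on anything else."""
    while True:
        answer = prompt_fn("%s (y/n): " % question).strip().lower()
        if answer in ("y", "yes"):
            return True
        if answer in ("n", "no"):
            return False

# A 'hawq start' wrapper could gate startup on it, e.g.:
# if not ask_yesno("Continue with HAWQ service start"):
#     sys.exit(1)
```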





[jira] [Assigned] (HAWQ-1518) Add a UDF for showing whether the data directory is an encryption zone

2017-09-04 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei reassigned HAWQ-1518:
---

Assignee: Amy  (was: Radar Lei)

> Add a UDF for showing whether the data directory is an encryption zone
> --
>
> Key: HAWQ-1518
> URL: https://issues.apache.org/jira/browse/HAWQ-1518
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Catalog
>Reporter: Hongxu Ma
>Assignee: Amy
> Fix For: 2.3.0.0-incubating
>
>
> A read-only UDF for showing whether the data directory is an encryption zone.





[jira] [Resolved] (HAWQ-1504) Namenode hangs during restart of docker environment configured using incubator-hawq/contrib/hawq-docker/

2017-09-04 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei resolved HAWQ-1504.
-
   Resolution: Fixed
Fix Version/s: 2.3.0.0-incubating

Fixed by Shubham Sharma.

> Namenode hangs during restart of docker environment configured using 
> incubator-hawq/contrib/hawq-docker/
> 
>
> Key: HAWQ-1504
> URL: https://issues.apache.org/jira/browse/HAWQ-1504
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Shubham Sharma
>Assignee: Radar Lei
>Priority: Minor
> Fix For: 2.3.0.0-incubating
>
>
> After setting up an environment using instructions provided under 
> incubator-hawq/contrib/hawq-docker/, while trying to restart docker 
> containers namenode hangs and tries a namenode -format during every start.
> Steps to reproduce this issue - 
> - Navigate to incubator-hawq/contrib/hawq-docker
> - make stop
> - make start
> - docker exec -it centos7-namenode bash
> - ps -ef | grep java
> You can see namenode -format running.
> {code}
> [gpadmin@centos7-namenode data]$ ps -ef | grep java
> hdfs1110  1 00:56 ?00:00:06 
> /etc/alternatives/java_sdk/bin/java -Dproc_namenode -Xmx1000m 
> -Dhdfs.namenode=centos7-namenode -Dhadoop.log.dir=/var/log/hadoop/hdfs 
> -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.5.0.0-1245/hadoop 
> -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console 
> -Djava.library.path=:/usr/hdp/2.5.0.0-1245/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.5.0.0-1245/hadoop/lib/native
>  -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true 
> -Dhadoop.security.logger=INFO,NullAppender 
> org.apache.hadoop.hdfs.server.namenode.NameNode -format
> {code}
> Since namenode -format runs in interactive mode and at this stage it is 
> waiting for a (Yes/No) response, the namenode will remain stuck forever. This 
> makes hdfs unavailable.
> Root cause of the problem - 
> In the dockerfiles present under 
> incubator-hawq/contrib/hawq-docker/centos6-docker/hawq-test and 
> incubator-hawq/contrib/hawq-docker/centos7-docker/hawq-test, the docker 
> directive ENTRYPOINT executes entrypoint.sh during startup.
> The entrypoint.sh in turn executes start-hdfs.sh, which checks for the 
> following - 
> {code}
> if [ ! -d /tmp/hdfs/name/current ]; then
> su -l hdfs -c "hdfs namenode -format"
>   fi
> {code}
> My assumption is it looks for the fsimage and edit logs. If they are not present, 
> the script assumes that this is a first-time initialization and a namenode format 
> should be done. However, the path /tmp/hdfs/name/current does not exist on the 
> namenode. 
> From namenode logs it is clear that fsimage and edit logs are written under 
> /tmp/hadoop-hdfs/dfs/name/current.
> {code}
> 2017-07-18 00:55:20,892 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
> No edit log streams selected.
> 2017-07-18 00:55:20,893 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
> Planning to load image: 
> FSImageFile(file=/tmp/hadoop-hdfs/dfs/name/current/fsimage_000,
>  cpktTxId=000)
> 2017-07-18 00:55:20,995 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 1 INodes.
> 2017-07-18 00:55:21,064 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage 
> in 0 seconds.
> 2017-07-18 00:55:21,065 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
> Loaded image for txid 0 from 
> /tmp/hadoop-hdfs/dfs/name/current/fsimage_000
> 2017-07-18 00:55:21,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? 
> false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
> 2017-07-18 00:55:21,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 1
> {code}
> Thus the wrong path in 
> incubator-hawq/contrib/hawq-docker/centos*-docker/hawq-test/start-hdfs.sh 
> causes the namenode to hang during each restart of the containers, making hdfs 
> unavailable.
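The root cause can be sketched as follows: the guard in start-hdfs.sh tests a path the namenode never writes, so checking the directory the namenode actually uses (per the logs, /tmp/hadoop-hdfs/dfs/name/current) makes the format step a true first-run-only action. Temporary directories stand in for the real paths here:

```python
import os
import tempfile

def needs_namenode_format(name_dir):
    """Format only when the fsimage directory doesn't exist yet (first run)."""
    return not os.path.isdir(os.path.join(name_dir, "current"))

# A directory the namenode has written to (has a 'current' subdir): no format.
formatted = tempfile.mkdtemp()
os.makedirs(os.path.join(formatted, "current"))
print(needs_namenode_format(formatted))           # False: data present, skip format
# A never-initialized directory: format is appropriate on the first run only.
print(needs_namenode_format(tempfile.mkdtemp()))  # True
```

The bug is that start-hdfs.sh applies this check to /tmp/hdfs/name, which never gains a `current` subdirectory, so every restart reruns the interactive `hdfs namenode -format` and hangs.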





[jira] [Assigned] (HAWQ-1521) Idle QE Processes Can't Quit After An Interval

2017-09-04 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei reassigned HAWQ-1521:
---

Assignee: Lin Wen  (was: Radar Lei)

> Idle QE Processes Can't Quit After An Interval
> --
>
> Key: HAWQ-1521
> URL: https://issues.apache.org/jira/browse/HAWQ-1521
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Lin Wen
>Assignee: Lin Wen
>
> After a query is finished, there are some idle QE processes on segments. 
> These QE processes are expected to quit after a time interval; this interval 
> is controlled by the GUC gp_vmem_idle_resource_timeout, whose default value is 18 
> seconds.
> However, this doesn't act as expected. Idle QE processes on segments always 
> remain, unless the QD process quits. 
> The reason is that in postgres.c, the code to enable this timer never gets 
> executed: the function gangsExist() always returns false, since the gang-related 
> structures are all NULL.
>   if (IdleSessionGangTimeout > 0 && gangsExist())
>   if (!enable_sig_alarm( IdleSessionGangTimeout /* ms */, false))
>   elog(FATAL, "could not set timer for client wait 
> timeout");





[jira] [Resolved] (HAWQ-1524) Travis CI build failure after upgrading protobuf to 3.4

2017-09-04 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei resolved HAWQ-1524.
-
   Resolution: Fixed
Fix Version/s: 2.3.0.0-incubating

Fixed by Shubham Sharma.

> Travis CI build failure after upgrading protobuf to 3.4
> ---
>
> Key: HAWQ-1524
> URL: https://issues.apache.org/jira/browse/HAWQ-1524
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Shubham Sharma
>Assignee: Radar Lei
> Fix For: 2.3.0.0-incubating
>
>
> After upgrading the protobuf version to 3.4 , CI pipeline fails with below 
> errors. From the error message it looks like it is a problem with namespace 
> resolution while declaring stringstream and ostringstream
> {code}
> Error message -
> /Users/travis/build/apache/incubator-hawq/depends/libyarn/src/libyarnclient/LibYarnClient.cpp:248:9:
> error: unknown type name 'stringstream'; did you mean
> 'std::stringstream'?
> stringstream ss;
> ^~~~
> std::stringstream
> /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../include/c++/v1/iosfwd:153:38:
> note: 'std::stringstream' declared here
> typedef basic_stringstream<char> stringstream;
> /Users/travis/build/apache/incubator-hawq/depends/libyarn/src/libyarnclient/LibYarnClient.cpp:299:13:
> error: unknown type name 'ostringstream'; did you mean
> 'std::ostringstream'?
> ostringstream key;
> ^
> std::ostringstream
> /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../include/c++/v1/iosfwd:152:38:
> note: 'std::ostringstream' declared here
> typedef basic_ostringstream<char> ostringstream;
> {code}





[jira] [Updated] (HAWQ-1483) cache lookup failure

2017-08-17 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei updated HAWQ-1483:

Fix Version/s: 2.3.0.0-incubating

> cache lookup failure
> 
>
> Key: HAWQ-1483
> URL: https://issues.apache.org/jira/browse/HAWQ-1483
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Rahul Iyer
>Assignee: Radar Lei
> Fix For: 2.3.0.0-incubating
>
>
> I'm getting a failure when performing a distinct count with another immutable 
> aggregate. We found this issue when running MADlib on HAWQ 2.0.0. Please find 
> below a simple repro. 
> Setup: 
> {code}
> CREATE TABLE example_data(
> id SERIAL,
> outlook text,
> temperature float8,
> humidity float8,
> windy text,
> class text) ;
> COPY example_data (outlook, temperature, humidity, windy, class) FROM stdin 
> DELIMITER ',' NULL '?' ;
> sunny, 85, 85, false, Don't Play
> sunny, 80, 90, true, Don't Play
> overcast, 83, 78, false, Play
> rain, 70, 96, false, Play
> rain, 68, 80, false, Play
> rain, 65, 70, true, Don't Play
> overcast, 64, 65, true, Play
> sunny, 72, 95, false, Don't Play
> sunny, 69, 70, false, Play
> rain, 75, 80, false, Play
> sunny, 75, 70, true, Play
> overcast, 72, 90, true, Play
> overcast, 81, 75, false, Play
> rain, 71, 80, true, Don't Play
> \.
> create function grt_sfunc(agg_state point, el float8)
> returns point
> immutable
> language plpgsql
> as $$
> declare
>   greatest_sum float8;
>   current_sum float8;
> begin
>   current_sum := agg_state[0] + el;
>   if agg_state[1] < current_sum then
> greatest_sum := current_sum;
>   else
> greatest_sum := agg_state[1];
>   end if;
>   return point(current_sum, greatest_sum);
> end;
> $$;
> create function grt_finalfunc(agg_state point)
> returns float8
> immutable
> strict
> language plpgsql
> as $$
> begin
>   return agg_state[1];
> end;
> $$;
> create aggregate greatest_running_total (float8)
> (
> sfunc = grt_sfunc,
> stype = point,
> finalfunc = grt_finalfunc
> );
> {code}
> Error: 
> {code}
> select count(distinct outlook), greatest_running_total(humidity::integer) 
> from example_data;
> {code} 
> {code}
> ERROR:  cache lookup failed for function 0 (fmgr.c:223)
> {code}
> Execution goes through if I remove the {{distinct}} or if I add another 
> column for the {{count(distinct)}}. 
> {code:sql}
> select count(distinct outlook) as c1, count(distinct windy) as c2, 
> greatest_running_total(humidity) from example_data;
> {code}
> {code}
>  c1 | c2 | greatest_running_total
> ----+----+------------------------
>   3 |  2 |
> (1 row)
> {code}
> {code:sql}
> select count(outlook) as c1, greatest_running_total(humidity) from 
> example_data;
> {code}
> {code}
>  count | greatest_running_total
> -------+------------------------
> 14 |
> (1 row)
> {code}
> It's an older build - I don't have the resources at present to test this on 
> the latest HAWQ. 
> {code}
> select version();
>   
>   version
> ---
>  PostgreSQL 8.2.15 (Greenplum Database 4.2.0 build 1) (HAWQ 2.0.0.0 build 
> 22126) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled 
> on Apr 25 2016 09:52:54
> (1 row)
> {code}





[jira] [Updated] (HAWQ-1368) normal user who doesn't have home directory may have problem when running hawq register

2017-08-17 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei updated HAWQ-1368:

Fix Version/s: backlog

> normal user who doesn't have home directory may have problem when running 
> hawq register
> ---
>
> Key: HAWQ-1368
> URL: https://issues.apache.org/jira/browse/HAWQ-1368
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Lili Ma
>Assignee: Radar Lei
> Fix For: backlog
>
>
> HAWQ register stores information in hawqregister_MMDD.log under the directory 
> ~/hawqAdminLogs, and a normal user who doesn't have their own home directory may 
> encounter failures when running hawq register.
> We can add a -l option in order to set the target log directory and file name 
> of hawq register.
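The proposed behavior can be sketched as: resolve the register log destination from an explicit option first, falling back to ~/hawqAdminLogs only when the user has a usable home directory. The option name and the temp-dir fallback are assumptions drawn from the description, not existing hawq behavior:

```python
import os
import tempfile

def resolve_log_dir(log_opt=None):
    """Pick the log directory for a hypothetical 'hawq register -l <dir>'."""
    if log_opt:                              # explicit -l wins
        return log_opt
    home = os.path.expanduser("~")
    if home != "~" and os.path.isdir(home):  # usable home directory
        return os.path.join(home, "hawqAdminLogs")
    # No home directory: fall back to a world-writable location instead
    # of failing outright.
    return os.path.join(tempfile.gettempdir(), "hawqAdminLogs")

print(resolve_log_dir("/var/log/hawq"))  # /var/log/hawq
```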





[jira] [Assigned] (HAWQ-1416) hawq_toolkit administrative schema missing in HAWQ installation

2017-08-17 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei reassigned HAWQ-1416:
---

Assignee: Chunling Wang  (was: Radar Lei)

> hawq_toolkit administrative schema missing in HAWQ installation
> ---
>
> Key: HAWQ-1416
> URL: https://issues.apache.org/jira/browse/HAWQ-1416
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools, DDL
>Reporter: Vineet Goel
>Assignee: Chunling Wang
> Fix For: 2.3.0.0-incubating
>
>
> hawq_toolkit administrative schema is not pre-installed with HAWQ, but should 
> actually be available once HAWQ is installed and initialized.
> Current workaround seems to be a manual command to install it:
> psql -f /usr/local/hawq/share/postgresql/gp_toolkit.sql





[jira] [Updated] (HAWQ-1260) Remove temp tables after hawq restart

2017-08-17 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei updated HAWQ-1260:

Fix Version/s: backlog

> Remove temp tables after hawq restart 
> --
>
> Key: HAWQ-1260
> URL: https://issues.apache.org/jira/browse/HAWQ-1260
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Paul Guo
>Assignee: Radar Lei
> Fix For: backlog
>
>
> Sometimes hawq encounters errors and has to restart (e.g. oom-kill, debug); 
> useless temp tables are left on hdfs and in the catalog. It seems that one 
> solution is to remove the pg_temp_* schemas automatically after hawq restarts.





[jira] [Closed] (HAWQ-1341) hawq help doesn't have upgrade command

2017-08-17 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei closed HAWQ-1341.
---
   Resolution: Not A Problem
Fix Version/s: 2.3.0.0-incubating

> hawq help doesn't have upgrade command
> --
>
> Key: HAWQ-1341
> URL: https://issues.apache.org/jira/browse/HAWQ-1341
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Oleksandr Diachenko
>Assignee: Radar Lei
> Fix For: 2.3.0.0-incubating
>
>
> When user runs {code} hawq help
>  
> usage: hawq <command> [<object>] [options]
> [--version]
> The most commonly used hawq "commands" are:
>start Start hawq service.
>stop  Stop hawq service.
>init  Init hawq service.
>restart   Restart hawq service.
>activate  Activate hawq standby master as master.
>version   Show hawq version information.
>configSet hawq GUC values.
>state Show hawq cluster status.
>filespace Create hawq filespaces.
>extract   Extract table's metadata into a YAML formatted file.
>load  Load data into hawq.
>scp   Copies files between multiple hosts at once.
>ssh   Provides ssh access to multiple hosts at once.
>ssh-exkeysExchanges SSH public keys between hosts.
>check Verifies and validates HAWQ settings.
>checkperf Verifies the baseline hardware performance of hosts.
>register  Register parquet files generated by other system into the 
> corrsponding table in HAWQ
> See 'hawq <command> help' for more information on a specific command.{code}
> upgrade command is missing.





[jira] [Commented] (HAWQ-1515) how to build and complie hawq based on suse11

2017-08-17 Thread Radar Lei (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16130002#comment-16130002
 ] 

Radar Lei commented on HAWQ-1515:
-

[~fengfeng] 
In my experience, it's much harder to compile HAWQ on suse11 because a lot of 
HAWQ dependency packages are not available from the suse repo. 

So you would need to compile and install the proper packages yourself. Examples: 
libgsasl, boost, thrift, yaml, json-c, protobuf, snappy, curl.

Another point is openssl: HAWQ now requires openssl 1.0.1+. On suse11, you can 
install it from the repo 'SLE11-Security-Module'.


> how to build and complie hawq based on suse11
> -
>
> Key: HAWQ-1515
> URL: https://issues.apache.org/jira/browse/HAWQ-1515
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Reporter: FengHuang
>Assignee: Radar Lei
>
> There are few zypper repos covering all kinds of dependencies for building hawq 
> on suse11. Can you recommend some available and comprehensive zypper repos?





[jira] [Commented] (HAWQ-1341) hawq help doesn't have upgrade command

2017-07-28 Thread Radar Lei (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104714#comment-16104714
 ] 

Radar Lei commented on HAWQ-1341:
-

I think HAWQ does not have any upgrade scripts/commands yet. 

> hawq help doesn't have upgrade command
> --
>
> Key: HAWQ-1341
> URL: https://issues.apache.org/jira/browse/HAWQ-1341
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Oleksandr Diachenko
>Assignee: Radar Lei
>
> When user runs {code} hawq help
>  
> usage: hawq <command> [<object>] [options]
> [--version]
> The most commonly used hawq "commands" are:
>start Start hawq service.
>stop  Stop hawq service.
>init  Init hawq service.
>restart   Restart hawq service.
>activate  Activate hawq standby master as master.
>version   Show hawq version information.
>configSet hawq GUC values.
>state Show hawq cluster status.
>filespace Create hawq filespaces.
>extract   Extract table's metadata into a YAML formatted file.
>load  Load data into hawq.
>scp   Copies files between multiple hosts at once.
>ssh   Provides ssh access to multiple hosts at once.
>ssh-exkeysExchanges SSH public keys between hosts.
>check Verifies and validates HAWQ settings.
>checkperf Verifies the baseline hardware performance of hosts.
>register  Register parquet files generated by other system into the 
> corrsponding table in HAWQ
> See 'hawq <command> help' for more information on a specific command.{code}
> upgrade command is missing.





[jira] [Commented] (HAWQ-1245) can HAWQ support alternate python module deployment directory?

2017-07-28 Thread Radar Lei (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16104693#comment-16104693
 ] 

Radar Lei commented on HAWQ-1245:
-

[~lisakowen] We removed some of the python modules from the HAWQ source code due to 
license/compatibility issues. Now users need to manage these modules by themselves.

Based on the above, I can't see what we can do, since user environments are quite 
different. Please advise if you have a good solution. Thanks.

> can HAWQ support alternate python module deployment directory?
> --
>
> Key: HAWQ-1245
> URL: https://issues.apache.org/jira/browse/HAWQ-1245
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Command Line Tools
>Reporter: Lisa Owen
>Assignee: Radar Lei
>Priority: Minor
>
> HAWQ no longer embeds python and is now using the system python installation. 
>  with this change, installing a new python module now requires root/sudo 
> access to the system python directories.  is there any reason why HAWQ would 
> not be able to support deploying python modules to an alternate directory 
> that is owned by gpadmin?  or using a python virtual environment?





[jira] [Closed] (HAWQ-1484) Spin PXF into a Separate Project for Data Access

2017-07-28 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei closed HAWQ-1484.
---
   Resolution: Duplicate
Fix Version/s: backlog

> Spin PXF into a Separate Project for Data Access
> 
>
> Key: HAWQ-1484
> URL: https://issues.apache.org/jira/browse/HAWQ-1484
> Project: Apache HAWQ
>  Issue Type: New Feature
>Reporter: Suminda Dharmasena
>Assignee: Radar Lei
> Fix For: backlog
>
>
> Can PXF be spun off into a separate project where it can be used as a 
> basis for other data access projects?




