[jira] [Assigned] (HAWQ-1514) TDE feature makes libhdfs3 require openssl1.1

2018-01-29 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei reassigned HAWQ-1514:
---

Assignee: WANG Weinan  (was: Radar Lei)

> TDE feature makes libhdfs3 require openssl1.1
> -
>
> Key: HAWQ-1514
> URL: https://issues.apache.org/jira/browse/HAWQ-1514
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: libhdfs
>Reporter: Yi Jin
>Assignee: WANG Weinan
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> The new TDE feature delivered in libhdfs3 requires a specific version of OpenSSL: 
> at least in my tests, 1.0.21 did not work, while a library built from 1.1 source 
> code passed.
> So we may need to improve the build and installation instructions. 
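As a hypothetical aid for such an instruction update (the function name and the 1.1.0 floor are assumptions, not from the HAWQ build system), a pre-build check could compare the installed OpenSSL version against the required minimum:

```shell
# Hypothetical pre-build check: fail early if OpenSSL is older than 1.1.0.
# version_ge A B returns true when version A >= version B (uses GNU sort -V).
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

have="1.1.0g"   # in a real check: have=$(openssl version | awk '{print $2}')
if version_ge "$have" "1.1.0"; then
    echo "OpenSSL $have is sufficient for libhdfs3 TDE"
else
    echo "OpenSSL $have is too old; TDE needs >= 1.1.0" >&2
fi
```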



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (HAWQ-1416) hawq_toolkit administrative schema missing in HAWQ installation

2018-01-29 Thread Radar Lei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei closed HAWQ-1416.
---
Resolution: Not A Problem

> hawq_toolkit administrative schema missing in HAWQ installation
> ---
>
> Key: HAWQ-1416
> URL: https://issues.apache.org/jira/browse/HAWQ-1416
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools, DDL
>Reporter: Vineet Goel
>Assignee: Chunling Wang
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> hawq_toolkit administrative schema is not pre-installed with HAWQ, but it should 
> be available once HAWQ is installed and initialized.
> The current workaround seems to be a manual command to install it:
> psql -f /usr/local/hawq/share/postgresql/gp_toolkit.sql



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (HAWQ-1582) hawq ssh cmd bug when pipe in cmd

2018-01-29 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin closed HAWQ-1582.

Resolution: Fixed

> hawq ssh cmd bug when pipe in cmd
> -
>
> Key: HAWQ-1582
> URL: https://issues.apache.org/jira/browse/HAWQ-1582
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Yang Sen
>Assignee: Radar Lei
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> h1. bug description
> {code:bash}
> hawq ssh -h sdw2 -h localhost -e 'ls -1 | wc -l'
> {code}
> When running this command, the expected behavior is that `ls -1 | wc -l` is 
> executed on each host. The expected output is (the numbers may differ):
> {code:bash}
> [sdw2] ls -1 | wc -l
> [sdw2] 23
> [localhost] ls -1 | wc -l
> [localhost] 20
> {code}
> However, the actual output is:
> {code:bash}
> 45
> {code}
> It looks as if `ls -1` was executed on each host and the combined output of 
> `hawq ssh -h sdw2 -h localhost -e 'ls -1'` was then piped locally to `wc -l`.
> h2. Another related issue
> {code:bash}
> hawq ssh -h sdw2 -h localhost -e 'kill -9 $(pgrep lava)'
> {code}
> This command is expected to kill the process named lava on each host. Instead, 
> `$(pgrep lava)` is executed on localhost, which resolves to a local process id, 
> for example 5, and then `kill -9 5` is executed on each host, which is 
> definitely not what we expect.
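The underlying pattern can be sketched independently of hawq ssh: a wrapper that forwards a user command over ssh must pass it as one quoted argument so that the remote shell, not the local one, interprets pipes and `$(...)`. A minimal Python sketch (the function name and plain `ssh` invocation are illustrative assumptions, not the actual hawq ssh implementation):

```python
import shlex

def build_remote_cmd(host, user_cmd):
    # Quote the whole user command as a single shell word so that the
    # REMOTE shell interprets pipes and $(...), not the local shell.
    return "ssh {} {}".format(host, shlex.quote(user_cmd))

print(build_remote_cmd("sdw2", "ls -1 | wc -l"))
# -> ssh sdw2 'ls -1 | wc -l'
```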



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1582) hawq ssh cmd bug when pipe in cmd

2018-01-29 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344402#comment-16344402
 ] 

Yi Jin commented on HAWQ-1582:
--

Closing this issue, as the fix has been delivered and verified.

> hawq ssh cmd bug when pipe in cmd
> -
>
> Key: HAWQ-1582
> URL: https://issues.apache.org/jira/browse/HAWQ-1582
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Yang Sen
>Assignee: Radar Lei
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> h1. bug description
> {code:bash}
> hawq ssh -h sdw2 -h localhost -e 'ls -1 | wc -l'
> {code}
> When running this command, the expected behavior is that `ls -1 | wc -l` is 
> executed on each host. The expected output is (the numbers may differ):
> {code:bash}
> [sdw2] ls -1 | wc -l
> [sdw2] 23
> [localhost] ls -1 | wc -l
> [localhost] 20
> {code}
> However, the actual output is:
> {code:bash}
> 45
> {code}
> It looks as if `ls -1` was executed on each host and the combined output of 
> `hawq ssh -h sdw2 -h localhost -e 'ls -1'` was then piped locally to `wc -l`.
> h2. Another related issue
> {code:bash}
> hawq ssh -h sdw2 -h localhost -e 'kill -9 $(pgrep lava)'
> {code}
> This command is expected to kill the process named lava on each host. Instead, 
> `$(pgrep lava)` is executed on localhost, which resolves to a local process id, 
> for example 5, and then `kill -9 5` is executed on each host, which is 
> definitely not what we expect.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1575) Implement readable Parquet profile

2018-01-29 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344400#comment-16344400
 ] 

Yi Jin commented on HAWQ-1575:
--

Shall we put this feature in 2.3.0.0? If yes, could whoever is working on it 
deliver it ASAP? Thanks

> Implement readable Parquet profile
> --
>
> Key: HAWQ-1575
> URL: https://issues.apache.org/jira/browse/HAWQ-1575
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Ed Espino
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> PXF should be able to read data from Parquet files stored in HDFS.
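For reference, usage might look like the following once such a profile exists (the table name, host/port, and the profile name {{Parquet}} are assumptions, not a confirmed PXF API):

{code:sql}
-- Hypothetical external table reading Parquet from HDFS through PXF:
CREATE EXTERNAL TABLE ext_parquet (id int, name text)
LOCATION ('pxf://namenode:51200/data/file.parquet?PROFILE=Parquet')
FORMAT 'CUSTOM' (FORMATTER='pxfwritable_import');
{code}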



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1416) hawq_toolkit administrative schema missing in HAWQ installation

2018-01-29 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-1416:
-
Fix Version/s: (was: 2.3.0.0-incubating)
   2.4.0.0-incubating

> hawq_toolkit administrative schema missing in HAWQ installation
> ---
>
> Key: HAWQ-1416
> URL: https://issues.apache.org/jira/browse/HAWQ-1416
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools, DDL
>Reporter: Vineet Goel
>Assignee: Chunling Wang
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> hawq_toolkit administrative schema is not pre-installed with HAWQ, but it should 
> be available once HAWQ is installed and initialized.
> The current workaround seems to be a manual command to install it:
> psql -f /usr/local/hawq/share/postgresql/gp_toolkit.sql



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1483) cache lookup failure

2018-01-29 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-1483:
-
Fix Version/s: (was: 2.3.0.0-incubating)
   2.4.0.0-incubating

> cache lookup failure
> 
>
> Key: HAWQ-1483
> URL: https://issues.apache.org/jira/browse/HAWQ-1483
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Rahul Iyer
>Assignee: Radar Lei
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> I'm getting a failure when performing a distinct count with another immutable 
> aggregate. We found this issue when running MADlib on HAWQ 2.0.0. Please find 
> below a simple repro. 
> Setup: 
> {code}
> CREATE TABLE example_data(
> id SERIAL,
> outlook text,
> temperature float8,
> humidity float8,
> windy text,
> class text) ;
> COPY example_data (outlook, temperature, humidity, windy, class) FROM stdin 
> DELIMITER ',' NULL '?' ;
> sunny, 85, 85, false, Don't Play
> sunny, 80, 90, true, Don't Play
> overcast, 83, 78, false, Play
> rain, 70, 96, false, Play
> rain, 68, 80, false, Play
> rain, 65, 70, true, Don't Play
> overcast, 64, 65, true, Play
> sunny, 72, 95, false, Don't Play
> sunny, 69, 70, false, Play
> rain, 75, 80, false, Play
> sunny, 75, 70, true, Play
> overcast, 72, 90, true, Play
> overcast, 81, 75, false, Play
> rain, 71, 80, true, Don't Play
> \.
> create function grt_sfunc(agg_state point, el float8)
> returns point
> immutable
> language plpgsql
> as $$
> declare
>   greatest_sum float8;
>   current_sum float8;
> begin
>   current_sum := agg_state[0] + el;
>   if agg_state[1] < current_sum then
> greatest_sum := current_sum;
>   else
> greatest_sum := agg_state[1];
>   end if;
>   return point(current_sum, greatest_sum);
> end;
> $$;
> create function grt_finalfunc(agg_state point)
> returns float8
> immutable
> strict
> language plpgsql
> as $$
> begin
>   return agg_state[1];
> end;
> $$;
> create aggregate greatest_running_total (float8)
> (
> sfunc = grt_sfunc,
> stype = point,
> finalfunc = grt_finalfunc
> );
> {code}
> Error: 
> {code}
> select count(distinct outlook), greatest_running_total(humidity::integer) 
> from example_data;
> {code} 
> {code}
> ERROR:  cache lookup failed for function 0 (fmgr.c:223)
> {code}
> Execution goes through if I remove the {{distinct}} or if I add another 
> column for the {{count(distinct)}}. 
> {code:sql}
> select count(distinct outlook) as c1, count(distinct windy) as c2, 
> greatest_running_total(humidity) from example_data;
> {code}
> {code}
>  c1 | c2 | greatest_running_total
> ++
>   3 |  2 |
> (1 row)
> {code}
> {code:sql}
> select count(outlook) as c1, greatest_running_total(humidity) from 
> example_data;
> {code}
> {code}
>  count | greatest_running_total
> ---+
> 14 |
> (1 row)
> {code}
> It's an older build - I don't have the resources at present to test this on 
> the latest HAWQ. 
> {code}
> select version();
>   
>   version
> ---
>  PostgreSQL 8.2.15 (Greenplum Database 4.2.0 build 1) (HAWQ 2.0.0.0 build 
> 22126) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled 
> on Apr 25 2016 09:52:54
> (1 row)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1494) The bug can appear every time when I execute a specific sql: Unexpect internal error (setref.c:298), server closed the connection unexpectedly

2018-01-29 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-1494:
-
Fix Version/s: (was: 2.3.0.0-incubating)
   2.4.0.0-incubating

> The bug can appear every time when I execute a specific sql:  Unexpect 
> internal error (setref.c:298), server closed the connection unexpectedly
> ---
>
> Key: HAWQ-1494
> URL: https://issues.apache.org/jira/browse/HAWQ-1494
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: fangpei
>Assignee: Yi Jin
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> When I execute a specific sql, a serious bug can happen every time. (Hawq 
> version is 2.2.0.0)
> BUG information:
> FATAL: Unexpect internal error (setref.c:298)
> DETAIL: AssertImply failed("!(!var->varattno >= 0) || (var->varattno <= 
> list_length(colNames) + list_length(rte->pseudocols)))", File: "setrefs.c", 
> Line: 298)
> HINT:  Process 239600 will wait for gp_debug_linger=120 seconds before 
> termination.
> Note that its locks and other resources will not be released  until then.
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
>  before or while processing the request.
> The connection to the server was lost. Attempting reset: Succeeded.
> I used GDB to debug; the GDB output is the same every time: 
> Loaded symbols for /lib64/libnss_files.so.2
> 0x0032dd40eb5c in recv () from /lib64/libpthread.so.0
> (gdb) b setrefs.c:298
> Breakpoint 1 at 0x846063: file setrefs.c, line 298.
> (gdb) c 
> Continuing.
> Breakpoint 1, set_plan_references_output_asserts (glob=0x7fe96fbccab0, 
> plan=0x7fe8e930adb8) at setrefs.c:298
> 298 setrefs.c: No such file or directory.
> (gdb) c 1923
> Will ignore next 1922 crossings of breakpoint 1. Continuing.
> Breakpoint 1, set_plan_references_output_asserts (glob=0x7fe96fbccab0, 
> plan=0x7fe869c70340) at setrefs.c:298
> 298 in setrefs.c
> (gdb) p list_length(allVars) 
> $1 = 1422
> (gdb) p var->varno 
> $2 = 65001
> (gdb) p list_length(glob->finalrtable) 
> $3 = 66515
> (gdb) p var->varattno 
> $4 = 31
> (gdb) p list_length(colNames) 
> $5 = 30
> (gdb) p list_length(rte->pseudocols) 
> $6 = 0
> The SQL statement is like:
> SELECT *
> FROM (select t.*,1001 as ttt from AAA t where  ( aaa = '3201066235'  
> or aaa = '3201066236'  or aaa = '3201026292'  or aaa = 
> '3201066293'  or aaa = '3201060006393' ) and (  bbb between 
> '20170601065900' and '20170601175100'  and (ccc = '2017-06-01' ))  union all  
> select t.*,1002 as ttt from AAA t where  ( aaa = '3201066007'  or aaa 
> = '3201066006' ) and (  bbb between '20170601072900' and 
> '20170601210100'  and ( ccc = '2017-06-01' ))  union all  select t.*,1003 as 
> ttt from AAA t where  ( aaa = '3201062772' ) and (  bbb between 
> '20170601072900' and '20170601170100'  and ( ccc = '2017-06-01' ))  union all 
>  select t.*,1004 as ttt from AAA t where  (aaa = '3201066115'  or aaa 
> = '3201066116'  or aaa = '3201066318'  or aaa = 
> '3201066319' ) and (  bbb between '20170601085900' and 
> '20170601163100' and ( ccc = '2017-06-01' ))  union all  select t.*,1005 as 
> ttt from AAA t where  ( aaa = '3201066180' or aaa = 
> '3201046385' ) and (  bbb between '20170601205900' and 
> '20170601230100'  and ( ccc = '2017-06-01' )) union all  select t.*,1006 as 
> ttt from AAA t where  ( aaa = '3201026423'  or aaa = 
> '3201026255'  or aaa = '3201066258'  or aaa = 
> '3201066259' ) and (  bbb between '20170601215900' and 
> '20170602004900'  and ( ccc = '2017-06-01'  or ccc = '2017-06-02' ))  union 
> all select t.*,1007 as ttt from AAA t where  ( aaa = '3201066175' or 
> aaa = '3201066004' ) and (  bbb between '20170602074900' and 
> '20170602182100'  and ( ccc = '2017-06-02'  )) union all select t.*,1008 as 
> ttt from AAA t where  ( aaa = '3201026648' ) and (  bbb between 
> '20170602132900' and '20170602134600'  and ( ccc = '2017-06-02' ))  union all 
>  select t.*,1009 as ttt from AAA t where  ( aaa = '3201062765'  or 
> aaa = '3201006282' ) and (  bbb between '20170602142900' and 
> '20170603175100'  and ( ccc = '2017-06-02'  or ccc = '2017-06-03' ))  union 
> all  select t.*,1010 as ttt from AAA t where  (aaa = '3201066060' ) 
> and (  bbb between '20170602165900' and '20170603034100'  and ( ccc = 
> '2017-06-02'  or ccc = '2017-06-03' ))  union all select t.*,1011 as ttt from 
> AAA t where  ( aaa = '32010662229'  or aaa = '3201066230'  or 
> aaa = 

[jira] [Updated] (HAWQ-1566) Include Pluggable Storage Format Framework in External Table Insert

2018-01-29 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-1566:
-
Fix Version/s: (was: 2.3.0.0-incubating)
   2.4.0.0-incubating

> Include Pluggable Storage Format Framework in External Table Insert
> ---
>
> Key: HAWQ-1566
> URL: https://issues.apache.org/jira/browse/HAWQ-1566
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: External Tables, Storage
>Reporter: Chiyang Wan
>Assignee: Chiyang Wan
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> There are two types of operations on an external table: scan and insert. 
> Including the pluggable storage framework in both of these operations is 
> necessary. 
> This task adds the external table insert and COPY FROM (write into external 
> table) related features.
> In the following steps, we still need to specify some of the critical info 
> that comes from the planner, as well as the file-split info, in the pluggable 
> filesystem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-786) Framework to support pluggable formats and file systems

2018-01-29 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344191#comment-16344191
 ] 

Yi Jin commented on HAWQ-786:
-

Since its demo and feature test will not be delivered in version 2.3.0.0, this 
issue is moved to the next version.

> Framework to support pluggable formats and file systems
> ---
>
> Key: HAWQ-786
> URL: https://issues.apache.org/jira/browse/HAWQ-786
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Storage
>Reporter: Lei Chang
>Assignee: Chiyang Wan
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
> Attachments: HAWQ Pluggable Storage Framework.pdf, 
> ORCdesign-v0.1-2016-06-17.pdf
>
>
> In current HAWQ, two native formats are supported: AO and Parquet. Now we 
> want to support ORC. A framework supporting native C/C++ pluggable formats is 
> needed to support ORC more easily. It can also potentially be used for 
> fast external data access.
> And there are a lot of requests for supporting S3, Ceph and other file 
> systems, and this is closely related to pluggable formats, so this JIRA is 
> proposing a framework to support both.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-786) Framework to support pluggable formats and file systems

2018-01-29 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-786:

Fix Version/s: (was: 2.3.0.0-incubating)
   2.4.0.0-incubating

> Framework to support pluggable formats and file systems
> ---
>
> Key: HAWQ-786
> URL: https://issues.apache.org/jira/browse/HAWQ-786
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Storage
>Reporter: Lei Chang
>Assignee: Chiyang Wan
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
> Attachments: HAWQ Pluggable Storage Framework.pdf, 
> ORCdesign-v0.1-2016-06-17.pdf
>
>
> In current HAWQ, two native formats are supported: AO and Parquet. Now we 
> want to support ORC. A framework supporting native C/C++ pluggable formats is 
> needed to support ORC more easily. It can also potentially be used for 
> fast external data access.
> And there are a lot of requests for supporting S3, Ceph and other file 
> systems, and this is closely related to pluggable formats, so this JIRA is 
> proposing a framework to support both.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-127) Create CI projects for HAWQ releases

2018-01-29 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-127:

Fix Version/s: (was: 2.3.0.0-incubating)
   2.4.0.0-incubating

> Create CI projects for HAWQ releases
> 
>
> Key: HAWQ-127
> URL: https://issues.apache.org/jira/browse/HAWQ-127
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.1.0.0-incubating
>Reporter: Lei Chang
>Assignee: Jiali Yao
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> Create Jenkins projects that build HAWQ binary, source tarballs and docker 
> images, and run sanity tests including at least installcheck-good tests for 
> each commit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1576) Add demo for pluggable format scan

2018-01-29 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-1576:
-
Fix Version/s: (was: 2.3.0.0-incubating)
   2.4.0.0-incubating

> Add demo for pluggable format scan
> --
>
> Key: HAWQ-1576
> URL: https://issues.apache.org/jira/browse/HAWQ-1576
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: External Tables, Storage
>Reporter: Chiyang Wan
>Assignee: Chiyang Wan
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> Once the new pluggable storage framework feature is ready, it is necessary to 
> add a demo showing how to implement an external scan for a new format using the 
> pluggable framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1530) Illegally killing a JDBC select query causes locking problems

2018-01-29 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344186#comment-16344186
 ] 

Yi Jin commented on HAWQ-1530:
--

Closing this issue, as the fix appears to have resolved the problem. 

> Illegally killing a JDBC select query causes locking problems
> -
>
> Key: HAWQ-1530
> URL: https://issues.apache.org/jira/browse/HAWQ-1530
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Transaction
>Reporter: Grant Krieger
>Assignee: Yi Jin
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> Hi,
> When you perform a long-running select statement (a join) on 2 HAWQ tables from 
> JDBC and forcibly kill the JDBC client (Ctrl-Alt-Del) before the query 
> completes, the 2 tables remain locked even after the query completes on the 
> server. 
> The lock is visible via pg_locks. One cannot kill the query via SELECT 
> pg_terminate_backend(393937). The only way to get rid of it is to kill -9 
> from Linux or restart HAWQ, but this can kill other things as well.
> The JDBC client I am using is Aqua Data Studio.
> I can provide exact steps to reproduce if required
> Thank you
> Grant 
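For diagnosing such cases, the orphaned locks and the backends holding them can be listed with a standard catalog query (the pid shown earlier is illustrative):

{code:sql}
-- Show locks held on user tables, with the holding backend's pid:
SELECT l.pid, l.relation::regclass AS table_name, l.mode, l.granted
FROM pg_locks l
WHERE l.relation IS NOT NULL;
{code}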



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (HAWQ-1530) Illegally killing a JDBC select query causes locking problems

2018-01-29 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin closed HAWQ-1530.

Resolution: Fixed

> Illegally killing a JDBC select query causes locking problems
> -
>
> Key: HAWQ-1530
> URL: https://issues.apache.org/jira/browse/HAWQ-1530
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Transaction
>Reporter: Grant Krieger
>Assignee: Yi Jin
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> Hi,
> When you perform a long-running select statement (a join) on 2 HAWQ tables from 
> JDBC and forcibly kill the JDBC client (Ctrl-Alt-Del) before the query 
> completes, the 2 tables remain locked even after the query completes on the 
> server. 
> The lock is visible via pg_locks. One cannot kill the query via SELECT 
> pg_terminate_backend(393937). The only way to get rid of it is to kill -9 
> from Linux or restart HAWQ, but this can kill other things as well.
> The JDBC client I am using is Aqua Data Studio.
> I can provide exact steps to reproduce if required
> Thank you
> Grant 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)