[jira] [Commented] (HAWQ-1547) Increase default table name length from 64 to 128 to match Hive

2017-11-14 Thread Lei Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16252770#comment-16252770
 ] 

Lei Chang commented on HAWQ-1547:
-

Looks like a good improvement. 
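
For reference, NAMEDATALEN is the identifier-length constant HAWQ inherits 
from PostgreSQL, and the effective maximum name length is NAMEDATALEN - 1 
bytes because of the trailing NUL. A minimal sketch of the proposed change 
(a sketch of the header, not the verbatim HAWQ source):

```
/* src/include/pg_config_manual.h (sketch, not the verbatim HAWQ header) */

/*
 * Maximum length of identifiers such as table and column names, counted
 * in bytes and including the trailing NUL; names longer than
 * NAMEDATALEN - 1 bytes are silently truncated. Changing this alters the
 * catalog layout, so a recompile and a fresh initdb are required.
 */
#define NAMEDATALEN 128        /* proposed value; the current default is 64 */
```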

> Increase default table name length from 64 to 128 to match Hive
> ---
>
> Key: HAWQ-1547
> URL: https://issues.apache.org/jira/browse/HAWQ-1547
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Unknown
>Reporter: Grant Krieger
>Assignee: Radar Lei
>
> Hi,
> Would it be possible to increase the default value of the NAMEDATALEN 
> property in incubator-hawq/src/include/pg_config_manual.h from 64 to 128?
> This would hopefully allow one to read Hive tables with names longer than 
> 64 characters from HAWQ by default, without having to change this setting 
> when compiling for downstream systems. It would also allow for equivalently 
> named HAWQ tables.
> Does anyone foresee performance challenges with the increase? 
> See problem below:
> In Hive:
> CREATE TABLE 
> default.test123456789123456789123456789123456789123456789123456789123456789123456789123456789test ( 
>   rtlymth int, 
>   rtlyint)
> STORED AS ORC
> In HAWQ:
> select * from 
> hcatalog.default.test123456789123456789123456789123456789123456789123456789123456789123456789123456789test
> ERROR: remote component error (500) from '127.0.0.1:51200': type Exception 
> report message 
> NoSuchObjectException(message:default.test12345678912345678912345678912345678912345678912345678912345 
> table not found) description: The server encountered an internal error 
> that prevented it from fulfilling this request. exception: 
> javax.servlet.ServletException: 
> NoSuchObjectException(message:default.test12345678912345678912345678912345678912345678912345678912345 
> table not found) (libchurl.c:897)
>   Position: 15
>   Line: 1 
> Thanks
> Grant



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HAWQ-1623) Automatic master node HA

2018-06-01 Thread Lei Chang (JIRA)
Lei Chang created HAWQ-1623:
---

 Summary: Automatic master node HA
 Key: HAWQ-1623
 URL: https://issues.apache.org/jira/browse/HAWQ-1623
 Project: Apache HAWQ
  Issue Type: New Feature
Reporter: Lei Chang
Assignee: Radar Lei


 

In current HAWQ, when the master node dies, a manual switch from the master 
to the standby is required. This is not convenient for end users. Let's add 
an automatic failover mechanism, along the lines of the sketch below.
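
As a strawman for the discussion, the core of such a mechanism is just a 
health-check loop that promotes the standby after repeated failures. A 
minimal sketch, assuming the watchdog runs on the standby host and that the 
existing `hawq activate standby` command performs the promotion:

```
#!/bin/bash
# Failover-watchdog sketch (illustration only, not a proposed implementation).
MASTER_HOST=hawq-master   # hypothetical master hostname
FAILURES=0

while sleep 10; do
    if psql -h "$MASTER_HOST" -p 5432 -d postgres -c 'SELECT 1;' >/dev/null 2>&1; then
        FAILURES=0                       # master answered; reset the counter
    elif (( ++FAILURES >= 3 )); then
        # Three consecutive missed health checks: promote this standby.
        hawq activate standby && break
    fi
done
```

A real design would also need fencing against split-brain, which is the hard 
part of automating the switch.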



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1255) Looks "segment size with penalty" number in "explain analyze" not correct

2017-01-04 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-1255:

Assignee: Hubert Zhang  (was: Lei Chang)

> Looks "segment size with penalty" number in "explain analyze" not correct
> -
>
> Key: HAWQ-1255
> URL: https://issues.apache.org/jira/browse/HAWQ-1255
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Reporter: Lei Chang
>Assignee: Hubert Zhang
>
> "segment size" is about 500MB, while "segment size with penalty" is about 
> 100MB. Looks not reasonable.
> How to reproduce:
> on laptop, 1G tpch data, lineitem table is created as hash distributed with 2 
> buckets, and orders table is randomly.
> ```
> postgres=# explain analyze SELECT l_orderkey, count(l_quantity)
> FROM lineitem_b2, orders
> WHERE l_orderkey = o_orderkey
> GROUP BY l_orderkey;
>                                 QUERY PLAN
> --------------------------------------------------------------------------
>  Gather Motion 2:1  (slice2; segments: 2)  (cost=291580.96..318527.67 
> rows=1230576 width=16)
>Rows out:  Avg 150.0 rows x 1 workers at destination.  
> Max/Last(seg-1:changlei.local/seg-1:changlei.local) 150/150 rows with 
> 2209/2209 ms to first row, 2577/2577 ms to end, start offset by 1.429/1.429 
> ms.
>->  HashAggregate  (cost=291580.96..318527.67 rows=615288 width=16)
>  Group By: lineitem_b2.l_orderkey
>  Rows out:  Avg 75.0 rows x 2 workers.  
> Max/Last(seg1:changlei.local/seg1:changlei.local) 75/75 rows with 
> 2243/2243 ms to first row, 2498/2498 ms to end, start offset by 2.615/2.615 
> ms.
>  Executor memory:  56282K bytes avg, 56282K bytes max 
> (seg1:changlei.local).
>  ->  Hash Join  (cost=70069.00..250010.38 rows=3000608 width=15)
>Hash Cond: lineitem_b2.l_orderkey = orders.o_orderkey
>Rows out:  Avg 3000607.5 rows x 2 workers.  
> Max/Last(seg0:changlei.local/seg1:changlei.local) 3001300/215 rows with 
> 350/350 ms to first row, 1611/1645 ms to end, start offset by 3.819/3.816 ms.
>Executor memory:  49153K bytes avg, 49153K bytes max 
> (seg1:changlei.local).
>Work_mem used:  23438K bytes avg, 23438K bytes max 
> (seg1:changlei.local). Workfile: (0 spilling, 0 reused)
>(seg0)   Hash chain length 1.7 avg, 3 max, using 434205 of 
> 524341 buckets.
>->  Append-only Scan on lineitem_b2  (cost=0.00..89923.15 
> rows=3000608 width=15)
>  Rows out:  Avg 3000607.5 rows x 2 workers.  
> Max/Last(seg0:changlei.local/seg1:changlei.local) 3001300/215 rows with 
> 4.460/4.757 ms to first row, 546/581 ms to end, start offset by 350/349 ms.
>->  Hash  (cost=51319.00..51319.00 rows=75 width=8)
>  Rows in:  Avg 75.0 rows x 2 workers.  
> Max/Last(seg1:changlei.local/seg0:changlei.local) 75/75 rows with 
> 341/344 ms to end, start offset by 8.114/5.610 ms.
>  ->  Redistribute Motion 2:2  (slice1; segments: 2)  
> (cost=0.00..51319.00 rows=75 width=8)
>Hash Key: orders.o_orderkey
>Rows out:  Avg 75.0 rows x 2 workers at 
> destination.  Max/Last(seg1:changlei.local/seg0:changlei.local) 75/75 
> rows with 0.052/2.461 ms to first row, 207/207 ms to end, start offset by 
> 8.114/5.611 ms.
>->  Append-only Scan on orders  
> (cost=0.00..21319.00 rows=75 width=8)
>  Rows out:  Avg 75.0 rows x 2 workers.  
> Max/Last(seg1:changlei.local/seg0:changlei.loc

[jira] [Created] (HAWQ-1255) Looks "segment size with penalty" number in "explain analyze" not correct

2017-01-04 Thread Lei Chang (JIRA)
Lei Chang created HAWQ-1255:
---

 Summary: Looks "segment size with penalty" number in "explain 
analyze" not correct
 Key: HAWQ-1255
 URL: https://issues.apache.org/jira/browse/HAWQ-1255
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Query Execution
Reporter: Lei Chang
Assignee: Lei Chang



"segment size" is about 500MB, while "segment size with penalty" is about 
100MB. Looks not reasonable.

How to reproduce:
on laptop, 1G tpch data, lineitem table is created as hash distributed with 2 
buckets, and orders table is randomly.


```
postgres=# explain analyze SELECT l_orderkey, count(l_quantity)
FROM lineitem_b2, orders
WHERE l_orderkey = o_orderkey
GROUP BY l_orderkey;
                                QUERY PLAN
--------------------------------------------------------------------------
 Gather Motion 2:1  (slice2; segments: 2)  (cost=291580.96..318527.67 
rows=1230576 width=16)
   Rows out:  Avg 150.0 rows x 1 workers at destination.  
Max/Last(seg-1:changlei.local/seg-1:changlei.local) 150/150 rows with 
2209/2209 ms to first row, 2577/2577 ms to end, start offset by 1.429/1.429 ms.
   ->  HashAggregate  (cost=291580.96..318527.67 rows=615288 width=16)
 Group By: lineitem_b2.l_orderkey
 Rows out:  Avg 75.0 rows x 2 workers.  
Max/Last(seg1:changlei.local/seg1:changlei.local) 75/75 rows with 
2243/2243 ms to first row, 2498/2498 ms to end, start offset by 2.615/2.615 ms.
 Executor memory:  56282K bytes avg, 56282K bytes max 
(seg1:changlei.local).
 ->  Hash Join  (cost=70069.00..250010.38 rows=3000608 width=15)
   Hash Cond: lineitem_b2.l_orderkey = orders.o_orderkey
   Rows out:  Avg 3000607.5 rows x 2 workers.  
Max/Last(seg0:changlei.local/seg1:changlei.local) 3001300/215 rows with 
350/350 ms to first row, 1611/1645 ms to end, start offset by 3.819/3.816 ms.
   Executor memory:  49153K bytes avg, 49153K bytes max 
(seg1:changlei.local).
   Work_mem used:  23438K bytes avg, 23438K bytes max 
(seg1:changlei.local). Workfile: (0 spilling, 0 reused)
   (seg0)   Hash chain length 1.7 avg, 3 max, using 434205 of 
524341 buckets.
   ->  Append-only Scan on lineitem_b2  (cost=0.00..89923.15 
rows=3000608 width=15)
 Rows out:  Avg 3000607.5 rows x 2 workers.  
Max/Last(seg0:changlei.local/seg1:changlei.local) 3001300/215 rows with 
4.460/4.757 ms to first row, 546/581 ms to end, start offset by 350/349 ms.
   ->  Hash  (cost=51319.00..51319.00 rows=75 width=8)
 Rows in:  Avg 75.0 rows x 2 workers.  
Max/Last(seg1:changlei.local/seg0:changlei.local) 75/75 rows with 
341/344 ms to end, start offset by 8.114/5.610 ms.
 ->  Redistribute Motion 2:2  (slice1; segments: 2)  
(cost=0.00..51319.00 rows=75 width=8)
   Hash Key: orders.o_orderkey
   Rows out:  Avg 75.0 rows x 2 workers at 
destination.  Max/Last(seg1:changlei.local/seg0:changlei.local) 75/75 
rows with 0.052/2.461 ms to first row, 207/207 ms to end, start offset by 
8.114/5.611 ms.
   ->  Append-only Scan on orders  (cost=0.00..21319.00 
rows=75 width=8)
 Rows out:  Avg 75.0 rows x 2 workers.  
Max/Last(seg1:changlei.local/seg0:changlei.local) 75/75 rows with 
4.773/4.987 ms to first row, 166/171 ms to end, start offset by 2.911/2.697 ms.
 Slice statistics:
   (slice0)Executor memory: 281K bytes.
   (slice1)Executor memory: 319K bytes avg x 2 workers, 319K bytes max 
(seg1:changlei.local).
   (slice2)Executor memory: 105773K bytes avg x 2 workers, 105773K bytes 
max (seg1:chang

[jira] (HAWQ-1300) hawq cannot compile with Bison 3.x.

2017-01-30 Thread Lei Chang (JIRA)
Lei Chang created an issue

Apache HAWQ / HAWQ-1300
hawq cannot compile with Bison 3.x.

Issue Type: Bug
Assignee: Ed Espino
Components: Build
Created: 31/Jan/17 01:24
Fix Versions: backlog
Priority: Major
Reporter: Lei Chang

Yes, I met a similar issue; Bison 3.x does not work for HAWQ now. 

On Mon, Jan 30, 2017 at 12:37 PM, Dmitry Bouzolin 
<dbouzo...@yahoo.com.invalid> wrote: 
> Hi Lei,
> I use Bison 3.0.2. And it actually looks like a bug in the gram.c source 
> for this Bison version. The function refers to yyscanner, which is not 
> defined. I will reach out to the Bison bug list. Thanks for the reply!
>
> On Sunday, January 29, 2017 8:09 PM, Lei Chang wrote:
>
> > Hi Dmitry,
> >
> > Which bison version do you use? Looks like this is a known issue when 
> > compiling hawq on the latest bison (3.x) version. Bison 2.x versions 
> > should work.
> >
> > Thanks
> > Lei
> >
> > On Mon, Jan 30, 2017 at 3:41 AM, Dmitry Bouzolin <
> > dbouzo...@yahoo.com.invalid> wrote:
> >
> > > Hi All,
> > >
> > > Yes, I know arch linux is not supported, however I appreciate any 
> > > clues on why the build would fail like so:
> > >
> > > make -C caql all
> > > make[4]: Entering directory '/data/src/incubator-hawq/src/backend/catalog/caql'
> > > gcc -O3 -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith 
> > > -Wendif-lab
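
Until the grammar is fixed for Bison 3.x, a build environment can fail fast 
instead of producing a broken gram.c. A minimal guard, assuming a wrapper 
script around the build (this check is mine, not part of HAWQ's configure):

```
#!/bin/sh
# Refuse to build with Bison 3.x (sketch; HAWQ's own configure may differ).
BISON_MAJOR=$(bison --version | sed -n '1s/.* \([0-9][0-9]*\)\..*/\1/p')
if [ "${BISON_MAJOR:-0}" -ge 3 ]; then
    echo "error: HAWQ currently requires Bison 2.x, found ${BISON_MAJOR}.x" >&2
    exit 1
fi
```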

[jira] [Commented] (HAWQ-1436) Implement RPS High availability on HAWQ

2017-04-25 Thread Lei Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15983898#comment-15983898
 ] 

Lei Chang commented on HAWQ-1436:
-

Nice doc. Can we treat the RPS process similarly to the resource manager 
process, so that the postmaster can restart it automatically? That might 
simplify the design a lot; a toy illustration of the respawn idea is below.
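
A toy sketch of the fork-and-respawn pattern behind that idea (illustration 
only, not HAWQ's postmaster code; "rps" is a placeholder binary name):

```
/* Minimal supervisor: restart the child service whenever it exits. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    for (;;) {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }
        if (pid == 0) {
            /* Child: exec the supervised service, e.g. the RPS process. */
            execlp("rps", "rps", (char *)NULL);
            _exit(127);                      /* exec failed */
        }
        int status;
        waitpid(pid, &status, 0);            /* block until the child dies */
        fprintf(stderr, "service exited (status %d); restarting\n", status);
        sleep(1);                            /* crude restart backoff */
    }
}
```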



> Implement RPS High availability on HAWQ
> ---
>
> Key: HAWQ-1436
> URL: https://issues.apache.org/jira/browse/HAWQ-1436
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Hongxu Ma
>Assignee: Hongxu Ma
> Fix For: backlog
>
> Attachments: RPSHADesign_v0.1.pdf
>
>
> Once Ranger is configured, HAWQ relies on RPS to connect to Ranger. A 
> single-point RPS may hurt the robustness of HAWQ. 
> Thus we need to investigate and design a way to implement RPS high 
> availability. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1450) New HAWQ executor with vectorization & possible code generation

2017-05-02 Thread Lei Chang (JIRA)
Lei Chang created HAWQ-1450:
---

 Summary: New HAWQ executor with vectorization & possible code 
generation
 Key: HAWQ-1450
 URL: https://issues.apache.org/jira/browse/HAWQ-1450
 Project: Apache HAWQ
  Issue Type: New Feature
  Components: Query Execution
Reporter: Lei Chang
Assignee: Lei Chang
 Fix For: backlog



Most HAWQ executor code is inherited from postgres & gpdb. Let's discuss how 
to build a new hawq executor with vectorization and possibly code generation. 
These optimizations may improve query performance a lot; a toy illustration 
of what vectorization buys is sketched below.
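
To make the contrast concrete, here is a self-contained toy in C (mine, not 
HAWQ code): the first function mimics tuple-at-a-time evaluation, the second 
evaluates the predicate over a whole batch into a selection vector before 
aggregating, which is the loop shape that SIMD and generated code exploit.

```
/* Toy contrast of tuple-at-a-time vs. vectorized execution (not HAWQ code). */
#include <stdio.h>

#define BATCH 1024
#define N 4000

/* Tuple-at-a-time flavor: conceptually one call per row (Volcano model). */
static long sum_if_positive_rows(const int *col, int n) {
    long sum = 0;
    for (int i = 0; i < n; i++)
        if (col[i] > 0)        /* imagine per-row expression-tree evaluation */
            sum += col[i];
    return sum;
}

/* Vectorized flavor: run the predicate over a whole batch into a selection
 * vector, then aggregate only the selected rows in a second tight loop. */
static long sum_if_positive_batches(const int *col, int n) {
    int sel[BATCH];
    long sum = 0;
    for (int base = 0; base < n; base += BATCH) {
        int m = (n - base < BATCH) ? n - base : BATCH;
        int k = 0;
        for (int i = 0; i < m; i++)          /* predicate pass */
            if (col[base + i] > 0)
                sel[k++] = base + i;
        for (int i = 0; i < k; i++)          /* aggregation pass */
            sum += col[sel[i]];
    }
    return sum;
}

int main(void) {
    int col[N];
    for (int i = 0; i < N; i++)
        col[i] = (i % 3) - 1;                /* values cycle -1, 0, 1 */
    printf("%ld %ld\n", sum_if_positive_rows(col, N),
                        sum_if_positive_batches(col, N));
    return 0;
}
```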





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-786) Framework to support pluggable formats and file systems

2017-06-27 Thread Lei Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16065952#comment-16065952
 ] 

Lei Chang commented on HAWQ-786:


This feature has not been active for some time, and a lot of HAWQ users are 
asking for it. We are currently working on it, so I am assigning this issue 
to myself. Hopefully we can release it in the next release.





> Framework to support pluggable formats and file systems
> ---
>
> Key: HAWQ-786
> URL: https://issues.apache.org/jira/browse/HAWQ-786
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Storage
>Reporter: Lei Chang
>Assignee: hongwu
> Fix For: backlog
>
> Attachments: ORCdesign-v0.1-2016-06-17.pdf
>
>
> In current HAWQ, two native formats are supported: AO and Parquet. Now we 
> want to support ORC. A framework for native C/C++ pluggable formats is 
> needed to support ORC more easily, and it can also potentially be used for 
> fast external data access.
> There are also a lot of requests for supporting S3, Ceph and other file 
> systems. This is closely related to pluggable formats, so this JIRA 
> proposes a framework to support both.
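
For discussion, such a framework typically reduces to a per-format table of 
callbacks that the scan node drives generically. A hypothetical sketch in C 
(all names are mine, not the actual HAWQ interface):

```
/* Hypothetical pluggable-format interface (illustration, not HAWQ's API). */
#include <stdbool.h>

typedef struct FormatScanState FormatScanState;  /* opaque, format-private  */
typedef struct TupleBatch TupleBatch;            /* rows handed to executor */

typedef struct FormatRoutines {
    const char *name;                                    /* "orc", "parquet" */
    FormatScanState *(*begin_scan)(const char *path);    /* open file/stream */
    bool (*next_batch)(FormatScanState *, TupleBatch *); /* false at EOF     */
    void (*end_scan)(FormatScanState *);                 /* release resources */
} FormatRoutines;

/* Each format registers its callbacks once; scan nodes and external-access
 * paths then drive any registered format through the same interface. */
void register_format(const FormatRoutines *routines);
```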



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HAWQ-786) Framework to support pluggable formats and file systems

2017-06-27 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang reassigned HAWQ-786:
--

Assignee: Lei Chang  (was: hongwu)

> Framework to support pluggable formats and file systems
> ---
>
> Key: HAWQ-786
> URL: https://issues.apache.org/jira/browse/HAWQ-786
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Storage
>Reporter: Lei Chang
>Assignee: Lei Chang
> Fix For: backlog
>
> Attachments: ORCdesign-v0.1-2016-06-17.pdf
>
>
> In current HAWQ, two native formats are supported: AO and Parquet. Now we 
> want to support ORC. A framework for native C/C++ pluggable formats is 
> needed to support ORC more easily, and it can also potentially be used for 
> fast external data access.
> There are also a lot of requests for supporting S3, Ceph and other file 
> systems. This is closely related to pluggable formats, so this JIRA 
> proposes a framework to support both.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-786) Framework to support pluggable formats and file systems

2017-07-27 Thread Lei Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16103037#comment-16103037
 ] 

Lei Chang commented on HAWQ-786:


[~rlei] This feature has been implemented in the Oushu commercial HAWQ 
version. Currently we are refactoring the code and making it work on OSS 
HAWQ. Once it is ready, we will update this JIRA. 

> Framework to support pluggable formats and file systems
> ---
>
> Key: HAWQ-786
> URL: https://issues.apache.org/jira/browse/HAWQ-786
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Storage
>Reporter: Lei Chang
>Assignee: Lei Chang
> Fix For: backlog
>
> Attachments: ORCdesign-v0.1-2016-06-17.pdf
>
>
> In current HAWQ, two native formats are supported: AO and Parquet. Now we 
> want to support ORC. A framework for native C/C++ pluggable formats is 
> needed to support ORC more easily, and it can also potentially be used for 
> fast external data access.
> There are also a lot of requests for supporting S3, Ceph and other file 
> systems. This is closely related to pluggable formats, so this JIRA 
> proposes a framework to support both.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1530) Illegally killing a JDBC select query causes locking problems

2017-11-06 Thread Lei Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16241629#comment-16241629
 ] 

Lei Chang commented on HAWQ-1530:
-

Thanks, [~kuien].

I think [~yjin] fixed a quite similar bug before. 

> Illegally killing a JDBC select query causes locking problems
> -
>
> Key: HAWQ-1530
> URL: https://issues.apache.org/jira/browse/HAWQ-1530
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Transaction
>Reporter: Grant Krieger
>Assignee: Radar Lei
>
> Hi,
> When you perform a long-running select statement on 2 HAWQ tables (a join) 
> from JDBC and forcibly kill the JDBC client (Ctrl+Alt+Del) before the query 
> completes, the 2 tables remain locked even after the query completes on the 
> server. 
> The lock is visible via pg_locks. One cannot kill the query via SELECT 
> pg_terminate_backend(393937). The only way to get rid of it is kill -9 
> from Linux or restarting HAWQ, but this can kill other things as well.
> The JDBC client I am using is Aqua Data Studio.
> I can provide exact steps to reproduce if required
> Thank you
> Grant 
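
When this happens, the orphaned session can at least be identified from the 
catalogs before resorting to kill -9. A diagnostic sketch, assuming the 
PostgreSQL 8.2-era catalog columns that HAWQ inherits (procpid, 
current_query) and hypothetical table names:

```
-- Which sessions still hold locks on the affected tables?
SELECT l.pid, c.relname, l.mode, l.granted, a.current_query
  FROM pg_locks l
  JOIN pg_class c ON c.oid = l.relation
  JOIN pg_stat_activity a ON a.procpid = l.pid
 WHERE c.relname IN ('table_a', 'table_b');   -- hypothetical names

-- Worth trying before kill -9: cancel just the query, not the backend.
-- SELECT pg_cancel_backend(393937);
```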



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1546) The data files on hdfs after hawq load data were too large!!!

2017-11-12 Thread Lei Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249065#comment-16249065
 ] 

Lei Chang commented on HAWQ-1546:
-

What is the size of the Parquet files on HDFS?

Sometimes, without compression, binary files can be bigger than text files. 
For example, an "int" takes 4 bytes if it is not compressed, but in a text 
file small integers may take fewer than 4 bytes.

Also, for trickle inserts into Parquet, each insert leaves a garbage Parquet 
footer behind, because HDFS files are not updatable, so the resulting files 
can be larger. It is therefore better to use AO for trickle inserts, as in 
the sketch below.
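
A minimal sketch of that suggestion, mirroring the reporter's schema (the 
table name and compression choice are mine):

```
-- Row-oriented append-only storage does not write a Parquet footer per
-- insert, so many small inserts stay compact.
CREATE TABLE person_l1_ao (id int, name varchar(20), age int, sex char(1))
WITH (appendonly=true, orientation=row, compresstype=zlib);

INSERT INTO person_l1_ao
VALUES (1,'lynn',28,'1'), (2,'lynn',28,'1'), (3,'lynn',28,'1'), (4,'lynn',28,'1');
```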



> The data files on hdfs after hawq load data were too large!!!
> -
>
> Key: HAWQ-1546
> URL: https://issues.apache.org/jira/browse/HAWQ-1546
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: libhdfs
>Reporter: lynn
>Assignee: Radar Lei
>
> create table person_l1 (id int, name varchar(20), age int, sex 
> char(1))with(appendonly=true,orientation=parquet,compresstype=snappy);
> create table person_l2 (id int, name varchar(20), age int, sex 
> char(1))with(appendonly=true,orientation=parquet,compresstype=snappy);
> Execute 480 insert statements:
> sh insert.sh
> Scripts: 
> i4.sql:
> insert into person_l1 values(1, 'lynn', 28, '1'),(2, 'lynn', 28, '1'),(3, 
> 'lynn', 28, '1'),(4, 'lynn', 28, '1');
> insert.sh:
> #!/bin/bash
> num=480
> for ((i=0; i<$num; i=$[$i+1])); do
>     psql -d test -f i4.sql
> done
> Execute 1 insert statement (1920 rows):
> psql -d test -f i1.sql 
> script:
> i1.sql:
> SET hawq_rm_stmt_nvseg=10;
> SET hawq_rm_stmt_vseg_memory='512mb';
> insert into person_l2 values(1, 'lynn', 28, '1'),(2, 'lynn', 28, '1'),(3, 
> 'lynn', 28, '1'),(4, 'lynn', 28, '1'),(1, 'lynn', 28, '1'),(2, 'lynn', 28, 
> '1'),(3, 'lynn', 28, '1'),(4, 'lynn', 28, '1'),(1, 'lynn', 28, '1'),(2, 
> 'lynn', 28, '1'),(3, 'lynn', 28, '1'),(4, 'lynn', 28, '1'),(1, 'lynn', 28, 
> '1'),(2, 'lynn', 28, '1'),(3, 'lynn', 28, '1'),(4, 'lynn', 28, '1'),(1, 
> 'lynn', 28, '1'),(2, 'lynn', 28, '1'),(3, 'lynn', 28, '1'),(4, 'lynn', 28, 
> '1'),(1, 'lynn', 28, '1'),(2, 'lynn', 28, '1'),(3, 'lynn', 28, '1'),(4, 
> 'lynn', 28, '1'),(1, 'lynn', 28, '1'),(2, 'lynn', 28, '1'),(3, 'lynn', 28, 
> '1'),(4, 'lynn', 28, '1'),(1, 'lynn', 28, '1'),(2, 'lynn', 28, '1'),(3, 
> 'lynn', 28, '1'),(4, 'lynn', 28, '1'),(1, 'lynn', 28, '1'),(2, 'lynn', 28, 
> '1'),(3, 'lynn', 28, '1'),(4, 'lynn', 28, '1'),(1, 'lynn', 28, '1'),(2, 
> 'lynn', 28, '1'),(3, 'lynn', 28, '1'),(4, 'lynn', 28, '1'),(1, 'lynn', 28, 
> '1'),(2, 'lynn', 28, '1'),(3, 'lynn', 28, '1'),(4, 'lynn', 28, '1'),(1, 
> 'lynn', 28, '1'),(2, 'lynn', 28, '1'),(3, 'lynn', 28, '1'),(4, 'lynn', 28, 
> '1'),(1, 'lynn', 28, '1'),(2, 'lynn', 28, '1'),(3, 'lynn', 28, '1'),(4, 
> 'lynn', 28, '1'),(1, 'lynn', 28, '1'),(2, 'lynn', 28, '1'),(3, 'lynn', 28, 
> '1'),(4, 'lynn', 28, '1'),(1, 'lynn', 28, '1'),(2, 'lynn', 28, '1'),(3, 
> 'lynn', 28, '1'),(4, 'lynn', 28, '1'),(1, 'lynn', 28, '1'),(2, 'lynn', 28, 
> '1'),(3, 'lynn', 28, '1'),(4, 'lynn', 28, '1'),(1, 'lynn', 28, '1'),(2, 
> 'lynn', 28, '1'),(3, 'lynn', 28, '1'),(4, 'lynn', 28, '1'),(1, 'lynn', 28, 
> '1'),(2, 'lynn', 28, '1'),(3, 'lynn', 28, '1'),(4, 'lynn', 28, '1'),(1, 
> 'lynn', 28, '1'),(2, 'lynn', 28, '1'),(3, 'lynn', 28, '1'),(4, 'lynn', 28, 
> '1'),(1, 'lynn', 28, '1'),(2, 'lynn', 28, '1'),(3, 'lynn', 28, '1'),(4, 
> 'lynn', 28, '1'),(1, 'lynn', 28, '1'),(2, 'lynn', 28, '1'),(3, 'lynn', 28, 
> '1'),(4, 'lynn', 28, '1'),(1, 'lynn', 28, '1'),(2, 'lynn', 28, '1'),(3, 
> 'lynn', 28, '1'),(4, 'lynn', 28, '1'),(1, 'lynn', 28, '1'),(2, 'lynn', 28, 
> '1'),(3, 'lynn', 28, '1'),(4, 'lynn', 28, '1'),(1, 'lynn', 28, '1'),(2, 
> 'lynn', 28, '1'),(3, 'lynn', 28, '1'),(4, 'lynn', 28, '1'),(1, 'lynn', 28, 
> '1'),(2, 'lynn', 28, '1'),(3, 'lynn', 28, '1'),(4, 'lynn', 28, '1'),(1, 
> 'lynn', 28, '1'),(2, 'lynn', 28, '1'),(3, 'lynn', 28, '1'),(4, 'lynn', 28, 
> '1'),(1, 'lynn', 28, '1'),(2, 'lynn', 28, '1'),(3, 'lynn', 28, '1'),(4, 
> 'lynn', 28, '1'),(1, 'lynn', 28, '1'),(2, 'lynn', 28, '1'),(3, 'lynn', 28, 
> '1'),(4, 'lynn', 28, '1'),(1, 'lynn', 28, '1'),(2, 'lynn', 28, '1'),(3, 
> 'lynn', 28, '1'),(4, 'lynn', 28, '1'),(1, 'lynn', 28, '1'),(2, 'lynn', 28, 
> '1'),(3, 'lynn', 28, '1'),(4, 'lynn', 28, '1'),(1, 'lynn', 28, '1'),(2, 
> 'lynn', 28, '1'),(3, 'lynn', 28, '1'),(4, 'lynn', 28, '1'),(1, 'lynn', 28, 
> '1'),(2, 'lynn', 28, '1'),(3, 'lynn', 28, '1'),(4, 'lynn', 28, '1'),(1, 
> 'lynn', 28, '1'),(2, 'lynn', 28, '1'),(3, 'lynn', 28, '1'),(4, 'lynn', 28, 
> '1'),(1, 'lynn', 28, '1'),(2, 'lynn', 28, '1'),(3, 'lynn', 28, '1'),(4, 
> 'lynn', 28, '1'),(1, 'lynn', 28, '1'),(2, 'lynn', 28, '1'),(3, 'lynn', 28, 
> '1'),(4, 'lynn', 28, '1'),(1, 'lynn', 28, '1'),(2, 'lynn', 28, '1'),(3, 
> 'lyn

[jira] [Closed] (HAWQ-37) Abort transaction didn't rollback while 1 QE was terminated.

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-37?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang closed HAWQ-37.
-

> Abort transaction didn't rollback while 1 QE was terminated.
> 
>
> Key: HAWQ-37
> URL: https://issues.apache.org/jira/browse/HAWQ-37
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Dispatcher
>Reporter: Xiang Sheng
>Assignee: Ming LI
>Priority: Critical
> Fix For: 2.0.0
>
>
> Run workload tpch_parq10snp: 1 QE was terminated due to disk failure, and 
> the error "the relation already exists" was returned while inserting into 
> the same relation.
> Steps to repeat the problem:
> 1. create table orders_x (like orders);
> 2. insert into orders_x select * from e_orders;
> 3. kill 1 MPPEXEC process on a segment to simulate the interruption.
> 4. then "error returned: Query Executor Error in seg432 sfo-w163.ic:4 
> pid=278795: server closed the connection unexpectedly"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-5) Update HAWQ to support latest hadoop version and ecosystem

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-5?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang closed HAWQ-5.


> Update HAWQ to support latest hadoop version and ecosystem
> --
>
> Key: HAWQ-5
> URL: https://issues.apache.org/jira/browse/HAWQ-5
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Core, PXF
>Reporter: Goden Yao
>Assignee: Shivram Mani
>Priority: Blocker
> Fix For: 2.0.0
>
>
> HAWQ was based on Hadoop 2.6, Hive 0.14, Hbase 0.98
> We need to update all dependencies to keep up with latest versions including:
> * Hadoop 2.6 -> Hadoop 2.7.1
> * YARN 2.6 -> YARN 2.7.1
> * Hive 0.14 -> Hive 1.2.1
> * Hbase 0.98 -> Hbase 1.1.1
> Latest hbase api docs can be found here 
> https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-5) Update HAWQ to support latest hadoop version and ecosystem

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-5?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-5:
-
Fix Version/s: 2.0.0

> Update HAWQ to support latest hadoop version and ecosystem
> --
>
> Key: HAWQ-5
> URL: https://issues.apache.org/jira/browse/HAWQ-5
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Core, PXF
>Reporter: Goden Yao
>Assignee: Shivram Mani
>Priority: Blocker
> Fix For: 2.0.0
>
>
> HAWQ was based on Hadoop 2.6, Hive 0.14, Hbase 0.98
> We need to update all dependencies to keep up with latest versions including:
> * Hadoop 2.6 -> Hadoop 2.7.1
> * YARN 2.6 -> YARN 2.7.1
> * Hive 0.14 -> Hive 1.2.1
> * Hbase 0.98 -> Hbase 1.1.1
> Latest hbase api docs can be found here 
> https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-49) Remove legacy madlib schema in HAWQ

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-49?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-49:
--
Assignee: Ming LI  (was: Lei Chang)

> Remove legacy madlib schema in HAWQ
> ---
>
> Key: HAWQ-49
> URL: https://issues.apache.org/jira/browse/HAWQ-49
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Catalog
>Reporter: Lei Chang
>Assignee: Ming LI
>  Labels: OSS
>
> There is some legacy MADlib stuff created during init; it is better to 
> remove it.
> At the same time, clean up useless comments and code in the init SQL file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-125) hawq restart should stop with an error message when stop fails

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-125:
---
Assignee: Radar Lei  (was: Lei Chang)

> hawq restart should stop with an error message when stop fails
> --
>
> Key: HAWQ-125
> URL: https://issues.apache.org/jira/browse/HAWQ-125
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Noa Horn
>Assignee: Radar Lei
>
> hawq restart actually runs hawq stop and then hawq start.
> In case hawq stop fails, hawq start still runs.
> That is intentional in case the segments or master were already down, and so 
> stop failed. But in case the segments or master failed to stop and the 
> command timed out, the start should not run, and we should exit immediately 
> with an appropriate error code and message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-316) Build on Mac report error

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-316:
---
Fix Version/s: 2.1.0

> Build on Mac report error 
> --
>
> Key: HAWQ-316
> URL: https://issues.apache.org/jira/browse/HAWQ-316
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Ming LI
>Assignee: Ming LI
>Priority: Critical
> Fix For: 2.1.0
>
>
> crc32c.c:642:25: error: use of undeclared identifier 'bit_SSE4_2'
> bool hasSSE42 = (ecx & bit_SSE4_2) != 0;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-20) Error running analyzedb

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-20?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-20:
--
Fix Version/s: 2.0.0

> Error running analyzedb
> ---
>
> Key: HAWQ-20
> URL: https://issues.apache.org/jira/browse/HAWQ-20
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Caleb Welton
>Assignee: Lei Chang
>Priority: Critical
> Fix For: 2.0.0
>
>
> Reported by an early alpha tester:
> {quote}
> I noticed analyzedb didn't work because HAWQ 2.0 doesn't set the 
> MASTER_DATA_DIRECTORY.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-328) plsql loop three times exit abnormality

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-328:
---
Fix Version/s: 2.0.0

> plsql loop three times exit abnormality
> ---
>
> Key: HAWQ-328
> URL: https://issues.apache.org/jira/browse/HAWQ-328
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Catalog
>Reporter: longgeligelong
>Assignee: Ruilong Huo
>Priority: Blocker
> Fix For: 2.0.0
>
>
> A PL/pgSQL loop in HAWQ exits abnormally on the third iteration. 
> After adding prints to the code, I found that when the program loops the 
> third time, before running line 824 in src/backend/executor/execMain.c, the 
> value of queryDesc->plannedstmt->resource->type is 1050. After running this 
> line the value becomes a random number, whereas in the first two loops it 
> is still 1050. Because queryDesc is not an actual parameter of 
> prepareDispatchedCatalogRelation on line 824, I cannot continue to trace 
> the code. 
> stdout and stderr as below:
>   psql:test_plsql_loop.sql:66: NOTICE:  for loop: quantity here is 1
>   psql:test_plsql_loop.sql:66: NOTICE:  FOR LOOP: ROW HERE IS (14929)
>   psql:test_plsql_loop.sql:66: NOTICE:  for loop: quantity here is 2
>   psql:test_plsql_loop.sql:66: NOTICE:  FOR LOOP: ROW HERE is (14929)
>   psql:test_plsql_loop.sql:66: NOTICE:  for loop: quantity here is 3
>   psql:test_plsql_loop.sql:66: ERROR:  could not serialize unrecognized node 
> type: 38814640 (outfast.c:4742)
>   CONTEXT:  SQL statement "SELECT COUNT(1) FROM oiq_t_2"
>   PL/pgSQL function "func2" line 11 at SQL statement
> plsql code as below:
> CREATE OR REPLACE FUNCTION funcloop() RETURNS text AS $func$
> DECLARE
> rowvar RECORD;
> BEGIN
> FOR i IN 1..10 LOOP
> RAISE NOTICE 'loop: quantity here is %', i;
> SELECT COUNT(1) INTO rowvar FROM oiq_t_2;
> RAISE NOTICE 'FOR LOOP: ROW HERE IS %', rowvar;
> END LOOP;
> return rowvar;
> END;
> $func$ LANGUAGE plpgsql;
> select funcloop();



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-58) Query hang when running test_resourcepool_TolerateLimit

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-58?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-58:
--
Fix Version/s: 2.0.0

> Query hang when running test_resourcepool_TolerateLimit 
> 
>
> Key: HAWQ-58
> URL: https://issues.apache.org/jira/browse/HAWQ-58
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Resource Manager
> Environment: linux
>Reporter: Amy
>Assignee: Yi Jin
>  Labels: features, test
> Fix For: 2.0.0
>
>
> Query hung when running test hawq_rm_tolerate_nseg_limit.
> {noformat} 
> set hawq_rm_tolerate_nseg_limit = 0.
> ALTER RESOURCE QUEUE pg_default 
> WITH(ACTIVE_STATEMENTS=6,MEMORY_LIMIT_CLUSTER=50%,CORE_LIMIT_CLUSTER=50%,RESOURCE_UPPER_FACTOR=2,VSEGMENT_RESOURCE_QUOTA='mem:1gb');
> run 4 concurrent queries; the result is as below:
> test2 : 4 vsegs ; test3 : 3 vsegs ; test4 : 2 vsegs
> run a 5th concurrent query: 3 vsegs
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-122) FILESPACE URL in pg_filespace_entry is confusing

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-122:
---
Fix Version/s: backlog

> FILESPACE URL in pg_filespace_entry is confusing
> 
>
> Key: HAWQ-122
> URL: https://issues.apache.org/jira/browse/HAWQ-122
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Storage
>Reporter: Lirong Jian
>Assignee: Lirong Jian
> Fix For: backlog
>
>
> postgres=# SELECT * FROM pg_filespace_entry;
>  fsefsoid | fsedbid | fselocation  
> --+-+--
> 16384 |   0 | hdfs://Lirong-MBP:9000/gpsql
> (1 row)
> postgres=# CREATE FILESPACE fs_1 ON HDFS (
> postgres(# 'Lirong-MBP:9000/hawq/fs_1');
> CREATE FILESPACE
> postgres=# CREATE FILESPACE fs_2 ON HDFS (
> postgres(# 'Lirong-MBP:9000/hawq/fs_2') WITH (NUMREPLICA=2);
> CREATE FILESPACE
> postgres=# CREATE FILESPACE fs_3 ON HDFS (
> postgres(# 'Lirong-MBP:9000/hawq/fs_3') WITH (NUMREPLICA=3);
> CREATE FILESPACE
> postgres=# SELECT * FROM pg_filespace_entry;
>  fsefsoid | fsedbid | fselocation 
> --+-+-
> 16384 |   0 | hdfs://Lirong-MBP:9000/gpsql
> 16532 |   0 | hdfs://Lirong-MBP:9000/hawq/fs_1
> 16533 |   0 | hdfs://{replica=2}Lirong-MBP:9000/hawq/fs_2
> 16534 |   0 | hdfs://{replica=3}Lirong-MBP:9000/hawq/fs_3
> (4 rows)
> The {replica=3} option is very confusing. It is not a valid HDFS URL, which 
> means we cannot access that URL through external tools such as hdfs, 
> although it is valid inside HAWQ (we exclude that part when we are really 
> going to access the path).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-49) Remove legacy madlib schema in HAWQ

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-49?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-49:
--
Fix Version/s: 2.0.0

> Remove legacy madlib schema in HAWQ
> ---
>
> Key: HAWQ-49
> URL: https://issues.apache.org/jira/browse/HAWQ-49
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Catalog
>Reporter: Lei Chang
>Assignee: Ming LI
>  Labels: OSS
> Fix For: 2.0.0
>
>
> There is some legacy MADlib stuff created during init; it is better to 
> remove it.
> At the same time, clean up useless comments and code in the init SQL file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-125) hawq restart should stop with an error message when stop fails

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-125:
---
Fix Version/s: 2.0.0

> hawq restart should stop with an error message when stop fails
> --
>
> Key: HAWQ-125
> URL: https://issues.apache.org/jira/browse/HAWQ-125
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Noa Horn
>Assignee: Radar Lei
> Fix For: 2.0.0
>
>
> hawq restart actually runs hawq stop and then hawq start.
> In case hawq stop fails, hawq start still runs.
> That is intentional in case the segments or master were already down, and so 
> stop failed. But in case the segments or master failed to stop and the 
> command timed out, the start should not run, and we should exit immediately 
> with an appropriate error code and message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-107) Beyond the Hadoop Ecosystem

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-107:
---
Fix Version/s: backlog

> Beyond the Hadoop Ecosystem
> ---
>
> Key: HAWQ-107
> URL: https://issues.apache.org/jira/browse/HAWQ-107
> Project: Apache HAWQ
>  Issue Type: New Feature
>Reporter: Suminda Dharmasena
>Assignee: Shivram Mani
> Fix For: backlog
>
>
> It would be good if you could support other storage formats and ecosystems 
> beyond that of Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-77) Fix source code comment for new ALTER/CREATE RESOURCE QUEUE ddl statements

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-77?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-77:
--
Fix Version/s: 2.0.0

> Fix source code comment for new ALTER/CREATE RESOURCE QUEUE ddl statements
> --
>
> Key: HAWQ-77
> URL: https://issues.apache.org/jira/browse/HAWQ-77
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Resource Manager
>Reporter: Yi Jin
>Assignee: Yi Jin
> Fix For: 2.0.0
>
>
> This is open to fix unresolved comments from HAWQ-25.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-22) ALTER DATABASE ... RENAME is not supported

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-22?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-22:
--
Affects Version/s: 2.0.0

> ALTER DATABASE ... RENAME is not supported
> --
>
> Key: HAWQ-22
> URL: https://issues.apache.org/jira/browse/HAWQ-22
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: DDL
>Affects Versions: 2.0.0
>Reporter: Caleb Welton
>Assignee: Lei Chang
>
> Currently when you try to rename a database you get the following error 
> message.
> {noformat}
> sql> ALTER DATABASE test RENAME TO test2;
> ERROR:  Cannot support rename database statement yet
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-22) ALTER DATABASE ... RENAME is not supported

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-22?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-22:
--
Fix Version/s: backlog

> ALTER DATABASE ... RENAME is not supported
> --
>
> Key: HAWQ-22
> URL: https://issues.apache.org/jira/browse/HAWQ-22
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: DDL
>Affects Versions: 2.0.0
>Reporter: Caleb Welton
>Assignee: Lei Chang
> Fix For: backlog
>
>
> Currently when you try to rename a database you get the following error 
> message.
> {noformat}
> sql> ALTER DATABASE test RENAME TO test2;
> ERROR:  Cannot support rename database statement yet
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-27) filespace created in same directory cause problems

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-27?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-27:
--
Fix Version/s: backlog

> filespace created in same directory cause problems
> --
>
> Key: HAWQ-27
> URL: https://issues.apache.org/jira/browse/HAWQ-27
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Storage
>Reporter: Dong Li
>Assignee: Lirong Jian
> Fix For: backlog
>
>
> --
> -- if your hdfs port is 9000, use localhost:9000 to run the test
> --
> create FILESPACE fs1 ON hdfs ('localhost:8020/fs');
> create FILESPACE fs2 ON hdfs ('localhost:8020/fs');
> create tablespace tsinfs1 filespace fs1;
> create table a (i int) tablespace tsinfs1;
> insert into a VALUE (1);
> drop filespace fs2;
> select * from a;
> ERROR:  Append-Only Storage Read could not open segment file 
> 'hdfs://localhost:8020/testfs/17201/17198/17203/1' for relation 'a'  (seg0 
> localhost:4 pid=25656)
> DETAIL:
> File does not exist: /testfs/17201/17198/17203/1
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:68)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:58)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1895)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1836)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1816)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1788)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:543)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:364)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
> The directory fs was removed, so the table's data no longer exists.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-25) Add resource queue new ddl statement implementation

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-25?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-25:
--
Fix Version/s: 2.0.0

> Add resource queue new ddl statement implementation
> ---
>
> Key: HAWQ-25
> URL: https://issues.apache.org/jira/browse/HAWQ-25
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Resource Manager
>Reporter: Yi Jin
>Assignee: Yi Jin
>  Labels: features
> Fix For: 2.0.0
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> This is a big code merge adding recent code work from the old hawq 
> repository. This merge mainly includes:
> 1) implementing new CREATE/ALTER RESOURCE QUEUE attributes;
> 2) refining some GUC variable names;
> 3) using the new libyarn lib with Kerberos support.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-25) Add resource queue new ddl statement implementation

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-25?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang closed HAWQ-25.
-
Resolution: Fixed

> Add resource queue new ddl statement implementation
> ---
>
> Key: HAWQ-25
> URL: https://issues.apache.org/jira/browse/HAWQ-25
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Resource Manager
>Reporter: Yi Jin
>Assignee: Yi Jin
>  Labels: features
> Fix For: 2.0.0
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> This is a big code merge adding recent code work from the old hawq 
> repository. This merge mainly includes:
> 1) implementing new CREATE/ALTER RESOURCE QUEUE attributes;
> 2) refining some GUC variable names;
> 3) using the new libyarn lib with Kerberos support.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-19) Money type overflow

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-19?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-19:
--
Fix Version/s: backlog

> Money type overflow
> ---
>
> Key: HAWQ-19
> URL: https://issues.apache.org/jira/browse/HAWQ-19
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Catalog
>Reporter: Feng Tian
>Assignee: Lei Chang
> Fix For: backlog
>
>
> Use the TPC-H schema, but change l_extendedprice to use the MONEY type and 
> run Q1; you should see negative amounts.
> I believe this is due to overflow.
> Side note: the Postgres 9 money type uses 8 bytes and returns the correct 
> result.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-24) Support superuser to GRANT/REVOKE CREATION privilege to/from non-superuser on TABLESPACE

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-24?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-24:
--
Fix Version/s: backlog

> Support superuser to GRANT/REVOKE CREATION privilege to/from non-superuser on 
> TABLESPACE
> 
>
> Key: HAWQ-24
> URL: https://issues.apache.org/jira/browse/HAWQ-24
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: DDL, Storage
>Reporter: Ruilong Huo
>Assignee: Lei Chang
> Fix For: backlog
>
>
> HAWQ raises the error "Cannot support GRANT/REVOKE on TABLESPACE statement" 
> when following the HAWQ guide 
> (http://hawq.docs.pivotal.io/docs-gpdb/admin_guide/ddl/ddl-tablespace.html) 
> to GRANT/REVOKE the CREATE privilege to/from a non-superuser on a TABLESPACE.
> {code}
> gpadmin=# GRANT CREATE ON TABLESPACE fstbs TO tstuser;
> ERROR:  Cannot support GRANT/REVOKE on TABLESPACE statement
> {code}
> As a consequence, a SUPERUSER can create tables on top of the tablespace, 
> but a NOSUPERUSER cannot:
> {code}
> tstuser=> CREATE TABLE testfs3 ( col01 INTEGER ) TABLESPACE fstbs;
> NOTICE:  Table doesn't have 'DISTRIBUTED BY' clause -- Using column named 
> 'col01' as the Greenplum Database data distribution key for this table.
> HINT:  The 'DISTRIBUTED BY' clause determines the distribution of data. Make 
> sure column(s) chosen are the optimal data distribution key to minimize skew.
> ERROR:  permission denied for tablespace fstbs
> {code}
>  
> {code}
> gpadmin=# alter user tstuser with superuser;
> ALTER ROLE
> [gpadmin@ai2hdm1 ~]$ psql -d tstuser -U tstuser
> Password for user tstuser: 
> psql (8.2.15)
> Type "help" for help.
> tstuser=# CREATE TABLE testfs3 ( col01 INTEGER ) TABLESPACE fstbs;
> NOTICE:  Table doesn't have 'DISTRIBUTED BY' clause -- Using column named 
> 'col01' as the Greenplum Database data distribution key for this table.
> HINT:  The 'DISTRIBUTED BY' clause determines the distribution of data. Make 
> sure column(s) chosen are the optimal data distribution key to minimize skew.
> CREATE TABLE
> {code}
> Due to security considerations, it is not acceptable for some HAWQ users to 
> always use a SUPERUSER for tablespace operations. Thus, we need to support:
> 1. A superuser can GRANT/REVOKE the CREATE privilege to/from a 
> non-superuser on a TABLESPACE.
> 2. A non-superuser can create objects in a TABLESPACE once granted the 
> CREATE privilege.
> 3. A non-superuser can GRANT/REVOKE the CREATE privilege on a TABLESPACE to 
> other users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-52) filespace can be created in invalid hdfs port

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-52?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-52:
--
Fix Version/s: backlog

> filespace can be created in invalid hdfs port
> -
>
> Key: HAWQ-52
> URL: https://issues.apache.org/jira/browse/HAWQ-52
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Storage
>Reporter: Dong Li
>Assignee: Lirong Jian
> Fix For: backlog
>
>
> A filespace can be created on an invalid HDFS port; the port is not checked, 
> nor is it verified that the filespace was created successfully. It is 
> actually not created in HDFS and cannot be used.
> create filespace fsinvalid on hdfs ('localhost:10086/fsinvalid');
> CREATE FILESPACE
> create TABLESPACE tsinvalid  FILESPACE fsinvalid;
> WARNING:  could not remove tablespace directory 17464: Input/output error
> CONTEXT:  Dropping file-system object -- Tablespace Directory: '17464'
> ERROR:  could not create tablespace directory 17464: Input/output error
> select * from gp_persistent_filespace_node;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-54) TID for persistent 'Relation File...' tuple is invalid

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-54?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-54:
--
Assignee: Ming LI  (was: Lirong Jian)

> TID for persistent 'Relation File...' tuple is invalid
> --
>
> Key: HAWQ-54
> URL: https://issues.apache.org/jira/browse/HAWQ-54
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Catalog
>Reporter: Goden Yao
>Assignee: Ming LI
> Fix For: backlog
>
>
> Inserting `\c -- Fresh session`:
> ./cdb-pg/src/test/regress/input/external_oid.source
> {code:sql}
>  57 -- --
>  58 -- Create a tuple with Oid larger than FirstExternalObjectId (4293918720)
>  59 -- --
>  60 SELECT caql_insert_into_heap_pg_class(4293918750, 'table_xl');
>  61
>  62 \c -- Fresh session
>  63 -- NextExternalObjectId is uninitialized
>  64 SELECT next_external_oid();
> {code}
> This causes the test to fail (as expected, since as I later found out a 
> '--' comment does not work with \c), AND puts database 'regression' in a 
> bad state (not expected). When I try to run subsequent tests that attempt 
> to drop database 'regression', the following error prints out:
> {code:sql}
> == dropping database "regression" ==
> ERROR:  TID for persistent 'Relation File: '131072/54992/167 (segment file 
> #0)'' tuple is invalid (0,0) (index 0, transaction kind 'Commit') 
> (persistentendxactrec.c:249)
> command failed: "/home/gpadmin/greenplum-db-devel/bin/psql" -X -c "DROP 
> DATABASE IF EXISTS \"regression\"" "postgres"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-147) Create Parquet table and insert data in template1 would cause CREATE DATABASE fail.

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-147:
---
Fix Version/s: 2.0.0

> Create Parquet table and insert data in template1 would cause CREATE DATABASE 
> fail.
> ---
>
> Key: HAWQ-147
> URL: https://issues.apache.org/jira/browse/HAWQ-147
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Dong Li
>Assignee: Zhanwei Wang
> Fix For: 2.0.0
>
>
> Following are the steps to reproduce this issue:
> $ psql -d template1
> psql (8.2.15)
> Type "help" for help.
> template1=# CREATE TABLE foo (
> template1(# a INT)
> template1-# WITH (appendonly=true, orientation=parquet);
> CREATE TABLE
> template1=# INSERT INTO foo VALUES(1);
> INSERT 0 1
> template1=# CREATE DATABASE gptest;
> ERROR:  Append-Only relation 'foo' gp_relation_node entry for segment file #1 
> without an aoseg entry (case #2) (cdbdatabaseinfo.c:1554)
> template1=# 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-54) TID for persistent 'Relation File...' tuple is invalid

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-54?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-54:
--
Fix Version/s: backlog

> TID for persistent 'Relation File...' tuple is invalid
> --
>
> Key: HAWQ-54
> URL: https://issues.apache.org/jira/browse/HAWQ-54
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Catalog
>Reporter: Goden Yao
>Assignee: Ming LI
> Fix For: backlog
>
>
> Inserting `\c -- Fresh session`:
> ./cdb-pg/src/test/regress/input/external_oid.source
> {code:sql}
>  57 -- --
>  58 -- Create a tuple with Oid larger than FirstExternalObjectId (4293918720)
>  59 -- --
>  60 SELECT caql_insert_into_heap_pg_class(4293918750, 'table_xl');
>  61
>  62 \c -- Fresh session
>  63 -- NextExternalObjectId is uninitialized
>  64 SELECT next_external_oid();
> {code}
> This causes the test to fail (as expected, since as I later found out a 
> '--' comment does not work with \c), AND puts database 'regression' in a 
> bad state (not expected). When I try to run subsequent tests that attempt 
> to drop database 'regression', the following error prints out:
> {code:sql}
> == dropping database "regression" ==
> ERROR:  TID for persistent 'Relation File: '131072/54992/167 (segment file 
> #0)'' tuple is invalid (0,0) (index 0, transaction kind 'Commit') 
> (persistentendxactrec.c:249)
> command failed: "/home/gpadmin/greenplum-db-devel/bin/psql" -X -c "DROP 
> DATABASE IF EXISTS \"regression\"" "postgres"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-147) Create Parquet table and insert data in template1 would cause CREATE DATABASE fail.

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-147:
---
Assignee: Zhanwei Wang  (was: Lirong Jian)

> Create Parquet table and insert data in template1 would cause CREATE DATABASE 
> fail.
> ---
>
> Key: HAWQ-147
> URL: https://issues.apache.org/jira/browse/HAWQ-147
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Dong Li
>Assignee: Zhanwei Wang
> Fix For: 2.0.0
>
>
> Following are the steps to reproduce this issue:
> $ psql -d template1
> psql (8.2.15)
> Type "help" for help.
> template1=# CREATE TABLE foo (
> template1(# a INT)
> template1-# WITH (appendonly=true, orientation=parquet);
> CREATE TABLE
> template1=# INSERT INTO foo VALUES(1);
> INSERT 0 1
> template1=# CREATE DATABASE gptest;
> ERROR:  Append-Only relation 'foo' gp_relation_node entry for segment file #1 
> without an aoseg entry (case #2) (cdbdatabaseinfo.c:1554)
> template1=# 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-20) Error running analyzedb

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-20?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-20:
--
Assignee: Radar Lei  (was: Lei Chang)

> Error running analyzedb
> ---
>
> Key: HAWQ-20
> URL: https://issues.apache.org/jira/browse/HAWQ-20
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Caleb Welton
>Assignee: Radar Lei
>Priority: Critical
> Fix For: 2.0.0
>
>
> Reported by an early alpha tester:
> {quote}
> I noticed analyzedb didn't work because HAWQ 2.0 doesn't set the 
> MASTER_DATA_DIRECTORY.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-171) Upgrade PXF to Java 8

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-171:
---
Fix Version/s: 2.0.0

> Upgrade PXF to Java 8
> -
>
> Key: HAWQ-171
> URL: https://issues.apache.org/jira/browse/HAWQ-171
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Shivram Mani
> Fix For: 2.0.0
>
>
> JDK 8 has introduced a slew of new features, including support for a Stream 
> API, lambda expressions, and security and performance improvements. Updating 
> PXF to use JDK 8 would allow us to benefit from these improvements.
> Overview of the changes introduced in JDK 8: 
> http://www.oracle.com/technetwork/java/javase/8-whats-new-2157071.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-150) External tables can be designated for both READ and WRITE

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-150:
---
Component/s: (was: PXF)

> External tables can be designated for both READ and WRITE
> -
>
> Key: HAWQ-150
> URL: https://issues.apache.org/jira/browse/HAWQ-150
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: External Tables
>Reporter: C.J. Jameson
>Assignee: Lei Chang
> Fix For: 3.0.0
>
>
> Currently, external tables are either read-only or write-only when they are 
> created. We could support an external table with the capability for both 
> reads and writes.
> As pointed out by hawqst...@163.com



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-150) External tables can be designated for both READ and WRITE

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-150:
---
Fix Version/s: 3.0.0

> External tables can be designated for both READ and WRITE
> -
>
> Key: HAWQ-150
> URL: https://issues.apache.org/jira/browse/HAWQ-150
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: External Tables
>Reporter: C.J. Jameson
>Assignee: Lei Chang
> Fix For: 3.0.0
>
>
> Currently, external tables are either read-only or write-only when they are 
> created. We could support an external table with the capability for both 
> reads and writes.
> As pointed out by hawqst...@163.com



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-176) REORGANIZE parameter is useless when changing distribution policy from hash to random

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-176:
---
Fix Version/s: backlog

> REORGANIZE parameter is useless when changing distribution policy from hash 
> to random 
> ---
>
> Key: HAWQ-176
> URL: https://issues.apache.org/jira/browse/HAWQ-176
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Storage
>Reporter: Dong Li
>Assignee: Lei Chang
> Fix For: backlog
>
>
> When changing the distribution policy from hash to random with REORGANIZE=true, 
> the data distribution is not reorganized.
> Run commands as follows.
> {code}
> set default_segment_num=2;
> create table testreorg( i int , j int ,q int) distributed by (q);
> insert into testreorg VALUES (1,1,1);
> insert into testreorg VALUES (1,2,1);
> insert into testreorg VALUES (2,3,1);
> insert into testreorg VALUES (2,4,1);
> insert into testreorg VALUES (2,5,1);
> {code}
> gpadmin=# select relfilenode from pg_class where relname='testreorg';
>  relfilenode
> -
>16840
> (1 row)
> gpadmin=# select * from pg_aoseg.pg_aoseg_16840;
>  segno | eof | tupcount | varblockcount | eofuncompressed | content
> ---+-+--+---+-+-
>  2 |   0 |0 | 0 |   0 |  -1
>  1 | 160 |5 | 5 | 160 |  -1
> (2 rows)
> {code}
> alter TABLE testreorg set with (REORGANIZE=true) DISTRIBUTED randomly;
> {code}
> gpadmin=# select relfilenode from pg_class where relname='testreorg';
>  relfilenode
> -
>16845
> (1 row)
> gpadmin=# select * from pg_aoseg.pg_aoseg_16845;
>  segno | eof | tupcount | varblockcount | eofuncompressed | content
> ---+-+--+---+-+-
>  2 |   0 |0 | 0 |   0 |  -1
>  1 | 120 |5 | 1 | 120 |  -1
> (2 rows)
> The aoseg file is changed, but the data distribution has not changed.
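> A possible workaround (an untested sketch, reusing the table from the repro 
> above): rebuild the table with CTAS instead of relying on REORGANIZE.
> {code}
> create table testreorg_new as select * from testreorg distributed randomly;
> drop table testreorg;
> alter table testreorg_new rename to testreorg;
> {code}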



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-150) External tables can be designated for both READ and WRITE

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-150:
---
Assignee: Lei Chang  (was: Goden Yao)

> External tables can be designated for both READ and WRITE
> -
>
> Key: HAWQ-150
> URL: https://issues.apache.org/jira/browse/HAWQ-150
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: External Tables
>Reporter: C.J. Jameson
>Assignee: Lei Chang
> Fix For: 3.0.0
>
>
> Currently, external tables are either read-only or write-only when they are 
> created. We could support an external table with the capability for both 
> reads and writes.
> As pointed out by hawqst...@163.com



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-144) Build HAWQ on MacOS

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-144:
---
Fix Version/s: 2.1.0

> Build HAWQ on MacOS
> ---
>
> Key: HAWQ-144
> URL: https://issues.apache.org/jira/browse/HAWQ-144
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Build
>Reporter: Lei Chang
>Assignee: Lei Chang
> Fix For: 2.1.0
>
>
> Currently, the only tested build platform for HAWQ is RedHat 6.x. It would be 
> very nice if it could work on Mac with clang. This would make new contributions 
> much easier.
> Instructions on building HAWQ on Linux are at: 
> https://github.com/apache/incubator-hawq/blob/master/BUILD_INSTRUCTIONS.md



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-181) Collect advanced statistics for Hive plugins

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-181:
---
Fix Version/s: backlog

> Collect advanced statistics for Hive plugins
> 
>
> Key: HAWQ-181
> URL: https://issues.apache.org/jira/browse/HAWQ-181
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Noa Horn
>Assignee: Goden Yao
> Fix For: backlog
>
>
> Implement getFragmentsStats in Hive's fragmenters (HiveDataFragmenter and 
> HiveInputFormatFragmenter).
> As a result, when running ANALYZE on PXF tables with the Hive profile, advanced 
> statistics will be collected for those tables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-231) Altering a table by dropping all of its columns leads to some interesting problems

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-231:
---
Fix Version/s: backlog

> Altering a table by dropping all of its columns leads to some interesting problems
> 
>
> Key: HAWQ-231
> URL: https://issues.apache.org/jira/browse/HAWQ-231
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Storage
>Reporter: Dong Li
>Assignee: Lei Chang
> Fix For: backlog
>
>
> It is a design/behavior question.
> When we drop all the columns, should the table be truncated?
> Otherwise, does the count of invisible rows have meaning?
> You cannot see anything, but it shows that there are 1000 rows.
> I know that from the storage and design points of view it is OK,
> but from the user's point of view it may be puzzling.
> {code}
> intern=# create table alterall (i int, j int);
> CREATE TABLE
> intern=# insert into alterall VALUES 
> (generate_series(1,1000),generate_series(1,2));
> INSERT 0 1000
> intern=# alter table alterall drop COLUMN i;
> ALTER TABLE
> intern=# alter TABLE alterall drop COLUMN j;
> ALTER TABLE
> intern=# select * from alterall ;
> --
> (1000 rows)
> intern=# alter TABLE alterall add column k int default 3;
> ALTER TABLE
> intern=# select * from alterall;
>  k
> ---
>  3
>  3
>  3
>  3
>  3
>  3
>  3
> ...
> (1000 rows)
> {code}
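> For illustration, the invisible rows can still be counted even while no 
> original column is selectable (a sketch based on the repro above):
> {code}
> select count(*) from alterall;  -- presumably returns 1000, matching the row count shown above
> {code}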



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-235) HAWQ init reports error messages on CentOS 7

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-235:
---
Fix Version/s: 2.1.0

> HAWQ init reports error messages on CentOS 7
> -
>
> Key: HAWQ-235
> URL: https://issues.apache.org/jira/browse/HAWQ-235
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Zhanwei Wang
>Assignee: Radar Lei
> Fix For: 2.1.0
>
>
> {code}
> [gpadmin@centos7-namenode hawq-devel]$ hawq init cluster
> 20151209:03:02:07:000292 hawq_init:centos7-namenode:gpadmin-[INFO]:-Prepare 
> to do 'hawq init'
> 20151209:03:02:07:000292 hawq_init:centos7-namenode:gpadmin-[INFO]:-You can 
> check log in /home/gpadmin/hawqAdminLogs/hawq_init_20151209.log
> 20151209:03:02:07:000292 hawq_init:centos7-namenode:gpadmin-[INFO]:-Init hawq 
> with args: ['init', 'cluster']
> Continue with HAWQ init Yy|Nn (default=N):
> > y
> 20151209:03:02:08:000292 hawq_init:centos7-namenode:gpadmin-[INFO]:-Check if 
> hdfs path is available
> 20151209:03:02:08:000292 
> hawq_init:centos7-namenode:gpadmin-[WARNING]:-WARNING:'hdfs://centos7-namenode:8020/hawq_default'
>  does not exist, create it ...
> 20151209:03:02:08:000292 hawq_init:centos7-namenode:gpadmin-[INFO]:-3 segment 
> hosts defined
> 20151209:03:02:08:000292 hawq_init:centos7-namenode:gpadmin-[INFO]:-Set 
> default_segment_num as: 24
> The authenticity of host 'centos7-datanode1 (172.17.0.85)' can't be 
> established.
> ECDSA key fingerprint is 51:84:fc:86:2c:d3:30:0b:06:ac:49:f4:a8:5d:e1:bd.
> Are you sure you want to continue connecting (yes/no)? yes
> The authenticity of host 'centos7-datanode2 (172.17.0.86)' can't be 
> established.
> ECDSA key fingerprint is 51:84:fc:86:2c:d3:30:0b:06:ac:49:f4:a8:5d:e1:bd.
> Are you sure you want to continue connecting (yes/no)? yes
> The authenticity of host 'centos7-datanode3 (172.17.0.87)' can't be 
> established.
> ECDSA key fingerprint is 51:84:fc:86:2c:d3:30:0b:06:ac:49:f4:a8:5d:e1:bd.
> Are you sure you want to continue connecting (yes/no)? yes
> The authenticity of host 'centos7-namenode (172.17.0.84)' can't be 
> established.
> ECDSA key fingerprint is 51:84:fc:86:2c:d3:30:0b:06:ac:49:f4:a8:5d:e1:bd.
> Are you sure you want to continue connecting (yes/no)? yes
> 20151209:03:02:15:000292 hawq_init:centos7-namenode:gpadmin-[INFO]:-Start to 
> init master node: 'centos7-namenode'
> 20151209:03:02:23:000292 hawq_init:centos7-namenode:gpadmin-[INFO]:-Master 
> init successfully
> 20151209:03:02:23:000292 hawq_init:centos7-namenode:gpadmin-[INFO]:-Init 
> segments in list: ['centos7-datanode1', 'centos7-datanode2', 
> 'centos7-datanode3']
> .20151209:03:02:32:000292 
> hawq_init:centos7-namenode:gpadmin-[INFO]:-/data/hawq-devel/bin/lib/hawq_bash_functions.sh:
>  line 59: return: Problem in hawq_bash_functions, command 'ifconfig' not 
> found in COMMAND path. You will need to edit the script named 
> hawq_bash_functions.sh to properly locate the needed commands 
> for your platform.: numeric argument required
> /data/hawq-devel/bin/lib/hawq_bash_functions.sh: line 59: return: Problem in 
> hawq_bash_functions, command 'netstat' not found in COMMAND path. 
> You will need to edit the script named hawq_bash_functions.sh to properly 
> locate the needed commands for your platform.: numeric 
> argument required
> Host key verification failed.
> /data/hawq-devel/bin/lib/hawqinit.sh: line 72: ifconfig: command not found
> 20151209:03:02:32:000292 
> hawq_init:centos7-namenode:gpadmin-[INFO]:-/data/hawq-devel/bin/lib/hawq_bash_functions.sh:
>  line 59: return: Problem in hawq_bash_functions, command 'ifconfig' not 
> found in COMMAND path. You will need to edit the script named 
> hawq_bash_functions.sh to properly locate the needed commands 
> for your platform.: numeric argument required
> /data/hawq-devel/bin/lib/hawq_bash_functions.sh: line 59: return: Problem in 
> hawq_bash_functions, command 'netstat' not found in COMMAND path. 
> You will need to edit the script named hawq_bash_functions.sh to properly 
> locate the needed commands for your platform.: numeric 
> argument required
> Host key verification failed.
> /data/hawq-devel/bin/lib/hawqinit.sh: line 72: ifconfig: command not found
> 20151209:03:02:32:000292 
> hawq_init:centos7-namenode:gpadmin-[INFO]:-/data/hawq-devel/bin/lib/hawq_bash_functions.sh:
>  line 59: return: Problem in hawq_bash_functions, command 'ifconfig' not 
> found in COMMAND path. You will need to edit the script named 
> hawq_bash_functions.sh to properly locate the needed commands 
> for your platform.: numeric argument required
> /data/hawq-devel/bin/lib/hawq_bash_functions.sh: line

[jira] [Updated] (HAWQ-240) Set Progress to 50% When Returning Resource

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-240:
---
Fix Version/s: (was: backlog)
   2.0.0

> Set Progress to 50% When Returning Resource
> 
>
> Key: HAWQ-240
> URL: https://issues.apache.org/jira/browse/HAWQ-240
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: libyarn
>Reporter: Lin Wen
>Assignee: Lin Wen
> Fix For: 2.0.0
>
>
> In the current implementation, when returning resources to Hadoop YARN, the 
> progress of HAWQ becomes 100%.
> HAWQ with YARN is an unmanaged AM and a long-running application, so the 
> progress should stay at 50% by design.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-240) Set Progress to 50% When Returning Resource

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-240:
---
Fix Version/s: backlog

> Set Progress to 50% When Returning Resource
> 
>
> Key: HAWQ-240
> URL: https://issues.apache.org/jira/browse/HAWQ-240
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: libyarn
>Reporter: Lin Wen
>Assignee: Lei Chang
> Fix For: 2.0.0
>
>
> In the current implementation, when returning resources to Hadoop YARN, the 
> progress of HAWQ becomes 100%.
> HAWQ with YARN is an unmanaged AM and a long-running application, so the 
> progress should stay at 50% by design.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-182) Collect advanced statistics for HBase plugin

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-182:
---
Fix Version/s: backlog

> Collect advanced statistics for HBase plugin
> 
>
> Key: HAWQ-182
> URL: https://issues.apache.org/jira/browse/HAWQ-182
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Noa Horn
>Assignee: Goden Yao
> Fix For: backlog
>
>
> Implement getFragmentsStats in HBase's fragmenter (HBaseDataFragmenter).
> As a result, when running ANALYZE on PXF tables with the HBase profile, advanced 
> statistics will be collected for those tables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-240) Set Progress to 50% When Returning Resource

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-240:
---
Assignee: Lin Wen  (was: Lei Chang)

> Set Progress to 50% When Returning Resource
> 
>
> Key: HAWQ-240
> URL: https://issues.apache.org/jira/browse/HAWQ-240
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: libyarn
>Reporter: Lin Wen
>Assignee: Lin Wen
> Fix For: 2.0.0
>
>
> In the current implementation, when returning resources to Hadoop YARN, the 
> progress of HAWQ becomes 100%.
> HAWQ with YARN is an unmanaged AM and a long-running application, so the 
> progress should stay at 50% by design.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-282) Refine reject limit check and error table handling in external table and COPY

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-282:
---
Fix Version/s: backlog

> Refine reject limit check and error table handling in external table and 
> COPY
> ---
>
> Key: HAWQ-282
> URL: https://issues.apache.org/jira/browse/HAWQ-282
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: External Tables, Storage
>Affects Versions: 2.0.0-beta-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: backlog
>
>
> Macros are used to implement the reject limit check and error table handling in 
> external tables and COPY. It would be better to refactor them into inline 
> functions, or plain functions, to improve readability.
> The related macros include:
> 1. src/backend/access/external/fileam.c
> {noformat}
> EXT_RESET_LINEBUF
> FILEAM_HANDLE_ERROR
> CSV_IS_UNPARSABLE
> FILEAM_IF_REJECT_LIMIT_REACHED_ABORT
> {noformat}
> 2. src/backend/commands/copy.c
> {noformat}
> RESET_LINEBUF
> COPY_HANDLE_ERROR
> QD_GOTO_NEXT_ROW
> QE_GOTO_NEXT_ROW
> CSV_IS_UNPARSABLE
> IF_REJECT_LIMIT_REACHED_ABORT
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-165) PXF loggers should all be private static final

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-165:
---
Fix Version/s: 2.0.0

> PXF loggers should all be private static final
> --
>
> Key: HAWQ-165
> URL: https://issues.apache.org/jira/browse/HAWQ-165
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Noa Horn
>Assignee: Noa Horn
> Fix For: 2.0.0
>
>
> PXF uses org.apache.commons.logging.Log as its logging mechanism.
> In some classes the logger is initialized as a private variable, in others as 
> static. We should consolidate all of the loggers to be private static final.
> e.g. 
> {noformat}
> private static final Log LOG = LogFactory.getLog(ReadBridge.class);
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-99) OpenSSL 0.9.x to 1.x upgrade

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-99?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-99:
--
Fix Version/s: backlog

> OpenSSL 0.9.x to 1.x upgrade
> 
>
> Key: HAWQ-99
> URL: https://issues.apache.org/jira/browse/HAWQ-99
> Project: Apache HAWQ
>  Issue Type: Improvement
>Reporter: Goden Yao
>Assignee: Lei Chang
> Fix For: backlog
>
>
> The 0.9.x product line will be deprecated by the end of 2015.
> We need to move to the new 1.x product line.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-215) View gp_distributed_log and gp_distributed_xacts need to be removed if we don't want to support them anymore.

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-215:
---
Fix Version/s: 2.0.0

> View gp_distributed_log and gp_distributed_xacts need to be removed if we 
> don't want to support them anymore.
> ---
>
> Key: HAWQ-215
> URL: https://issues.apache.org/jira/browse/HAWQ-215
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Catalog
>Reporter: Dong Li
>Assignee: Lei Chang
> Fix For: 2.0.0
>
>
> View gp_distributed_log depends on the built-in function gp_distributed_log(), 
> and gp_distributed_log() just returns null, so the view can't work at all.
> The same is true of view gp_distributed_xacts.
> {code}
> e=# select * from gp_distributed_log;
> ERROR:  function returning set of rows cannot return null value
> e=# select * from gp_distributed_xacts;
> ERROR:  function returning set of rows cannot return null value
> {code}
> Function gp_distributed_log is defined in gp_distributed_log.c:27;
> function gp_distributed_xacts is defined in cdbdistributedxacts.c:44.
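> If we do remove them, a minimal cleanup sketch (assuming nothing else depends 
> on these views; as system views they may need allow_system_table_mods or a 
> catalog change instead) would be:
> {code}
> DROP VIEW IF EXISTS gp_distributed_log;
> DROP VIEW IF EXISTS gp_distributed_xacts;
> {code}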



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-214) Built-in functions for gp_partition cause core dumps.

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-214:
---
Assignee: Ming LI  (was: Lei Chang)

> Built-in functions for gp_partition cause core dumps.
> -
>
> Key: HAWQ-214
> URL: https://issues.apache.org/jira/browse/HAWQ-214
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Unknown
>Reporter: Dong Li
>Assignee: Ming LI
> Fix For: 2.0.0
>
>
> There are four built-in functions for gp_partition, and all of them 
> cause a core dump:
> gp_partition_expansion
> gp_partition_inverse
> gp_partition_propagation
> gp_partition_selection
> {code}
> create table pt_table(a int, b int) distributed by (a) partition by range(b) 
> (default partition others,start(1) end(100) every(10));
> {code}
> {code}
> e=# select pg_catalog.gp_partition_selection(16550,1);
> FATAL:  Unexpected internal error (gp_partition_functions.c:197)
> DETAIL:  FailedAssertion("!(dynamicTableScanInfo != ((void *)0))", File: 
> "gp_partition_functions.c", Line: 197)
> HINT:  Process 22247 will wait for gp_debug_linger=120 seconds before 
> termination.
> Note that its locks and other resources will not be released until then.
> server closed the connection unexpectedly
>   This probably means the server terminated abnormally
>   before or while processing the request.
> The connection to the server was lost. Attempting reset: Succeeded.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-214) Built-in functions for gp_partition cause core dumps.

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-214:
---
Fix Version/s: 2.0.0

> Built-in functions for gp_partition cause core dumps.
> -
>
> Key: HAWQ-214
> URL: https://issues.apache.org/jira/browse/HAWQ-214
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Unknown
>Reporter: Dong Li
>Assignee: Lei Chang
> Fix For: 2.0.0
>
>
> There are four built-in functions for gp_partition, and all of them 
> cause a core dump:
> gp_partition_expansion
> gp_partition_inverse
> gp_partition_propagation
> gp_partition_selection
> {code}
> create table pt_table(a int, b int) distributed by (a) partition by range(b) 
> (default partition others,start(1) end(100) every(10));
> {code}
> {code}
> e=# select pg_catalog.gp_partition_selection(16550,1);
> FATAL:  Unexpected internal error (gp_partition_functions.c:197)
> DETAIL:  FailedAssertion("!(dynamicTableScanInfo != ((void *)0))", File: 
> "gp_partition_functions.c", Line: 197)
> HINT:  Process 22247 will wait for gp_debug_linger=120 seconds before 
> termination.
> Note that its locks and other resources will not be released until then.
> server closed the connection unexpectedly
>   This probably means the server terminated abnormally
>   before or while processing the request.
> The connection to the server was lost. Attempting reset: Succeeded.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-215) View gp_distributed_log and gp_distributed_xacts need to be removed if we don't want to support them anymore.

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-215:
---
Assignee: Ming LI  (was: Lei Chang)

> View gp_distributed_log and gp_distributed_xacts need to be removed if we 
> don't want to support them anymore.
> ---
>
> Key: HAWQ-215
> URL: https://issues.apache.org/jira/browse/HAWQ-215
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Catalog
>Reporter: Dong Li
>Assignee: Ming LI
> Fix For: 2.0.0
>
>
> View gp_distributed_log depends on the built-in function gp_distributed_log(), 
> and gp_distributed_log() just returns null, so the view can't work at all.
> The same is true of view gp_distributed_xacts.
> {code}
> e=# select * from gp_distributed_log;
> ERROR:  function returning set of rows cannot return null value
> e=# select * from gp_distributed_xacts;
> ERROR:  function returning set of rows cannot return null value
> {code}
> Function gp_distributed_log is defined in gp_distributed_log.c:27;
> function gp_distributed_xacts is defined in cdbdistributedxacts.c:44.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-337) Support Label Based Scheduling in Libyarn and RM

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-337:
---
Fix Version/s: backlog

> Support Label Based Scheduling in Libyarn and RM
> 
>
> Key: HAWQ-337
> URL: https://issues.apache.org/jira/browse/HAWQ-337
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: libyarn, Resource Manager
>Reporter: Lin Wen
>Assignee: Lei Chang
> Fix For: backlog
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-206) Use the DELIMITER in FORMAT for External Table DDL Creation

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-206:
---
Fix Version/s: backlog

> Use the DELIMITER in FORMAT for External Table DDL Creation
> ---
>
> Key: HAWQ-206
> URL: https://issues.apache.org/jira/browse/HAWQ-206
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Goden Yao
>Assignee: Goden Yao
> Fix For: backlog
>
>
> As a HAWQ user, I should be able to:
> avoid typing the delimiter twice in DDL when using the HiveRC/Text profiles via 
> PXF.
> *Background*
> Currently the user has to specify the same delimiter twice in the DDL (in the 
> case of HiveRC/Text profiles):
> {code}
> ...
> location(E'pxf://...&delimiter=\x01') FORMAT TEXT (delimiter = E'\x01');
> {code}
> It would be really helpful if we could use the delimiter provided in the TEXT 
> format clause; this would reduce error-prone DDL grammar and duplication.
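> A sketch of the proposed DDL (hypothetical syntax, not what HAWQ accepts 
> today), with the delimiter written once and propagated to PXF:
> {code}
> location(E'pxf://...') FORMAT TEXT (delimiter = E'\x01');
> {code}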



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-335) Cannot query parquet hive table through PXF

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-335:
---
Fix Version/s: backlog

> Cannot query parquet hive table through PXF
> ---
>
> Key: HAWQ-335
> URL: https://issues.apache.org/jira/browse/HAWQ-335
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Affects Versions: 2.0.0-beta-incubating
>Reporter: zharui
>Assignee: Goden Yao
> Fix For: backlog
>
>
> I created an external table in HAWQ for a table that exists in Hive in Parquet 
> format, but I cannot query this table in HAWQ. The segment processes are idle 
> and nothing happens.
> The clause creating the external Hive Parquet table is below:
> {code}
> create external table zc_parquet800_partitioned 
> (
> start_time bigint,
> cdr_id int,
> "offset" int,
> calling varchar(255),
> imsi varchar(255),
> user_ip int,
> tmsi int,
> p_tmsi int,
> imei varchar(255),
> mcc int,
> mnc int,
> lac int,
> rac int,
> cell_id int,
> bsc_ip int,
> opc int,
> dpc int,
> sgsn_sg_ip int,
> ggsn_sg_ip int,
> sgsn_data_ip int,
> ggsn_data_ip int,
> apn varchar(255),
> rat int,
> service_type smallint,
> service_group smallint,
> up_packets int,
> down_packets int,
> up_bytes int,
> down_bytes int,
> up_speed real,
> down_speed real,
> trans_time int,
> first_time timestamp,
> end_time timestamp,
> is_end int,
> user_port int,
> proto_type int,
> dest_ip int,
> dest_port int,
> paging_count smallint,
> assignment_count smallint,
> joiner_id varchar(255),
> operation smallint,
> country smallint,
> loc_prov smallint,
> loc_city smallint,
> roam_prov smallint,
> roam_city smallint,
> sgsn varchar(255),
> bsc_rnc varchar(255),
> terminal_fac smallint,
> terminal_type int,
> terminal_class smallint,
> roaming_type smallint,
> host_operator smallint,
> net_type smallint, 
> time int, 
> calling_hash int) 
> LOCATION ('pxf://ws01.mzhen.cn:51200/zc_parquet800_partitioned?PROFILE=Hive') 
> FORMAT 'custom' (formatter='pxfwritable_import');
> {code}
> The Catalina logs are below:
> {code}
> Jan 13, 2016 11:26:29 AM WARNING: parquet.hadoop.ParquetRecordReader: Can not 
> initialize counter due to context is not a instance of 
> TaskInputOutputContext, but is 
> org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
> Jan 13, 2016 11:26:29 AM INFO: parquet.hadoop.InternalParquetRecordReader: 
> RecordReader initialized will read a total of 1332450 records.
> Jan 13, 2016 11:26:29 AM INFO: parquet.hadoop.InternalParquetRecordReader: at 
> row 0. reading next block
> Jan 13, 2016 11:26:30 AM INFO: parquet.hadoop.InternalParquetRecordReader: 
> block read in memory in 398 ms. row count = 1332450
> Jan 13, 2016 11:26:58 AM WARNING: parquet.hadoop.ParquetRecordReader: Can not 
> initialize counter due to context is not a instance of 
> TaskInputOutputContext, but is 
> org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
> Jan 13, 2016 11:26:58 AM INFO: parquet.hadoop.InternalParquetRecordReader: 
> RecordReader initialized will read a total of 1460760 records.
> Jan 13, 2016 11:26:58 AM INFO: parquet.hadoop.InternalParquetRecordReader: at 
> row 0. reading next block
> Jan 13, 2016 11:26:59 AM INFO: parquet.hadoop.InternalParquetRecordReader: 
> block read in memory in 441 ms. row count = 1460760
> Jan 13, 2016 11:27:34 AM WARNING: parquet.hadoop.ParquetRecordReader: Can not 
> initialize counter due to context is not a instance of 
> TaskInputOutputContext, but is 
> org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
> Jan 13, 2016 11:27:34 AM INFO: parquet.hadoop.InternalParquetRecordReader: 
> RecordReader initialized will read a total of 1396605 records.
> Jan 13, 2016 11:27:34 AM INFO: parquet.hadoop.InternalParquetRecordReader: at 
> row 0. reading next block
> Jan 13, 2016 11:27:34 AM INFO: parquet.hadoop.InternalParquetRecordReader: 
> block read in memory in 367 ms. row count = 1396605
> Jan 13, 2016 11:28:06 AM WARNING: parquet.hadoop.ParquetRecordReader: Can not 
> initialize counter due to context is not a instance of 
> TaskInputOutputContext, but is 
> org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
> Jan 13, 2016 11:28:06 AM INFO: parquet.hadoop.InternalParquetRecordReader: 
> RecordReader initialized will read a total of 1337385 records.
> Jan 13, 2016 11:28:06 AM INFO: parquet.hadoop.InternalParquetRecordReader: at 
> row 0. reading next block
> Jan 13, 2016 11:28:06 AM INFO: parquet.hadoop.InternalParquetRecordReader: 
> block read in memory in 348 ms. row count = 1337385
> Jan 13, 2016 11:28:32 AM WARNING: parquet.hadoop.ParquetRecordReader: Can not 
> initialize counter due to context is not a instance of 
> TaskInputOutputContext, but is 
> org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
> Jan 13, 2016 11:28:32 AM INFO: parquet.hadoop.InternalParquetRecordReader: 
> RecordRea

[jira] [Resolved] (HAWQ-264) Fix Coverity issues

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang resolved HAWQ-264.

   Resolution: Fixed
Fix Version/s: 2.0.0

> Fix Coverity issues
> ---
>
> Key: HAWQ-264
> URL: https://issues.apache.org/jira/browse/HAWQ-264
> Project: Apache HAWQ
>  Issue Type: Task
>Reporter: Entong Shen
>Assignee: Entong Shen
> Fix For: 2.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-253) Separate pxf-hdfs and pxf-hive packages from pxf-service

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-253:
---
Fix Version/s: backlog

> Separate pxf-hdfs and pxf-hive packages from pxf-service
> 
>
> Key: HAWQ-253
> URL: https://issues.apache.org/jira/browse/HAWQ-253
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Noa Horn
>Assignee: Goden Yao
> Fix For: backlog
>
>
> The PXF plugins should only depend on the pxf-api package.
> pxf-service is supposed to be an internal package, not exposed to the plugins.
> Currently both pxf-hdfs and pxf-hive depend on pxf-service, which should be 
> fixed.
> {noformat}
> $ grep -rI "pxf.service" pxf-hdfs/src/main/.
> pxf-hdfs/src/main/./java/org/apache/hawq/pxf/plugins/hdfs/HdfsAnalyzer.java:import
>  org.apache.hawq.pxf.service.ReadBridge;
> pxf-hdfs/src/main/./java/org/apache/hawq/pxf/plugins/hdfs/utilities/HdfsUtilities.java:import
>  org.apache.hawq.pxf.service.utilities.Utilities;
> pxf-hdfs/src/main/./java/org/apache/hawq/pxf/plugins/hdfs/WritableResolver.java:import
>  org.apache.hawq.pxf.service.utilities.Utilities;
> $ grep -rI "pxf.service" pxf-hive/src/main/.
> pxf-hive/src/main/./java/org/apache/hawq/pxf/plugins/hive/HiveColumnarSerdeResolver.java:import
>  org.apache.hawq.pxf.service.utilities.Utilities;
> pxf-hive/src/main/./java/org/apache/hawq/pxf/plugins/hive/HiveResolver.java:import
>  org.apache.hawq.pxf.service.utilities.Utilities;
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-256) Integrate Security with Apache Ranger

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-256:
---
Fix Version/s: backlog

> Integrate Security with Apache Ranger
> -
>
> Key: HAWQ-256
> URL: https://issues.apache.org/jira/browse/HAWQ-256
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF, Security
>Reporter: Michael Andre Pearce (IG)
>Assignee: Lei Chang
> Fix For: backlog
>
>
> Integrate security with Apache Ranger for a unified Hadoop security solution. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-275) After killing a QE of a segment, the QE pool is not updated when dispatching

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-275:
---
Fix Version/s: 2.0.0

> After killing a QE of a segment, the QE pool is not updated when dispatching
> -
>
> Key: HAWQ-275
> URL: https://issues.apache.org/jira/browse/HAWQ-275
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Dispatcher
>Reporter: Dong Li
>Assignee: Lei Chang
> Fix For: 2.0.0
>
>
> When we kill a QE of a segment, the segment restarts all of its child 
> processes and no longer has any QEs. But the master still believes the QEs are 
> cached, and it dispatches these non-existent QEs to handle queries. 
> The biggest problem is that if we had 6 QEs before killing them and want to 
> execute a simple SQL that needs only 2 QEs, it checks and errors 
> three times, and only after that does it order the segment to start QEs.
> {code}
> intern=# insert into b values (2 );
> ERROR:  Query Executor Error in seg4 localhost:4 pid=19024: server closed 
> the connection unexpectedly
> DETAIL:
>   This probably means the server terminated abnormally
>   before or while processing the request.
> intern=# insert into b values (2 );
> ERROR:  Query Executor Error in seg0 localhost:4 pid=19020: server closed 
> the connection unexpectedly
> DETAIL:
>   This probably means the server terminated abnormally
>   before or while processing the request.
> intern=# insert into b values (2 );
> ERROR:  Query Executor Error in seg2 localhost:4 pid=19022: server closed 
> the connection unexpectedly
> DETAIL:
>   This probably means the server terminated abnormally
>   before or while processing the request.
> intern=# insert into b values (2 );
> INSERT 0 1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-299) Extra ";" in udpSignalTimeoutWait

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-299:
---
Fix Version/s: 2.0.0

> Extra ";" in udpSignalTimeoutWait
> -
>
> Key: HAWQ-299
> URL: https://issues.apache.org/jira/browse/HAWQ-299
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Interconnect
>Reporter: Lei Chang
>Assignee: Lei Chang
> Fix For: 2.0.0
>
>
> In function udpSignalTimeoutWait, there is an extra ";" which should be 
> removed.
> {code}
>   if (udpSignalGet(sig));
>   ret = 0;
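>   /* intended form, with the stray ';' removed, so the assignment
>      only runs when the signal was actually received: */
>   if (udpSignalGet(sig))
>       ret = 0;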
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-91) “Out of memory” error when using gpload to load data

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-91?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-91:
--
Fix Version/s: backlog

> “Out of memory” error when using gpload to load data
> --
>
> Key: HAWQ-91
> URL: https://issues.apache.org/jira/browse/HAWQ-91
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: dingyuanpu
>Assignee: Lei Chang
> Fix For: backlog
>
>
> I have some problems with HAWQ. My HAWQ version is 1.3 on HDP 2.2.6, which is 
> on 4 x86 servers (256G memory and 1T hard disk each).
> The detailed information follows:
> I used the gpload tool to load the TPC-DS store_sales.dat (the data is 188G); 
> the errors are:
> 2015-10-27 01:24:51|INFO|gpload session started 2015-10-27 01:24:51
> 2015-10-27 01:24:51|INFO|setting schema 'public' for table 'store_sales'
> 2015-10-27 01:24:52|INFO|started gpfdist -p 8081 -P 8082 -f 
> "tpc500g-data/store_sales_aa_aa_aa" -t 30
> 2015-10-27 01:30:25|ERROR|ERROR:  Out of memory  (seg0 node1.fd.h3c.com:4 
> pid=74456)
> DETAIL:  
> VM Protect failed to allocate 8388608 bytes, 7 MB available
> External table ext_gpload20151027_012451_543181, line N/A of 
> gpfdist://node2:8081/tpc500g-data/store_sales_aa_aa_aa: ""
> encountered while running INSERT INTO public."store_sales" 
> ("ss_sold_date_sk","ss_sold_time_sk","ss_item_sk","ss_customer_sk","ss_cdemo_sk","ss_hdemo_sk","ss_addr_sk","ss_store_sk","ss_promo_sk","ss_ticket_number","ss_quantity","ss_wholesale_cost","ss_list_price","ss_sales_price","ss_ext_discount_amt","ss_ext_sales_price","ss_ext_wholesale_cost","ss_ext_list_price","ss_ext_tax","ss_coupon_amt","ss_net_paid","ss_net_paid_inc_tax","ss_net_profit")
>  SELECT 
> "ss_sold_date_sk","ss_sold_time_sk","ss_item_sk","ss_customer_sk","ss_cdemo_sk","ss_hdemo_sk","ss_addr_sk","ss_store_sk","ss_promo_sk","ss_ticket_number","ss_quantity","ss_wholesale_cost","ss_list_price","ss_sales_price","ss_ext_discount_amt","ss_ext_sales_price","ss_ext_wholesale_cost","ss_ext_list_price","ss_ext_tax","ss_coupon_amt","ss_net_paid","ss_net_paid_inc_tax","ss_net_profit"
>  FROM ext_gpload20151027_012451_543181
> 2015-10-27 01:30:25|INFO|rows Inserted  = 0
> 2015-10-27 01:30:25|INFO|rows Updated   = 0
> 2015-10-27 01:30:25|INFO|data formatting errors = 0
> 2015-10-27 01:30:25|INFO|gpload failed
> I have used the following commands to modify the parameters; the errors still 
> exist:
> gpconfig -c gp_vmem_protect_limit -v 8192MB (I have also tried 
> 4096, 8192, 16384, 32768, 81920, 245760, 262144)
> gpstop -r
> Please help me solve the problem, thanks.
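> For diagnosis, one way to confirm the value actually in effect from a session 
> (a minimal sketch; gp_vmem_protect_limit is the same GUC changed via gpconfig 
> above):
> {code}
> show gp_vmem_protect_limit;
> {code}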



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-178) Add JSON plugin support in code base

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-178:
---
Fix Version/s: backlog

> Add JSON plugin support in code base
> 
>
> Key: HAWQ-178
> URL: https://issues.apache.org/jira/browse/HAWQ-178
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Goden Yao
>Assignee: Goden Yao
> Fix For: backlog
>
>
> JSON has been a popular format in HDFS as well as in the community. A few 
> JSON PXF plugins have been developed by the community, and we'd like to see 
> one incorporated into the code base as an optional package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-315) Invalid Byte Sequence Error when loading a large (100MB+) CSV file

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-315:
---
Fix Version/s: 2.0.0

> Invalid Byte Sequence Error when loading a large (100MB+) CSV file
> -
>
> Key: HAWQ-315
> URL: https://issues.apache.org/jira/browse/HAWQ-315
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Goden Yao
>Assignee: Ruilong Huo
> Fix For: 2.0.0
>
>
> This bug occurs when copying or reading a large CSV file. The reproducible 
> file we tried is 100MB+, so it cannot be uploaded to JIRA.
> *Repro steps*
> The CSV file needs to be at least 100MB in size.
> The CSV file should contain the following pattern:
> {code:actionscript}
> ..., "dummy data text1
> dummy data text2,
> dummy data text3,
> dummy data text4"
> {code}
> Basically, a long text broken into multiple lines but within quotes.
> This doesn't cause an issue in a smaller file, though.
> {code:SQL}
> DROP TABLE IF EXISTS <_test table name_>;
> CREATE TABLE <_test table name_>
> (
> <_define test table schema_>
>  ...
> );
> COPY <_test table name_> FROM '<_csv file path_>'
> DELIMITER ','
> NULL ''
> ESCAPE '"'
> CSV QUOTE '"'
> LOG ERRORS INTO <_error reject table name_>
> SEGMENT REJECT LIMIT 10 rows
> ;
> {code}
> *Errors*
> Error in first line with quoted data:
> {code}
> DEBUG5:  invalid byte sequence for encoding "UTF8": 0x00
> HINT:  This error can also happen if the byte sequence does not match the 
> encoding expected by the server, which is controlled by "client_encoding".
> CONTEXT:  COPY addresses_heap, line 604932
> {code}
> Error in the second line with quoted data: this is due to wrong formatting (as 
> the first half of the line within the quotes was mishandled).
> {code}
> DEBUG5:  missing data for column "sourceid"
> CONTEXT:  COPY addresses_heap, line 604933: "...
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-315) Invalid Byte Sequence Error when loading a large (100MB+) CSV file

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-315:
---
Assignee: Ruilong Huo  (was: Lei Chang)

> Invalid Byte Sequence Error when loading a large (100MB+) CSV file
> -
>
> Key: HAWQ-315
> URL: https://issues.apache.org/jira/browse/HAWQ-315
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Goden Yao
>Assignee: Ruilong Huo
> Fix For: 2.0.0
>
>
> This bug occurs when copying or reading a large CSV file. The reproducible 
> file we tried is 100MB+, so it cannot be uploaded to JIRA.
> *Repro steps*
> The CSV file needs to be at least 100MB in size.
> The CSV file should contain the following pattern:
> {code:actionscript}
> ..., "dummy data text1
> dummy data text2,
> dummy data text3,
> dummy data text4"
> {code}
> Basically, a long text broken into multiple lines but within quotes.
> This doesn't cause an issue in a smaller file, though.
> {code:SQL}
> DROP TABLE IF EXISTS <_test table name_>;
> CREATE TABLE <_test table name_>
> (
> <_define test table schema_>
>  ...
> );
> COPY <_test table name_> FROM '<_csv file path_>'
> DELIMITER ','
> NULL ''
> ESCAPE '"'
> CSV QUOTE '"'
> LOG ERRORS INTO <_error reject table name_>
> SEGMENT REJECT LIMIT 10 rows
> ;
> {code}
> *Errors*
> Error in first line with quoted data:
> {code}
> DEBUG5:  invalid byte sequence for encoding "UTF8": 0x00
> HINT:  This error can also happen if the byte sequence does not match the 
> encoding expected by the server, which is controlled by "client_encoding".
> CONTEXT:  COPY addresses_heap, line 604932
> {code}
> Error in the second line with quoted data: this is due to wrong formatting (as 
> the first half of the line within the quotes was mishandled).
> {code}
> DEBUG5:  missing data for column "sourceid"
> CONTEXT:  COPY addresses_heap, line 604933: "...
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-321) Support plpython3u

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-321:
---
Fix Version/s: backlog

> Support plpython3u
> --
>
> Key: HAWQ-321
> URL: https://issues.apache.org/jira/browse/HAWQ-321
> Project: Apache HAWQ
>  Issue Type: New Feature
>Reporter: Lei Chang
>Assignee: Lei Chang
> Fix For: backlog
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-326) Support RPM build for HAWQ

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-326:
---
Fix Version/s: 2.1.0

> Support RPM build for HAWQ
> --
>
> Key: HAWQ-326
> URL: https://issues.apache.org/jira/browse/HAWQ-326
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Build
>Reporter: Lei Chang
> Fix For: 2.1.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-321) Support plpython3u

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-321:
---
Assignee: (was: Lei Chang)

> Support plpython3u
> --
>
> Key: HAWQ-321
> URL: https://issues.apache.org/jira/browse/HAWQ-321
> Project: Apache HAWQ
>  Issue Type: New Feature
>Reporter: Lei Chang
> Fix For: backlog
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-319) REST API for HAWQ

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-319:
---
Fix Version/s: backlog

> REST API for HAWQ
> -
>
> Key: HAWQ-319
> URL: https://issues.apache.org/jira/browse/HAWQ-319
> Project: Apache HAWQ
>  Issue Type: New Feature
>Reporter: Lei Chang
> Fix For: backlog
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-331) Fix HAWQ Jenkins pullrequest build reporting SUCCESS when it was a failure

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-331:
---
Fix Version/s: 2.1.0

> Fix HAWQ Jenkins pullrequest build reporting SUCCESS when it was a failure
> --
>
> Key: HAWQ-331
> URL: https://issues.apache.org/jira/browse/HAWQ-331
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Reporter: Goden Yao
>Assignee: Radar Lei
> Fix For: 2.1.0
>
>
> https://builds.apache.org/job/HAWQ-build-pullrequest/83/console
> It has recently been discovered that Jenkins reports SUCCESS even when a 
> build actually failed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-258) Investigate whether gp_fastsequence is still needed

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-258:
---
Fix Version/s: 2.0.0

> Investigate whether gp_fastsequence is still needed
> ---
>
> Key: HAWQ-258
> URL: https://issues.apache.org/jira/browse/HAWQ-258
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Storage
>Reporter: Lirong Jian
>Assignee: Lirong Jian
> Fix For: 2.0.0
>
>
> Since the block directory for AO relations is no longer supported, we 
> suspect that gp_fastsequence is not needed anymore. However, further 
> investigation is needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-323) Cannot query when cluster includes more than 1 segment

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-323:
---
Fix Version/s: 2.0.0

> Cannot query when cluster includes more than 1 segment
> -
>
> Key: HAWQ-323
> URL: https://issues.apache.org/jira/browse/HAWQ-323
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core, Resource Manager
>Affects Versions: 2.0.0-beta-incubating
>Reporter: zharui
>Assignee: Lin Wen
> Fix For: 2.0.0
>
>
> The version I use is 2.0.0-beta-RC2. I can query data normally when the cluster 
> has just 1 segment. Once the cluster has more than 1 segment online, I 
> cannot finish any query and am informed that "ERROR:  failed to acquire 
> resource from resource manager, 7 of 8 segments are unavailable 
> (pquery.c:788)".
> I have read the segment logs and the source code of the resource manager. I 
> guess this issue is due to a communication failure between the segment 
> instances and the resource manager server. I can find logs of a segment 
> connecting to the resource manager successfully, such as "AsyncComm framework 
> receives message 518 from FD5" and "Resource enforcer increases memory quota 
> to: total memory quota=65536 MB, delta memory quota = 65536 MB", but the 
> other online segments have no such logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-258) Investigate whether gp_fastsequence is still needed

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang closed HAWQ-258.
--
Resolution: Fixed

> Investigate whether gp_fastsequence is still needed
> ---
>
> Key: HAWQ-258
> URL: https://issues.apache.org/jira/browse/HAWQ-258
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Storage
>Reporter: Lirong Jian
>Assignee: Lirong Jian
> Fix For: 2.0.0
>
>
> Since the block directory for AO relations is no longer supported, we 
> suspect that gp_fastsequence is not needed anymore. However, further 
> investigation is needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-343) Core when setting enable_secure_filesystem to true

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-343:
---
Assignee: Zhanwei Wang  (was: Lei Chang)

> Core when setting enable_secure_filesystem to true
> --
>
> Key: HAWQ-343
> URL: https://issues.apache.org/jira/browse/HAWQ-343
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Noa Horn
>Assignee: Zhanwei Wang
> Fix For: 2.0.0
>
>
> Happens only with a debug build; an optimized build seems to work OK.
> Repro:
> {noformat}
> # set enable_secure_filesystem to true;
> FATAL:  Unexpected internal error (cdbfilesystemcredential.c:357)
> DETAIL:  FailedAssertion("!(((void *)0) != credentials)", File: 
> "cdbfilesystemcredential.c", Line: 357)
> HINT:  Process 21815 will wait for gp_debug_linger=120 seconds before 
> termination.
> Note that its locks and other resources will not be released until then.
> server closed the connection unexpectedly
>   This probably means the server terminated abnormally
>   before or while processing the request.
> The connection to the server was lost. Attempting reset: Succeeded.
> {noformat}
> Backtrace:
> {noformat}
> (gdb) bt
> #0  0x0031e06e14f3 in select () from /lib64/libc.so.6
> #1  0x00b8f108 in pg_usleep (microsec=3000) at pgsleep.c:43
> #2  0x009e4418 in elog_debug_linger (edata=0x1186600) at elog.c:4125
> #3  0x009dca95 in errfinish (dummy=0) at elog.c:595
> #4  0x009db1e8 in ExceptionalCondition (conditionName=0xe045d5 
> "!(((void *)0) != credentials)", errorType=0xe04466 "FailedAssertion", 
> fileName=0xe0432d "cdbfilesystemcredential.c", lineNumber=357) at assert.c:66
> #5  0x00b7169f in cancel_filesystem_credentials (credentials=0x0, 
> mcxt=0x0) at cdbfilesystemcredential.c:357
> #6  0x00b7176f in cleanup_filesystem_credentials (portal=0x20b8388) 
> at cdbfilesystemcredential.c:388
> #7  0x00a19ade in PortalDrop (portal=0x20b8388, isTopCommit=0 '\000') 
> at portalmem.c:419
> #8  0x008f57c5 in exec_simple_query (query_string=0x206d1f8 "set 
> enable_secure_filesystem to true;", seqServerHost=0x0, seqServerPort=-1) at 
> postgres.c:1758
> #9  0x008fa3cf in PostgresMain (argc=4, argv=0x1fb8c88, 
> username=0x1fb8a80 "hornn") at postgres.c:4711
> #10 0x008a093e in BackendRun (port=0x1f68c80) at postmaster.c:5875
> #11 0x0089fdc8 in BackendStartup (port=0x1f68c80) at postmaster.c:5468
> #12 0x00899df5 in ServerLoop () at postmaster.c:2147
> #13 0x00898eb8 in PostmasterMain (argc=9, argv=0x1f7f940) at 
> postmaster.c:1439
> #14 0x007b2812 in main (argc=9, argv=0x1f7f940) at main.c:226
> (gdb) f 5
> #5  0x00b7169f in cancel_filesystem_credentials (credentials=0x0, 
> mcxt=0x0) at cdbfilesystemcredential.c:357
> 357   Assert(NULL != credentials);
> (gdb) p mcxt
> $1 = (MemoryContext) 0x0
> (gdb) 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-351) Add movefilespace option to 'hawq filespace'

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-351:
---
Assignee: Radar Lei  (was: Lei Chang)

> Add movefilespace option to 'hawq filespace'
> 
>
> Key: HAWQ-351
> URL: https://issues.apache.org/jira/browse/HAWQ-351
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Command Line Tools
>Reporter: Radar Lei
>Assignee: Radar Lei
> Fix For: 2.0.0
>
>
> Currently 'hawq filespace' can only create new filespaces; we will add a 
> '--movefilespace' option and a '--location' option to support changing existing 
> filespace locations.
> This is important for changing a filespace's HDFS location from non-HA to HA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-350) Disable some installcheck tests because plpython is not installed by default

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-350:
---
Fix Version/s: 2.1.0

> Disable some installcheck tests because plpython is not installed by default
> --
>
> Key: HAWQ-350
> URL: https://issues.apache.org/jira/browse/HAWQ-350
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 2.0.0-beta-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.1.0
>
>
> Three suites in the installcheck test fail:
> 1. subplan and set_functions fail because plpython is not installed by default
> 2. exttab1 fails due to gpfdist changes
> {noformat}
> [gpadmin@localhost incubator-hawq]$ make installcheck-good
> == dropping database "regression" ==
> NOTICE:  database "regression" does not exist, skipping
> DROP DATABASE
> == creating database "regression" ==
> CREATE DATABASE
> ALTER DATABASE
> == checking optimizer status  ==
> Optimizer disabled. Using planner answer files
> == running regression test queries==
> test type_sanity  ... ok (0.05 sec)
> test querycontext ... ok (8.19 sec)
> test errortbl ... ok (4.80 sec)
> test goh_create_type_composite ... ok (3.10 sec)
> test goh_partition... ok (37.64 sec)
> test goh_toast... ok (1.25 sec)
> test goh_database ... ok (2.84 sec)
> test goh_gp_dist_random   ... ok (0.24 sec)
> test gpsql_alter_table... ok (12.11 sec)
> test goh_portals  ... ok (8.25 sec)
> test goh_prepare  ... ok (7.08 sec)
> test goh_alter_owner  ... ok (0.25 sec)
> test boolean  ... ok (3.18 sec)
> test char ... ok (2.58 sec)
> test name ... ok (2.29 sec)
> test varchar  ... ok (2.58 sec)
> test text ... ok (0.58 sec)
> test int2 ... ok (4.35 sec)
> test int4 ... ok (5.63 sec)
> test int8 ... ok (4.17 sec)
> test oid  ... ok (2.01 sec)
> test float4   ... ok (2.89 sec)
> test date ... ok (2.45 sec)
> test time ... ok (1.98 sec)
> test insert   ... ok (4.44 sec)
> test create_function_1... ok (0.01 sec)
> test function ... ok (8.10 sec)
> test function_extensions  ... ok (0.03 sec)
> test subplan  ... FAILED (9.59 sec)
> test create_table_test... ok (0.25 sec)
> test create_table_distribution ... ok (3.20 sec)
> test copy ... ok (35.09 sec)
> test create_aggregate ... ok (9.77 sec)
> test aggregate_with_groupingsets ... ok (0.81 sec)
> test information_schema   ... ok (0.09 sec)
> test transactions ... ok (6.32 sec)
> test temp ... ok (4.09 sec)
> test set_functions... FAILED (5.86 sec)
> test sequence ... ok (1.19 sec)
> test polymorphism ... ok (3.99 sec)
> test rowtypes ... ok (2.67 sec)
> test exttab1  ... FAILED (13.85 sec)
> test gpcopy   ... ok (29.14 sec)
> test madlib_svec_test ... ok (1.57 sec)
> test agg_derived_win  ... ok (3.18 sec)
> test parquet_ddl  ... ok (8.29 sec)
> test parquet_multipletype ... ok (2.81 sec)
> test parquet_pagerowgroup_size ... ok (14.40 sec)
> test parquet_compression  ... ok (13.93 sec)
> test parquet_subpartition ... ok (7.24 sec)
> test caqlinmem... ok (0.14 sec)
> test hcatalog_lookup  ... ok (2.02 sec)
> test json_load... ok (0.42 sec)
> test external_oid ... ok (0.79 sec)
> test validator_function   ... ok (0.03 sec)
> ===
>  3 of 55 tests failed.
> ===
> The differences that caused some tests to fail can be viewed in the
> file "./regression.diffs".  A copy of the test summary that you see
> above is saved in the file "./regression.out".
> {noformat}
> {noformat}
> [gpadmin@localhost incubator-hawq]$ cat src/test/regress/regression.diffs
> *** ./expected/subplan.out2016-01-18 05:36:05.000680391 -0800
> --- ./results/subplan.out 2016-01-18 05:36:05.048608087 -0800
> ***
> *** 20,25 
> --- 20,26 
>   insert into i4 select i, i-10 from generate_series(-5,0)i;
>   DROP LANGUAGE IF EXISTS plpythonu CASCADE;
>   CREATE LANGUAGE plpythonu;
> + ERROR:  could not access file "$libdir/plpython": No such file or directory
>   create or replace function twice(int) returns int as $$
>  select 2 * $1;
>   $$ language sql;
> ***
> *** 34,56 
>   else:
>   return x * 3
>   $$ language plpythonu;
>   select t1.* from t1 where (t1.a, t

[jira] [Updated] (HAWQ-343) Core when setting enable_secure_filesystem to true

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-343:
---
Fix Version/s: 2.0.0

> Core when setting enable_secure_filesystem to true
> --
>
> Key: HAWQ-343
> URL: https://issues.apache.org/jira/browse/HAWQ-343
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Noa Horn
>Assignee: Lei Chang
> Fix For: 2.0.0
>
>
> This happens only with a debug build; an optimized build seems to work ok.
> Repro:
> {noformat}
> # set enable_secure_filesystem to true;
> FATAL:  Unexpected internal error (cdbfilesystemcredential.c:357)
> DETAIL:  FailedAssertion("!(((void *)0) != credentials)", File: 
> "cdbfilesystemcredential.c", Line: 357)
> HINT:  Process 21815 will wait for gp_debug_linger=120 seconds before 
> termination.
> Note that its locks and other resources will not be released until then.
> server closed the connection unexpectedly
>   This probably means the server terminated abnormally
>   before or while processing the request.
> The connection to the server was lost. Attempting reset: Succeeded.
> {noformat}
> Backtrace:
> {noformat}
> (gdb) bt
> #0  0x0031e06e14f3 in select () from /lib64/libc.so.6
> #1  0x00b8f108 in pg_usleep (microsec=3000) at pgsleep.c:43
> #2  0x009e4418 in elog_debug_linger (edata=0x1186600) at elog.c:4125
> #3  0x009dca95 in errfinish (dummy=0) at elog.c:595
> #4  0x009db1e8 in ExceptionalCondition (conditionName=0xe045d5 
> "!(((void *)0) != credentials)", errorType=0xe04466 "FailedAssertion", 
> fileName=0xe0432d "cdbfilesystemcredential.c", lineNumber=357) at assert.c:66
> #5  0x00b7169f in cancel_filesystem_credentials (credentials=0x0, 
> mcxt=0x0) at cdbfilesystemcredential.c:357
> #6  0x00b7176f in cleanup_filesystem_credentials (portal=0x20b8388) 
> at cdbfilesystemcredential.c:388
> #7  0x00a19ade in PortalDrop (portal=0x20b8388, isTopCommit=0 '\000') 
> at portalmem.c:419
> #8  0x008f57c5 in exec_simple_query (query_string=0x206d1f8 "set 
> enable_secure_filesystem to true;", seqServerHost=0x0, seqServerPort=-1) at 
> postgres.c:1758
> #9  0x008fa3cf in PostgresMain (argc=4, argv=0x1fb8c88, 
> username=0x1fb8a80 "hornn") at postgres.c:4711
> #10 0x008a093e in BackendRun (port=0x1f68c80) at postmaster.c:5875
> #11 0x0089fdc8 in BackendStartup (port=0x1f68c80) at postmaster.c:5468
> #12 0x00899df5 in ServerLoop () at postmaster.c:2147
> #13 0x00898eb8 in PostmasterMain (argc=9, argv=0x1f7f940) at 
> postmaster.c:1439
> #14 0x007b2812 in main (argc=9, argv=0x1f7f940) at main.c:226
> (gdb) f 5
> #5  0x00b7169f in cancel_filesystem_credentials (credentials=0x0, 
> mcxt=0x0) at cdbfilesystemcredential.c:357
> 357   Assert(NULL != credentials);
> (gdb) p mcxt
> $1 = (MemoryContext) 0x0
> (gdb) 
> {noformat}
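> For illustration, a hedged sketch of one possible defensive fix (the Portal field
> names below are assumptions inferred from the backtrace, not the actual HAWQ patch):
> {code}
> /* Sketch: skip the cancel step when this portal never acquired filesystem
>  * credentials, instead of tripping the NULL assertion seen in frame #5. */
> static void
> cleanup_filesystem_credentials(Portal portal)
> {
>     if (portal->filesystem_credentials == NULL &&
>         portal->filesystem_credentials_memory == NULL)
>         return;     /* nothing allocated, nothing to cancel */
> 
>     cancel_filesystem_credentials(portal->filesystem_credentials,
>                                   portal->filesystem_credentials_memory);
> }
> {code}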



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-350) Disable some of installcheck tests due to plpython is not installed by default

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-350:
---
Fix Version/s: (was: 2.1.0)
   2.0.0

> Disable some of installcheck tests due to plpython is not installed by default
> --
>
> Key: HAWQ-350
> URL: https://issues.apache.org/jira/browse/HAWQ-350
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 2.0.0-beta-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.0.0
>
>
> We have 3 test suites failing in installcheck:
> 1. subplan and set_functions fail because plpython is not installed by default
> 2. exttab1 fails due to gpfdist changes
> {noformat}
> [gpadmin@localhost incubator-hawq]$ make installcheck-good
> == dropping database "regression" ==
> NOTICE:  database "regression" does not exist, skipping
> DROP DATABASE
> == creating database "regression" ==
> CREATE DATABASE
> ALTER DATABASE
> == checking optimizer status  ==
> Optimizer disabled. Using planner answer files
> == running regression test queries==
> test type_sanity  ... ok (0.05 sec)
> test querycontext ... ok (8.19 sec)
> test errortbl ... ok (4.80 sec)
> test goh_create_type_composite ... ok (3.10 sec)
> test goh_partition... ok (37.64 sec)
> test goh_toast... ok (1.25 sec)
> test goh_database ... ok (2.84 sec)
> test goh_gp_dist_random   ... ok (0.24 sec)
> test gpsql_alter_table... ok (12.11 sec)
> test goh_portals  ... ok (8.25 sec)
> test goh_prepare  ... ok (7.08 sec)
> test goh_alter_owner  ... ok (0.25 sec)
> test boolean  ... ok (3.18 sec)
> test char ... ok (2.58 sec)
> test name ... ok (2.29 sec)
> test varchar  ... ok (2.58 sec)
> test text ... ok (0.58 sec)
> test int2 ... ok (4.35 sec)
> test int4 ... ok (5.63 sec)
> test int8 ... ok (4.17 sec)
> test oid  ... ok (2.01 sec)
> test float4   ... ok (2.89 sec)
> test date ... ok (2.45 sec)
> test time ... ok (1.98 sec)
> test insert   ... ok (4.44 sec)
> test create_function_1... ok (0.01 sec)
> test function ... ok (8.10 sec)
> test function_extensions  ... ok (0.03 sec)
> test subplan  ... FAILED (9.59 sec)
> test create_table_test... ok (0.25 sec)
> test create_table_distribution ... ok (3.20 sec)
> test copy ... ok (35.09 sec)
> test create_aggregate ... ok (9.77 sec)
> test aggregate_with_groupingsets ... ok (0.81 sec)
> test information_schema   ... ok (0.09 sec)
> test transactions ... ok (6.32 sec)
> test temp ... ok (4.09 sec)
> test set_functions... FAILED (5.86 sec)
> test sequence ... ok (1.19 sec)
> test polymorphism ... ok (3.99 sec)
> test rowtypes ... ok (2.67 sec)
> test exttab1  ... FAILED (13.85 sec)
> test gpcopy   ... ok (29.14 sec)
> test madlib_svec_test ... ok (1.57 sec)
> test agg_derived_win  ... ok (3.18 sec)
> test parquet_ddl  ... ok (8.29 sec)
> test parquet_multipletype ... ok (2.81 sec)
> test parquet_pagerowgroup_size ... ok (14.40 sec)
> test parquet_compression  ... ok (13.93 sec)
> test parquet_subpartition ... ok (7.24 sec)
> test caqlinmem... ok (0.14 sec)
> test hcatalog_lookup  ... ok (2.02 sec)
> test json_load... ok (0.42 sec)
> test external_oid ... ok (0.79 sec)
> test validator_function   ... ok (0.03 sec)
> ===
>  3 of 55 tests failed.
> ===
> The differences that caused some tests to fail can be viewed in the
> file "./regression.diffs".  A copy of the test summary that you see
> above is saved in the file "./regression.out".
> {noformat}
> {noformat}
> [gpadmin@localhost incubator-hawq]$ cat src/test/regress/regression.diffs
> *** ./expected/subplan.out2016-01-18 05:36:05.000680391 -0800
> --- ./results/subplan.out 2016-01-18 05:36:05.048608087 -0800
> ***
> *** 20,25 
> --- 20,26 
>   insert into i4 select i, i-10 from generate_series(-5,0)i;
>   DROP LANGUAGE IF EXISTS plpythonu CASCADE;
>   CREATE LANGUAGE plpythonu;
> + ERROR:  could not access file "$libdir/plpython": No such file or directory
>   create or replace function twice(int) returns int as $$
>  select 2 * $1;
>   $$ language sql;
> ***
> *** 34,56 
>   else:
>   return x * 3
>   $$ language plpythonu;
> 

[jira] [Updated] (HAWQ-348) Optimizer (ORCA/Planner) should not preprocess (table) functions at planning phase

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-348:
---
Fix Version/s: backlog

> Optimizer (ORCA/Planner) should not preprocess (table) functions at planning 
> phase
> --
>
> Key: HAWQ-348
> URL: https://issues.apache.org/jira/browse/HAWQ-348
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Optimizer
>Reporter: Ruilong Huo
>Assignee: Amr El-Helw
> Fix For: backlog
>
>
> Optimizer (ORCA/Planner) currently preprocesses (table) functions (either in the 
> target list or the from clause) at planning phase. This introduces:
> 1. Much lower performance, since the result of the (table) function is 
> motioned from the QEs to the QD after the preprocessing and is further processed 
> there, especially when the result is large. In this case the QD does heavy 
> work and becomes the bottleneck; the example below shows about a 20x performance 
> difference.
> 2. Much more memory overhead at the QD, since it needs to hold the result of the 
> (table) function. This is risky since the result might be unpredictably large.
> Here are the steps to reproduce this issue, as well as some initial analysis:
> Step 1: Prepare schema and data
> {noformat}
> CREATE TABLE t (id INT);
> CREATE TABLE
> INSERT INTO t SELECT generate_series(1, 10000);
> INSERT 0 10000
> CREATE OR REPLACE FUNCTION get_t()
> RETURNS SETOF t
> LANGUAGE SQL AS
> 'SELECT * FROM t'
> STABLE;
> CREATE FUNCTION
> {noformat}
> Step 2: With optimizer = OFF (Planner)
> {noformat}
> SET optimizer='OFF';
> SET
> select sum(id) from t;
>sum
> --
>  50005000
> (1 row)
> Time: 8801.577 ms
> select sum(id) from get_t();
>sum
> --
>  50005000
> (1 row)
> Time: 189992.273 ms
> EXPLAIN SELECT sum(id) FROM get_t();
>  QUERY PLAN
> 
>  Aggregate  (cost=32.50..32.51 rows=1 width=8)
>->  Function Scan on get_t  (cost=0.00..12.50 rows=8000 width=4)
>  Settings:  default_segment_num=8; optimizer=off
>  Optimizer status: legacy query optimizer
> (4 rows)
> {noformat}
> Step 3: With optimizer = ON (ORCA)
> {noformat}
> SET optimizer='ON';
> SET
> select sum(id) from t;
>sum
> --
>  50005000
> (1 row)
> Time: 10103.436 ms
> select sum(id) from get_t();
>sum
> --
>  50005000
> (1 row)
> Time: 195551.740 ms
> EXPLAIN SELECT sum(id) FROM get_t();
>  QUERY PLAN
> 
>  Aggregate  (cost=32.50..32.51 rows=1 width=8)
>->  Function Scan on get_t  (cost=0.00..12.50 rows=8000 width=4)
>  Settings:  default_segment_num=8
>  Optimizer status: legacy query optimizer
> (4 rows)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-358) Installcheck good failures in hawq-dev environment

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-358:
---
Fix Version/s: 2.0.0

> Installcheck good failures in hawq-dev environment
> --
>
> Key: HAWQ-358
> URL: https://issues.apache.org/jira/browse/HAWQ-358
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Tests
>Reporter: Caleb Welton
>Assignee: Jiali Yao
> Fix For: 2.0.0
>
>
> Building and testing within a hawq dev environment set up via the instructions 
> outlined in the hawq-devel docker environment 
> https://hub.docker.com/r/mayjojo/hawq-devel/
> results in the following errors:
> {noformat}
> ...
> test errortbl ... FAILED (6.83 sec)
> ...
> test subplan  ... FAILED (8.15 sec)
> ...
> test create_table_distribution ... FAILED (3.47 sec)
> test copy ... FAILED (34.76 sec)
> ...
> test set_functions... FAILED (4.90 sec)
> ...
> test exttab1  ... FAILED (17.66 sec)
> ...
> {noformat}
> Summary of issues:
> * *errortbl* - every connection to gpfdist results in "connection with 
> gpfdist failed for gpfdist://localhost:7070/nation.tbl"
> * *subplan* - trying to create plpython resulted in "could not access file 
> "$libdir/plpython": No such file or directory"; the lack of plpython causes many 
> other statements to fail
> * *create_table_distribution* - test likely needs some refactoring to calculate 
> the correct bucketnum based on the current system configuration
> * *copy* - seems to be failing because rows aren't coming out in the expected 
> order; the test needs fixing to handle this
> * *set_functions* - same plpythonu issue described above
> * *exttab1* - same issue reading from gpfdist described above



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-358) Installcheck good failures in hawq-dev environment

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-358:
---
Assignee: Ruilong Huo  (was: Jiali Yao)

> Installcheck good failures in hawq-dev environment
> --
>
> Key: HAWQ-358
> URL: https://issues.apache.org/jira/browse/HAWQ-358
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Tests
>Reporter: Caleb Welton
>Assignee: Ruilong Huo
> Fix For: 2.0.0
>
>
> Building and testing within a hawq dev environment set up via the instructions 
> outlined in the hawq-devel docker environment 
> https://hub.docker.com/r/mayjojo/hawq-devel/
> results in the following errors:
> {noformat}
> ...
> test errortbl ... FAILED (6.83 sec)
> ...
> test subplan  ... FAILED (8.15 sec)
> ...
> test create_table_distribution ... FAILED (3.47 sec)
> test copy ... FAILED (34.76 sec)
> ...
> test set_functions... FAILED (4.90 sec)
> ...
> test exttab1  ... FAILED (17.66 sec)
> ...
> {noformat}
> Summary of issues:
> * *errortbl* - every connection to gpfdist results in "connection with 
> gpfdist failed for gpfdist://localhost:7070/nation.tbl"
> * *subplan* - trying to create plpython resulted in "could not access file 
> "$libdir/plpython": No such file or directory"; the lack of plpython causes many 
> other statements to fail
> * *create_table_distribution* - test likely needs some refactoring to calculate 
> the correct bucketnum based on the current system configuration
> * *copy* - seems to be failing because rows aren't coming out in the expected 
> order; the test needs fixing to handle this
> * *set_functions* - same plpythonu issue described above
> * *exttab1* - same issue reading from gpfdist described above



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-357) Track how many times a segment can not get expected containers from global resource manager

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-357:
---
Fix Version/s: 2.0.0

> Track how many times a segment can not get expected containers from global 
> resource manager
> ---
>
> Key: HAWQ-357
> URL: https://issues.apache.org/jira/browse/HAWQ-357
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Resource Manager
>Reporter: Yi Jin
>Assignee: Yi Jin
> Fix For: 2.0.0
>
>
> This improvement makes HAWQ RM able to track how many times a segment cannot 
> get the expected containers from the global resource manager (YARN, for example). 
> In some cases another YARN application may hold containers without returning 
> them in time, so HAWQ RM may repeatedly find some segments with no resource. This 
> improvement makes HAWQ RM log this situation as a warning, as sketched below.
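> A hedged illustration of the idea (all names and the threshold are hypothetical, 
> not the actual HAWQ RM code):
> {code}
> #include <stdio.h>
> 
> /* Hypothetical per-segment counter: count consecutive allocation rounds in
>  * which a segment received fewer containers than requested, and emit a
>  * warning once a threshold is crossed. */
> typedef struct SegmentAllocStats
> {
>     int shortfall_rounds;   /* consecutive rounds below expectation */
> } SegmentAllocStats;
> 
> static void
> track_container_shortfall(SegmentAllocStats *stats, int expected, int granted)
> {
>     if (granted < expected)
>     {
>         stats->shortfall_rounds++;
>         if (stats->shortfall_rounds >= 3)   /* threshold is illustrative */
>             fprintf(stderr,
>                     "WARNING: segment got %d of %d expected containers; "
>                     "%d consecutive short rounds\n",
>                     granted, expected, stats->shortfall_rounds);
>     }
>     else
>         stats->shortfall_rounds = 0;
> }
> {code}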



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-216) Built-in functions gp_update_global_sequence_entry has a bug

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-216:
---
Assignee: Ming LI  (was: Lei Chang)

> Built-in functions gp_update_global_sequence_entry has a bug
> 
>
> Key: HAWQ-216
> URL: https://issues.apache.org/jira/browse/HAWQ-216
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Dong Li
>Assignee: Ming LI
>
> The code in persistentutil.c:200 is as follow.
> {code}
> line 200: int8    sequenceVal;
> line 212: sequenceVal = PG_GETARG_INT64(1);
> {code}
> It stores an int64 value into an int8 variable, which causes bugs like the following.
> {code}
> ff=# select * from gp_global_sequence ;
>  sequence_num
> --
>  1200
>   100
>   100
>   100
>   100
> 0
> 0
> 0
> 0
> 0
> 0
> 0
> 0
> 0
> 0
> (15 rows)
> ff=# select gp_update_global_sequence_entry('(0,2)'::tid,128);
> ERROR:  sequence number too low (persistentutil.c:232)
> {code}
> It compares 128 with 100 and concludes that 128 < 100, 
> because 128 is narrowed to the int8 type, where 0x80 (128) is interpreted as 
> -128.
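> A minimal standalone C sketch (illustrative, not the actual HAWQ source) of the 
> narrowing; note that in PostgreSQL-derived C code "int8" is a one-byte signed 
> type, unlike SQL's int8 (bigint):
> {code}
> #include <stdio.h>
> #include <stdint.h>
> 
> int main(void)
> {
>     signed char sequenceVal;          /* what the C "int8" declaration amounts to */
>     int64_t     arg = 128;            /* value arriving via PG_GETARG_INT64(1) */
> 
>     sequenceVal = (signed char) arg;  /* 0x80 narrows to -128 on two's complement */
>     printf("%d\n", (int) sequenceVal);   /* prints -128, hence "128 < 100" */
>     return 0;
> }
> {code}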



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-133) core when use plpython udf

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-133:
---
Fix Version/s: 2.0.0

> core when use plpython udf
> --
>
> Key: HAWQ-133
> URL: https://issues.apache.org/jira/browse/HAWQ-133
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Dong Li
>Assignee: Ruilong Huo
> Fix For: 2.0.0
>
>
> Run sqls below can recur the core.
> {code}
> CREATE PROCEDURAL LANGUAGE plpythonu;
> CREATE TABLE users (
>   fname text not null,
>   lname text not null,
>   username text,
>   userid serial
>   -- , PRIMARY KEY(lname, fname) 
>   ) DISTRIBUTED BY (userid);
> INSERT INTO users (fname, lname, username) VALUES ('jane', 'doe', 'j_doe');
> INSERT INTO users (fname, lname, username) VALUES ('john', 'doe', 'johnd');
> INSERT INTO users (fname, lname, username) VALUES ('willem', 'doe', 'w_doe');
> INSERT INTO users (fname, lname, username) VALUES ('rick', 'smith', 'slash');
> CREATE FUNCTION spi_prepared_plan_test_one(a text) RETURNS text
>   AS
> 'if not SD.has_key("myplan"):
>   q = "SELECT count(*) FROM users WHERE lname = $1"
>   SD["myplan"] = plpy.prepare(q, [ "text" ])
> try:
>   rv = plpy.execute(SD["myplan"], [a])
>   return "there are " + str(rv[0]["count"]) + " " + str(a) + "s"
> except Exception, ex:
>   plpy.error(str(ex))
> return None
> '
>   LANGUAGE plpythonu;
> select spi_prepared_plan_test_one('doe');
> select spi_prepared_plan_test_one('smith');
> {code}
> When executing "select spi_prepared_plan_test_one('smith');", the session crashes:
> server closed the connection unexpectedly
>   This probably means the server terminated abnormally
>   before or while processing the request.
> The connection to the server was lost. Attempting reset: Failed.
> !>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-216) Built-in functions gp_update_global_sequence_entry has a bug

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-216:
---
Fix Version/s: 2.0.0

> Built-in functions gp_update_global_sequence_entry has a bug
> 
>
> Key: HAWQ-216
> URL: https://issues.apache.org/jira/browse/HAWQ-216
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Dong Li
>Assignee: Ming LI
> Fix For: 2.0.0
>
>
> The code in persistentutil.c:200 is as follow.
> {code}
> line 200: int8    sequenceVal;
> line 212: sequenceVal = PG_GETARG_INT64(1);
> {code}
> It stores an int64 value into an int8 variable, which causes bugs like the following.
> {code}
> ff=# select * from gp_global_sequence ;
>  sequence_num
> --
>  1200
>   100
>   100
>   100
>   100
> 0
> 0
> 0
> 0
> 0
> 0
> 0
> 0
> 0
> 0
> (15 rows)
> ff=# select gp_update_global_sequence_entry('(0,2)'::tid,128);
> ERROR:  sequence number too low (persistentutil.c:232)
> {code}
> It compares 128 with 100 and concludes that 128 < 100, 
> because 128 is narrowed to the int8 type, where 0x80 (128) is interpreted as 
> -128.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-229) External table can be altered, which make errors.

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-229:
---
Fix Version/s: backlog

> External table can be altered, which make errors.
> -
>
> Key: HAWQ-229
> URL: https://issues.apache.org/jira/browse/HAWQ-229
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: External Tables
>Reporter: Dong Li
>Assignee: Lei Chang
> Fix For: backlog
>
>
> We can't use "alter external table" to perform this alteration on an external 
> table, but we can use "alter table" on it, which then leads to errors.
> {code}
> mytest=# create external web table e4 (c1 int, c2 int) execute 'echo 1, 1' ON 
> 2 format 'CSV';
> CREATE EXTERNAL TABLE
> mytest=# select * from e4;
>  c1 | c2
> +
>   1 |  1
>   1 |  1
> (2 rows)
> mytest=# alter table e4 drop column c2;
> WARNING:  "e4" is an external table. ALTER TABLE for external tables is 
> deprecated.
> HINT:  Use ALTER EXTERNAL TABLE instead
> ALTER TABLE
> mytest=# select * from e4;
> ERROR:  extra data after last expected column  (seg0 localhost:4 
> pid=57645)
> DETAIL:  External table e4, line 1 of execute:echo 1, 1: "1, 1"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-360) Data loss when alter partition table by add two column in one time.

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-360:
---
Assignee: Ming LI  (was: Lei Chang)

> Data loss when alter partition table by add two column in one time.
> ---
>
> Key: HAWQ-360
> URL: https://issues.apache.org/jira/browse/HAWQ-360
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: DDL
>Reporter: Dong Li
>Assignee: Ming LI
> Fix For: 2.0.0
>
>
> {code}
> CREATE TABLE part_1 (a int, b int, c int)
> WITH (appendonly=true, compresslevel=5)
> partition by range (a)
> (
>  partition b start (1) end (50) every (1)
> );
> insert into part_1 values(1,1,1);
> select * from part_1;
>  a | b | c
> ---+---+---
>  1 | 1 | 1
> (1 row)
> alter table part_1 add column p int default 3,add column q int default 4;
> select * from part_1;
>  a | b | c | p | q
> ---+---+---+---+---
> (0 rows)
> {code}
> When I check the hdfs files, I find the size of the new hdfs files is 0, which means 
> the data is lost when the table is altered and new files are created for it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-360) Data loss when alter partition table by add two column in one time.

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-360:
---
Fix Version/s: 2.0.0

> Data loss when alter partition table by add two column in one time.
> ---
>
> Key: HAWQ-360
> URL: https://issues.apache.org/jira/browse/HAWQ-360
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: DDL
>Reporter: Dong Li
>Assignee: Lei Chang
> Fix For: 2.0.0
>
>
> {code}
> CREATE TABLE part_1 (a int, b int, c int)
> WITH (appendonly=true, compresslevel=5)
> partition by range (a)
> (
>  partition b start (1) end (50) every (1)
> );
> insert into part_1 values(1,1,1);
> select * from part_1;
>  a | b | c
> ---+---+---
>  1 | 1 | 1
> (1 row)
> alter table part_1 add column p int default 3,add column q int default 4;
> select * from part_1;
>  a | b | c | p | q
> ---+---+---+---+---
> (0 rows)
> {code}
> When I check the hdfs files, I find the size of the new hdfs files is 0, which means 
> the data is lost when the table is altered and new files are created for it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-124) Create Project Maturity Model summary file

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-124:
---
Fix Version/s: backlog

> Create Project Maturity Model summary file
> --
>
> Key: HAWQ-124
> URL: https://issues.apache.org/jira/browse/HAWQ-124
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Core
>Reporter: Caleb Welton
>Assignee: Lei Chang
> Fix For: backlog
>
>
> Graduating from the Apache Incubator requires showing the Apache 
> Incubator IPMC that we have reached a level of maturity as an incubating 
> project.  One tool that can be used to assess our maturity is the [Apache 
> Project Maturity Model 
> Document|https://community.apache.org/apache-way/apache-project-maturity-model.html].
>   
> I propose we do something similar to what Groovy did and include a Project 
> Maturity self-assessment in our source code and evaluate ourselves with 
> respect to project maturity in each of our reports.  
> To do:
> 1. Create a MATURITY.adoc file in our root project directory containing our 
> self assessment.
> See 
> https://github.com/apache/groovy/blob/67b87a3592f13a6281f5b20081c37a66c80079b9/MATURITY.adoc
>  as an example document in the Groovy project.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-274) Add disk check for JBOD temporary directory in segment FTS

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang closed HAWQ-274.
--
Resolution: Fixed

> Add disk check for JBOD temporary directory in segment FTS
> --
>
> Key: HAWQ-274
> URL: https://issues.apache.org/jira/browse/HAWQ-274
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Fault Tolerance, Resource Manager
>Reporter: Lin Wen
>Assignee: Lin Wen
> Fix For: 2.0.0
>
>
> Add a disk check for the JBOD temporary directories in segment FTS.
> Add a column to the catalog table gp_segment_configuration indicating which 
> directory has failed. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-355) order by problem: sorting varchar column with space is not correct.

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-355:
---
Fix Version/s: backlog

> order by problem: sorting varchar column with space is not correct.
> ---
>
> Key: HAWQ-355
> URL: https://issues.apache.org/jira/browse/HAWQ-355
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: SuperJDC
>Assignee: Lei Chang
> Fix For: backlog
>
>
> Firstly, my hawq was downloaded from the official website 
> https://network.pivotal.io/products/pivotal-hdb and is a released stable version. 
> My steps:
> DROP TABLE IF EXISTS testorder;
> CREATE TABLE testorder(
>   ss VARCHAR(10)
> ) distributed randomly;
> INSERT INTO testorder 
> VALUES ('cc'), ('c c'), ('cc'), 
> ('aa'), ('a a'), ('ac'), 
> ('b c'), ('bc'), ('bb');
> SELECT ss FROM testorder 
> ORDER BY ss;
> The result:
> aa
> a a
> ac
> bb
> bc
> b c
> cc
> cc
> c c
> It seems that when a column value contains a space character, the sorted result is 
> not correct; a sketch of the likely cause follows.
> I followed the documented steps and successfully integrated with Ambari. 
> All of the hawq configurations are the defaults.
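> A small C sketch (illustrative, assuming a glibc system with en_US.UTF-8 
> installed) showing how locale-aware collation, unlike bytewise comparison, 
> gives the space character very low weight, matching the order above:
> {code}
> #include <locale.h>
> #include <stdio.h>
> #include <string.h>
> 
> int main(void)
> {
>     /* Bytewise (C locale) comparison: ' ' (0x20) < 'b' (0x62), so "b c" < "bb". */
>     printf("strcmp(\"b c\", \"bb\")  = %d\n", strcmp("b c", "bb"));
> 
>     /* Locale-aware comparison: under en_US.UTF-8 the space carries very low 
>      * collation weight, so "b c" typically sorts after "bb" and "bc". */
>     if (setlocale(LC_COLLATE, "en_US.UTF-8") != NULL)
>         printf("strcoll(\"b c\", \"bb\") = %d\n", strcoll("b c", "bb"));
>     return 0;
> }
> {code}
> If the cluster's lc_collate is a natural-language locale, this ordering is the 
> expected collation behavior rather than a sorting bug.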
> Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-274) Add disk check for JBOD temporary directory in segment FTS

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-274:
---
Fix Version/s: 2.0.0

> Add disk check for JBOD temporary directory in segment FTS
> --
>
> Key: HAWQ-274
> URL: https://issues.apache.org/jira/browse/HAWQ-274
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Fault Tolerance, Resource Manager
>Reporter: Lin Wen
>Assignee: Lin Wen
> Fix For: 2.0.0
>
>
> Add a disk check for the JBOD temporary directories in segment FTS.
> Add a column to the catalog table gp_segment_configuration indicating which 
> directory has failed. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-351) Add movefilespace option to 'hawq filespace'

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-351:
---
Fix Version/s: 2.0.0

> Add movefilespace option to 'hawq filespace'
> 
>
> Key: HAWQ-351
> URL: https://issues.apache.org/jira/browse/HAWQ-351
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Command Line Tools
>Reporter: Radar Lei
>Assignee: Radar Lei
> Fix For: 2.0.0
>
>
> Currently hawq filespace can only create a new filespace. We will add a 
> '--movefilespace' option and a '--location' option to support changing existing 
> filespace locations, as sketched below.
> This is important for changing a filespace's hdfs location from non-HA to HA.
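> A hedged sketch of the intended usage (the option names come from this issue; 
> the filespace name and URL are illustrative, and the final syntax may differ):
> {code}
> hawq filespace --movefilespace dfs_system --location=hdfs://hanameservice/hawq/dfs_system
> {code}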



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-98) Moving HAWQ docker file into code base

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-98?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-98:
--
Fix Version/s: 2.1.0

> Moving HAWQ docker file into code base
> --
>
> Key: HAWQ-98
> URL: https://issues.apache.org/jira/browse/HAWQ-98
> Project: Apache HAWQ
>  Issue Type: Task
>Reporter: Goden Yao
>Assignee: Roman Shaposhnik
> Fix For: 2.1.0
>
>
> We have a pre-built docker image (see [HAWQ build & 
> install|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=61320026]) 
> sitting outside the codebase.
> It should be incorporated into the Apache git repository and maintained by the community.
> The proposed location is a new folder under the project root.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

