[GitHub] incubator-hawq issue #999: HAWQ-1140. Parallelize test cases for hawqregiste...

2016-11-03 Thread xunzhang
Github user xunzhang commented on the issue:

https://github.com/apache/incubator-hawq/pull/999
  
Merged.




[GitHub] incubator-hawq pull request #999: HAWQ-1140. Parallelize test cases for hawq...

2016-11-03 Thread xunzhang
Github user xunzhang closed the pull request at:

https://github.com/apache/incubator-hawq/pull/999




[jira] [Created] (HAWQ-1146) docs - some hawq reload commands missing

2016-11-03 Thread Lisa Owen (JIRA)
Lisa Owen created HAWQ-1146:
---

 Summary: docs - some hawq reload commands missing 
 Key: HAWQ-1146
 URL: https://issues.apache.org/jira/browse/HAWQ-1146
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Documentation
Reporter: Lisa Owen
Assignee: David Yozie
 Fix For: 2.0.1.0-incubating


Reloading the HAWQ config without a restart is invoked with the following command:
 - hawq stop <object> [-u | --reload]

Some of the references to this command in the docs are missing the <object>. Validate all references and correct the ones that are inconsistent.





[GitHub] incubator-hawq issue #1002: HAWQ-1130. Draft implementation.

2016-11-03 Thread sansanichfb
Github user sansanichfb commented on the issue:

https://github.com/apache/incubator-hawq/pull/1002
  
@hornn I haven't tested performance yet; will test soon. As far as I 
understand, we invoke this function only once per session, when importing the 
first relation from Hive. Also, before this change we were using SPI, which is 
supposed to be slower (it supports ANSI SQL, checks permissions, etc.), whereas 
now we use CAQL, which I hope is more lightweight. I haven't found any 
unit tests yet; working on adding some.
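
For context, here is a minimal sketch of the CAQL scan pattern this PR moves to, assembled only from the calls that appear in the review diffs later in this thread (caql_beginscan / caql_getnext / caql_endscan); treat the exact signatures as assumptions taken from those diffs rather than documented API:

```
/* Sketch: scan pg_class for relations that have oids via CAQL. */
cqContext *ctx = caql_beginscan(
        NULL,
        cql("SELECT * FROM pg_class WHERE relhasoids = :1",
            BoolGetDatum(true)));

HeapTuple tuple;
while (HeapTupleIsValid(tuple = caql_getnext(ctx)))
{
        Form_pg_class classForm = (Form_pg_class) GETSTRUCT(tuple);
        /* inspect classForm->relname here */
}
caql_endscan(ctx);
```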




[GitHub] incubator-hawq pull request #1002: HAWQ-1130. Draft implementation.

2016-11-03 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1002#discussion_r86456829
  
--- Diff: src/backend/access/transam/varsup.c ---
@@ -474,73 +479,53 @@ ResetExternalObjectId(void)
 
 /*
  * master_highest_used_oid
- * Query the database to find the highest used Oid by
+ * Uses CAQL to find the highest used Oid by
  * 1) Find all the relations that has Oids
  * 2) Find max oid from those relations
  */
 Oid
 master_highest_used_oid(void)
 {
Oid oidMax = InvalidOid;
+   Oid currentOid;
+   Form_pg_class classForm;
+   int fetchCount;
 
-   if (SPI_OK_CONNECT != SPI_connect())
-   {
-   ereport(ERROR, (errcode(ERRCODE_CDB_INTERNAL_ERROR),
-   errmsg("Unable to connect to execute internal 
query for HCatalog.")));
-   }
-
-   int ret = SPI_execute("SELECT relname FROM pg_class where 
relhasoids=true", true, 0);
+   cqContext *pcqOuterCtx = caql_beginscan(
+   NULL,
+   cql("SELECT * FROM pg_class where relhasoids = :1",
+   BoolGetDatum(true)));
 
-   int rows = SPI_processed;
+   HeapTuple tuple = caql_getnext(pcqOuterCtx);
 
-   char *tableNames[rows];
-
-   if (rows == 0 || ret <= 0 || NULL == SPI_tuptable)
+   if (!HeapTupleIsValid(tuple))
{
-   SPI_finish();
+   caql_endscan(pcqOuterCtx);
+   elog(DEBUG1, "Unable to get list of tables having oids");
return oidMax;
}
 
-   TupleDesc tupdesc = SPI_tuptable->tupdesc;
-   SPITupleTable *tuptable = SPI_tuptable;
-
-   for (int i = 0; i < rows; i++)
-   {
-   HeapTuple tuple = tuptable->vals[i];
-   tableNames[i] = SPI_getvalue(tuple, tupdesc, 1);
-   }
-
/* construct query to get max oid from all tables with oids */
-   StringInfoData sqlstr;
-   initStringInfo(&sqlstr);
-   appendStringInfo(&sqlstr, "SELECT max(oid) FROM (");
-   for (int i = 0; i < rows; i++)
+   StringInfo sqlstr = makeStringInfo();
+   while (HeapTupleIsValid(tuple))
{
-   if (i > 0)
-   {
-   appendStringInfo(&sqlstr, " UNION ALL ");
-   }
-   appendStringInfo(&sqlstr, "SELECT max(oid) AS oid FROM %s", tableNames[i]);
-   }
-   appendStringInfo(&sqlstr, ") AS x");
+   classForm = (Form_pg_class) GETSTRUCT(tuple);
+   appendStringInfo(sqlstr, "SELECT oid FROM %s WHERE oid >= :1 
ORDER BY oid", classForm->relname.data);
--- End diff --

I found out that CAQL doesn't support MAX; it supports only ORDER BY, and 
not ORDER BY ... DESC, so we actually need the last row. I will update the 
code to iterate fetchCount times and take the last value.
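
A rough sketch of that workaround, assuming (as the diffs in this thread suggest) that the scan returns rows in ascending oid order, so the last row carries the maximum; the pg_class query here only illustrates the shape of the per-table query:

```
/* Hypothetical sketch: CAQL lacks MAX() and ORDER BY ... DESC, so walk the
 * ascending scan to the end and keep the last oid seen (the maximum). */
cqContext *pcqCtx = caql_beginscan(
        NULL,
        cql("SELECT oid FROM pg_class WHERE oid >= :1 ORDER BY oid",
            ObjectIdGetDatum(InvalidOid)));

Oid lastOid = InvalidOid;
HeapTuple tup;
while (HeapTupleIsValid(tup = caql_getnext(pcqCtx)))
        lastOid = HeapTupleGetOid(tup);   /* last row of ascending scan */
caql_endscan(pcqCtx);
```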




[GitHub] incubator-hawq issue #1002: HAWQ-1130. Draft implementation.

2016-11-03 Thread hornn
Github user hornn commented on the issue:

https://github.com/apache/incubator-hawq/pull/1002
  
Also forgot to mention - I think there are unit tests around this code. You 
might need to revise them, I am not sure.




[GitHub] incubator-hawq issue #1002: HAWQ-1130. Draft implementation.

2016-11-03 Thread hornn
Github user hornn commented on the issue:

https://github.com/apache/incubator-hawq/pull/1002
  
@sansanichfb - did you compare performance before and after the change? 
IIRC this function is used internally every time an hcatalog object is created, 
so it should be fast.
After this change we'll have 1 + [number of tables with oids] queries, 
where before we had 2 queries (one with [number of tables with oids] joins), so 
it is probably good to check that the new approach is not slower. 




[GitHub] incubator-hawq pull request #1002: HAWQ-1130. Draft implementation.

2016-11-03 Thread hornn
Github user hornn commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1002#discussion_r86455407
  
--- Diff: src/backend/access/transam/varsup.c ---
@@ -474,73 +479,53 @@ ResetExternalObjectId(void)
 
 /*
  * master_highest_used_oid
- * Query the database to find the highest used Oid by
+ * Uses CAQL to find the highest used Oid by
  * 1) Find all the relations that has Oids
  * 2) Find max oid from those relations
  */
 Oid
 master_highest_used_oid(void)
 {
Oid oidMax = InvalidOid;
+   Oid currentOid;
+   Form_pg_class classForm;
+   int fetchCount;
 
-   if (SPI_OK_CONNECT != SPI_connect())
-   {
-   ereport(ERROR, (errcode(ERRCODE_CDB_INTERNAL_ERROR),
-   errmsg("Unable to connect to execute internal 
query for HCatalog.")));
-   }
-
-   int ret = SPI_execute("SELECT relname FROM pg_class where 
relhasoids=true", true, 0);
+   cqContext *pcqOuterCtx = caql_beginscan(
+   NULL,
+   cql("SELECT * FROM pg_class where relhasoids = :1",
+   BoolGetDatum(true)));
 
-   int rows = SPI_processed;
+   HeapTuple tuple = caql_getnext(pcqOuterCtx);
 
-   char *tableNames[rows];
-
-   if (rows == 0 || ret <= 0 || NULL == SPI_tuptable)
+   if (!HeapTupleIsValid(tuple))
{
-   SPI_finish();
+   caql_endscan(pcqOuterCtx);
+   elog(DEBUG1, "Unable to get list of tables having oids");
return oidMax;
}
 
-   TupleDesc tupdesc = SPI_tuptable->tupdesc;
-   SPITupleTable *tuptable = SPI_tuptable;
-
-   for (int i = 0; i < rows; i++)
-   {
-   HeapTuple tuple = tuptable->vals[i];
-   tableNames[i] = SPI_getvalue(tuple, tupdesc, 1);
-   }
-
/* construct query to get max oid from all tables with oids */
-   StringInfoData sqlstr;
-   initStringInfo(&sqlstr);
-   appendStringInfo(&sqlstr, "SELECT max(oid) FROM (");
-   for (int i = 0; i < rows; i++)
+   StringInfo sqlstr = makeStringInfo();
+   while (HeapTupleIsValid(tuple))
{
-   if (i > 0)
-   {
-   appendStringInfo(&sqlstr, " UNION ALL ");
-   }
-   appendStringInfo(&sqlstr, "SELECT max(oid) AS oid FROM %s", tableNames[i]);
-   }
-   appendStringInfo(&sqlstr, ") AS x");
+   classForm = (Form_pg_class) GETSTRUCT(tuple);
+   appendStringInfo(sqlstr, "SELECT oid FROM %s WHERE oid >= :1 
ORDER BY oid", classForm->relname.data);
 
-   ret = SPI_execute(sqlstr.data, true, 1);
+   currentOid = caql_getoid_plus(NULL, &fetchCount, NULL, cql(sqlstr->data, oidMax));
--- End diff --

Also, if the query returns no rows, `currentOid` will be `InvalidOid`. It 
is probably better to check specifically for this condition and only update 
`oidMax` if it's a valid result. Something like
```
if (oidMax == InvalidOid)
    oidMax = currentOid;
else if (currentOid != InvalidOid)
    oidMax = currentOid > oidMax ? currentOid : oidMax;
```
What do you think?
I am also not sure the `WHERE oid >= maxOid` condition is needed - logically the loop will 
work fine without it, because we only update `oidMax` when the result is bigger. 
It would be interesting to see whether there is any difference in performance without 
this condition.




[GitHub] incubator-hawq pull request #1002: HAWQ-1130. Draft implementation.

2016-11-03 Thread sansanichfb
Github user sansanichfb commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1002#discussion_r86454619
  
--- Diff: src/backend/access/transam/varsup.c ---
@@ -474,73 +479,53 @@ ResetExternalObjectId(void)
 
 /*
  * master_highest_used_oid
- * Query the database to find the highest used Oid by
+ * Uses CAQL to find the highest used Oid by
  * 1) Find all the relations that has Oids
  * 2) Find max oid from those relations
  */
 Oid
 master_highest_used_oid(void)
 {
Oid oidMax = InvalidOid;
+   Oid currentOid;
+   Form_pg_class classForm;
+   int fetchCount;
 
-   if (SPI_OK_CONNECT != SPI_connect())
-   {
-   ereport(ERROR, (errcode(ERRCODE_CDB_INTERNAL_ERROR),
-   errmsg("Unable to connect to execute internal 
query for HCatalog.")));
-   }
-
-   int ret = SPI_execute("SELECT relname FROM pg_class where 
relhasoids=true", true, 0);
+   cqContext *pcqOuterCtx = caql_beginscan(
+   NULL,
+   cql("SELECT * FROM pg_class where relhasoids = :1",
--- End diff --

Oh, nice, will use just relname, thanks.




[GitHub] incubator-hawq pull request #1002: HAWQ-1130. Draft implementation.

2016-11-03 Thread hornn
Github user hornn commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1002#discussion_r86451395
  
--- Diff: src/backend/access/transam/varsup.c ---
@@ -474,73 +479,53 @@ ResetExternalObjectId(void)
 
 /*
  * master_highest_used_oid
- * Query the database to find the highest used Oid by
+ * Uses CAQL to find the highest used Oid by
  * 1) Find all the relations that has Oids
  * 2) Find max oid from those relations
  */
 Oid
 master_highest_used_oid(void)
 {
Oid oidMax = InvalidOid;
+   Oid currentOid;
+   Form_pg_class classForm;
+   int fetchCount;
 
-   if (SPI_OK_CONNECT != SPI_connect())
-   {
-   ereport(ERROR, (errcode(ERRCODE_CDB_INTERNAL_ERROR),
-   errmsg("Unable to connect to execute internal 
query for HCatalog.")));
-   }
-
-   int ret = SPI_execute("SELECT relname FROM pg_class where 
relhasoids=true", true, 0);
+   cqContext *pcqOuterCtx = caql_beginscan(
+   NULL,
+   cql("SELECT * FROM pg_class where relhasoids = :1",
+   BoolGetDatum(true)));
 
-   int rows = SPI_processed;
+   HeapTuple tuple = caql_getnext(pcqOuterCtx);
 
-   char *tableNames[rows];
-
-   if (rows == 0 || ret <= 0 || NULL == SPI_tuptable)
+   if (!HeapTupleIsValid(tuple))
{
-   SPI_finish();
+   caql_endscan(pcqOuterCtx);
+   elog(DEBUG1, "Unable to get list of tables having oids");
return oidMax;
}
 
-   TupleDesc tupdesc = SPI_tuptable->tupdesc;
-   SPITupleTable *tuptable = SPI_tuptable;
-
-   for (int i = 0; i < rows; i++)
-   {
-   HeapTuple tuple = tuptable->vals[i];
-   tableNames[i] = SPI_getvalue(tuple, tupdesc, 1);
-   }
-
/* construct query to get max oid from all tables with oids */
-   StringInfoData sqlstr;
-   initStringInfo(&sqlstr);
-   appendStringInfo(&sqlstr, "SELECT max(oid) FROM (");
-   for (int i = 0; i < rows; i++)
+   StringInfo sqlstr = makeStringInfo();
+   while (HeapTupleIsValid(tuple))
{
-   if (i > 0)
-   {
-   appendStringInfo(&sqlstr, " UNION ALL ");
-   }
-   appendStringInfo(&sqlstr, "SELECT max(oid) AS oid FROM %s", tableNames[i]);
-   }
-   appendStringInfo(&sqlstr, ") AS x");
+   classForm = (Form_pg_class) GETSTRUCT(tuple);
+   appendStringInfo(sqlstr, "SELECT oid FROM %s WHERE oid >= :1 
ORDER BY oid", classForm->relname.data);
--- End diff --

why not use `MAX(oid)` like it was before? is it because it's not supported 
by caql?
Also, maybe add `LIMIT 1` to the query because we only need the first row.




[GitHub] incubator-hawq pull request #1002: HAWQ-1130. Draft implementation.

2016-11-03 Thread hornn
Github user hornn commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1002#discussion_r86447937
  
--- Diff: src/backend/access/transam/varsup.c ---
@@ -474,73 +479,53 @@ ResetExternalObjectId(void)
 
 /*
  * master_highest_used_oid
- * Query the database to find the highest used Oid by
+ * Uses CAQL to find the highest used Oid by
  * 1) Find all the relations that has Oids
  * 2) Find max oid from those relations
  */
 Oid
 master_highest_used_oid(void)
 {
Oid oidMax = InvalidOid;
+   Oid currentOid;
+   Form_pg_class classForm;
+   int fetchCount;
 
-   if (SPI_OK_CONNECT != SPI_connect())
-   {
-   ereport(ERROR, (errcode(ERRCODE_CDB_INTERNAL_ERROR),
-   errmsg("Unable to connect to execute internal 
query for HCatalog.")));
-   }
-
-   int ret = SPI_execute("SELECT relname FROM pg_class where 
relhasoids=true", true, 0);
+   cqContext *pcqOuterCtx = caql_beginscan(
+   NULL,
+   cql("SELECT * FROM pg_class where relhasoids = :1",
--- End diff --

shouldn't it be `SELECT relname ...` like it was in the previous query?




[GitHub] incubator-hawq pull request #1002: HAWQ-1130. Draft implementation.

2016-11-03 Thread sansanichfb
GitHub user sansanichfb opened a pull request:

https://github.com/apache/incubator-hawq/pull/1002

HAWQ-1130. Draft implementation.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sansanichfb/incubator-hawq HAWQ-1130

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/1002.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1002


commit 45ab3a679e224e1173b9370a08ce088a6b7ae228
Author: Oleksandr Diachenko 
Date:   2016-11-03T18:52:38Z

HAWQ-1130. Draft implementation.






[GitHub] incubator-hawq issue #1001: HAWQ-1136. Disable .psqlrc in minirepro

2016-11-03 Thread hsyuan
Github user hsyuan commented on the issue:

https://github.com/apache/incubator-hawq/pull/1001
  
@paul-guo- @linwen @liming01 Please take a look.




[GitHub] incubator-hawq pull request #1001: HAWQ-1136. Disable .psqlrc in minirepro

2016-11-03 Thread hsyuan
GitHub user hsyuan opened a pull request:

https://github.com/apache/incubator-hawq/pull/1001

HAWQ-1136. Disable .psqlrc in minirepro

.psqlrc can create unexpected output and changes in formatting that don't 
play nice with parse_oids().

```
psql database --pset footer -Atq -h localhost -p 5432 -U gpadmin -f 
/tmp/20161012232709/toolkit.sql

{"relids": "573615536", "funcids": ""}
Time: 2.973 ms
```

Generates an Exception:
```
Traceback (most recent call last):
  File "/usr/local/greenplum-db/./bin/minirepro", line 386, in 
main()
  File "/usr/local/greenplum-db/./bin/minirepro", line 320, in main
mr_query = parse_oids(cursor, json_str)
  File "/usr/local/greenplum-db/./bin/minirepro", line 151, in parse_oids
result.relids = json.loads(json_oids)['relids']
  File "/usr/local/greenplum-db/ext/python/lib/python2.6/json/__init__.py", 
line 307, in loads
return _default_decoder.decode(s)
  File "/usr/local/greenplum-db/ext/python/lib/python2.6/json/decoder.py", 
line 322, in decode
raise ValueError(errmsg("Extra data", s, end, len(s)))
ValueError: Extra data: line 2 column 1 - line 3 column 1 (char 39 - 54)
```

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hsyuan/incubator-hawq HAWQ-1136

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/1001.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1001


commit 4a9eb3f513fef7b1d483de46fb7462b2013864d7
Author: Haisheng Yuan 
Date:   2016-11-03T11:05:14Z

HAWQ-1136. Disable .psqlrc in minirepro

.psqlrc can create unexpected output and changes in formatting that don't 
play nice with parse_oids().

```
psql database --pset footer -Atq -h localhost -p 5432 -U gpadmin -f 
/tmp/20161012232709/toolkit.sql

{"relids": "573615536", "funcids": ""}
Time: 2.973 ms
```

Generates an Exception:
```
Traceback (most recent call last):
  File "/usr/local/greenplum-db/./bin/minirepro", line 386, in 
main()
  File "/usr/local/greenplum-db/./bin/minirepro", line 320, in main
mr_query = parse_oids(cursor, json_str)
  File "/usr/local/greenplum-db/./bin/minirepro", line 151, in parse_oids
result.relids = json.loads(json_oids)['relids']
  File "/usr/local/greenplum-db/ext/python/lib/python2.6/json/__init__.py", 
line 307, in loads
return _default_decoder.decode(s)
  File "/usr/local/greenplum-db/ext/python/lib/python2.6/json/decoder.py", 
line 322, in decode
raise ValueError(errmsg("Extra data", s, end, len(s)))
ValueError: Extra data: line 2 column 1 - line 3 column 1 (char 39 - 54)
```






[jira] [Updated] (HAWQ-1145) After registering a partition table, if we want to insert some data into the table, it fails.

2016-11-03 Thread hongwu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hongwu updated HAWQ-1145:
-
Affects Version/s: 2.0.1.0-incubating

> After registering a partition table, if we want to insert some data into the 
> table, it fails.
> -
>
> Key: HAWQ-1145
> URL: https://issues.apache.org/jira/browse/HAWQ-1145
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Affects Versions: 2.0.1.0-incubating
>Reporter: Lili Ma
>Assignee: Hubert Zhang
> Fix For: 2.0.1.0-incubating
>
>
> Reproduce Steps:
> 1. Create a partition table
> {code}
> CREATE TABLE parquet_LINEITEM_uncompressed(
>  L_ORDERKEY INT8,
>  L_PARTKEY BIGINT,
>  L_SUPPKEY BIGINT,
>  L_LINENUMBER BIGINT,
>  L_QUANTITY decimal,
>  L_EXTENDEDPRICE decimal,
>  L_DISCOUNT decimal,
>  L_TAX decimal,
>  L_RETURNFLAG CHAR(1),
>  L_LINESTATUS CHAR(1),
>  L_SHIPDATE date,
>  L_COMMITDATE date,

[jira] [Updated] (HAWQ-1140) Parallelize test cases for hawqregister

2016-11-03 Thread hongwu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hongwu updated HAWQ-1140:
-
Affects Version/s: 2.0.1.0-incubating

> Parallelize test cases for hawqregister
> ---
>
> Key: HAWQ-1140
> URL: https://issues.apache.org/jira/browse/HAWQ-1140
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Command Line Tools
>Affects Versions: 2.0.1.0-incubating
>Reporter: hongwu
>Assignee: hongwu
> Fix For: 2.0.1.0-incubating
>
>
> Refactor test cases to make hawqregister tests run parallel.





[jira] [Updated] (HAWQ-1145) After registering a partition table, if we want to insert some data into the table, it fails.

2016-11-03 Thread Lili Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lili Ma updated HAWQ-1145:
--
Description: 
Reproduce Steps:
1. Create a partition table
{code}
CREATE TABLE parquet_LINEITEM_uncompressed(
 L_ORDERKEY INT8,
 L_PARTKEY BIGINT,
 L_SUPPKEY BIGINT,
 L_LINENUMBER BIGINT,
 L_QUANTITY decimal,
 L_EXTENDEDPRICE decimal,
 L_DISCOUNT decimal,
 L_TAX decimal,
 L_RETURNFLAG CHAR(1),
 L_LINESTATUS CHAR(1),
 L_SHIPDATE date,
 L_COMMITDATE date,
 L_RECEIPTDATE date,
 L_SHIPINSTRUCT CHAR(25),

[jira] [Updated] (HAWQ-1145) After registering a partition table, if we want to insert some data into the table, it fails.

2016-11-03 Thread Lili Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lili Ma updated HAWQ-1145:
--
Assignee: Hubert Zhang  (was: Lei Chang)

> After registering a partition table, if we want to insert some data into the 
> table, it fails.
> -
>
> Key: HAWQ-1145
> URL: https://issues.apache.org/jira/browse/HAWQ-1145
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Lili Ma
>Assignee: Hubert Zhang
> Fix For: 2.0.1.0-incubating
>
>
> Reproduce Steps:
> 1. Create a partition table
> CREATE TABLE parquet_LINEITEM_uncompressed(
>  L_ORDERKEY INT8,
>  L_PARTKEY BIGINT,
>  L_SUPPKEY BIGINT,
>  L_LINENUMBER BIGINT,
>  L_QUANTITY decimal,
>  L_EXTENDEDPRICE decimal,
>  L_DISCOUNT decimal,
>  L_TAX decimal,
>  L_RETURNFLAG CHAR(1),
>  L_LINESTATUS CHAR(1),
>  L_SHIPDATE date,
>  L_COMMITDATE date,

[jira] [Updated] (HAWQ-1144) Register into a 2-level partition table, hawq register didn't throw error, and indicates that hawq register succeed, but no data can be selected out.

2016-11-03 Thread Lili Ma (JIRA)
s('F')   WITH 
(appendonly=true, orientation=parquet,compresstype=gzip,compresslevel=2)) 
(start(1)  end(5000) every(1000) );
{code}
5. call register
{code}
 hawq register -d postgres -c ~/parquet.yaml parquet_wt_subpartgzip2
{code}
6. It reports that the register succeeded.
{code}
malilis-MacBook-Pro:tpch malili$ hawq register -d postgres -c ~/parquet.yaml 
parquet_wt_subpartgzip2
20161103:15:58:10:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-try to 
connect database localhost:5432 postgres
20161103:15:58:10:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check...
20161103:15:58:11:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check done...
20161103:15:58:11:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check...
20161103:15:58:13:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check done...
20161103:15:58:13:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check...
20161103:15:58:14:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check done...
20161103:15:58:14:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check...
20161103:15:58:16:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check done...
20161103:15:58:16:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check...
20161103:15:58:17:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check done...
20161103:15:58:20:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-New 
file(s) to be registered: 
['hdfs://localhost:8020/hawq_default/16385/16387/17065/1']
20161103:15:58:29:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-New 
file(s) to be registered: 
['hdfs://localhost:8020/hawq_default/16385/16387/17074/1']
20161103:15:58:35:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-New 
file(s) to be registered: 
['hdfs://localhost:8020/hawq_default/16385/16387/17083/1']
20161103:15:58:41:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-New 
file(s) to be registered: 
['hdfs://localhost:8020/hawq_default/16385/16387/17092/1']
20161103:15:58:47:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-New 
file(s) to be registered: 
['hdfs://localhost:8020/hawq_default/16385/16387/17101/1']
hdfscmd: "hadoop fs -mv hdfs://localhost:8020/hawq_default/16385/16387/17065/1 
hdfs://localhost:8020/hawq_default/16385/16387/16784/1"
hdfscmd: "hadoop fs -mv hdfs://localhost:8020/hawq_default/16385/16387/17074/1 
hdfs://localhost:8020/hawq_default/16385/16387/16822/1"
hdfscmd: "hadoop fs -mv hdfs://localhost:8020/hawq_default/16385/16387/17083/1 
hdfs://localhost:8020/hawq_default/16385/16387/16860/1"
hdfscmd: "hadoop fs -mv hdfs://localhost:8020/hawq_default/16385/16387/17092/1 
hdfs://localhost:8020/hawq_default/16385/16387/16898/1"
hdfscmd: "hadoop fs -mv hdfs://localhost:8020/hawq_default/16385/16387/17101/1 
hdfs://localhost:8020/hawq_default/16385/16387/16936/1"
20161103:15:58:58:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Hawq 
Register Succeed.
{code}
7. But when we select from the table, no data is returned.
{code}
postgres=# select count(*) from parquet_wt_subpartgzip2;
 count
---
 0
(1 row)
{code}
Actually, we should throw an error when hawq register is asked to register into 
a partition table with two or more levels.

  was:
Register into a 2-level partition table, hawq register didn't throw error, and 
indicates that hawq register succeed, but no data can be selected out.

Reproduce Steps:
1. Create a one-level partition table
{code}
 create table parquet_wt (id SERIAL,a1 int,a2 char(5),a3 numeric,a4 boolean 
DEFAULT false ,a5 char DEFAULT 'd',a6 text,a7 timestamp,a8 character 
varying(705),a9 bigint,a10 date,a11 varchar(600),a12 text,a13 decimal,a14 
real,a15 bigint,a16 int4 ,a17 bytea,a18 timestamp with time zone,a19 timetz,a20 
path,a21 box,a22 macaddr,a23 interval,a24 character varying(800),a25 lseg,a26 
point,a27 double precision,a28 circle,a29 int4,a30 numeric(8),a31 polygon,a32 
date,a33 real,a34 money,a35 cidr,a36 inet,a37 time,a38 text,a39 bit,a40 bit 
varying(5),a41 smallint,a42 int )   WITH (appendonly=true, orientation=parquet) 
distributed randomly  Partition by range(a1) (start(1)  end(5000) every(1000) );
{code}
2. insert some data into this table
```
insert into parquet_wt 
(a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13,a14,a15,a16,a17,a18,a19,a20,a21,a22,a23,a24,a25,a26,a27,a28,a29,a30,a31,a32,a33,a34,a35,a36,a37,a38,a39,a40,a41,a42)
 values(generate_series(1,20),'M',2011,'t','a','This is news of today: Deadlock 
between Republicans and Democrats over how best to reduce the U.S. deficit, and 
over what period, has blocked an agreement to allow the raising of the $14.3 
trillion debt ceiling','2001-12-24 02:26:11','U.S. House of Representatives 
Speaker John Boehner, the top Republican in Congress who has put forward a 
deficit reduction plan to be voted on later on Thursday said he had no control 
over whether his bill would avert a cred

[jira] [Created] (HAWQ-1145) After registering a partition table, if we want to insert some data into the table, it fails.

2016-11-03 Thread Lili Ma (JIRA)
Lili Ma created HAWQ-1145:
-

 Summary: After registering a partition table, if we want to insert 
some data into the table, it fails.
 Key: HAWQ-1145
 URL: https://issues.apache.org/jira/browse/HAWQ-1145
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Command Line Tools
Reporter: Lili Ma
Assignee: Lei Chang
 Fix For: 2.0.1.0-incubating


Reproduce Steps:
1. Create a partition table
CREATE TABLE parquet_LINEITEM_uncompressed(
 L_ORDERKEY INT8,
 L_PARTKEY BIGINT,
 L_SUPPKEY BIGINT,
 L_LINENUMBER BIGINT,
 L_QUANTITY decimal,
 L_EXTENDEDPRICE decimal,
 L_DISCOUNT decimal,
 L_TAX decimal,
 L_RETURNFLAG CHAR(1),
 L_LINESTATUS CHAR(1),
 L_SHIPDATE date,
 L_COMMITDATE date,
 L_RECEIPTDATE date,

[jira] [Updated] (HAWQ-1144) Register into a 2-level partition table, hawq register didn't throw error, and indicates that hawq register succeed, but no data can be selected out.

2016-11-03 Thread Lili Ma (JIRA)
s('F')   WITH 
(appendonly=true, orientation=parquet,compresstype=gzip,compresslevel=2)) 
(start(1)  end(5000) every(1000) );
```
5. call register
```
 hawq register -d postgres -c ~/parquet.yaml parquet_wt_subpartgzip2
```
6. It reports that the register succeeded.
```
malilis-MacBook-Pro:tpch malili$ hawq register -d postgres -c ~/parquet.yaml 
parquet_wt_subpartgzip2
20161103:15:58:10:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-try to 
connect database localhost:5432 postgres
20161103:15:58:10:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check...
20161103:15:58:11:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check done...
20161103:15:58:11:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check...
20161103:15:58:13:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check done...
20161103:15:58:13:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check...
20161103:15:58:14:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check done...
20161103:15:58:14:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check...
20161103:15:58:16:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check done...
20161103:15:58:16:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check...
20161103:15:58:17:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check done...
20161103:15:58:20:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-New 
file(s) to be registered: 
['hdfs://localhost:8020/hawq_default/16385/16387/17065/1']
20161103:15:58:29:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-New 
file(s) to be registered: 
['hdfs://localhost:8020/hawq_default/16385/16387/17074/1']
20161103:15:58:35:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-New 
file(s) to be registered: 
['hdfs://localhost:8020/hawq_default/16385/16387/17083/1']
20161103:15:58:41:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-New 
file(s) to be registered: 
['hdfs://localhost:8020/hawq_default/16385/16387/17092/1']
20161103:15:58:47:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-New 
file(s) to be registered: 
['hdfs://localhost:8020/hawq_default/16385/16387/17101/1']
hdfscmd: "hadoop fs -mv hdfs://localhost:8020/hawq_default/16385/16387/17065/1 
hdfs://localhost:8020/hawq_default/16385/16387/16784/1"
hdfscmd: "hadoop fs -mv hdfs://localhost:8020/hawq_default/16385/16387/17074/1 
hdfs://localhost:8020/hawq_default/16385/16387/16822/1"
hdfscmd: "hadoop fs -mv hdfs://localhost:8020/hawq_default/16385/16387/17083/1 
hdfs://localhost:8020/hawq_default/16385/16387/16860/1"
hdfscmd: "hadoop fs -mv hdfs://localhost:8020/hawq_default/16385/16387/17092/1 
hdfs://localhost:8020/hawq_default/16385/16387/16898/1"
hdfscmd: "hadoop fs -mv hdfs://localhost:8020/hawq_default/16385/16387/17101/1 
hdfs://localhost:8020/hawq_default/16385/16387/16936/1"
20161103:15:58:58:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Hawq 
Register Succeed.
```
7. But when we select from the table, no data is returned.
```
postgres=# select count(*) from parquet_wt_subpartgzip2;
 count
---
 0
(1 row)
```
Actually, we should throw an error when hawq register is asked to register into 
a partition table with two or more levels.

  was:
Register into a 2-level partition table, hawq register didn't throw error, and 
indicates that hawq register succeed, but no data can be selected out.

Reproduce Steps:
1. Create a one-level partition table
```
 create table parquet_wt (id SERIAL,a1 int,a2 char(5),a3 numeric,a4 boolean 
DEFAULT false ,a5 char DEFAULT 'd',a6 text,a7 timestamp,a8 character 
varying(705),a9 bigint,a10 date,a11 varchar(600),a12 text,a13 decimal,a14 
real,a15 bigint,a16 int4 ,a17 bytea,a18 timestamp with time zone,a19 timetz,a20 
path,a21 box,a22 macaddr,a23 interval,a24 character varying(800),a25 lseg,a26 
point,a27 double precision,a28 circle,a29 int4,a30 numeric(8),a31 polygon,a32 
date,a33 real,a34 money,a35 cidr,a36 inet,a37 time,a38 text,a39 bit,a40 bit 
varying(5),a41 smallint,a42 int )   WITH (appendonly=true, orientation=parquet) 
distributed randomly  Partition by range(a1) (start(1)  end(5000) every(1000) );
```
2. insert some data into this table
```
insert into parquet_wt 
(a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13,a14,a15,a16,a17,a18,a19,a20,a21,a22,a23,a24,a25,a26,a27,a28,a29,a30,a31,a32,a33,a34,a35,a36,a37,a38,a39,a40,a41,a42)
 values(generate_series(1,20),'M',2011,'t','a','This is news of today: Deadlock 
between Republicans and Democrats over how best to reduce the U.S. deficit, and 
over what period, has blocked an agreement to allow the raising of the $14.3 
trillion debt ceiling','2001-12-24 02:26:11','U.S. House of Representatives 
Speaker John Boehner, the top Republican in Congress who has put forward a 
deficit reduction plan to be voted on later on Thursday said he had no control 
over whether his bill would avert a cred

[jira] [Created] (HAWQ-1144) Register into a 2-level partition table, hawq register didn't throw error, and indicates that hawq register succeed, but no data can be selected out.

2016-11-03 Thread Lili Ma (JIRA)
 int ) 
WITH (appendonly=true, orientation=parquet) distributed 
randomly  Partition by range(a1) Subpartition by list(a2) subpartition template 
( default subpartition df_sp, subpartition sp1 values('M') , subpartition sp2 
values('F')   WITH 
(appendonly=true, orientation=parquet,compresstype=gzip,compresslevel=2)) 
(start(1)  end(5000) every(1000) );
```
5. call register
```
 hawq register -d postgres -c ~/parquet.yaml parquet_wt_subpartgzip2
```
6. It reports that the register succeeded.
```
malilis-MacBook-Pro:tpch malili$ hawq register -d postgres -c ~/parquet.yaml 
parquet_wt_subpartgzip2
20161103:15:58:10:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-try to 
connect database localhost:5432 postgres
20161103:15:58:10:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check...
20161103:15:58:11:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check done...
20161103:15:58:11:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check...
20161103:15:58:13:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check done...
20161103:15:58:13:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check...
20161103:15:58:14:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check done...
20161103:15:58:14:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check...
20161103:15:58:16:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check done...
20161103:15:58:16:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check...
20161103:15:58:17:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Files 
check done...
20161103:15:58:20:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-New 
file(s) to be registered: 
['hdfs://localhost:8020/hawq_default/16385/16387/17065/1']
20161103:15:58:29:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-New 
file(s) to be registered: 
['hdfs://localhost:8020/hawq_default/16385/16387/17074/1']
20161103:15:58:35:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-New 
file(s) to be registered: 
['hdfs://localhost:8020/hawq_default/16385/16387/17083/1']
20161103:15:58:41:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-New 
file(s) to be registered: 
['hdfs://localhost:8020/hawq_default/16385/16387/17092/1']
20161103:15:58:47:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-New 
file(s) to be registered: 
['hdfs://localhost:8020/hawq_default/16385/16387/17101/1']
hdfscmd: "hadoop fs -mv hdfs://localhost:8020/hawq_default/16385/16387/17065/1 
hdfs://localhost:8020/hawq_default/16385/16387/16784/1"
hdfscmd: "hadoop fs -mv hdfs://localhost:8020/hawq_default/16385/16387/17074/1 
hdfs://localhost:8020/hawq_default/16385/16387/16822/1"
hdfscmd: "hadoop fs -mv hdfs://localhost:8020/hawq_default/16385/16387/17083/1 
hdfs://localhost:8020/hawq_default/16385/16387/16860/1"
hdfscmd: "hadoop fs -mv hdfs://localhost:8020/hawq_default/16385/16387/17092/1 
hdfs://localhost:8020/hawq_default/16385/16387/16898/1"
hdfscmd: "hadoop fs -mv hdfs://localhost:8020/hawq_default/16385/16387/17101/1 
hdfs://localhost:8020/hawq_default/16385/16387/16936/1"
20161103:15:58:58:083605 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Hawq 
Register Succeed.
```
7. But when we select from the table, no data is returned.
```
postgres=# select count(*) from parquet_wt_subpartgzip2;
 count
---
 0
(1 row)
```
Actually, we should throw an error when hawq register is asked to register into 
a partition table with two or more levels.





[jira] [Closed] (HAWQ-1117) RM crash when init db after configure with param '--enable-cassert'

2016-11-03 Thread Devin Jia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devin Jia closed HAWQ-1117.
---

> RM crash when init db after configure with param '--enable-cassert'
> ---
>
> Key: HAWQ-1117
> URL: https://issues.apache.org/jira/browse/HAWQ-1117
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Devin Jia
>Assignee: Xiang Sheng
> Fix For: 2.0.1.0-incubating
>
>
> After I upgraded HAWQ to 2.0.1 and built it, the HAWQ cluster can't start.
> 1.configure and build:
> {quote}
> ./configure --prefix=/opt/hawq-build --enable-depend --enable-cassert 
> --enable-debug
> make && make install
> {quote}
> 2. start error:
> {quote}
> [gpadmin@hmaster pg_log]$ more 
> /home/gpadmin/hawq-data-directory/masterdd/pg_log/hawq-2016-10-20_133056.csv 
> 2016-10-20 13:30:56.549712 
> CST,"gpadmin","template1",p3279,th-266811104,"[local]",,2016-10-20 13:30:56 
> CST,0,,,seg-1,"FATAL","57P03","the database system is in recovery 
> mode",,,
> 0,,"postmaster.c",2656,
> 2016-10-20 13:30:56.556630 
> CST,,,p3280,th-2668111040,,,seg-1,"LOG","0","database system 
> was interrupted at 2016-10-20 13:22:51 CST",,,0,,"xlog.c",6229,
> 2016-10-20 13:30:56.558414 
> CST,,,p3280,th-2668111040,,,seg-1,"LOG","0","checkpoint 
> record is at 0/857ED8",,,0,,"xlog.c",6306,
> 2016-10-20 13:30:56.558464 
> CST,,,p3280,th-2668111040,,,seg-1,"LOG","0","redo record is 
> at 0/857ED8; undo record is at 0/0; shutdown TRUE",,,0,,"xlog.c",6340,
> 2016-10-20 13:30:56.558495 
> CST,,,p3280,th-2668111040,,,seg-1,"LOG","0","next transaction 
> ID: 0/963; next OID: 10896",,,0,,"xlog.c",6344,
> 2016-10-20 13:30:56.558522 
> CST,,,p3280,th-2668111040,,,seg-1,"LOG","0","next 
> MultiXactId: 1; next MultiXactOffset: 0",,,0,,"xlog.c",6347,
> 2016-10-20 13:30:56.558559 
> CST,,,p3280,th-2668111040,,,seg-1,"LOG","0","database system 
> was not properly shut down; automatic recovery in 
> progress",,,0,,"xlog.c",6436,
> 2016-10-20 13:30:56.563303 
> CST,,,p3280,th-2668111040,,,seg-1,"LOG","0","record with zero 
> length at 0/857F28",,,0,,"xlog.c",4110,
> 2016-10-20 13:30:56.563348 
> CST,,,p3280,th-2668111040,,,seg-1,"LOG","0","no record for 
> redo after checkpoint, skip redo and proceed for recovery 
> pass",,,0,,"xlog.c",6500,
> 2016-10-20 13:30:56.563411 
> CST,,,p3280,th-2668111040,,,seg-1,"LOG","0","end of 
> transaction log location is 0/857F28",,,0,,"xlog.c",6584,
> 2016-10-20 13:30:56.568795 
> CST,,,p3280,th-2668111040,,,seg-1,"LOG","0","Finished startup 
> pass 1.  Proceeding to startup crash recovery passes 2 and 
> 3.",,,0,,"xlog.c",681
> 8,
> 2016-10-20 13:30:56.580641 
> CST,,,p3281,th-2668111040,,,seg-1,"LOG","0","Finished startup 
> crash recovery pass 2",,,0,,"xlog.c",6989,
> 2016-10-20 13:30:56.595325 
> CST,,,p3282,th-2668111040,,,seg-1,"LOG","0","recovery restart 
> point at 0/857ED8","xlog redo checkpoint: redo 0/857ED8; undo 0/0; tli 1; 
> xid 0/
> 963; oid 10896; multi 1; offset 0; shutdown
> REDO PASS 3 @ 0/857ED8; LSN 0/857F28: prev 0/857E88; xid 0: XLOG - 
> checkpoint: redo 0/857ED8; undo 0/0; tli 1; xid 0/963; oid 10896; multi 1; 
> offset 0; shutdown",,0,,"xlog.c",8331,
> 2016-10-20 13:30:56.595390 
> CST,,,p3282,th-2668111040,,,seg-1,"LOG","0","record with zero 
> length at 0/857F28",,,0,,"xlog.c",4110,
> 2016-10-20 13:30:56.595477 
> CST,,,p3282,th-2668111040,,,seg-1,"LOG","0","Oldest active 
> transaction from prepared transactions 963",,,0,,"xlog.c",5998,
> 2016-10-20 13:30:56.603266 
> CST,,,p3282,th-2668111040,,,seg-1,"LOG","0","database system 
> is ready",,,0,,"xlog.c",6024,
> 2016-10-20 13:30:56.603314 
> CST,,,p3282,th-2668111040,,,seg-1,"LOG","0","PostgreSQL 
> 8.2.15 (Greenplum Database 4.2.0 build 1) (HAWQ 2.0.1.0 build dev) on 
> x86_64-unknown-linux
> -gnu, compiled by GCC gcc (GCC) 4.8.2 20140120 (Red Hat 4.8.2-15) compiled on 
> Oct 20 2016 12:27:04 (with assert checking)",,,0,,"xlog.c",6034,
> 2016-10-20 13:30:56.607520 
> CST,,,p3282,th-2668111040,,,seg-1,"LOG","0","Finished startup 
> crash recovery pass 3",,,0,,"xlog.c",7133,
> 2016-10-20 13:30:56.632316 
> CST,,,p3283,th-2668111040,,,seg-1,"LOG","0","Finished startup 
> integrity checking",,,0,,"xlog.c",7161,
> 2016-10-20 13:30:56.645485 
> CST,,,p3290,th-2668111040,con4,,seg-1,"LOG","0","Resource 
> manager starts accepting resource request. Listening normal socket port 5437. 
> Total list
> ened 1 

[GitHub] incubator-hawq issue #1000: HAWQ-1143. Libhdfs create semantic is not consis...

2016-11-03 Thread linwen
Github user linwen commented on the issue:

https://github.com/apache/incubator-hawq/pull/1000
  
+1 




[GitHub] incubator-hawq pull request #1000: HAWQ-1143. Libhdfs create semantic is not...

2016-11-03 Thread zhangh43
GitHub user zhangh43 opened a pull request:

https://github.com/apache/incubator-hawq/pull/1000

HAWQ-1143. Libhdfs create semantic is not consistent with posix standard.

Under the POSIX standard, opening a file with the O_CREAT flag set has no 
side effect when the file already exists, unless O_EXCL is also set.
Opening a file in HDFS with the hdfs::create flag reports an error if the file 
exists.
In libhdfs, the O_CREAT flag is translated to hdfs::create, which leads to an 
error whenever the file exists, whether or not O_EXCL is set.
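
For reference, a small self-contained C program illustrating the POSIX rule described above; the path /tmp/demo is an arbitrary example:

```
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* O_CREAT alone: succeeds whether or not the file already exists. */
    int fd1 = open("/tmp/demo", O_WRONLY | O_CREAT, 0644);

    /* O_CREAT | O_EXCL: fails with EEXIST once the file exists. */
    int fd2 = open("/tmp/demo", O_WRONLY | O_CREAT | O_EXCL, 0644);
    if (fd2 < 0)
        perror("open with O_EXCL");   /* prints "File exists" */

    if (fd1 >= 0)
        close(fd1);
    return 0;
}
```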

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhangh43/incubator-hawq hawq1143

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/1000.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1000


commit 68867d781e138b0504145ef65c73b0d996fe5216
Author: hzhang2 
Date:   2016-11-03T08:21:24Z

HAWQ-1143. Libhdfs create semantic is not consistent with posix standard.






[GitHub] incubator-hawq issue #999: HAWQ-1140. Parallelize test cases for hawqregiste...

2016-11-03 Thread amyrazz44
Github user amyrazz44 commented on the issue:

https://github.com/apache/incubator-hawq/pull/999
  
LGTM +1




[jira] [Created] (HAWQ-1143) Libhdfs create semantic is not consistent with posix standard.

2016-11-03 Thread Hubert Zhang (JIRA)
Hubert Zhang created HAWQ-1143:
--

 Summary: Libhdfs create semantic is not consistent with posix 
standard.
 Key: HAWQ-1143
 URL: https://issues.apache.org/jira/browse/HAWQ-1143
 Project: Apache HAWQ
  Issue Type: Bug
  Components: libhdfs
Reporter: Hubert Zhang
Assignee: Lei Chang


Under the POSIX standard, opening a file with the O_CREAT flag set has no side 
effect when the file already exists, unless O_EXCL is also set.
Opening a file in HDFS with the hdfs::create flag reports an error if the file exists.

In libhdfs, the O_CREAT flag is translated to hdfs::create, which leads to an 
error whenever the file exists, whether or not O_EXCL is set.





[jira] [Assigned] (HAWQ-1143) Libhdfs create semantic is not consistent with posix standard.

2016-11-03 Thread Hubert Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hubert Zhang reassigned HAWQ-1143:
--

Assignee: Hubert Zhang  (was: Lei Chang)

> Libhdfs create semantic is not consistent with posix standard.
> --
>
> Key: HAWQ-1143
> URL: https://issues.apache.org/jira/browse/HAWQ-1143
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: libhdfs
>Reporter: Hubert Zhang
>Assignee: Hubert Zhang
>
> Under the POSIX standard, opening a file with the O_CREAT flag set has no side 
> effect when the file already exists, unless O_EXCL is also set.
> Opening a file in HDFS with the hdfs::create flag reports an error if the file exists.
> In libhdfs, the O_CREAT flag is translated to hdfs::create, which leads to an 
> error whenever the file exists, whether or not O_EXCL is set.





[GitHub] incubator-hawq issue #999: HAWQ-1140. Parallelize test cases for hawqregiste...

2016-11-03 Thread xunzhang
Github user xunzhang commented on the issue:

https://github.com/apache/incubator-hawq/pull/999
  
cc @ictmalili @amyrazz44 

