[jira] [Resolved] (HAWQ-1184) Fix risky "-Wshift-negative-value, -Wparentheses-equality, -Wtautological-compare" types of compile warnings under osx

2016-12-01 Thread hongwu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hongwu resolved HAWQ-1184.
--
   Resolution: Fixed
Fix Version/s: 2.0.1.0-incubating

> Fix risky "-Wshift-negative-value, -Wparentheses-equality, 
> -Wtautological-compare" types of compile warnings under osx
> --
>
> Key: HAWQ-1184
> URL: https://issues.apache.org/jira/browse/HAWQ-1184
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Build
>Reporter: hongwu
>Assignee: hongwu
> Fix For: 2.0.1.0-incubating
>
>
> http://pastebin.com/DwGqcxr8



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] incubator-hawq issue #1032: HAWQ-1184. Fix risky "-Wshift-negative-value, -W...

2016-12-01 Thread xunzhang
Github user xunzhang commented on the issue:

https://github.com/apache/incubator-hawq/pull/1032
  
Merged into master.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq pull request #1032: HAWQ-1184. Fix risky "-Wshift-negative-va...

2016-12-01 Thread xunzhang
Github user xunzhang closed the pull request at:

https://github.com/apache/incubator-hawq/pull/1032




[GitHub] incubator-hawq issue #1032: HAWQ-1184. Fix risky "-Wshift-negative-value, -W...

2016-12-01 Thread paul-guo-
Github user paul-guo- commented on the issue:

https://github.com/apache/incubator-hawq/pull/1032
  
+1




[GitHub] incubator-hawq pull request #1032: HAWQ-1184. Fix risky "-Wshift-negative-va...

2016-12-01 Thread xunzhang
Github user xunzhang commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1032#discussion_r90584055
  
--- Diff: src/backend/catalog/pg_filesystem.c ---
@@ -384,7 +384,7 @@ FileSystemGetNameByOid(Oid  fsysOid)
 
 char *fsys_func_type_to_name(FileSystemFuncType ftype)
 {
-   if(ftype < 0 || ftype >= FSYS_FUNC_TOTALNUM)
+   if (!(ftype >= FSYS_FUNC_CONNECT && ftype < FSYS_FUNC_TOTALNUM))
--- End diff --

@paul-guo- done




[jira] [Created] (HAWQ-1185) Support multiple parameters setting in one hawq config command.

2016-12-01 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1185:
--

 Summary: Support multiple parameters setting in one hawq config 
command.
 Key: HAWQ-1185
 URL: https://issues.apache.org/jira/browse/HAWQ-1185
 Project: Apache HAWQ
  Issue Type: Bug
Reporter: Paul Guo
Assignee: Lei Chang


Currently we support setting only one parameter per "hawq config" CLI invocation.





[GitHub] incubator-hawq pull request #1032: HAWQ-1184. Fix risky "-Wshift-negative-va...

2016-12-01 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1032#discussion_r90579306
  
--- Diff: src/backend/catalog/pg_filesystem.c ---
@@ -384,7 +384,7 @@ FileSystemGetNameByOid(Oid  fsysOid)
 
 char *fsys_func_type_to_name(FileSystemFuncType ftype)
 {
-   if(ftype < 0 || ftype >= FSYS_FUNC_TOTALNUM)
+   if (!(ftype >= FSYS_FUNC_CONNECT && ftype < FSYS_FUNC_TOTALNUM))
--- End diff --

It does not seem to need the conversion with !. The condition below works 
whether the enum's underlying type is signed or unsigned (the compiler may 
choose either):
ftype < FSYS_FUNC_CONNECT || ftype >= FSYS_FUNC_TOTALNUM
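
The hazard both forms guard against can be shown in a standalone sketch. The 
enum below is a hypothetical stand-in for HAWQ's FileSystemFuncType (the real 
member list lives in the HAWQ headers; FSYS_FUNC_CONNECT being the first value 
is an assumption):

```c
/* Hypothetical stand-in for HAWQ's FileSystemFuncType enum. */
typedef enum {
    FSYS_FUNC_CONNECT,   /* assumed first valid entry (value 0) */
    FSYS_FUNC_OPEN,
    FSYS_FUNC_CLOSE,
    FSYS_FUNC_TOTALNUM   /* one past the last valid entry */
} FileSystemFuncType;

/* Range check written without testing "ftype < 0": C compilers may pick
 * an unsigned underlying type for an enum, in which case "ftype < 0" is
 * tautologically false, which is what -Wtautological-compare reports. */
int fsys_func_type_is_valid(FileSystemFuncType ftype)
{
    return ftype >= FSYS_FUNC_CONNECT && ftype < FSYS_FUNC_TOTALNUM;
}
```
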




[jira] [Closed] (HAWQ-1182) Add Macro for unused argument and variable.

2016-12-01 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo closed HAWQ-1182.
--
Resolution: Fixed

> Add Macro for unused argument and variable.
> ---
>
> Key: HAWQ-1182
> URL: https://issues.apache.org/jira/browse/HAWQ-1182
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.0.1.0-incubating
>
>
> Per discussion on the dev mailing list, I want to add the macros below to 
> help eliminate some "unused" variable/argument warnings.
> Typical cases:
> 1) A variable is only used in some configurations, e.g. val in Assert(val). 
> Then you could add the line below to eliminate the warning when cassert is 
> not enabled.
>POSSIBLE_UNUSED_VAR(val);
> 2) For an argument that is explicitly unused but might be kept for 
> compatibility, you could use UNUSED_ARG().
> A simple patch, see below:
> [pguo@host67:/data2/github/incubator-hawq-a/src/include]$ git diff
> diff --git a/src/include/postgres.h b/src/include/postgres.h
> index 1138f20..9391d6b 100644
> --- a/src/include/postgres.h
> +++ b/src/include/postgres.h
> @@ -513,6 +513,18 @@ extern void gp_set_thread_sigmasks(void);
>  extern void OnMoveOutCGroupForQE(void);
> +#ifndef POSSIBLE_UNUSED_VAR
> +#define POSSIBLE_UNUSED_VAR(x) ((void)x)
> +#endif
> +
> +#ifndef POSSIBLE_UNUSED_ARG
> +#define POSSIBLE_UNUSED_ARG(x) ((void)x)
> +#endif
> +
> +#ifndef UNUSED_ARG
> +#define UNUSED_ARG(x)  ((void)x)
> +#endif
> +
>  #ifdef __cplusplus
>  }   /* extern "C" */
>  #endif
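
A minimal sketch of how the proposed macros are meant to be used; the function 
and variable names here are illustrative, not from the HAWQ tree:

```c
#include <assert.h>

/* Same definitions as in the patch above. */
#define POSSIBLE_UNUSED_VAR(x) ((void)x)
#define UNUSED_ARG(x)          ((void)x)

/* 'flags' is kept only for interface compatibility and is ignored. */
int demo_handler(int value, int flags)
{
    UNUSED_ARG(flags);

    int checked = value;          /* only consumed by the assert below */
    POSSIBLE_UNUSED_VAR(checked); /* silences -Wunused-variable when
                                     assertions are compiled out */
    assert(checked == value);
    return value * 2;
}
```

Casting to void consumes the name without generating code, so the warning goes 
away in every build configuration.
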





[GitHub] incubator-hawq pull request #1033: HAWQ-1183. Writable external table with H...

2016-12-01 Thread paul-guo-
GitHub user paul-guo- opened a pull request:

https://github.com/apache/incubator-hawq/pull/1033

HAWQ-1183. Writable external table with Hash distribution shows slow …

…performance

This also fixes some warnings in the affected source file.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/paul-guo-/incubator-hawq planner

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/1033.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1033


commit 9bcdea00178f840bc96b76c9bed78f457a8543f8
Author: Paul Guo 
Date:   2016-12-01T08:43:06Z

HAWQ-1183. Writable external table with Hash distribution shows slow 
performance

This also fixes some warnings in the affected source file.






[GitHub] incubator-hawq pull request #1031: HAWQ-1182. Add Macro for unused argument ...

2016-12-01 Thread paul-guo-
Github user paul-guo- closed the pull request at:

https://github.com/apache/incubator-hawq/pull/1031




[GitHub] incubator-hawq pull request #1031: HAWQ-1182. Add Macro for unused argument ...

2016-12-01 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1031#discussion_r90576041
  
--- Diff: src/include/postgres.h ---
@@ -513,6 +513,18 @@ extern void gp_set_thread_sigmasks(void);
 
 extern void OnMoveOutCGroupForQE(void);
 
+#ifndef POSSIBLE_UNUSED_VAR
+#define POSSIBLE_UNUSED_VAR(x) ((void)x)
+#endif
+
+#ifndef POSSIBLE_UNUSED_ARG
+#define POSSIBLE_UNUSED_ARG(x) ((void)x)
+#endif
+
+#ifndef UNUSED_ARG
+#define UNUSED_ARG(x)  ((void)x)
--- End diff --

It is because the tab stop shown on this page is 8, while HAWQ uses 4.




[GitHub] incubator-hawq issue #1031: HAWQ-1182. Add Macro for unused argument and var...

2016-12-01 Thread wengyanqing
Github user wengyanqing commented on the issue:

https://github.com/apache/incubator-hawq/pull/1031
  
LGTM




[jira] [Commented] (HAWQ-1157) Make consistent docs link to PostgreSQL 8.2

2016-12-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713427#comment-15713427
 ] 

ASF GitHub Bot commented on HAWQ-1157:
--

Github user asfgit closed the pull request at:

https://github.com/apache/incubator-hawq-docs/pull/67


> Make consistent docs link to PostgreSQL 8.2
> ---
>
> Key: HAWQ-1157
> URL: https://issues.apache.org/jira/browse/HAWQ-1157
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Documentation
>Reporter: Jane Beckman
>Assignee: David Yozie
> Fix For: 2.0.1.0-incubating
>
>
> Some links point to versions of PostgreSQL other than 8.2. HAWQ is based on 
> PostgreSQL 8.2.15. One section mentions PostgreSQL 9.0 specifically. These 
> references should standardize on 8.2.





[jira] [Commented] (HAWQ-1157) Make consistent docs link to PostgreSQL 8.2

2016-12-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713146#comment-15713146
 ] 

ASF GitHub Bot commented on HAWQ-1157:
--

GitHub user janebeckman opened a pull request:

https://github.com/apache/incubator-hawq-docs/pull/67

HAWQ-1157 Update links to PostgresQL 9.0



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/janebeckman/incubator-hawq-docs 
feature/HAWQ-1157PostgreSQL

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq-docs/pull/67.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #67


commit c63a7eb3f6b5e1287c3e21094b0d5d5c03fa05e7
Author: Jane Beckman 
Date:   2016-12-01T18:47:55Z

Update links to PostgresQL 9.0




> Make consistent docs link to PostgreSQL 8.2
> ---
>
> Key: HAWQ-1157
> URL: https://issues.apache.org/jira/browse/HAWQ-1157
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Documentation
>Reporter: Jane Beckman
>Assignee: David Yozie
> Fix For: 2.0.1.0-incubating
>
>
> Some links point to versions of PostgreSQL other than 8.2. HAWQ is based on 
> PostgreSQL 8.2.15. One section mentions PostgreSQL 9.0 specifically. These 
> references should standardize on 8.2.





[jira] [Commented] (HAWQ-1157) Make consistent docs link to PostgreSQL 8.2

2016-12-01 Thread Jane Beckman (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15712718#comment-15712718
 ] 

Jane Beckman commented on HAWQ-1157:


Wen Lin notes that the pg_hba.conf link points to 8.4, but 9.0 is the more 
accurate version.

> Make consistent docs link to PostgreSQL 8.2
> ---
>
> Key: HAWQ-1157
> URL: https://issues.apache.org/jira/browse/HAWQ-1157
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Documentation
>Reporter: Jane Beckman
>Assignee: David Yozie
> Fix For: 2.0.1.0-incubating
>
>
> Some links point to versions of PostgreSQL other than 8.2. HAWQ is based on 
> PostgreSQL 8.2.15. One section mentions PostgreSQL 9.0 specifically. These 
> references should standardize on 8.2.





[GitHub] incubator-hawq issue #1032: HAWQ-1184. Fix risky "-Wshift-negative-value, -W...

2016-12-01 Thread xunzhang
Github user xunzhang commented on the issue:

https://github.com/apache/incubator-hawq/pull/1032
  
cc @paul-guo- 




[GitHub] incubator-hawq pull request #1032: HAWQ-1184. Fix risky "-Wshift-negative-va...

2016-12-01 Thread xunzhang
GitHub user xunzhang opened a pull request:

https://github.com/apache/incubator-hawq/pull/1032

HAWQ-1184. Fix risky "-Wshift-negative-value, 
-Wparentheses-equality,-Wtautological-compare" types of compile warnings under 
osx.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xunzhang/incubator-hawq HAWQ-1184

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/1032.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1032


commit dcc45ba1b3e1eb90db13aface7f92bd59fd704ae
Author: xunzhang 
Date:   2016-12-01T15:30:01Z

HAWQ-1184. Fix risky "-Wshift-negative-value, -Wparentheses-equality, 
-Wtautological-compare" types of compile warnings under osx.






[jira] [Assigned] (HAWQ-1184) Fix risky "-Wshift-negative-value, -Wparentheses-equality, -Wtautological-compare" types of compile warnings under osx

2016-12-01 Thread hongwu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hongwu reassigned HAWQ-1184:


Assignee: hongwu  (was: Lei Chang)

> Fix risky "-Wshift-negative-value, -Wparentheses-equality, 
> -Wtautological-compare" types of compile warnings under osx
> --
>
> Key: HAWQ-1184
> URL: https://issues.apache.org/jira/browse/HAWQ-1184
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Build
>Reporter: hongwu
>Assignee: hongwu
>
> http://pastebin.com/DwGqcxr8





[jira] [Created] (HAWQ-1184) Fix risky "-Wshift-negative-value, -Wparentheses-equality, -Wtautological-compare" types of compile warnings under osx

2016-12-01 Thread hongwu (JIRA)
hongwu created HAWQ-1184:


 Summary: Fix risky "-Wshift-negative-value, 
-Wparentheses-equality, -Wtautological-compare" types of compile warnings under 
osx
 Key: HAWQ-1184
 URL: https://issues.apache.org/jira/browse/HAWQ-1184
 Project: Apache HAWQ
  Issue Type: Improvement
  Components: Build
Reporter: hongwu
Assignee: Lei Chang


http://pastebin.com/DwGqcxr8
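
For reference, minimal constructs that trigger each of the three clang warning 
families named in the title, together with warning-free rewrites. These 
snippets are illustrative only; the actual offending HAWQ code is in the 
pastebin above.

```c
/* -Wshift-negative-value: left-shifting a negative value is undefined
 * behavior in C; shift the unsigned bit pattern instead. */
unsigned int mask_ok(void) { return ~0u << 4; }
/* int mask_bad(void)      { return -1 << 4; }      <-- warns */

/* -Wparentheses-equality: redundant parentheses around == suggest a
 * typo for an intended assignment. */
int eq_ok(int x) { return x == 1; }
/* if ((x == 1)) { ... }                            <-- warns */

/* -Wtautological-compare: an unsigned value is never negative, so a
 * "v >= 0" guard is always true. */
int in_range(unsigned int v, unsigned int n)
{
    return v < n;  /* "v >= 0 && v < n" would warn */
}
```
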





[GitHub] incubator-hawq pull request #1031: HAWQ-1182. Add Macro for unused argument ...

2016-12-01 Thread xunzhang
Github user xunzhang commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1031#discussion_r90413490
  
--- Diff: src/include/postgres.h ---
@@ -513,6 +513,18 @@ extern void gp_set_thread_sigmasks(void);
 
 extern void OnMoveOutCGroupForQE(void);
 
+#ifndef POSSIBLE_UNUSED_VAR
+#define POSSIBLE_UNUSED_VAR(x) ((void)x)
+#endif
+
+#ifndef POSSIBLE_UNUSED_ARG
+#define POSSIBLE_UNUSED_ARG(x) ((void)x)
+#endif
+
+#ifndef UNUSED_ARG
+#define UNUSED_ARG(x)  ((void)x)
--- End diff --

..indent




[GitHub] incubator-hawq pull request #1031: HAWQ-1182. Add Macro for unused argument ...

2016-12-01 Thread paul-guo-
GitHub user paul-guo- opened a pull request:

https://github.com/apache/incubator-hawq/pull/1031

HAWQ-1182. Add Macro for unused argument and variable.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/paul-guo-/incubator-hawq build

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/1031.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1031


commit a670199ba7d44802edc022f45baddf1850e6986f
Author: Paul Guo 
Date:   2016-12-01T09:01:03Z

HAWQ-1182. Add Macro for unused argument and variable.






[jira] [Commented] (HAWQ-1183) Writable external table with Hash distribution shows slow performance

2016-12-01 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15711343#comment-15711343
 ] 

Paul Guo commented on HAWQ-1183:


With the previous patch, the plan and runtime are as expected now. 

postgres=# explain analyze INSERT INTO ext_tbl1 SELECT * from tbl1;
QUERY PLAN
-
-

Insert (cost=0.00..444.03 rows=167 width=8)
Rows out: Avg 351833.3 rows x 6 workers. Max/Last(seg5:host67/seg0:host67) 
351916/351849 rows with 89/171 ms to first row, 4074/4209 ms to end, start 
offset by 46/45 ms.
Executor memory: 1K bytes avg, 1K bytes max (seg5:host67).
-> Result (cost=0.00..431.01 rows=167 width=20)
Rows out: Avg 351833.3 rows x 6 workers. Max/Last(seg5:host67/seg0:host67) 
351916/351849 rows with 77/148 ms to first row, 292/392 ms to end, start offset 
by 46/45 ms.
-> Table Scan on tbl1 (cost=0.00..431.00 rows=167 width=8)
Rows out: Avg 351833.3 rows x 6 workers. Max/Last(seg5:host67/seg2:host67) 
351916/351855 rows with 77/152 ms to first row, 158/257 ms to end, start offset 
by 46/42
ms.
Slice statistics:
(slice0) Executor memory: 280K bytes avg x 6 workers, 280K bytes max 
(seg5:host67).
Statement statistics:
Memory used: 262144K bytes
Optimizer status: PQO version 1.684
Dispatcher statistics:
executors used(total/cached/new connection): (6/0/6); dispatcher 
time(total/connection/dispatch data): (38.288 ms/37.708 ms/0.078 ms).
dispatch data time(max/min/avg): (0.028 ms/0.004 ms/0.012 ms); consume executor 
data time(max/min/avg): (0.067 ms/0.014 ms/0.029 ms); free executor 
time(max/min/avg): (0.000 ms/0
.000 ms/0.000 ms).
Data locality statistics:
data locality ratio: 1.000; virtual segment number: 6; different host number: 
1; virtual segment number per host(avg/min/max): (6/6/6); segment 
size(avg/min/max): (7670609.333 B/
7668464 B/7672344 B); segment size with penalty(avg/min/max): (0.000 B/0 B/0 
B); continuity(avg/min/max): (1.000/1.000/1.000); DFS metadatacache: 28.855 ms; 
resource allocation: 12.
933 ms; datalocality calculation: 0.190 ms.
Total runtime: 4333.663 ms
(18 rows)


> Writable external table with Hash distribution shows slow performance
> -
>
> Key: HAWQ-1183
> URL: https://issues.apache.org/jira/browse/HAWQ-1183
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
>
> Steps:
> 1. Create tables and populate them.
> drop table tbl1;
> drop external table ext_tbl1;
> drop external table ext_tbl1_random;
> CREATE TABLE tbl1 (a int, b text) DISTRIBUTED BY (a);
> INSERT INTO tbl1 VALUES (generate_series(1,1000),'aaa');
> INSERT INTO tbl1 VALUES (generate_series(1,1),'bbb');
> INSERT INTO tbl1 VALUES (generate_series(1,10),'bbc');
> INSERT INTO tbl1 VALUES (generate_series(1,100),'bdbc');
> INSERT INTO tbl1 VALUES (generate_series(1,100),'bdddbc');
> CREATE WRITABLE EXTERNAL TABLE ext_tbl1
> ( LIKE tbl1 )
> LOCATION ('gpfdist://127.0.0.1/tbl1.csv')
> FORMAT 'CSV' (DELIMITER ',')
> DISTRIBUTED BY (a);
> CREATE WRITABLE EXTERNAL TABLE ext_tbl1_random
> ( LIKE tbl1 )
> LOCATION ('gpfdist://127.0.0.1/tbl1.random.csv')
> FORMAT 'CSV' (DELIMITER ',')
> DISTRIBUTED RANDOMLY;
> 2. Write to the two external tables. We find that the external table with 
> hash distribution is slow when inserting, and the plan shows that it has 
> only 1 worker.
> postgres=# explain analyze INSERT INTO ext_tbl1 SELECT * from tbl1;
>  QUERY PLAN
> -
> -
> 
>  Insert  (cost=0.00..509.20 rows=1000 width=8)
>Rows out:  Avg 2111000.0 rows x 1 workers.  
> Max/Last(seg0:host67/seg0:host67) 2111000/2111000 rows with 17/17 ms to first 
> row, 20145/20145 ms to end, start offset by 18/18 ms.
>Executor memory:  1K bytes.
>->  Result  (cost=0.00..431.07 rows=1000 width=20)
>  Rows out:  Avg 2111000.0 rows x 1 workers.  
> Max/Last(seg0:host67/seg0:host67) 2111000/2111000 rows with 14/14 ms to first 
> row, 1919/1919 ms to end, start offset by 18/18 ms
> .
>  ->  Redistribute 

[jira] [Updated] (HAWQ-1183) Writable external table with Hash distribution shows slow performance

2016-12-01 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo updated HAWQ-1183:
---
Description: 
Steps:

1. Create tables and populate them.
drop table tbl1;
drop external table ext_tbl1;
drop external table ext_tbl1_random;

CREATE TABLE tbl1 (a int, b text) DISTRIBUTED BY (a);
INSERT INTO tbl1 VALUES (generate_series(1,1000),'aaa');
INSERT INTO tbl1 VALUES (generate_series(1,1),'bbb');
INSERT INTO tbl1 VALUES (generate_series(1,10),'bbc');
INSERT INTO tbl1 VALUES (generate_series(1,100),'bdbc');
INSERT INTO tbl1 VALUES (generate_series(1,100),'bdddbc');

CREATE WRITABLE EXTERNAL TABLE ext_tbl1
( LIKE tbl1 )
LOCATION ('gpfdist://127.0.0.1/tbl1.csv')
FORMAT 'CSV' (DELIMITER ',')
DISTRIBUTED BY (a);

CREATE WRITABLE EXTERNAL TABLE ext_tbl1_random
( LIKE tbl1 )
LOCATION ('gpfdist://127.0.0.1/tbl1.random.csv')
FORMAT 'CSV' (DELIMITER ',')
DISTRIBUTED RANDOMLY;

2. Write to the two external tables. We find that the external table with hash 
distribution is slow when inserting, and the plan shows that it has only 1 worker.

postgres=# explain analyze INSERT INTO ext_tbl1 SELECT * from tbl1;

 QUERY PLAN

-
-

 Insert  (cost=0.00..509.20 rows=1000 width=8)
   Rows out:  Avg 2111000.0 rows x 1 workers.  
Max/Last(seg0:host67/seg0:host67) 2111000/2111000 rows with 17/17 ms to first 
row, 20145/20145 ms to end, start offset by 18/18 ms.
   Executor memory:  1K bytes.
   ->  Result  (cost=0.00..431.07 rows=1000 width=20)
 Rows out:  Avg 2111000.0 rows x 1 workers.  
Max/Last(seg0:host67/seg0:host67) 2111000/2111000 rows with 14/14 ms to first 
row, 1919/1919 ms to end, start offset by 18/18 ms
.
 ->  Redistribute Motion 1:1  (slice1; segments: 1)  (cost=0.00..431.05 
rows=1000 width=8)
   Hash Key: tbl1.a
   Rows out:  Avg 2111000.0 rows x 1 workers at destination.  
Max/Last(seg0:host67/seg0:host67) 2111000/2111000 rows with 14/14 ms to first 
row, 1273/1273 ms to end, sta
rt offset by 18/18 ms.
   ->  Table Scan on tbl1  (cost=0.00..431.01 rows=1000 width=8)
 Rows out:  Avg 2111000.0 rows x 1 workers.  
Max/Last(seg0:host67/seg0:host67) 2111000/2111000 rows with 13/13 ms to first 
row, 447/447 ms to end, start offset b
y 18/18 ms.
 Slice statistics:
   (slice0)Executor memory: 293K bytes (seg0:host67).
   (slice1)Executor memory: 303K bytes (seg0:host67).
 Statement statistics:
   Memory used: 262144K bytes
 Optimizer status: PQO version 1.684
 Dispatcher statistics:
   executors used(total/cached/new connection): (2/0/2); dispatcher 
time(total/connection/dispatch data): (13.138 ms/12.628 ms/0.061 ms).
   dispatch data time(max/min/avg): (0.034 ms/0.025 ms/0.029 ms); consume 
executor data time(max/min/avg): (0.098 ms/0.036 ms/0.067 ms); free executor 
time(max/min/avg): (0.000 ms/0
.000 ms/0.000 ms).
 Data locality statistics:
   data locality ratio: 1.000; virtual segment number: 1; different host 
number: 1; virtual segment number per host(avg/min/max): (1/1/1); segment 
size(avg/min/max): (46023656.000 B
/46023656 B/46023656 B); segment size with penalty(avg/min/max): (46023656.000 
B/46023656 B/46023656 B); continuity(avg/min/max): (1.000/1.000/1.000); DFS 
metadatacache: 27.930 ms;
resource allocation: 11.879 ms; datalocality calculation: 0.207 ms.
 Total runtime: 20356.994 ms
(22 rows)

postgres=#
postgres=# explain analyze INSERT INTO ext_tbl1_random SELECT * from tbl1;

QUERY PLAN

-
-
--
 Insert  (cost=0.00..444.03 rows=167 width=8)
   Rows out:  Avg 351833.3 rows x 6 workers.  Max/Last(seg2:host67/seg5:host67) 
351984/351854 rows with 61/51 ms to first row, 4731/4767 ms to end, start 
offset by 67/75 ms.
   Executor memory:  1K bytes avg, 1K bytes max (seg5:host67).
   ->  Result  (cost=0.00..431.01 rows=167 width=20)
 Rows out:  Avg 351833.3 rows x 6 workers.  
Max/Last(seg2:host67/seg1:host67) 351984/351734 rows with 35/29 ms to first 
row, 616/705 ms to end, start offset by 67/77 ms.
 ->  Redistribute Motion 6:6  (slice1; segments: 

[jira] [Commented] (HAWQ-1183) Writable external table with Hash distribution shows slow performance

2016-12-01 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15711332#comment-15711332
 ] 

Paul Guo commented on HAWQ-1183:


16559 and 16561 are the OIDs of the two external tables, one hash-distributed 
and the other randomly distributed.

postgres=# select * from gp_distribution_policy;
 localoid | bucketnum | attrnums
--+---+--
16554 | 6 | {1}
16559 | 1 | {1}
16561 | 1 |
(3 rows)

Looking into DefineExternalRelation(), it appears that for EXTTBL_TYPE_LOCATION 
it sets the bucket number to the location count (in our case, the number of 
gpfdist locations):
createStmt->policy->bucketnum = locLength;

I talked with the original designer; this seems to be a hack. In theory we 
should store the location number and the bucket number in different places in 
the catalog tables.

In the short term, we could fix this with the patch below:
@@ -970,7 +970,7 @@ DefineExternalRelation(CreateExternalStmt *createExtStmt)
 isweb, iswritable,);
if(!isCustom){
int locLength = list_length(exttypeDesc->location_list);
-   if (createStmt->policy && locLength > 0)
+   if (createStmt->policy && locLength > 0 && locLength > 
createStmt->policy->bucketnum)
{
createStmt->policy->bucketnum = locLength;
}

In the long run, we should store the bucket number and the location number in 
different places in the catalog.
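
The short-term patch only lets the location count raise the bucket number, 
never lower it. A standalone sketch of that guard, using a simplified stand-in 
for the policy struct (not the real HAWQ definition):

```c
/* Simplified stand-in for the distribution policy touched in
 * DefineExternalRelation(); only the field used by the patch. */
typedef struct {
    int bucketnum;   /* bucket count from the DISTRIBUTED BY clause */
} DemoPolicy;

/* Patched condition: override bucketnum only when the LOCATION list is
 * longer, so a hash-distributed writable external table keeps its full
 * bucket count (and hence all of its workers). */
void apply_location_count(DemoPolicy *policy, int locLength)
{
    if (policy && locLength > 0 && locLength > policy->bucketnum)
        policy->bucketnum = locLength;
}
```

With the old, unguarded assignment, a single gpfdist location would force 
bucketnum down to 1, which matches the 1-worker plan shown in the issue.
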

> Writable external table with Hash distribution shows slow performance
> -
>
> Key: HAWQ-1183
> URL: https://issues.apache.org/jira/browse/HAWQ-1183
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
>
> Steps:
> 1. Create tables and populate them.
> drop table tbl1;
> drop external table ext_tbl1;
> drop external table ext_tbl1_random;
> CREATE TABLE tbl1 (a int, b text) DISTRIBUTED BY (a);
> INSERT INTO tbl1 VALUES (generate_series(1,1000),'aaa');
> INSERT INTO tbl1 VALUES (generate_series(1,1),'bbb');
> INSERT INTO tbl1 VALUES (generate_series(1,10),'bbc');
> INSERT INTO tbl1 VALUES (generate_series(1,100),'bdbc');
> INSERT INTO tbl1 VALUES (generate_series(1,100),'bdddbc');
> CREATE WRITABLE EXTERNAL TABLE ext_tbl1
> ( LIKE tbl1 )
> LOCATION ('gpfdist://127.0.0.1/tbl1.csv')
> FORMAT 'CSV' (DELIMITER ',')
> DISTRIBUTED BY (a);
> CREATE WRITABLE EXTERNAL TABLE ext_tbl1_random
> ( LIKE tbl1 )
> LOCATION ('gpfdist://127.0.0.1/tbl1.random.csv')
> FORMAT 'CSV' (DELIMITER ',')
> DISTRIBUTED RANDOMLY;
> 2. Write to the two external tables. We find that the external table with 
> hash distribution is slow when inserting, and the plan shows that it has 
> only 1 worker.
> postgres=# explain analyze INSERT INTO ext_tbl1 SELECT * from tbl1;
> QUERY PLAN
> -
> -
> ---
>  Insert  (cost=0.00..509.20 rows=1000 width=8)
>Rows out:  Avg 2111000.0 rows x 1 workers.  
> Max/Last(seg0:host67/seg0:host67) 2111000/2111000 rows with 70/70 ms to first 
> row, 20304/20304 ms to end, start offset by 30/30 ms.
>Executor memory:  1K bytes.
>->  Result  (cost=0.00..431.07 rows=1000 width=20)
>  Rows out:  Avg 2111000.0 rows x 1 workers.  
> Max/Last(seg0:host67/seg0:host67) 2111000/2111000 rows with 61/61 ms to first 
> row, 2034/2034 ms to end, start offset by 30/30 ms
> .
>  ->  Redistribute Motion 1:1  (slice1; segments: 1)  
> (cost=0.00..431.05 rows=1000 width=8)
>Hash Key: tbl1.a
>Rows out:  Avg 2111000.0 rows x 1 workers at destination.  
> Max/Last(seg0:host67/seg0:host67) 2111000/2111000 rows with 61/61 ms to first 
> row, 1370/1370 ms to end, sta
> rt offset by 30/30 ms.
>->  Table Scan on tbl1  (cost=0.00..431.01 rows=1000 width=8)
>  Rows out:  Avg 2111000.0 rows x 1 workers.  
> Max/Last(seg0:host67/seg0:host67) 2111000/2111000 rows with 61/61 ms to first 
> row, 566/566 ms to end, start offset b
> y 30/30 ms.
>  Slice statistics:
>(slice0)Executor memory: 293K bytes (seg0:host67).
>(slice1)Executor memory: 303K bytes (seg0:host67).
>  Statement statistics:
>Memory used: 262144K bytes
>  Optimizer status: PQO version 1.684
>  Dispatcher statistics:
>executors used(total/cached/new connection): (2/0/2); dispatcher 
> time(total/connection/dispatch data): (17.095 ms/16.477 ms/0.053 

[jira] [Assigned] (HAWQ-1183) Writable external table with Hash distribution shows slow performance

2016-12-01 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo reassigned HAWQ-1183:
--

Assignee: Paul Guo  (was: Lei Chang)

> Writable external table with Hash distribution shows slow performance
> -
>
> Key: HAWQ-1183
> URL: https://issues.apache.org/jira/browse/HAWQ-1183
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
>
> Steps:
> 1. Create tables and populate them.
> drop table tbl1;
> drop external table ext_tbl1;
> drop external table ext_tbl1_random;
> CREATE TABLE tbl1 (a int, b text) DISTRIBUTED BY (a);
> INSERT INTO tbl1 VALUES (generate_series(1,1000),'aaa');
> INSERT INTO tbl1 VALUES (generate_series(1,1),'bbb');
> INSERT INTO tbl1 VALUES (generate_series(1,10),'bbc');
> INSERT INTO tbl1 VALUES (generate_series(1,100),'bdbc');
> INSERT INTO tbl1 VALUES (generate_series(1,100),'bdddbc');
> CREATE WRITABLE EXTERNAL TABLE ext_tbl1
> ( LIKE tbl1 )
> LOCATION ('gpfdist://127.0.0.1/tbl1.csv')
> FORMAT 'CSV' (DELIMITER ',')
> DISTRIBUTED BY (a);
> CREATE WRITABLE EXTERNAL TABLE ext_tbl1_random
> ( LIKE tbl1 )
> LOCATION ('gpfdist://127.0.0.1/tbl1.random.csv')
> FORMAT 'CSV' (DELIMITER ',')
> DISTRIBUTED RANDOMLY;
> 2. Write to the two external tables. We can see that the external table with 
> hash distribution is slow when inserting, and the plan shows that it uses 
> only 1 worker.
> postgres=# explain analyze INSERT INTO ext_tbl1 SELECT * from tbl1;
> QUERY PLAN
> --------------------------------------------------------------------------
>  Insert  (cost=0.00..509.20 rows=1000 width=8)
>Rows out:  Avg 2111000.0 rows x 1 workers.  
> Max/Last(seg0:host67/seg0:host67) 2111000/2111000 rows with 70/70 ms to first 
> row, 20304/20304 ms to end, start offset by 30/30 ms.
>Executor memory:  1K bytes.
>->  Result  (cost=0.00..431.07 rows=1000 width=20)
>  Rows out:  Avg 2111000.0 rows x 1 workers.  
> Max/Last(seg0:host67/seg0:host67) 2111000/2111000 rows with 61/61 ms to first 
> row, 2034/2034 ms to end, start offset by 30/30 ms.
>  ->  Redistribute Motion 1:1  (slice1; segments: 1)  
> (cost=0.00..431.05 rows=1000 width=8)
>Hash Key: tbl1.a
>Rows out:  Avg 2111000.0 rows x 1 workers at destination.  
> Max/Last(seg0:host67/seg0:host67) 2111000/2111000 rows with 61/61 ms to first 
> row, 1370/1370 ms to end, start offset by 30/30 ms.
>->  Table Scan on tbl1  (cost=0.00..431.01 rows=1000 width=8)
>  Rows out:  Avg 2111000.0 rows x 1 workers.  
> Max/Last(seg0:host67/seg0:host67) 2111000/2111000 rows with 61/61 ms to first 
> row, 566/566 ms to end, start offset by 30/30 ms.
>  Slice statistics:
>(slice0)Executor memory: 293K bytes (seg0:host67).
>(slice1)Executor memory: 303K bytes (seg0:host67).
>  Statement statistics:
>Memory used: 262144K bytes
>  Optimizer status: PQO version 1.684
>  Dispatcher statistics:
>executors used(total/cached/new connection): (2/0/2); dispatcher 
> time(total/connection/dispatch data): (17.095 ms/16.477 ms/0.053 ms).
>dispatch data time(max/min/avg): (0.027 ms/0.025 ms/0.026 ms); consume 
> executor data time(max/min/avg): (0.051 ms/0.043 ms/0.047 ms); free executor 
> time(max/min/avg): (0.000 ms/0.000 ms/0.000 ms).
>  Data locality statistics:
>data locality ratio: 1.000; virtual segment number: 1; different host 
> number: 1; virtual segment number per host(avg/min/max): (1/1/1); segment 
> size(avg/min/max): (46023656.000 B/46023656 B/46023656 B); segment size with penalty(avg/min/max): 
> (46023656.000 B/46023656 B/46023656 B); continuity(avg/min/max): 
> (1.000/1.000/1.000); DFS metadatacache: 46.181 ms;
> resource allocation: 1.837 ms; datalocality calculation: 1.180 ms.
>  Total runtime: 20538.524 ms
> (22 rows)
> postgres=# explain analyze INSERT INTO ext_tbl1 SELECT * from tbl1;
>  QUERY PLAN
> --------------------------------------------------------------------------
>  Insert  (cost=0.00..444.03 

[jira] [Assigned] (HAWQ-1182) Add Macro for unused argument and variable.

2016-12-01 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo reassigned HAWQ-1182:
--

Assignee: Paul Guo  (was: Lei Chang)

> Add Macro for unused argument and variable.
> ---
>
> Key: HAWQ-1182
> URL: https://issues.apache.org/jira/browse/HAWQ-1182
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.0.1.0-incubating
>
>
> Per discussion on the dev mail list, I want to add the macros below to help 
> eliminate some "unused" variable/argument warnings.
> Typical cases:
> 1) A variable is only used with some configurations, e.g. val for Assert(val). 
> Then you could add the code below to eliminate the warning when cassert is 
> not enabled.
>POSSIBLE_UNUSED_VAR(val);
> 2) For an argument that is explicitly unused but might be kept for 
> compatibility, you could use UNUSED_ARG().
> A simple patch, see below:
> [pguo@host67:/data2/github/incubator-hawq-a/src/include]$ git diff
> diff --git a/src/include/postgres.h b/src/include/postgres.h
> index 1138f20..9391d6b 100644
> --- a/src/include/postgres.h
> +++ b/src/include/postgres.h
> @@ -513,6 +513,18 @@ extern void gp_set_thread_sigmasks(void);
>  extern void OnMoveOutCGroupForQE(void);
> +#ifndef POSSIBLE_UNUSED_VAR
> +#define POSSIBLE_UNUSED_VAR(x) ((void)x)
> +#endif
> +
> +#ifndef POSSIBLE_UNUSED_ARG
> +#define POSSIBLE_UNUSED_ARG(x) ((void)x)
> +#endif
> +
> +#ifndef UNUSED_ARG
> +#define UNUSED_ARG(x)  ((void)x)
> +#endif
> +
>  #ifdef __cplusplus
>  }   /* extern "C" */
>  #endif





[jira] [Created] (HAWQ-1183) Writable external table with Hash distribution shows slow performance

2016-12-01 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1183:
--

 Summary: Writable external table with Hash distribution shows slow 
performance
 Key: HAWQ-1183
 URL: https://issues.apache.org/jira/browse/HAWQ-1183
 Project: Apache HAWQ
  Issue Type: Bug
Reporter: Paul Guo
Assignee: Lei Chang


Steps:

1. Create tables and populate them.
drop table tbl1;
drop external table ext_tbl1;
drop external table ext_tbl1_random;

CREATE TABLE tbl1 (a int, b text) DISTRIBUTED BY (a);
INSERT INTO tbl1 VALUES (generate_series(1,1000),'aaa');
INSERT INTO tbl1 VALUES (generate_series(1,1),'bbb');
INSERT INTO tbl1 VALUES (generate_series(1,10),'bbc');
INSERT INTO tbl1 VALUES (generate_series(1,100),'bdbc');
INSERT INTO tbl1 VALUES (generate_series(1,100),'bdddbc');

CREATE WRITABLE EXTERNAL TABLE ext_tbl1
( LIKE tbl1 )
LOCATION ('gpfdist://127.0.0.1/tbl1.csv')
FORMAT 'CSV' (DELIMITER ',')
DISTRIBUTED BY (a);

CREATE WRITABLE EXTERNAL TABLE ext_tbl1_random
( LIKE tbl1 )
LOCATION ('gpfdist://127.0.0.1/tbl1.random.csv')
FORMAT 'CSV' (DELIMITER ',')
DISTRIBUTED RANDOMLY;

2. Write to the two external tables. We can see that the external table with hash 
distribution is slow when inserting, and the plan shows that it uses only 1 worker.

postgres=# explain analyze INSERT INTO ext_tbl1 SELECT * from tbl1;

QUERY PLAN

--------------------------------------------------------------------------
 Insert  (cost=0.00..509.20 rows=1000 width=8)
   Rows out:  Avg 2111000.0 rows x 1 workers.  
Max/Last(seg0:host67/seg0:host67) 2111000/2111000 rows with 70/70 ms to first 
row, 20304/20304 ms to end, start offset by 30/30 ms.
   Executor memory:  1K bytes.
   ->  Result  (cost=0.00..431.07 rows=1000 width=20)
 Rows out:  Avg 2111000.0 rows x 1 workers.  
Max/Last(seg0:host67/seg0:host67) 2111000/2111000 rows with 61/61 ms to first 
row, 2034/2034 ms to end, start offset by 30/30 ms.
 ->  Redistribute Motion 1:1  (slice1; segments: 1)  (cost=0.00..431.05 
rows=1000 width=8)
   Hash Key: tbl1.a
   Rows out:  Avg 2111000.0 rows x 1 workers at destination.  
Max/Last(seg0:host67/seg0:host67) 2111000/2111000 rows with 61/61 ms to first 
row, 1370/1370 ms to end, start offset by 30/30 ms.
   ->  Table Scan on tbl1  (cost=0.00..431.01 rows=1000 width=8)
 Rows out:  Avg 2111000.0 rows x 1 workers.  
Max/Last(seg0:host67/seg0:host67) 2111000/2111000 rows with 61/61 ms to first 
row, 566/566 ms to end, start offset by 30/30 ms.
 Slice statistics:
   (slice0)Executor memory: 293K bytes (seg0:host67).
   (slice1)Executor memory: 303K bytes (seg0:host67).
 Statement statistics:
   Memory used: 262144K bytes
 Optimizer status: PQO version 1.684
 Dispatcher statistics:
   executors used(total/cached/new connection): (2/0/2); dispatcher 
time(total/connection/dispatch data): (17.095 ms/16.477 ms/0.053 ms).
   dispatch data time(max/min/avg): (0.027 ms/0.025 ms/0.026 ms); consume 
executor data time(max/min/avg): (0.051 ms/0.043 ms/0.047 ms); free executor 
time(max/min/avg): (0.000 ms/0.000 ms/0.000 ms).
 Data locality statistics:
   data locality ratio: 1.000; virtual segment number: 1; different host 
number: 1; virtual segment number per host(avg/min/max): (1/1/1); segment 
size(avg/min/max): (46023656.000 B/46023656 B/46023656 B); segment size with penalty(avg/min/max): (46023656.000 
B/46023656 B/46023656 B); continuity(avg/min/max): (1.000/1.000/1.000); DFS 
metadatacache: 46.181 ms;
resource allocation: 1.837 ms; datalocality calculation: 1.180 ms.
 Total runtime: 20538.524 ms
(22 rows)


postgres=# explain analyze INSERT INTO ext_tbl1 SELECT * from tbl1;

 QUERY PLAN

--------------------------------------------------------------------------
 Insert  (cost=0.00..444.03 rows=167 width=8)
   Rows out:  Avg 351833.3 rows x 6 workers.  Max/Last(seg5:host67/seg0:host67) 
351916/351849 rows with 89/171 ms to first row, 4074/4209 ms to end, start 
offset by 46/45 ms.
   Executor memory:  1K bytes avg, 1K bytes max (seg5:host67).
   ->  Result  (cost=0.00..431.01 rows=167 width=20)
 Rows out:  Avg 351833.3 rows x 6 workers.  
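
As a compact restatement of the experiment (assuming the tables and gpfdist setup from step 1; the truncated quote does not show which statement produced the second, 6-worker plan, but given the setup it is presumably the randomly distributed table), the two writes being compared are:

```sql
-- Sketch of the comparison, assuming the step-1 setup above.
-- Hash-distributed writable external table: produced the 1-worker plan
-- above, with a total runtime of about 20.5 s.
EXPLAIN ANALYZE INSERT INTO ext_tbl1 SELECT * FROM tbl1;

-- Randomly distributed writable external table: the natural counterpart;
-- the second plan above shows 6 workers and a much shorter time to end.
EXPLAIN ANALYZE INSERT INTO ext_tbl1_random SELECT * FROM tbl1;
```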

[jira] [Created] (HAWQ-1182) Add Macro for unused argument and variable.

2016-12-01 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1182:
--

 Summary: Add Macro for unused argument and variable.
 Key: HAWQ-1182
 URL: https://issues.apache.org/jira/browse/HAWQ-1182
 Project: Apache HAWQ
  Issue Type: Bug
Reporter: Paul Guo
Assignee: Lei Chang
 Fix For: 2.0.1.0-incubating


Per discussion on the dev mail list, I want to add the macros below to help 
eliminate some "unused" variable/argument warnings.

Typical cases:

1) A variable is only used with some configurations, e.g. val for Assert(val). 
Then you could add the code below to eliminate the warning when cassert is not 
enabled.

   POSSIBLE_UNUSED_VAR(val);

2) For an argument that is explicitly unused but might be kept for compatibility, 
you could use UNUSED_ARG().

A simple patch, see below:
[pguo@host67:/data2/github/incubator-hawq-a/src/include]$ git diff
diff --git a/src/include/postgres.h b/src/include/postgres.h
index 1138f20..9391d6b 100644
--- a/src/include/postgres.h
+++ b/src/include/postgres.h
@@ -513,6 +513,18 @@ extern void gp_set_thread_sigmasks(void);

 extern void OnMoveOutCGroupForQE(void);

+#ifndef POSSIBLE_UNUSED_VAR
+#define POSSIBLE_UNUSED_VAR(x) ((void)x)
+#endif
+
+#ifndef POSSIBLE_UNUSED_ARG
+#define POSSIBLE_UNUSED_ARG(x) ((void)x)
+#endif
+
+#ifndef UNUSED_ARG
+#define UNUSED_ARG(x)  ((void)x)
+#endif
+
 #ifdef __cplusplus
 }   /* extern "C" */
 #endif



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)