[GitHub] incubator-hawq issue #1325: HAWQ-1125. Running pl/python related feature_tes...

2018-01-01 Thread paul-guo-
Github user paul-guo- commented on the issue:

https://github.com/apache/incubator-hawq/pull/1325
  
+1.

By the way, I'm wondering whether the SQL below also works.

-- start_ignore
CREATE LANGUAGE plpythonu;
-- end_ignore



---


[jira] [Commented] (HAWQ-1561) build failed on centos 6.8 bzip2

2017-12-05 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279544#comment-16279544
 ] 

Paul Guo commented on HAWQ-1561:


Please do not mix different versions of bzip2, since the library you link
against and the runtime library could differ and the function symbols would
probably not match.

You could uninstall all bzip2 packages and then reinstall them via "yum install".

> build failed on centos 6.8 bzip2
> ---
>
> Key: HAWQ-1561
> URL: https://issues.apache.org/jira/browse/HAWQ-1561
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: xinzhang
>Assignee: Radar Lei
>
> Hi. My env is CentOS release 6.8.
> env:
>  # bzip2 --version
> bzip2, a block-sorting file compressor.  Version 1.0.6, 6-Sept-2010.
> fail log:
>...
>  checking for library containing BZ2_bzDecompress... no
> configure: error: library 'bzip2' is required.
> 'bzip2' is used for table compression.  Check config.log for details.
> It is possible the compiler isn't looking in the proper directory.
> q:
>   CentOS 6.x uses bzip2 1.0.5 by default. The dependency libs are the biggest
> problem.
>   What should I do? bzip2 1.0.6 has been installed.





[jira] [Commented] (HAWQ-1561) build failed on centos 6.8 bzip2

2017-12-04 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16276568#comment-16276568
 ] 

Paul Guo commented on HAWQ-1561:


I just tried; I could run configure with this version. My bzip2 and bzip2-devel
were installed via "yum install". If your 1.0.5 does not work, could you provide
the error information?






[jira] [Commented] (HAWQ-1561) build failed on centos 6.8 bzip2

2017-12-04 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16276543#comment-16276543
 ] 

Paul Guo commented on HAWQ-1561:


You need to install bzip2-devel.

Admittedly, the configure error message should be more explicit.






[GitHub] incubator-hawq issue #1263: HAWQ-1495 Corrected answer file to match insert ...

2017-08-02 Thread paul-guo-
Github user paul-guo- commented on the issue:

https://github.com/apache/incubator-hawq/pull/1263
  
+1


---


[GitHub] incubator-hawq pull request #1263: HAWQ-1495 Corrected answer file to match ...

2017-07-17 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1263#discussion_r127877655
  
--- Diff: src/test/feature/README.md ---
@@ -16,7 +16,10 @@ Before building the code of feature tests part, just 
make sure your compiler sup
 2. Load environment configuration by running `source 
$INSTALL_PREFIX/greenplum_path.sh`.
 3. Load hdfs configuration. For example, `export 
HADOOP_HOME=/Users/wuhong/hadoop-2.7.2 && export 
PATH=${PATH}:${HADOOP_HOME}/bin`. Since some test cases need `hdfs` and 
`hadoop` command, just ensure these commands work before running. Otherwise you 
will get failure.
 4. Run the cases with`./parallel-run-feature-test.sh 8 ./feature-test`(in 
this case 8 threads in parallel), you could use `--gtest_filter` option to 
filter test cases(both positive and negative patterns are supported). Please 
see more options by running `./feature-test --help`. 
-5.You can also run cases with `./parallel-run-feature-test.sh 8 
./feature-test --gtest_schedule` (eg. --gtest_schedule=./full_tests.txt) if you 
want to run cases in both parallel way and serial way.The schedule file sample 
is full_tests.txt which stays in the same directory.
+5. You can also run cases with `./parallel-run-feature-test.sh 8 
./feature-test --gtest_schedule` (eg. --gtest_schedule=./full_tests.txt) if you 
want to run cases in both parallel way and serial way.The schedule file sample 
is full_tests.txt which stays in the same 
+directory.
--- End diff --

It looks like 4 and 5 could be combined.


---


[GitHub] incubator-hawq pull request #1265: HAWQ-1500. HAWQ-1501. HAWQ-1502. Support ...

2017-07-11 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1265#discussion_r126856856
  
--- Diff: depends/libhdfs3/CMake/FindSSL.cmake ---
@@ -0,0 +1,26 @@
+# - Try to find the Open ssl library (ssl)
+#
+# Once done this will define
+#
+#  SSL_FOUND - System has gnutls
+#  SSL_INCLUDE_DIR - The gnutls include directory
+#  SSL_LIBRARIES - The libraries needed to use gnutls
+#  SSL_DEFINITIONS - Compiler switches required for using gnutls
+
+
+IF (SSL_INCLUDE_DIR AND SSL_LIBRARIES)
+   # in cache already
+   SET(SSL_FIND_QUIETLY TRUE)
+ENDIF (SSL_INCLUDE_DIR AND SSL_LIBRARIES)
+
+FIND_PATH(SSL_INCLUDE_DIR openssl/opensslv.h)
+
+FIND_LIBRARY(SSL_LIBRARIES crypto)
+
+INCLUDE(FindPackageHandleStandardArgs)
+
+# handle the QUIETLY and REQUIRED arguments and set SSL_FOUND to TRUE if
+# all listed variables are TRUE
+FIND_PACKAGE_HANDLE_STANDARD_ARGS(SSL DEFAULT_MSG SSL_LIBRARIES 
SSL_INCLUDE_DIR)
+
+MARK_AS_ADVANCED(SSL_INCLUDE_DIR SSL_LIBRARIES)
--- End diff --

Can we leverage the existing code for ssl and curl in configure?
By the way, we should now auto-enable with-openssl if with-libhdfs3 is
enabled.


---


[GitHub] incubator-hawq issue #1263: HAWQ-1495 Corrected answer file to match insert ...

2017-07-05 Thread paul-guo-
Github user paul-guo- commented on the issue:

https://github.com/apache/incubator-hawq/pull/1263
  
You could run the test using parallel-run-feature-test.sh.

Basically the test needs to set some properties so that the ans files are
uniform. In parallel-run-feature-test.sh, e.g., the date style is set as below:

  run_sql "alter database $TEST_DB_NAME set datestyle to 'postgres,MDY';"

An ideal fix for this is to document it in the README, warning users not to run
the googletest binary directly.


---


[GitHub] incubator-hawq issue #1248: HAWQ-1476. Augment enable-ranger-plugin.sh to su...

2017-06-01 Thread paul-guo-
Github user paul-guo- commented on the issue:

https://github.com/apache/incubator-hawq/pull/1248
  
+1


---


[jira] [Resolved] (HAWQ-1478) Enable hawq build on suse11

2017-06-01 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo resolved HAWQ-1478.

Resolution: Fixed

> Enable hawq build on suse11
> ---
>
> Key: HAWQ-1478
> URL: https://issues.apache.org/jira/browse/HAWQ-1478
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.2.0.0-incubating
>Reporter: Paul Guo
>Assignee: Radar Lei
> Fix For: 2.3.0.0-incubating
>
>
> We have some users who want hawq to run on suse (typically suse11). As a first
> step we should make it build fine on that platform.





[GitHub] incubator-hawq pull request #1247: HAWQ-1478. Enable hawq build on suse11

2017-06-01 Thread paul-guo-
GitHub user paul-guo- opened a pull request:

https://github.com/apache/incubator-hawq/pull/1247

HAWQ-1478. Enable hawq build on suse11

This was done by amyrazz44, edespino, internma and liming01 intermittently.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/paul-guo-/incubator-hawq make

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/1247.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1247


commit abd74ed1cc6a1df538ea5ac9df4bef5ce6b52c29
Author: interma <inte...@outlook.com>
Date:   2017-04-25T07:35:36Z

HAWQ-1478. Enable hawq build on suse11

This was done by amyrazz44, edespino, internma and liming01 intermittently.




---


[jira] [Created] (HAWQ-1478) Enable hawq build on suse11

2017-06-01 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1478:
--

 Summary: Enable hawq build on suse11
 Key: HAWQ-1478
 URL: https://issues.apache.org/jira/browse/HAWQ-1478
 Project: Apache HAWQ
  Issue Type: Bug
Reporter: Paul Guo
Assignee: Radar Lei


We have some users who want hawq to run on suse (typically suse11). As a first
step we should make it build fine on that platform.





[GitHub] incubator-hawq issue #1243: HAWQ-1458. Fix share input scan bug for writer p...

2017-05-31 Thread paul-guo-
Github user paul-guo- commented on the issue:

https://github.com/apache/incubator-hawq/pull/1243
  
+1


---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-31 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r119279208
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -666,6 +717,29 @@ static int retry_write(int fd, char *buf, int wsize)
return 0;
 }
 
+
+
+/*
+ * generate_lock_file_name
+ *
+ * Called by reader or writer to make the unique lock file name.
+ */
+void generate_lock_file_name(char* p, int size, int share_id, const char* 
name)
+{
+   if (strncmp(name , "writer", strlen("writer")) == 0)
+   {
+   sisc_lockname(p, size, share_id, "ready");
+   strncat(p, name, lengthof(p) - strlen(p) - 1);
--- End diff --

Not lengthof(p). Should be size?


---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-31 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r119279170
  
--- Diff: src/backend/executor/nodeMaterial.c ---
@@ -759,3 +764,30 @@ ExecEagerFreeMaterial(MaterialState *node)
}
 }
 
+
+/*
+ * mkLockFileForWriter
+ * 
+ * Create a unique lock file for writer, then use flock() to lock/unlock 
the lock file.
+ * We can make sure the lock file will be locked forerver until the writer 
process quits.
+ */
+static void mkLockFileForWriter(int size, int share_id, char * name)
+{
+   char *lock_file;
+   int lock;
+
+   lock_file = (char *)palloc0(sizeof(char) * MAXPGPATH);
--- End diff --

MAXPGPATH -> size?


---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-31 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r119279222
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -666,6 +717,29 @@ static int retry_write(int fd, char *buf, int wsize)
return 0;
 }
 
+
+
+/*
+ * generate_lock_file_name
+ *
+ * Called by reader or writer to make the unique lock file name.
+ */
+void generate_lock_file_name(char* p, int size, int share_id, const char* 
name)
+{
+   if (strncmp(name , "writer", strlen("writer")) == 0)
+   {
+   sisc_lockname(p, size, share_id, "ready");
+   strncat(p, name, lengthof(p) - strlen(p) - 1);
+   }
+   else
+   {
+   sisc_lockname(p, size, share_id, "done");
+   strncat(p, name, lengthof(p) - strlen(p) - 1);
--- End diff --

ditto.


---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-31 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r119278591
  
--- Diff: src/backend/executor/nodeMaterial.c ---
@@ -759,3 +764,30 @@ ExecEagerFreeMaterial(MaterialState *node)
}
 }
 
+
+/*
+ * mkLockFileForWriter
+ * 
+ * Create a unique lock file for writer, then use flock() to lock/unlock 
the lock file.
+ * We can make sure the lock file will be locked forerver until the writer 
process quits.
+ */
+static void mkLockFileForWriter(int size, int share_id, char * name)
+{
+   char *lock_file;
+   int lock;
+
+   lock_file = (char *)palloc0(sizeof(char) * MAXPGPATH);
+   generate_lock_file_name(lock_file, size, share_id, name);
+   elog(DEBUG3, "The lock file for writer in SISC is %s", lock_file);
+   sisc_writer_lock_fd = open(lock_file, O_CREAT, S_IRWXU);
+   if(sisc_writer_lock_fd < 0)
+   {
+   elog(ERROR, "Could not create lock file %s for writer in SISC. 
The error number is %d", lock_file, errno);
+   }
+   lock = flock(sisc_writer_lock_fd, LOCK_EX | LOCK_NB);
+   if(lock == -1)
+   elog(DEBUG3, "Could not lock lock file  \"%s\" for writer in 
SISC . The error number is %d", lock_file, errno);
+   else if(lock == 0)
+   elog(DEBUG3, "Successfully locked lock file  \"%s\" for writer 
in SISC.The error number is %d", lock_file, errno);
--- End diff --

For (lock == 0), there is no need to include errno in the log.


---


[GitHub] incubator-hawq pull request #1237: HAWQ-1459. Tweak the feature test related...

2017-05-27 Thread paul-guo-
Github user paul-guo- closed the pull request at:

https://github.com/apache/incubator-hawq/pull/1237


---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-27 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118815751
  
--- Diff: src/backend/utils/misc/guc.c ---
@@ -6685,6 +6687,15 @@ static struct config_int ConfigureNamesInt[] =
_cache_max_hdfs_file_num,
524288, 32768, 8388608, NULL, NULL
},
+   {
+   {"share_input_scan_wait_lockfile_timeout", PGC_USERSET, 
DEVELOPER_OPTIONS,
+   gettext_noop("timeout for waiting lock file which 
writer creates."),
+   NULL
+   },
--- End diff --

So is this in milliseconds or seconds?


---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-27 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118815742
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -793,15 +877,78 @@ shareinput_reader_waitready(int share_id, 
PlanGenerator planGen)
}
else if(n==0)
{
-   elog(DEBUG1, "SISC READER (shareid=%d, slice=%d): Wait 
ready time out once",
-   share_id, currentSliceId);
+   file_exists = access(writer_lock_file, F_OK);   
+   if(file_exists != 0)
+   {
+   elog(DEBUG3, "Wait lock file for writer time 
out interval is %d", timeout_interval);
+   if(timeout_interval >= 
share_input_scan_wait_lockfile_timeout || flag == true) //If lock file never 
exists or disappeared, reader will no longer waiting for writer
+   {
+   elog(LOG, "SISC READER (shareid=%d, 
slice=%d): Wait ready time out and break",
+   share_id, currentSliceId);
+   pfree(writer_lock_file);
+   break;
+   }
+   timeout_interval += tval.tv_sec;
--- End diff --

Why do you ignore tv_usec?
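
For illustration only, a minimal sketch (hypothetical names, not the patch
itself) of accumulating the elapsed wait in milliseconds so the tv_usec part
is not dropped:

    #include <sys/time.h>

    /* Convert a struct timeval into whole milliseconds, keeping tv_usec. */
    static long timeval_to_ms(const struct timeval *tv)
    {
        return tv->tv_sec * 1000L + tv->tv_usec / 1000L;
    }

    /* After select() returns 0 (timed out), the full interval has elapsed: */
    /*     waited_ms += timeval_to_ms(&tval);                               */
    /*     if (waited_ms >= timeout_ms)                                     */
    /*         stop waiting for the writer;                                 */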


---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-27 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118815361
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -793,15 +877,78 @@ shareinput_reader_waitready(int share_id, 
PlanGenerator planGen)
}
else if(n==0)
{
-   elog(DEBUG1, "SISC READER (shareid=%d, slice=%d): Wait 
ready time out once",
-   share_id, currentSliceId);
+   file_exists = access(writer_lock_file, F_OK);   
+   if(file_exists != 0)
+   {
+   elog(DEBUG3, "Wait lock file for writer time 
out interval is %d", timeout_interval);
+   if(timeout_interval >= 
share_input_scan_wait_lockfile_timeout || flag == true) //If lock file never 
exists or disappeared, reader will no longer waiting for writer
+   {
+   elog(LOG, "SISC READER (shareid=%d, 
slice=%d): Wait ready time out and break",
+   share_id, currentSliceId);
+   pfree(writer_lock_file);
+   break;
+   }
+   timeout_interval += tval.tv_sec;
+   }
+   else
+   {
+   elog(LOG, "writer lock file of 
shareinput_reader_waitready() is %s", writer_lock_file);
+   flag = true;
+   lock_fd = open(writer_lock_file, O_RDONLY);
+   if(lock_fd < 0)
+   {
+   elog(DEBUG3, "Open writer's lock file 
%s failed!, error number is %d", writer_lock_file, errno);
+   continue;
+   }
+   lock = flock(lock_fd, LOCK_EX | LOCK_NB);
+   if(lock == -1)
+   {
+   /*
+* Reader try to lock the lock file 
which writer created until locked the lock file successfully 
+ * which means that writer process quit. If reader 
lock the lock file failed, it means that writer
+* process is healthy.
+*/
+   elog(DEBUG3, "Lock writer's lock file 
%s failed!, error number is %d", writer_lock_file, errno);
+   }
+   else if(lock == 0)
+   {
+   /*
+* There is one situation to consider 
about.
+* Writer need a time interval to lock 
the lock file after the lock file has been created.
+* So, if reader lock the lock file 
ahead of writer, we should unlock it.
+* If reader lock the lock file after 
writer, it means that writer process has abort.
+* We should break the loop to make 
sure reader no longer wait for writer.
+*/  
+   if(is_lock_firsttime == true)  
+   {
+   lock = flock(lock_fd, LOCK_UN); 
+   is_lock_firsttime = false;
+   elog(DEBUG3, "Lock writer's 
lock file %s first time successfully in SISC! Unlock it.", writer_lock_file);
+   continue;
+   }
+   else
+   {
+   elog(LOG, "Lock writer's lock 
file %s successfully in SISC!", writer_lock_file);
+   /* Retry to close the fd in 
case there is interruption from signal */
+   while ((close(lock_fd) < 0) && 
(errno == EINTR))
--- End diff --

This is a legal condition. It should not elog(ERROR).
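
For context, a minimal standalone sketch of the probe pattern this diff relies
on (POSIX only, hypothetical helper name): the writer keeps an exclusive
flock() on its lock file for its whole lifetime, so a reader that succeeds with
a non-blocking flock() can conclude the writer has exited. The first-time
lock/unlock handshake in the patch handles the race where the reader grabs the
lock before the writer does.

    #include <fcntl.h>
    #include <sys/file.h>
    #include <unistd.h>

    /* Returns 1 if the writer that created lock_path appears to have exited,
     * 0 if it is still alive (its exclusive lock blocks ours), -1 if the lock
     * file cannot be opened. */
    static int writer_has_exited(const char *lock_path)
    {
        int fd = open(lock_path, O_RDONLY);
        if (fd < 0)
            return -1;

        if (flock(fd, LOCK_EX | LOCK_NB) == 0)
        {
            /* We got the exclusive lock, so the writer no longer holds it. */
            flock(fd, LOCK_UN);
            close(fd);
            return 1;
        }

        /* flock() failed with EWOULDBLOCK: the writer still holds its lock. */
        close(fd);
        return 0;
    }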


---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-27 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118815310
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -666,6 +717,29 @@ static int retry_write(int fd, char *buf, int wsize)
return 0;
 }
 
+
+
+/*
+ * generate_lock_file_name
+ *
+ * Called by reader or writer to make the unique lock file name.
+ */
+void generate_lock_file_name(char* p, int size, int share_id, const char* 
name)
+{
+   if (strncmp(name , "writer", strlen("writer")) == 0)
+   {
+   sisc_lockname(p, size, share_id, "ready");
+   strncat(p, name, sizeof(p) - strlen(p) - 1);
+   }
+   else
+   {
+   sisc_lockname(p, size, share_id, "done");
+   strncat(p, name, sizeof(p) - strlen(p) - 1);
--- End diff --

same as above.


---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-27 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118815308
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -666,6 +717,29 @@ static int retry_write(int fd, char *buf, int wsize)
return 0;
 }
 
+
+
+/*
+ * generate_lock_file_name
+ *
+ * Called by reader or writer to make the unique lock file name.
+ */
+void generate_lock_file_name(char* p, int size, int share_id, const char* 
name)
+{
+   if (strncmp(name , "writer", strlen("writer")) == 0)
+   {
+   sisc_lockname(p, size, share_id, "ready");
+   strncat(p, name, sizeof(p) - strlen(p) - 1);
--- End diff --

Bug here: sizeof(p) = 8 (for x64), and what about strlen(name)?
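
To illustrate the pitfall, a standalone sketch (hypothetical append_name
helper, not the patch itself): once the array decays to a char * parameter,
sizeof reports the pointer size, so the remaining-space computation has to use
the caller-supplied size instead.

    #include <stdio.h>
    #include <string.h>

    static void append_name(char *p, size_t size, const char *name)
    {
        /* Wrong: sizeof(p) == sizeof(char *) == 8 on x64, not the buffer size. */
        /* strncat(p, name, sizeof(p) - strlen(p) - 1);                          */

        /* Right: bound the append with the caller-supplied buffer size. */
        strncat(p, name, size - strlen(p) - 1);
    }

    int main(void)
    {
        char buf[64] = "lockfile_ready_";
        printf("sizeof(char *) = %zu\n", sizeof(char *));  /* 8 on x64 */
        append_name(buf, sizeof(buf), "writer");
        printf("%s\n", buf);                               /* lockfile_ready_writer */
        return 0;
    }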


---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-24 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118226956
  
--- Diff: src/backend/utils/misc/guc.c ---
@@ -6685,6 +6687,15 @@ static struct config_int ConfigureNamesInt[] =
_cache_max_hdfs_file_num,
524288, 32768, 8388608, NULL, NULL
},
+   {
+   {"share_input_scan_wait_lockfile_timeout", PGC_USERSET, 
DEVELOPER_OPTIONS,
+   gettext_noop("timeout for wait lock file."),
--- End diff --

The description is too short.


---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-24 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118224289
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -709,6 +783,12 @@ shareinput_reader_waitready(int share_id, 
PlanGenerator planGen)
struct timeval tval;
int n;
char a;
+   int file_exists = -1;
+   int timeout_interval = 0;
+   bool flag = false; //A tag for file exists or not.
+   int fd_lock = -1;
--- End diff --

fd_lock -> lock_fd?




---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-24 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118221100
  
--- Diff: src/backend/executor/nodeMaterial.c ---
@@ -759,3 +764,30 @@ ExecEagerFreeMaterial(MaterialState *node)
}
 }
 
+
+/*
+ * mkLockFileForWriter
+ * 
+ * Create a unique lock file for writer. Then use flock's unblock way to 
lock the lock file.
+ * We can make sure the lock file will be locked forerver until the writer 
process has quit.
+ */
+void mkLockFileForWriter(int size, int share_id, char * name)
--- End diff --

mkLockFileForWriter() seems to be used in this file only. Why not make it
static in this file and remove the declaration from the header file below?


---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-24 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118222465
  
--- Diff: src/backend/executor/nodeMaterial.c ---
@@ -759,3 +764,30 @@ ExecEagerFreeMaterial(MaterialState *node)
}
 }
 
+
+/*
+ * mkLockFileForWriter
+ * 
+ * Create a unique lock file for writer. Then use flock's unblock way to 
lock the lock file.
+ * We can make sure the lock file will be locked forerver until the writer 
process has quit.
+ */
+void mkLockFileForWriter(int size, int share_id, char * name)
+{
+   char *lock_file;
+   int lock;
+
+   lock_file = (char *)palloc0(sizeof(char) * MAXPGPATH);
+   generate_lock_file_name(lock_file, size, share_id, name);
+   elog(DEBUG3, "The lock file for writer in SISC is %s", lock_file);
+   sisc_writer_lock_fd = open(lock_file, O_CREAT, S_IRWXU);
+   if(sisc_writer_lock_fd < 0)
+   {
+   elog(ERROR, "Could not create lock file %s for writer in SISC. 
The error number is %d", lock_file, errno);
+   }
+   lock = flock(sisc_writer_lock_fd, LOCK_EX | LOCK_NB);
+   if(lock == -1)
+   elog(DEBUG3, "Could not lock lock file  \"%s\" for writer in 
SISC : %m. The error number is %d", lock_file, errno);
+   else if(lock == 0)
+   elog(DEBUG3, "Successfully locked lock file  \"%s\" for writer 
in SISC: %m.The error number is %d", lock_file, errno);
+   pfree(lock_file);
--- End diff --

ditto.


---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-24 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118224476
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -764,7 +848,7 @@ shareinput_reader_waitready(int share_id, PlanGenerator 
planGen)
tval.tv_usec = 0;
 
n = select(pctxt->readyfd+1, (fd_set *) , NULL, NULL, 
);
-
+   
--- End diff --

Remove the trailing blank characters.


---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-24 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118220731
  
--- Diff: src/backend/executor/nodeMaterial.c ---
@@ -759,3 +764,30 @@ ExecEagerFreeMaterial(MaterialState *node)
}
 }
 
+
+/*
+ * mkLockFileForWriter
+ * 
+ * Create a unique lock file for writer. Then use flock's unblock way to 
lock the lock file.
+ * We can make sure the lock file will be locked forerver until the writer 
process has quit.
--- End diff --

until the writer process quits.


---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-24 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118222840
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -552,11 +551,59 @@ static void sisc_lockname(char* p, int size, int 
share_id, const char* name)
}
 }
 
+
+char *joint_lock_file_name(ShareInput_Lk_Context *lk_ctxt, char *name)
+{
+   char *lock_file = palloc0(sizeof(char) * MAXPGPATH);
+
+   if(strncmp("writer", name, 6) ==0 )
+   {
+   strncat(lock_file, lk_ctxt->lkname_ready, 
strlen(lk_ctxt->lkname_ready));
--- End diff --

The strncat usage is not correct. It should be
strncat(lock_file, lk_ctxt->lkname_ready, MAXPGPATH - strlen(lock_file) - 1);

From the manpage:
 strncat(char *restrict s1, const char *restrict s2, size_t n);
 The strncat() function appends not more than n characters from s2, and
then adds a terminating `\0'.


---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-24 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118224402
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -738,6 +818,10 @@ shareinput_reader_waitready(int share_id, 
PlanGenerator planGen)
if(pctxt->donefd < 0)
elog(ERROR, "could not open fifo \"%s\": %m", 
pctxt->lkname_done);
 
+   char *writer_lock_file = NULL; //current path for lock file.
--- End diff --

I think moving the variable definition earlier in its scope would be better.


---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-24 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118226007
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -793,15 +877,72 @@ shareinput_reader_waitready(int share_id, 
PlanGenerator planGen)
}
else if(n==0)
{
-   elog(DEBUG1, "SISC READER (shareid=%d, slice=%d): Wait 
ready time out once",
-   share_id, currentSliceId);
+   file_exists = access(writer_lock_file, F_OK);   
+   if(file_exists != 0)
+   {
+   elog(DEBUG3, "Wait lock file for writer time 
out interval is %d", timeout_interval);
+   if(timeout_interval >= 
share_input_scan_wait_lockfile_timeout || flag == true) //If lock file never 
exists or disappeared, reader will no longer waiting for writer
+   {
+   elog(LOG, "SISC READER (shareid=%d, 
slice=%d): Wait ready time out and break",
+   share_id, currentSliceId);
+   pfree(writer_lock_file);
+   break;
+   }
+   timeout_interval += tval.tv_sec;
+   }
+   else
+   {
+   elog(LOG, "writer lock file of 
shareinput_reader_waitready() is %s", writer_lock_file);
+   flag = true;
+   fd_lock = open(writer_lock_file, O_RDONLY);
+   if(fd_lock < 0)
+   {
+   elog(DEBUG3, "Open writer's lock file 
%s failed!, error number is %d", writer_lock_file, errno);
+   }
+   lock = flock(fd_lock, LOCK_EX | LOCK_NB);
+   if(lock == -1)
+   {
--- End diff --

This is expected if I understand correctly, so the log should be friendlier
(e.g. tell users that this is expected).


---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-24 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118223887
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -666,6 +717,29 @@ static int retry_write(int fd, char *buf, int wsize)
return 0;
 }
 
+
+
+/*
+ * generate_lock_file_name
+ *
+ * Called by reader or writer to make the unique lock file name.
+ */
+void generate_lock_file_name(char* p, int size, int share_id, const char* 
name)
+{
+   if (strncmp(name , "writer", 6) == 0)
--- End diff --

My habit is:

strncmp(name , "writer", strlen("writer")) == 0

So you do not need to calculate the string length yourself (compiler does 
this for you).

Anyway, up to you.


---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-24 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118222360
  
--- Diff: src/backend/executor/nodeMaterial.c ---
@@ -759,3 +764,30 @@ ExecEagerFreeMaterial(MaterialState *node)
}
 }
 
+
+/*
+ * mkLockFileForWriter
+ * 
+ * Create a unique lock file for writer. Then use flock's unblock way to 
lock the lock file.
+ * We can make sure the lock file will be locked forerver until the writer 
process has quit.
+ */
+void mkLockFileForWriter(int size, int share_id, char * name)
+{
+   char *lock_file;
+   int lock;
+
+   lock_file = (char *)palloc0(sizeof(char) * MAXPGPATH);
+   generate_lock_file_name(lock_file, size, share_id, name);
+   elog(DEBUG3, "The lock file for writer in SISC is %s", lock_file);
+   sisc_writer_lock_fd = open(lock_file, O_CREAT, S_IRWXU);
+   if(sisc_writer_lock_fd < 0)
+   {
+   elog(ERROR, "Could not create lock file %s for writer in SISC. 
The error number is %d", lock_file, errno);
+   }
+   lock = flock(sisc_writer_lock_fd, LOCK_EX | LOCK_NB);
+   if(lock == -1)
+   elog(DEBUG3, "Could not lock lock file  \"%s\" for writer in 
SISC : %m. The error number is %d", lock_file, errno);
--- End diff --

If we have %m we already get the error info. From the printf manpage:
   m  (Glibc extension.)  Print output of strerror(errno).  No
argument is required.

Does this code work fine on mac? To avoid duplication I think we should use
errno or strerror(), instead of "%m".
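
For reference, a minimal plain-C sketch (outside the elog machinery, with a
hypothetical helper name) of reporting the failure once via strerror(errno),
so the format string does not rely on the glibc-specific %m conversion and the
error is not printed twice:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* Report an open()/flock() failure once, using strerror(errno). */
    static void report_lock_failure(const char *path)
    {
        int saved_errno = errno;       /* snapshot before any other libc call */

        fprintf(stderr, "could not lock file \"%s\": %s (errno %d)\n",
                path, strerror(saved_errno), saved_errno);
    }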


---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-24 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118220716
  
--- Diff: src/backend/executor/nodeMaterial.c ---
@@ -759,3 +764,30 @@ ExecEagerFreeMaterial(MaterialState *node)
}
 }
 
+
+/*
+ * mkLockFileForWriter
+ * 
+ * Create a unique lock file for writer. Then use flock's unblock way to 
lock the lock file.
--- End diff --

use flock() to lock/unlock the lock file.


---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-24 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118226308
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -793,15 +877,72 @@ shareinput_reader_waitready(int share_id, 
PlanGenerator planGen)
}
else if(n==0)
{
-   elog(DEBUG1, "SISC READER (shareid=%d, slice=%d): Wait 
ready time out once",
-   share_id, currentSliceId);
+   file_exists = access(writer_lock_file, F_OK);   
+   if(file_exists != 0)
+   {
+   elog(DEBUG3, "Wait lock file for writer time 
out interval is %d", timeout_interval);
+   if(timeout_interval >= 
share_input_scan_wait_lockfile_timeout || flag == true) //If lock file never 
exists or disappeared, reader will no longer waiting for writer
+   {
+   elog(LOG, "SISC READER (shareid=%d, 
slice=%d): Wait ready time out and break",
+   share_id, currentSliceId);
+   pfree(writer_lock_file);
+   break;
+   }
+   timeout_interval += tval.tv_sec;
+   }
+   else
+   {
+   elog(LOG, "writer lock file of 
shareinput_reader_waitready() is %s", writer_lock_file);
+   flag = true;
+   fd_lock = open(writer_lock_file, O_RDONLY);
+   if(fd_lock < 0)
+   {
+   elog(DEBUG3, "Open writer's lock file 
%s failed!, error number is %d", writer_lock_file, errno);
--- End diff --

Why does the code still continue if fd_lock < 0? Shouldn't it fail?


---


[jira] [Resolved] (HAWQ-1471) "Build and Install" under Red Hat 6.X environment, gcc / gcc-c++ version 4.7.2 is too low to install hawq

2017-05-24 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo resolved HAWQ-1471.

   Resolution: Fixed
 Assignee: Paul Guo  (was: Ed Espino)
Fix Version/s: 2.3.0.0-incubating

> "Build and Install" under Red Hat 6.X environment, gcc / gcc-c++ version 
> 4.7.2 is too low to install hawq
> -
>
> Key: HAWQ-1471
> URL: https://issues.apache.org/jira/browse/HAWQ-1471
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Build
>Reporter: fangpei
>Assignee: Paul Guo
> Fix For: 2.3.0.0-incubating
>
>
> My OS environment is Red Hat 6.5. I referred to the website
> "https://cwiki.apache.org//confluence/display/HAWQ/Build+and+Install" to
> build and install hawq. I installed the GCC version specified on the website 
> (gcc 4.7.2 / gcc-c++ 4.7.2), but the error happened:
> "error: 'Hdfs::Internal::Once' cannot be thread-local because it has 
> non-trivial type 'std::once_flag'" 
> or
> "error : 'Yarn:Internal::Once' cannot be thread-local because it has 
> non-trivial type 'std::once_flag'"
> I found that GCC's support for C++11 is not good, leading to the
> error.
> So I installed gcc 4.8.5 / gcc-c++ 4.8.5, and the problem was resolved. 
> gcc / gcc-c++ version 4.7.2 is too low to install hawq, so I suggest
> updating the website's gcc / gcc-c++ version requirement.





[jira] [Comment Edited] (HAWQ-1471) "Build and Install" under Red Hat 6.X environment, gcc / gcc-c++ version 4.7.2 is too low to install hawq

2017-05-24 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16022567#comment-16022567
 ] 

Paul Guo edited comment on HAWQ-1471 at 5/24/17 8:59 AM:
-

Modified the wiki page. So we could close this issue.


was (Author: paul guo):
Modified. So we could close this issue.

> "Build and Install" under Red Hat 6.X environment, gcc / gcc-c++ version 
> 4.7.2 is too low to install hawq
> -
>
> Key: HAWQ-1471
> URL: https://issues.apache.org/jira/browse/HAWQ-1471
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Build
>Reporter: fangpei
>Assignee: Ed Espino
> Fix For: 2.3.0.0-incubating
>
>
> My OS environment is Red Hat 6.5. I referred to the website
> "https://cwiki.apache.org//confluence/display/HAWQ/Build+and+Install" to
> build and install hawq. I installed the GCC version specified on the website 
> (gcc 4.7.2 / gcc-c++ 4.7.2), but the error happened:
> "error: 'Hdfs::Internal::Once' cannot be thread-local because it has 
> non-trivial type 'std::once_flag'" 
> or
> "error : 'Yarn:Internal::Once' cannot be thread-local because it has 
> non-trivial type 'std::once_flag'"
> I found that GCC's support for C++11 is not good, leading to the
> error.
> So I installed gcc 4.8.5 / gcc-c++ 4.8.5, and the problem was resolved. 
> gcc / gcc-c++ version 4.7.2 is too low to install hawq, so I suggest
> updating the website's gcc / gcc-c++ version requirement.





[GitHub] incubator-hawq issue #1243: HAWQ-1458. Fix share input scan bug for writer p...

2017-05-19 Thread paul-guo-
Github user paul-guo- commented on the issue:

https://github.com/apache/incubator-hawq/pull/1243
  
I think we could file some JIRAs for the remaining/follow-up work, since this is
just a hack?


---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-19 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r117423393
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -793,15 +887,70 @@ shareinput_reader_waitready(int share_id, 
PlanGenerator planGen)
}
else if(n==0)
{
-   elog(DEBUG1, "SISC READER (shareid=%d, slice=%d): Wait 
ready time out once",
-   share_id, currentSliceId);
+   file_exists = access(writer_tmp_file, F_OK);
+   if(file_exists != 0)
+   {
+   elog(DEBUG3, "time out count is %d", 
timeout_count);
+   timeout_count--;
+   if(timeout_count == 0 || flag == true) //If tmp 
file never exists or disappeared, reader will no longer waiting for writer
+   {
+   elog(LOG, "SISC READER (shareid=%d, 
slice=%d): Wait ready time out and break",
+   share_id, currentSliceId);
+   pfree(writer_tmp_file);
+   break;
+   }
+   }
+   else
+   {
+   elog(LOG, "writer tmp file of 
shareinput_reader_waitready() is %s", writer_tmp_file);
+   flag = true;
+   fd_tmp = open(writer_tmp_file, O_RDONLY);
+   if(fd_tmp < 0)
+   {
+   elog(DEBUG3, "Open writer's tmp file %s 
failed!, error number is %d", writer_tmp_file, errno);
+   }
+   lock = flock(fd_tmp, LOCK_EX | LOCK_NB);
+   if(lock == -1)
+   {
+   elog(DEBUG3, "Lock writer's tmp file %s 
failed!, error number is %d", writer_tmp_file, errno);
+   }
+   else if(lock == 0)
+   {
+   /*
+* There is one situation to consider 
about.
+* Writer need a time interval to lock 
the tmp file after the tmp file has been created.
+* So, if reader lock the tmp file 
ahead of writer, we should unlock it.
+* If reader lock the tmp file after 
writer, it means that writer process has abort.
+* We should break the loop to make 
sure reader no longer wait for writer.
+*/  
+   if(lock_count == 0)  
+   {
+   lock = flock(fd_tmp, LOCK_UN); 
+   lock_count++;
+   elog(DEBUG3, "Lock writer's tmp 
file %s first time successfully!", writer_tmp_file);
+   continue;
+   }
+   else
+   {
+   elog(LOG, "Lock writer's tmp 
file %s successfully!", writer_tmp_file);
+   close(fd_tmp);
+   pfree(writer_tmp_file);
+   break; 
+   }
+   
--- End diff --

No need for blank lines here.


---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-19 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r117423311
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -552,11 +551,61 @@ static void sisc_lockname(char* p, int size, int 
share_id, const char* name)
}
 }
 
+
+char * joint_tmp_file_name(ShareInput_Lk_Context *lk_ctxt, char *name)
+{
+   char *tmp_file = palloc(sizeof(char) * MAXPGPATH);
+
+   strcat(tmp_file, getCurrentTempFilePath);
+   strcat(tmp_file, "/");
+   strcat(tmp_file, PG_TEMP_FILES_DIR);
+   strcat(tmp_file, "/");
+   if(strncmp("writer", name, 7) ==0 )
+   {
+   strcat(tmp_file, lk_ctxt->lkname_ready);
+   }
+   else
+   {
+   strcat(tmp_file, lk_ctxt->lkname_done);
+   }
+   strcat(tmp_file, name);
+   return tmp_file;
+}
+
+void drop_tmp_files(ShareInput_Lk_Context *lk_ctxt)
+{
--- End diff --

You need to call unlink() to actually drop the files.
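
A minimal sketch of what "drop" means here (plain C, hypothetical helper name,
with the path assumed to come from something like joint_tmp_file_name):
closing the descriptor is not enough, the file has to be unlink()ed, and
ENOENT can be ignored if somebody else already removed it.

    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Remove one tmp/lock file; a missing file (ENOENT) is not an error. */
    static void drop_one_tmp_file(const char *path)
    {
        if (unlink(path) != 0 && errno != ENOENT)
            fprintf(stderr, "could not remove \"%s\": errno %d\n", path, errno);
    }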


---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-19 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r117419625
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -552,11 +551,61 @@ static void sisc_lockname(char* p, int size, int 
share_id, const char* name)
}
 }
 
+
+char * joint_tmp_file_name(ShareInput_Lk_Context *lk_ctxt, char *name)
+{
+   char *tmp_file = palloc(sizeof(char) * MAXPGPATH);
+
+   strcat(tmp_file, getCurrentTempFilePath);
+   strcat(tmp_file, "/");
+   strcat(tmp_file, PG_TEMP_FILES_DIR);
+   strcat(tmp_file, "/");
--- End diff --

Please either check the remaining size of tmp_file before calling strcat(), or
call strncat(). Please modify all the strcat() cases accordingly.

Besides, why not pass the strings below as one parameter: "/"
PG_TEMP_FILES_DIR "/"?
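
A minimal sketch of the bounded construction (hypothetical names; the two
macros are stand-ins for the real ones): a single snprintf both limits the
write to the buffer and lets the adjacent string literals, including
"/" PG_TEMP_FILES_DIR "/", be merged at compile time.

    #include <stdio.h>

    #define MAXPGPATH 1024                   /* stand-in for the real constant */
    #define PG_TEMP_FILES_DIR "pgsql_tmp"    /* stand-in for the real macro    */

    /* Build "<base>/pgsql_tmp/<lockname><suffix>" with one bounded call.
     * Returns 0 on success, -1 if the result would not fit in buf. */
    static int build_tmp_path(char buf[MAXPGPATH], const char *base,
                              const char *lockname, const char *suffix)
    {
        int n = snprintf(buf, MAXPGPATH, "%s/" PG_TEMP_FILES_DIR "/%s%s",
                         base, lockname, suffix);
        return (n >= 0 && n < MAXPGPATH) ? 0 : -1;
    }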


---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-19 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r117423486
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -793,15 +887,70 @@ shareinput_reader_waitready(int share_id, 
PlanGenerator planGen)
}
else if(n==0)
{
-   elog(DEBUG1, "SISC READER (shareid=%d, slice=%d): Wait 
ready time out once",
-   share_id, currentSliceId);
+   file_exists = access(writer_tmp_file, F_OK);
+   if(file_exists != 0)
+   {
+   elog(DEBUG3, "time out count is %d", 
timeout_count);
+   timeout_count--;
+   if(timeout_count == 0 || flag == true) //If tmp 
file never exists or disappeared, reader will no longer waiting for writer
+   {
+   elog(LOG, "SISC READER (shareid=%d, 
slice=%d): Wait ready time out and break",
+   share_id, currentSliceId);
+   pfree(writer_tmp_file);
+   break;
+   }
+   }
+   else
+   {
+   elog(LOG, "writer tmp file of 
shareinput_reader_waitready() is %s", writer_tmp_file);
+   flag = true;
+   fd_tmp = open(writer_tmp_file, O_RDONLY);
+   if(fd_tmp < 0)
+   {
+   elog(DEBUG3, "Open writer's tmp file %s 
failed!, error number is %d", writer_tmp_file, errno);
--- End diff --

This seems to be an expected result, right? If yes, we do not need this code here.


---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-19 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r117416229
  
--- Diff: src/backend/executor/nodeMaterial.c ---
@@ -759,3 +764,30 @@ ExecEagerFreeMaterial(MaterialState *node)
}
 }
 
+
--- End diff --

Again, for a function that is not meant to be used by other files, please add
"static". Please also double-check the other functions.


---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-19 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r117423597
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -793,15 +887,70 @@ shareinput_reader_waitready(int share_id, 
PlanGenerator planGen)
}
else if(n==0)
{
-   elog(DEBUG1, "SISC READER (shareid=%d, slice=%d): Wait 
ready time out once",
-   share_id, currentSliceId);
+   file_exists = access(writer_tmp_file, F_OK);
+   if(file_exists != 0)
+   {
+   elog(DEBUG3, "time out count is %d", 
timeout_count);
+   timeout_count--;
+   if(timeout_count == 0 || flag == true) //If tmp 
file never exists or disappeared, reader will no longer waiting for writer
+   {
+   elog(LOG, "SISC READER (shareid=%d, 
slice=%d): Wait ready time out and break",
+   share_id, currentSliceId);
+   pfree(writer_tmp_file);
+   break;
+   }
+   }
+   else
+   {
+   elog(LOG, "writer tmp file of 
shareinput_reader_waitready() is %s", writer_tmp_file);
+   flag = true;
+   fd_tmp = open(writer_tmp_file, O_RDONLY);
+   if(fd_tmp < 0)
+   {
+   elog(DEBUG3, "Open writer's tmp file %s 
failed!, error number is %d", writer_tmp_file, errno);
+   }
+   lock = flock(fd_tmp, LOCK_EX | LOCK_NB);
+   if(lock == -1)
+   {
+   elog(DEBUG3, "Lock writer's tmp file %s 
failed!, error number is %d", writer_tmp_file, errno);
+   }
+   else if(lock == 0)
+   {
+   /*
+* There is one situation to consider 
about.
+* Writer need a time interval to lock 
the tmp file after the tmp file has been created.
+* So, if reader lock the tmp file 
ahead of writer, we should unlock it.
+* If reader lock the tmp file after 
writer, it means that writer process has abort.
+* We should break the loop to make 
sure reader no longer wait for writer.
--- End diff --

Maybe we should add code here to indicate that the tmp file will be dropped.
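
For context, a standalone sketch of the flock()-based liveness check being
discussed; the function name and return convention are illustrative, not from
the PR, and it deliberately keeps the race noted in the code comment above
(the writer needs a moment to take its lock after creating the file):

```c
#include <errno.h>
#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

/*
 * Returns 1 if the writer still holds the exclusive lock on its tmp file
 * (writer alive), 0 if the reader could take the lock itself (writer gone,
 * or not locked yet), and -1 if the file cannot be opened at all.
 */
static int
writer_seems_alive(const char *tmp_path)
{
    int fd = open(tmp_path, O_RDONLY);

    if (fd < 0)
        return -1;      /* file missing: never created or already cleaned up */

    if (flock(fd, LOCK_EX | LOCK_NB) == -1 && errno == EWOULDBLOCK)
    {
        close(fd);
        return 1;       /* writer still holds its exclusive lock */
    }

    /* We got the lock ourselves, so release it and report the writer as gone. */
    flock(fd, LOCK_UN);
    close(fd);
    return 0;
}
```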


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-19 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r117423739
  
--- Diff: src/backend/executor/nodeMaterial.c ---
@@ -41,14 +41,16 @@
 #include "postgres.h"
--- End diff --

I've been thinking that we should use more meaningful names for all of these 
"tmp" or "temp" identifiers.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-19 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r117420258
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -552,11 +551,61 @@ static void sisc_lockname(char* p, int size, int 
share_id, const char* name)
}
 }
 
+
+char * joint_tmp_file_name(ShareInput_Lk_Context *lk_ctxt, char *name)
--- End diff --

Should we make the function name more meaningful?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-19 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r117420455
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -666,6 +719,36 @@ static int retry_write(int fd, char *buf, int wsize)
return 0;
 }
 
+
+
+/*
+ * mk_tmp_file_name
+ *
+ * Called by reader or writer to make the unique tmp file name.
+ */
+void mk_tmp_file_name(char* p, int size, int share_id, const char* name)
+{
+time_t t;
+int now, random_num;
+t = time(NULL);
+now = time();
+   srand(time(NULL));
+random_num = rand();
--- End diff --

It seems that there is no need for the random number so far.
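
For illustration, a hypothetical deterministic variant that drops the
time()/rand() parts and derives the name from stable identifiers alone
(the function, buffer, and argument names are assumptions, not the PR's code):

```c
#include <stdio.h>

/* Build a per-share tmp file name from stable identifiers, no randomness. */
static void
mk_tmp_file_name_deterministic(char *buf, int size, int share_id,
                               int session_id, const char *name)
{
    snprintf(buf, size, "%s_writer_tmp_%d_%d", name, session_id, share_id);
}
```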


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-19 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r117418080
  
--- Diff: src/backend/executor/nodeMaterial.c ---
@@ -759,3 +764,30 @@ ExecEagerFreeMaterial(MaterialState *node)
}
 }
 
+
+/*
+ * mkTempFileForWriter
+ * 
+ * Create a unique tmp file for writer. Then use flock's unblock way to 
lock the tmp file.
+ * We can make sure the tmp file will be locked forerver until the writer 
process has quit.
+ */
+void mkTempFileForWriter(int size, int share_id, char * name)
+{
+   char *tmp_file;
+   int lock;
+
+   tmp_file = (char *)palloc(sizeof(char) * MAXPGPATH);
+   mk_tmp_file_name(tmp_file, size, share_id, name);
+   elog(DEBUG3, "tmp file for writer is %s", tmp_file);
--- End diff --

I think "shareinputscan" should be added to the log message.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-19 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r117418664
  
--- Diff: src/backend/executor/nodeMaterial.c ---
@@ -759,3 +764,30 @@ ExecEagerFreeMaterial(MaterialState *node)
}
 }
 
+
+/*
+ * mkTempFileForWriter
+ * 
+ * Create a unique tmp file for writer. Then use flock's unblock way to 
lock the tmp file.
+ * We can make sure the tmp file will be locked forerver until the writer 
process has quit.
+ */
+void mkTempFileForWriter(int size, int share_id, char * name)
+{
+   char *tmp_file;
+   int lock;
+
+   tmp_file = (char *)palloc(sizeof(char) * MAXPGPATH);
+   mk_tmp_file_name(tmp_file, size, share_id, name);
+   elog(DEBUG3, "tmp file for writer is %s", tmp_file);
+   fd_tmp_writer = open(tmp_file, O_CREAT, S_IRWXU);
--- End diff --

I do not think we need to grant the +wx permissions.

I think if the file already exists we should quit, right? If yes, we need to add 
O_EXCL.
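
A rough sketch of both points together (the helper name and error handling are
illustrative, not part of the patch):

```c
#include <errno.h>
#include <fcntl.h>
#include <sys/stat.h>

/* Owner read/write only instead of S_IRWXU, and fail if the file exists. */
static int
create_writer_tmp_file(const char *tmp_file)
{
    int fd = open(tmp_file, O_CREAT | O_EXCL | O_RDWR, S_IRUSR | S_IWUSR);

    if (fd < 0 && errno == EEXIST)
    {
        /* Another writer already created this tmp file; caller should bail out. */
    }
    return fd;
}
```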


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-19 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r117418935
  
--- Diff: src/backend/executor/nodeMaterial.c ---
@@ -759,3 +764,30 @@ ExecEagerFreeMaterial(MaterialState *node)
}
 }
 
+
+/*
+ * mkTempFileForWriter
+ * 
+ * Create a unique tmp file for writer. Then use flock's unblock way to 
lock the tmp file.
+ * We can make sure the tmp file will be locked forerver until the writer 
process has quit.
+ */
+void mkTempFileForWriter(int size, int share_id, char * name)
+{
+   char *tmp_file;
+   int lock;
+
+   tmp_file = (char *)palloc(sizeof(char) * MAXPGPATH);
+   mk_tmp_file_name(tmp_file, size, share_id, name);
+   elog(DEBUG3, "tmp file for writer is %s", tmp_file);
+   fd_tmp_writer = open(tmp_file, O_CREAT, S_IRWXU);
+   if(fd_tmp_writer < 0)
+   {
+   elog(ERROR, "could not create tmp file %s for writer", 
tmp_file);
+   }
+   lock = flock(fd_tmp_writer, LOCK_EX | LOCK_NB);
+   if(lock == -1)
+   elog(DEBUG3, "could not lock tmp file  \"%s\": %m", tmp_file);
+   else if(lock == 0)
+   elog(DEBUG3, "successfully locked tmp file  \"%s\": %m", 
tmp_file);
+   pfree(tmp_file);
--- End diff --

Please provide more detailed info in the log (e.g. note that this is for 
shareinputscan and include the failure reason, e.g. strerror(errno)) for 
debugging purposes.
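
For example, something along these lines; the exact wording is only a
suggestion, and elog() and flock() are the same calls the patch already uses:

```c
if (flock(fd_tmp_writer, LOCK_EX | LOCK_NB) == -1)
    elog(DEBUG3, "shareinputscan writer: could not lock tmp file \"%s\": %s",
         tmp_file, strerror(errno));
else
    elog(DEBUG3, "shareinputscan writer: locked tmp file \"%s\"", tmp_file);
```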


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-19 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r117413480
  
--- Diff: src/backend/executor/nodeMaterial.c ---
@@ -41,14 +41,16 @@
 #include "postgres.h"
 
 #include "executor/executor.h"
-#include "executor/nodeMaterial.h"
 #include "executor/instrument.h"/* Instrumentation */
 #include "utils/tuplestorenew.h"
-
+#include "executor/nodeMaterial.h"
 #include "miscadmin.h"
 
 #include "cdb/cdbvars.h"
+#include "postmaster/primary_mirror_mode.h"
 
+
+int fd_tmp_writer = -1;
--- End diff --

1. Use "static" if it is used in the file only. 
2. I'd suggest using a more meaningful name for this variable and adding a 
comment here.
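
For illustration, roughly what that would look like (the new name and comment
are only placeholders):

```c
/*
 * fd of the shareinputscan writer's tmp/lock file; kept open so the flock()
 * is held until this writer process exits.
 */
static int shareinput_writer_lock_fd = -1;
```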


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq issue #1181: HAWQ-1398. Fixing invalid references in log stat...

2017-05-17 Thread paul-guo-
Github user paul-guo- commented on the issue:

https://github.com/apache/incubator-hawq/pull/1181
  
You should check it into the apache repo. The github repo is just a mirror which
is kept in sync with the apache repo.

See
https://cwiki.apache.org/confluence/display/HAWQ/Contributing+to+HAWQ

2017-05-16 23:13 GMT+08:00 Kyle Dunn <notificati...@github.com>:

> Looks like my Apache user kdunn926 does not have write permissions to
> this repo. @edespino <https://github.com/edespino>
>
> $ git push https://kdunn...@github.com/apache/incubator-hawq
> Password for 'https://kdunn...@github.com':
> remote: Permission to apache/incubator-hawq.git denied to kdunn926.
> fatal: unable to access 
'https://kdunn...@github.com/apache/incubator-hawq/': The requested URL 
returned error: 403
>
> —
> You are receiving this because you commented.
> Reply to this email directly, view it on GitHub
> 
<https://github.com/apache/incubator-hawq/pull/1181#issuecomment-301814544>,
> or mute the thread
> 
<https://github.com/notifications/unsubscribe-auth/AHI5jODWSoKIWQvnqIa0L4hlLEu0mj6bks5r6b03gaJpZM4MkFvI>
> .
>



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq issue #1242: HAWQ-1469. Don't expose warning messages to comm...

2017-05-17 Thread paul-guo-
Github user paul-guo- commented on the issue:

https://github.com/apache/incubator-hawq/pull/1242
  
+1


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq pull request #1242: HAWQ-1469. Don't expose warning messages ...

2017-05-17 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1242#discussion_r116960903
  
--- Diff: src/backend/libpq/rangerrest.c ---
@@ -453,23 +453,30 @@ static int call_ranger_rest(CURL_HANDLE curl_handle, 
const char* request)
{
if (retry > 1)
{
-   elog(WARNING, "ranger plugin service from 
http://%s:%d/rps is unavailable : %s, try another http://%s:%d/rps\n;,
+   /* Don't expose this warning message to client, 
just record in log.
+* The value of whereToSendOutput is 
DestRemote, so set it to DestNone
+* and set back after write a warning message 
in log file.
+*/
+   CommandDest commandDest = whereToSendOutput;
+   whereToSendOutput = DestNone;
+   elog(WARNING, "ranger plugin service from 
http://%s:%d/rps is unavailable : %s, "
+   "trying ranger plugin service 
at http://%s:%d/rps\n;,
--- End diff --

If so, why not just remove the code and add comments here? Could elog(LOG, 
...) possibly expose this to users?
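
For comparison, a sketch of the two options being weighed here. With the
default client_min_messages, LOG-level messages go to the server log only,
while the diff keeps WARNING and briefly mutes the client; host and port below
are placeholders:

```c
/* Option 1: log-only severity, nothing is sent to the client by default. */
elog(LOG, "ranger plugin service from http://%s:%d/rps is unavailable, "
          "trying another RPS endpoint", host, port);

/* Option 2 (as in the diff): keep WARNING but redirect output temporarily. */
CommandDest saved = whereToSendOutput;
whereToSendOutput = DestNone;
elog(WARNING, "ranger plugin service from http://%s:%d/rps is unavailable, "
              "trying another RPS endpoint", host, port);
whereToSendOutput = saved;
```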


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Closed] (HAWQ-1455) Wrong results on CTAS query over catalog

2017-05-17 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo closed HAWQ-1455.
--
   Resolution: Fixed
Fix Version/s: 2.3.0.0-incubating

> Wrong results on CTAS query over catalog
> 
>
> Key: HAWQ-1455
> URL: https://issues.apache.org/jira/browse/HAWQ-1455
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>    Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.3.0.0-incubating
>
>
> The last ctas sql returns 0 tuple. This is wrong.
> $ cat catalog.sql
> create temp table t1 (tta varchar, ttb varchar);
> create temp table t2 (tta varchar, ttb varchar);
> insert into t1 values('a', '1');
> insert into t1 values('a', '2');
> insert into t1 values('tta', '3');
> insert into t1 values('ttb', '4');
> insert into t2 select pg_attribute.attname,t1.ttb from pg_attribute join t1 
> on pg_attribute.attname = t1.tta;
> $ psql -f catalog.sql -d postgres
> CREATE TABLE
> CREATE TABLE
> INSERT 0 1
> INSERT 0 1
> INSERT 0 1
> INSERT 0 1
> INSERT 0 0
> The join result should be as below for a new database.
> INSERT 0 4
>  tta | ttb
> -+-
>  tta | 3
>  ttb | 4
>  tta | 3
>  ttb | 4
> (4 rows)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1455) Wrong results on CTAS query over catalog

2017-05-17 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16013628#comment-16013628
 ] 

Paul Guo commented on HAWQ-1455:


Closing this one after checking in the patch below.

commit 3461e64801eb2299d46c86f47734b7f000152a10
Author: Paul Guo <paul...@gmail.com>
Date:   Fri May 5 17:47:41 2017 +0800

HAWQ-1455. Wrong results on CTAS query over catalog

This reverts the previous fix for HAWQ-512, however to HAWQ-512, it looks 
like
we could modify the lock related code following gpdb to do a real fix. 
Those code
was probably deleted during early hawq development.


> Wrong results on CTAS query over catalog
> 
>
> Key: HAWQ-1455
> URL: https://issues.apache.org/jira/browse/HAWQ-1455
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Paul Guo
>Assignee: Paul Guo
>
> The last ctas sql returns 0 tuple. This is wrong.
> $ cat catalog.sql
> create temp table t1 (tta varchar, ttb varchar);
> create temp table t2 (tta varchar, ttb varchar);
> insert into t1 values('a', '1');
> insert into t1 values('a', '2');
> insert into t1 values('tta', '3');
> insert into t1 values('ttb', '4');
> insert into t2 select pg_attribute.attname,t1.ttb from pg_attribute join t1 
> on pg_attribute.attname = t1.tta;
> $ psql -f catalog.sql -d postgres
> CREATE TABLE
> CREATE TABLE
> INSERT 0 1
> INSERT 0 1
> INSERT 0 1
> INSERT 0 1
> INSERT 0 0
> The join result should be as below for a new database.
> INSERT 0 4
>  tta | ttb
> -+-
>  tta | 3
>  ttb | 4
>  tta | 3
>  ttb | 4
> (4 rows)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] incubator-hawq issue #1232: HAWQ-1455. Wrong results on CTAS query over cata...

2017-05-10 Thread paul-guo-
Github user paul-guo- commented on the issue:

https://github.com/apache/incubator-hawq/pull/1232
  
My guess is that we simplified the lock mechanism, i.e. we removed some
code from gpdb. That patch seems to be the whole gpdb patch against pg for
open source.

2017-05-10 17:14 GMT+08:00 zhenglin tao <notificati...@github.com>:

> Seems these codes comes from a bunch of bug fix involved from GPDB commit
> 6b0e52beadd678c5050af9c978c26d171ba86ae0. For this specific bug, I have
> no doubt it works. But for lock mechanism, gpdb fix involves more than
> that. Not sure if there is potential issue in lock.
>
> —
> You are receiving this because you authored the thread.
> Reply to this email directly, view it on GitHub
> 
<https://github.com/apache/incubator-hawq/pull/1232#issuecomment-300423845>,
> or mute the thread
> 
<https://github.com/notifications/unsubscribe-auth/AHI5jJ1cITueX3x4Z_fYO19GnsvLxfl0ks5r4X_ngaJpZM4NRu03>
> .
>



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (HAWQ-1460) WAL Send Server process should exit if postmaster on master is killed

2017-05-10 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004401#comment-16004401
 ] 

Paul Guo commented on HAWQ-1460:


I saw this on a single-node platform without a slave.

> WAL Send Server process should exit if postmaster on master is killed
> -
>
> Key: HAWQ-1460
> URL: https://issues.apache.org/jira/browse/HAWQ-1460
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>    Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.3.0.0-incubating
>
>
> If we kill the postmaster on master, we will see two processes keep running.
> pguo  44007  1  0 16:35 ?00:00:00 postgres: port  5432, 
> master logger process
> pguo  44014  1  0 16:35 ?00:00:00 postgres: port  5432, WAL 
> Send Server process
> Well, maybe we should exit the "WAL Send Server process" so that the 
> processes on master are all gone via checking PostmasterIsAlive() in its loop 
> code.
> Note in distributed system any process could be killed at any time without 
> any callback, handler etc.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (HAWQ-1457) Shared memory for SegmentStatus and MetadataCache should not be allocated on segments.

2017-05-10 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo closed HAWQ-1457.
--
Resolution: Fixed

> Shared memory for SegmentStatus and MetadataCache should not be allocated on 
> segments.
> --
>
> Key: HAWQ-1457
> URL: https://issues.apache.org/jira/browse/HAWQ-1457
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.3.0.0-incubating
>
>
> From code level, MetadataCache_ShmemInit() and
> SegmentStatusShmemInit() should not be called on segments.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (HAWQ-1460) WAL Send Server process should exit if postmaster on master is killed

2017-05-10 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo closed HAWQ-1460.
--
Resolution: Fixed

> WAL Send Server process should exit if postmaster on master is killed
> -
>
> Key: HAWQ-1460
> URL: https://issues.apache.org/jira/browse/HAWQ-1460
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>    Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.3.0.0-incubating
>
>
> If we kill the postmaster on master, we will see two processes keep running.
> pguo  44007  1  0 16:35 ?00:00:00 postgres: port  5432, 
> master logger process
> pguo  44014  1  0 16:35 ?00:00:00 postgres: port  5432, WAL 
> Send Server process
> Well, maybe we should exit the "WAL Send Server process" so that the 
> processes on master are all gone via checking PostmasterIsAlive() in its loop 
> code.
> Note in distributed system any process could be killed at any time without 
> any callback, handler etc.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] incubator-hawq pull request #1236: HAWQ-1457. Shared memory for SegmentStatu...

2017-05-10 Thread paul-guo-
Github user paul-guo- closed the pull request at:

https://github.com/apache/incubator-hawq/pull/1236


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq issue #1236: HAWQ-1457. Shared memory for SegmentStatus and M...

2017-05-10 Thread paul-guo-
Github user paul-guo- commented on the issue:

https://github.com/apache/incubator-hawq/pull/1236
  
Merged. Closing.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq issue #1238: HAWQ-1460. WAL Send Server process should exit i...

2017-05-09 Thread paul-guo-
Github user paul-guo- commented on the issue:

https://github.com/apache/incubator-hawq/pull/1238
  
@jiny2 @linwen Could you please take a look at this?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq pull request #1238: HAWQ-1460. WAL Send Server process should...

2017-05-09 Thread paul-guo-
GitHub user paul-guo- opened a pull request:

https://github.com/apache/incubator-hawq/pull/1238

HAWQ-1460. WAL Send Server process should exit if postmaster on maste…

…r is killed

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/paul-guo-/incubator-hawq wal

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/1238.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1238


commit 9d153c4275d380e6f8cf287c2c4da372a78ec6ce
Author: Paul Guo <paul...@gmail.com>
Date:   2017-05-10T04:12:14Z

HAWQ-1460. WAL Send Server process should exit if postmaster on master is 
killed




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (HAWQ-1460) WAL Send Server process should exit if postmaster on master is killed

2017-05-09 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo reassigned HAWQ-1460:
--

Assignee: Paul Guo  (was: Ed Espino)

> WAL Send Server process should exit if postmaster on master is killed
> -
>
> Key: HAWQ-1460
> URL: https://issues.apache.org/jira/browse/HAWQ-1460
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>    Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.3.0.0-incubating
>
>
> If we kill the postmaster on master, we will see two processes keep running.
> pguo  44007  1  0 16:35 ?00:00:00 postgres: port  5432, 
> master logger process
> pguo  44014  1  0 16:35 ?00:00:00 postgres: port  5432, WAL 
> Send Server process
> Well, maybe we should exit the "WAL Send Server process" so that the 
> processes on master are all gone via checking PostmasterIsAlive() in its loop 
> code.
> Note in distributed system any process could be killed at any time without 
> any callback, handler etc.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1460) WAL Send Server process should exit if postmaster on master is killed

2017-05-09 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1460:
--

 Summary: WAL Send Server process should exit if postmaster on 
master is killed
 Key: HAWQ-1460
 URL: https://issues.apache.org/jira/browse/HAWQ-1460
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Core
Reporter: Paul Guo
Assignee: Ed Espino
 Fix For: 2.3.0.0-incubating


If we kill the postmaster on master, we will see two processes keep running.

pguo  44007  1  0 16:35 ?00:00:00 postgres: port  5432, master 
logger process
pguo  44014  1  0 16:35 ?00:00:00 postgres: port  5432, WAL 
Send Server process

Well, maybe we should exit the "WAL Send Server process" so that the processes 
on the master are all gone; this could be done by checking PostmasterIsAlive() 
in its loop code.

Note that in a distributed system any process could be killed at any time 
without any callback, handler, etc.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1458) Shared Input Scan QE hung in shareinput_reader_waitready().

2017-05-09 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002291#comment-16002291
 ] 

Paul Guo commented on HAWQ-1458:


I'd suggest providing more details. Just a stack trace seems too sparse, since a 
lot of people do not have the background of this issue.

> Shared Input Scan QE hung in shareinput_reader_waitready().
> ---
>
> Key: HAWQ-1458
> URL: https://issues.apache.org/jira/browse/HAWQ-1458
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Reporter: Amy
>Assignee: Amy
> Fix For: backlog
>
>
> The stack is as below:
> ```
> 4/13/17 6:12:32 AM PDT: stack of postgres process (pid 108464) on test4:
> 4/13/17 6:12:32 AM PDT: Thread 2 (Thread 0x7f7ca0c7b700 (LWP 108465)):
> 4/13/17 6:12:32 AM PDT: #0  0x0032214df283 in poll () from 
> /lib64/libc.so.6
> 4/13/17 6:12:32 AM PDT: #1  0x0097e110 in rxThreadFunc ()
> 4/13/17 6:12:32 AM PDT: #2  0x003221807aa1 in start_thread () from 
> /lib64/libpthread.so.0
> 4/13/17 6:12:32 AM PDT: #3  0x0032214e8aad in clone () from 
> /lib64/libc.so.6
> 4/13/17 6:12:32 AM PDT: Thread 1 (Thread 0x7f7cc5d48920 (LWP 108464)):
> 4/13/17 6:12:32 AM PDT: #0  0x0032214e1523 in select () from 
> /lib64/libc.so.6
> 4/13/17 6:12:32 AM PDT: #1  0x0069baaf in shareinput_reader_waitready 
> ()
> 4/13/17 6:12:32 AM PDT: #2  0x0069be0d in 
> ExecSliceDependencyShareInputScan ()
> 4/13/17 6:12:32 AM PDT: #3  0x0066eb40 in ExecSliceDependencyNode ()
> 4/13/17 6:12:32 AM PDT: #4  0x0066eaa5 in ExecSliceDependencyNode ()
> 4/13/17 6:12:32 AM PDT: #5  0x0066eaa5 in ExecSliceDependencyNode ()
> 4/13/17 6:12:32 AM PDT: #6  0x0066af41 in ExecutePlan ()
> 4/13/17 6:12:32 AM PDT: #7  0x0066bafa in ExecutorRun ()
> 4/13/17 6:12:32 AM PDT: #8  0x007f52aa in PortalRun ()
> 4/13/17 6:12:32 AM PDT: #9  0x007eb044 in exec_mpp_query ()
> 4/13/17 6:12:32 AM PDT: #10 0x007effb4 in PostgresMain ()
> 4/13/17 6:12:32 AM PDT: #11 0x007a04f0 in ServerLoop ()
> 4/13/17 6:12:32 AM PDT: #12 0x007a32b9 in PostmasterMain ()
> 4/13/17 6:12:32 AM PDT: #13 0x004a52b9 in main ()
> ```



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] incubator-hawq pull request #1237: HAWQ-1459. Tweak the feature test related...

2017-05-09 Thread paul-guo-
GitHub user paul-guo- opened a pull request:

https://github.com/apache/incubator-hawq/pull/1237

HAWQ-1459. Tweak the feature test related entries in makefiles.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/paul-guo-/incubator-hawq make

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/1237.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1237


commit 9e632dfef73bcba04b7a31beb81331079529237d
Author: Paul Guo <paul...@gmail.com>
Date:   2017-05-09T08:26:05Z

HAWQ-1459. Tweak the feature test related entries in makefiles.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (HAWQ-1459) Tweak the feature test related entries in makefiles.

2017-05-09 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo reassigned HAWQ-1459:
--

Assignee: Paul Guo  (was: Ed Espino)

> Tweak the feature test related entries in makefiles.
> 
>
> Key: HAWQ-1459
> URL: https://issues.apache.org/jira/browse/HAWQ-1459
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>    Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.3.0.0-incubating
>
>
> We really do not need to set seperate entries for feature test in makefiles, 
> i.e.
> feature-test
> feature-test-clean
> This looks a bit ugly.
> Besides, in src/test/Makefile, there is typo, i.e.
> feature_test



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1459) Tweak the feature test related entries in makefiles.

2017-05-09 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1459:
--

 Summary: Tweak the feature test related entries in makefiles.
 Key: HAWQ-1459
 URL: https://issues.apache.org/jira/browse/HAWQ-1459
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Build
Reporter: Paul Guo
Assignee: Ed Espino
 Fix For: 2.3.0.0-incubating


We really do not need to set separate entries for the feature test in makefiles, 
i.e.
feature-test
feature-test-clean

This looks a bit ugly.

Besides, in src/test/Makefile, there is a typo, i.e.
feature_test



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HAWQ-1457) Shared memory for SegmentStatus and MetadataCache should not be allocated on segments.

2017-05-09 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo reassigned HAWQ-1457:
--

Assignee: Paul Guo  (was: Ed Espino)

> Shared memory for SegmentStatus and MetadataCache should not be allocated on 
> segments.
> --
>
> Key: HAWQ-1457
> URL: https://issues.apache.org/jira/browse/HAWQ-1457
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.3.0.0-incubating
>
>
> From code level, MetadataCache_ShmemInit() and
> SegmentStatusShmemInit() should not be called on segments.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] incubator-hawq pull request #1236: HAWQ-1457. Shared memory for SegmentStatu...

2017-05-09 Thread paul-guo-
GitHub user paul-guo- opened a pull request:

https://github.com/apache/incubator-hawq/pull/1236

HAWQ-1457. Shared memory for SegmentStatus and MetadataCache should n…

…ot be allocated on segments.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/paul-guo-/incubator-hawq misc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/1236.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1236


commit eff42eaaf825886ffc00ed172b18ceb642c49634
Author: Paul Guo <paul...@gmail.com>
Date:   2017-05-09T07:05:05Z

HAWQ-1457. Shared memory for SegmentStatus and MetadataCache should not be 
allocated on segments.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (HAWQ-1457) Shared memory for SegmentStatus and MetadataCache should not be allocated on segments.

2017-05-09 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1457:
--

 Summary: Shared memory for SegmentStatus and MetadataCache should 
not be allocated on segments.
 Key: HAWQ-1457
 URL: https://issues.apache.org/jira/browse/HAWQ-1457
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Core
Reporter: Paul Guo
Assignee: Ed Espino
 Fix For: 2.3.0.0-incubating


From code level, MetadataCache_ShmemInit() and
SegmentStatusShmemInit() should not be called on segments.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (HAWQ-1444) Need to replace gettimeofday() with clock_gettime() for related timeout checking code

2017-05-08 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo closed HAWQ-1444.
--
   Resolution: Not A Problem
Fix Version/s: (was: backlog)
   2.3.0.0-incubating

> Need to replace gettimeofday() with clock_gettime() for related timeout 
> checking code
> -
>
> Key: HAWQ-1444
> URL: https://issues.apache.org/jira/browse/HAWQ-1444
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.3.0.0-incubating
>
>
> gettimeofday() could be affected by ntp kinda things. If using it for timeout 
> logic there could be wrong, e.g. time goes backwards. We could 
> clock_gettime() with CLOCK_MONOTONIC as an alternative.
> For some platforms/oses that does not have the support for clock_gettime(), 
> we can fall back to use gettimeofday().
> Note getCurrentTime() in code is a good example.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1444) Need to replace gettimeofday() with clock_gettime() for related timeout checking code

2017-05-08 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16000498#comment-16000498
 ] 

Paul Guo commented on HAWQ-1444:


I went through the callers of gettimeofday() in the code and did not find obvious 
bad use of it after the following code change, so I'm closing this JIRA.

HAWQ-1439. tolerate system time being changed to earlier point when 
checking resource context timeout

> Need to replace gettimeofday() with clock_gettime() for related timeout 
> checking code
> -
>
> Key: HAWQ-1444
> URL: https://issues.apache.org/jira/browse/HAWQ-1444
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: backlog
>
>
> gettimeofday() could be affected by ntp kinda things. If using it for timeout 
> logic there could be wrong, e.g. time goes backwards. We could 
> clock_gettime() with CLOCK_MONOTONIC as an alternative.
> For some platforms/oses that does not have the support for clock_gettime(), 
> we can fall back to use gettimeofday().
> Note getCurrentTime() in code is a good example.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HAWQ-1444) Need to replace gettimeofday() with clock_gettime() for related timeout checking code

2017-05-08 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo reassigned HAWQ-1444:
--

Assignee: Paul Guo  (was: Ed Espino)

> Need to replace gettimeofday() with clock_gettime() for related timeout 
> checking code
> -
>
> Key: HAWQ-1444
> URL: https://issues.apache.org/jira/browse/HAWQ-1444
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: backlog
>
>
> gettimeofday() could be affected by ntp kinda things. If using it for timeout 
> logic there could be wrong, e.g. time goes backwards. We could 
> clock_gettime() with CLOCK_MONOTONIC as an alternative.
> For some platforms/oses that does not have the support for clock_gettime(), 
> we can fall back to use gettimeofday().
> Note getCurrentTime() in code is a good example.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] incubator-hawq issue #1221: HAWQ-1434

2017-05-07 Thread paul-guo-
Github user paul-guo- commented on the issue:

https://github.com/apache/incubator-hawq/pull/1221
  
@shivzone Please help to take a look at the patch.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq issue #1221: HAWQ-1434

2017-05-07 Thread paul-guo-
Github user paul-guo- commented on the issue:

https://github.com/apache/incubator-hawq/pull/1221
  
@michaelandrepearce  If the code change has been in main, please close this 
PR. Thanks.
By the way, it would be better to have the PR title with more details (i.e. 
not just a JIRA ID).


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq issue #1218: HAWQ-1431: Do not use StatsAccessor when column ...

2017-05-07 Thread paul-guo-
Github user paul-guo- commented on the issue:

https://github.com/apache/incubator-hawq/pull/1218
  
@sansanichfb If the code change has been in main, please close this PR. 
Thanks.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq issue #1181: HAWQ-1398. Fixing invalid references in log stat...

2017-05-07 Thread paul-guo-
Github user paul-guo- commented on the issue:

https://github.com/apache/incubator-hawq/pull/1181
  
+1


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq pull request #1232: HAWQ-1455. Wrong results on CTAS query ov...

2017-05-05 Thread paul-guo-
GitHub user paul-guo- opened a pull request:

https://github.com/apache/incubator-hawq/pull/1232

HAWQ-1455. Wrong results on CTAS query over catalog

This reverts the previous fix for HAWQ-512, however to HAWQ-512, it looks 
like
we could modify the lock related code following gpdb to do a real fix. 
Those code
was probably deleted during early hawq development.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/paul-guo-/incubator-hawq entrydb

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/1232.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1232


commit 7e208508df3b58edc0c168937e4c5b7467957a1b
Author: Paul Guo <paul...@gmail.com>
Date:   2017-05-05T09:47:41Z

HAWQ-1455. Wrong results on CTAS query over catalog

This reverts the previous fix for HAWQ-512, however to HAWQ-512, it looks 
like
we could modify the lock related code following gpdb to do a real fix. 
Those code
was probably deleted during early hawq development.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (HAWQ-1455) Wrong results on CTAS query over catalog

2017-05-05 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15998012#comment-15998012
 ] 

Paul Guo commented on HAWQ-1455:


This is a regression which was introduced by
https://issues.apache.org/jira/browse/HAWQ-512

The query is not accessing the catalog tables of database postgres (in my 
example) on the entrydb QE. We need to re-fix HAWQ-512. It seems that it could be 
fixed in the lock manager instead.

> Wrong results on CTAS query over catalog
> 
>
> Key: HAWQ-1455
> URL: https://issues.apache.org/jira/browse/HAWQ-1455
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>    Reporter: Paul Guo
>Assignee: Paul Guo
>
> The last ctas sql returns 0 tuple. This is wrong.
> $ cat catalog.sql
> create temp table t1 (tta varchar, ttb varchar);
> create temp table t2 (tta varchar, ttb varchar);
> insert into t1 values('a', '1');
> insert into t1 values('a', '2');
> insert into t1 values('tta', '3');
> insert into t1 values('ttb', '4');
> insert into t2 select pg_attribute.attname,t1.ttb from pg_attribute join t1 
> on pg_attribute.attname = t1.tta;
> $ psql -f catalog.sql -d postgres
> CREATE TABLE
> CREATE TABLE
> INSERT 0 1
> INSERT 0 1
> INSERT 0 1
> INSERT 0 1
> INSERT 0 0
> The join result should be as below for a new database.
> INSERT 0 4
>  tta | ttb
> -+-
>  tta | 3
>  ttb | 4
>  tta | 3
>  ttb | 4
> (4 rows)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HAWQ-1455) Wrong results on CTAS query over catalog

2017-05-05 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo reassigned HAWQ-1455:
--

Assignee: Paul Guo  (was: Ed Espino)

> Wrong results on CTAS query over catalog
> 
>
> Key: HAWQ-1455
> URL: https://issues.apache.org/jira/browse/HAWQ-1455
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>    Reporter: Paul Guo
>Assignee: Paul Guo
>
> The last ctas sql returns 0 tuple. This is wrong.
> $ cat catalog.sql
> create temp table t1 (tta varchar, ttb varchar);
> create temp table t2 (tta varchar, ttb varchar);
> insert into t1 values('a', '1');
> insert into t1 values('a', '2');
> insert into t1 values('tta', '3');
> insert into t1 values('ttb', '4');
> insert into t2 select pg_attribute.attname,t1.ttb from pg_attribute join t1 
> on pg_attribute.attname = t1.tta;
> $ psql -f catalog.sql -d postgres
> CREATE TABLE
> CREATE TABLE
> INSERT 0 1
> INSERT 0 1
> INSERT 0 1
> INSERT 0 1
> INSERT 0 0
> The join result should be as below for a new database.
> INSERT 0 4
>  tta | ttb
> -+-
>  tta | 3
>  ttb | 4
>  tta | 3
>  ttb | 4
> (4 rows)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] incubator-hawq issue #1230: HAWQ-1453. Fixed relation_close() error at analy...

2017-05-04 Thread paul-guo-
Github user paul-guo- commented on the issue:

https://github.com/apache/incubator-hawq/pull/1230
  
Personally I do not like having asOwner and oldOwner1, but it is not a big 
issue. So +1


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq issue #1227: HAWQ-1448. Fixed postmaster process hung at recv...

2017-05-03 Thread paul-guo-
Github user paul-guo- commented on the issue:

https://github.com/apache/incubator-hawq/pull/1227
  
Normally a stopped master should not lead to a recv() hang on segments, since 
recv() will return 0 if the tcp connection (libpq) is closed.
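
For reference, a small standalone illustration of that recv() behavior (socket
setup is omitted; it assumes an already-connected TCP socket descriptor):

```c
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>

/* recv() returns 0 on an orderly peer close, so a blocked reader wakes up. */
static void
read_until_closed(int sockfd)
{
    char    buf[8192];
    ssize_t n;

    while ((n = recv(sockfd, buf, sizeof(buf), 0)) > 0)
    {
        /* consume n bytes ... */
    }

    if (n == 0)
        printf("peer closed the connection\n");
    else
        perror("recv");
}
```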


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (HAWQ-1436) Implement RPS High availability on HAWQ

2017-04-26 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15985959#comment-15985959
 ] 

Paul Guo commented on HAWQ-1436:


If you use the GUC with a list, you could design it as either a load balancer or 
master+slaves.

> Implement RPS High availability on HAWQ
> ---
>
> Key: HAWQ-1436
> URL: https://issues.apache.org/jira/browse/HAWQ-1436
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Hongxu Ma
>Assignee: Hongxu Ma
> Fix For: backlog
>
> Attachments: RPSHADesign_v0.1.pdf
>
>
> Once Ranger is configured, HAWQ will rely on RPS to connect to Ranger. A 
> single point RPS may influence the robustness of HAWQ. 
> Thus We need to investigate and design out the way to implement RPS High 
> availability. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1444) Need to replace gettimeofday() with clock_gettime() for related timeout checking code

2017-04-26 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1444:
--

 Summary: Need to replace gettimeofday() with clock_gettime() for 
related timeout checking code
 Key: HAWQ-1444
 URL: https://issues.apache.org/jira/browse/HAWQ-1444
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Core
Reporter: Paul Guo
Assignee: Ed Espino
 Fix For: backlog


gettimeofday() could be affected by ntp-like adjustments. If we use it for 
timeout logic, things could go wrong, e.g. time could go backwards. We could use 
clock_gettime() with CLOCK_MONOTONIC as an alternative.

For platforms/OSes that do not have support for clock_gettime(), we can fall 
back to gettimeofday().

Note that getCurrentTime() in the code is a good example.
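
A standalone sketch of the proposed approach, a monotonic clock with a
gettimeofday() fallback (names are illustrative and not the actual HAWQ code;
older glibc needs -lrt for clock_gettime()):

```c
#include <stdint.h>
#include <sys/time.h>
#include <time.h>

#define USECS_PER_SECOND 1000000ULL

/* Microseconds from a monotonic clock when available, wall clock otherwise. */
static uint64_t
monotonic_microsec(void)
{
#if defined(CLOCK_MONOTONIC)
    struct timespec ts;

    if (clock_gettime(CLOCK_MONOTONIC, &ts) == 0)
        return (uint64_t) ts.tv_sec * USECS_PER_SECOND + ts.tv_nsec / 1000;
#endif
    {
        struct timeval tv;

        gettimeofday(&tv, NULL);
        return (uint64_t) tv.tv_sec * USECS_PER_SECOND + tv.tv_usec;
    }
}
```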




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] incubator-hawq pull request #1223: HAWQ-1439. tolerate system time being cha...

2017-04-26 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1223#discussion_r113598189
  
--- Diff: src/backend/resourcemanager/utils/network_utils.c ---
@@ -41,9 +42,23 @@ static void cleanupSocketConnectionPool(int code, Datum 
arg);
 
 uint64_t gettime_microsec(void)
 {
-static struct timeval t;
-gettimeofday(,NULL);
-return 100ULL * t.tv_sec + t.tv_usec;
+   struct timeval newTime;
+   int status = 1;
+   uint64_t t = 0;
+
+#if HAVE_LIBRT
+   struct timespec ts;
+   status = clock_gettime(CLOCK_MONOTONIC, );
+   newTime.tv_sec = ts.tv_sec;
+   newTime.tv_usec = ts.tv_nsec / 1000;
+#endif
+
+   if (status != 0)
+   {
+   gettimeofday(, NULL);
+   }
+   t = ((uint64_t)newTime.tv_sec) * USECS_PER_SECOND + newTime.tv_usec;
+return t;
--- End diff --

spaces -> tab.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq pull request #1223: HAWQ-1439. tolerate system time being cha...

2017-04-26 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1223#discussion_r113598785
  
--- Diff: src/backend/resourcemanager/utils/network_utils.c ---
@@ -29,6 +29,7 @@
 #include 
 #include 
 #include 
+#include 
--- End diff --

You might add the following header file to suppress the warning about 
missing definition of on_proc_exit().

#include "storage/ipc.h"


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Comment Edited] (HAWQ-1436) Implement RPS High availability on HAWQ

2017-04-26 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15984421#comment-15984421
 ] 

Paul Guo edited comment on HAWQ-1436 at 4/26/17 9:21 AM:
-

[~xsheng]

Typo in my comment. I removed the "do not" word.

[~lilima]

A load balancer is just an option. It is up to you to decide the necessity, but I 
really would offload work to the otherwise idle system. Anyway, since the load is 
small, it is not a big issue. Frankly speaking, if using round-robin, the code 
logic change seems to be really small.

For the proxy, the cps should normally be high enough. Even if the cps is low, 
why not use load balancing (e.g. round-robin) in the hawq code directly, since 
adding an additional proxy (I assume you mean a reverse proxy kind of thing) will 
introduce unnecessary latency; besides, the "proxy" will need HA also.


was (Author: paul guo):
[~xsheng]

Typo in my comment. I removed the "do not" word.

> Implement RPS High availability on HAWQ
> ---
>
> Key: HAWQ-1436
> URL: https://issues.apache.org/jira/browse/HAWQ-1436
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Hongxu Ma
>Assignee: Hongxu Ma
> Fix For: backlog
>
> Attachments: RPSHADesign_v0.1.pdf
>
>
> Once Ranger is configured, HAWQ will rely on RPS to connect to Ranger. A 
> single point RPS may influence the robustness of HAWQ. 
> Thus We need to investigate and design out the way to implement RPS High 
> availability. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1436) Implement RPS High availability on HAWQ

2017-04-26 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15984421#comment-15984421
 ] 

Paul Guo commented on HAWQ-1436:


[~xsheng]

Typo in my comment. I removed the "do not" word.

> Implement RPS High availability on HAWQ
> ---
>
> Key: HAWQ-1436
> URL: https://issues.apache.org/jira/browse/HAWQ-1436
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Hongxu Ma
>Assignee: Hongxu Ma
> Fix For: backlog
>
> Attachments: RPSHADesign_v0.1.pdf
>
>
> Once Ranger is configured, HAWQ will rely on RPS to connect to Ranger. A 
> single point RPS may influence the robustness of HAWQ. 
> Thus We need to investigate and design out the way to implement RPS High 
> availability. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HAWQ-1436) Implement RPS High availability on HAWQ

2017-04-26 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15984225#comment-15984225
 ] 

Paul Guo edited comment on HAWQ-1436 at 4/26/17 8:43 AM:
-

Some suggestions:

1) Since RPS has been a stateless proxy to my knowledge, we really do not need 
to add an additional proxy service (i.e. an http proxy?), i.e. I think 3.1 is 
enough. If we really, really need to add some logic to the "proxy", you could 
add the code logic in hawq.

2) I assume currently we need RPS on the master and standby. I think in the long 
run, we should decouple the RPS nodes from the master/standby nodes, although 
running RPS is allowed on the master/standby.

3) Since RPS is stateless, maybe we should use, or at least allow, a load 
balancer policy, instead of making the "standby" RPS idle.

4) I really expect to combine all of the RPS related GUCs into one (instead of 
hawq_rps_address_host, hawq_rps_address_port, hawq_rps_address_suffix), e.g. 
hawq_rps_url_list (e.g. a value like 
"http://192.168.1.66:1357/suffix,http://192.168.1.88:1357/suffix" or with more 
nodes?).
Frankly speaking, when I first saw hawq_rps_address_suffix, I was really 
confused.



was (Author: paul guo):
Some suggestions:

1) Since RPS has been a stateless proxy to my knowledge, we really do not need 
to add an additional proxy service (i.e. http proxy?). i.e.  I do not think 3.1 
is enough. If we really really need to add some logic of the "proxy", you could 
add the code logic in hawq.

2) I assume currently we need RPS on master and standby. think in the long run, 
we should decouple RPS nodes with master/standby nodes although running RPS is 
allowed on master/standby.

3) Since RPS is stateless, maybe we should use or at least allow a load 
balancer policy, instead of making the "standby" RPS idle.

4) I really expect combine all of the RPS related GUCs into one (no, 
hawq_rps_address_host, hawq_rps_address_port, hawq_rps_address_suffix), e.g.
hawq_rps_url_list (e.g. value with 
"http://192.168.1.66:1357/suffix,http://192.168.1.88.1357/suffix; or with more 
nodes?)
Frankly speaking, when I first see hawq_rps_address_suffix, I was really 
confused.


> Implement RPS High availability on HAWQ
> ---
>
> Key: HAWQ-1436
> URL: https://issues.apache.org/jira/browse/HAWQ-1436
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Hongxu Ma
>Assignee: Hongxu Ma
> Fix For: backlog
>
> Attachments: RPSHADesign_v0.1.pdf
>
>
> Once Ranger is configured, HAWQ will rely on RPS to connect to Ranger. A 
> single point RPS may influence the robustness of HAWQ. 
> Thus We need to investigate and design out the way to implement RPS High 
> availability. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1436) Implement RPS High availability on HAWQ

2017-04-26 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15984225#comment-15984225
 ] 

Paul Guo commented on HAWQ-1436:


Some suggestions:

1) Since RPS has been a stateless proxy to my knowledge, we really do not need 
to add an additional proxy service (i.e. an http proxy?), i.e. I do not think 3.1 
is enough. If we really, really need to add some logic to the "proxy", you could 
add the code logic in hawq.

2) I assume currently we need RPS on the master and standby. I think in the long 
run, we should decouple the RPS nodes from the master/standby nodes, although 
running RPS is allowed on the master/standby.

3) Since RPS is stateless, maybe we should use, or at least allow, a load 
balancer policy, instead of making the "standby" RPS idle.

4) I really expect to combine all of the RPS related GUCs into one (instead of 
hawq_rps_address_host, hawq_rps_address_port, hawq_rps_address_suffix), e.g. 
hawq_rps_url_list (e.g. a value like 
"http://192.168.1.66:1357/suffix,http://192.168.1.88:1357/suffix" or with more 
nodes?).
Frankly speaking, when I first saw hawq_rps_address_suffix, I was really 
confused.
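
To make point 4 concrete, a hypothetical shape for such a GUC would be
something like hawq_rps_url_list =
'http://192.168.1.66:1357/rps,http://192.168.1.88:1357/rps', plus a trivial
round-robin pick over the parsed list (the GUC name, value, and code are
illustrative only):

```c
/* Round-robin over the parsed RPS URL list; the index persists across calls. */
static int rps_rr_index = 0;

static const char *
next_rps_url(char **urls, int n_urls)
{
    const char *url = urls[rps_rr_index % n_urls];

    rps_rr_index = (rps_rr_index + 1) % n_urls;
    return url;
}
```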


> Implement RPS High availability on HAWQ
> ---
>
> Key: HAWQ-1436
> URL: https://issues.apache.org/jira/browse/HAWQ-1436
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Hongxu Ma
>Assignee: Hongxu Ma
> Fix For: backlog
>
> Attachments: RPSHADesign_v0.1.pdf
>
>
> Once Ranger is configured, HAWQ will rely on RPS to connect to Ranger. A 
> single point RPS may influence the robustness of HAWQ. 
> Thus We need to investigate and design out the way to implement RPS High 
> availability. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] incubator-hawq issue #1223: HAWQ-1439. tolerate system time being changed to...

2017-04-24 Thread paul-guo-
Github user paul-guo- commented on the issue:

https://github.com/apache/incubator-hawq/pull/1223
  
This is a workaround. A better solution seems to be using relative time 
based on the H/W tick. I'd test using clock_gettime() to replace gettimeofday() 
where clock_gettime() exists.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq issue #1222: HAWQ-1438. Support resource owner beyond transac...

2017-04-21 Thread paul-guo-
Github user paul-guo- commented on the issue:

https://github.com/apache/incubator-hawq/pull/1222
  
+1


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq pull request #1212: HAWQ-1396. Add more test cases for queryi...

2017-04-11 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1212#discussion_r110828640
  
--- Diff: src/test/feature/sanity_tests.txt ---
@@ -2,5 +2,5 @@
 #SERIAL=* are the serial tests to run, optional but should not be empty
 #you can have several PARALLEL or SRRIAL
 

-PARALLEL=TestErrorTable.*:TestPreparedStatement.*:TestUDF.*:TestAOSnappy.*:TestAlterOwner.*:TestAlterTable.*:TestCreateTable.*:TestGuc.*:TestType.*:TestDatabase.*:TestParquet.*:TestPartition.*:TestSubplan.*:TestAggregate.*:TestCreateTypeComposite.*:TestGpDistRandom.*:TestInformationSchema.*:TestQueryInsert.*:TestQueryNestedCaseNull.*:TestQueryPolymorphism.*:TestQueryPortal.*:TestQueryPrepare.*:TestQuerySequence.*:TestCommonLib.*:TestToast.*:TestTransaction.*:TestCommand.*:TestCopy.*:TestHawqRegister.TestPartitionTableMultilevel:TestHawqRegister.TestUsage1ExpectSuccessDifferentSchema:TestHawqRegister.TestUsage1ExpectSuccess:TestHawqRegister.TestUsage1SingleHawqFile:TestHawqRegister.TestUsage1SingleHiveFile:TestHawqRegister.TestDataTypes:TestHawqRegister.TestUsage1EofSuccess:TestHawqRegister.TestUsage2Case1Expected:TestHawqRegister.TestUsage2Case2Expected

-SERIAL=TestHawqRanger.*:TestExternalOid.TestExternalOidAll:TestExternalTable.TestExternalTableAll:TestTemp.BasicTest:TestRowTypes.*

+PARALLEL=TestErrorTable.*:TestPreparedStatement.*:TestUDF.*:TestAOSnappy.*:TestAlterOwner.*:TestAlterTable.*:TestCreateTable.*:TestGuc.*:TestType.*:TestDatabase.*:TestParquet.*:TestPartition.*:TestSubplan.*:TestAggregate.*:TestCreateTypeComposite.*:TestGpDistRandom.*:TestInformationSchema.*:TestQueryInsert.*:TestQueryNestedCaseNull.*:TestQueryPolymorphism.*:TestQueryPortal.*:TestQueryPrepare.*:TestQuerySequence.*:TestCommonLib.*:TestToast.*:TestTransaction.*:TestCommand.*:TestCopy.*:TestHawqRegister.TestPartitionTableMultilevel:TestHawqRegister.TestUsage1ExpectSuccessDifferentSchema:TestHawqRegister.TestUsage1ExpectSuccess:TestHawqRegister.TestUsage1SingleHawqFile:TestHawqRegister.TestUsage1SingleHiveFile:TestHawqRegister.TestDataTypes:TestHawqRegister.TestUsage1EofSuccess:TestHawqRegister.TestUsage2Case1Expected:TestHawqRegister.TestUsage2Case2Expected:TestHawqRanger.FallbackTest:TestHawqRanger.PXF*

+SERIAL=TestHawqRanger.BasicTest:TestHawqRanger.Allow*:TestHawqRanger.Resource*:TestHawqRanger.Deny*:TestExternalOid.TestExternalOidAll:TestExternalTable.TestExternalTableAll:TestTemp.BasicTest:TestRowTypes.*
--- End diff --

I'm not sure how this syntax is parsed in the script, but maybe you could
check. If it is really complex to do, you could file a JIRA. I think this
will become an annoying issue as more test cases are added.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq pull request #1212: HAWQ-1396. Add more test cases for queryi...

2017-04-07 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1212#discussion_r110351041
  
--- Diff: src/test/feature/sanity_tests.txt ---
@@ -2,5 +2,5 @@
 #SERIAL=* are the serial tests to run, optional but should not be empty
 #you can have several PARALLEL or SRRIAL
 

-PARALLEL=TestErrorTable.*:TestPreparedStatement.*:TestUDF.*:TestAOSnappy.*:TestAlterOwner.*:TestAlterTable.*:TestCreateTable.*:TestGuc.*:TestType.*:TestDatabase.*:TestParquet.*:TestPartition.*:TestSubplan.*:TestAggregate.*:TestCreateTypeComposite.*:TestGpDistRandom.*:TestInformationSchema.*:TestQueryInsert.*:TestQueryNestedCaseNull.*:TestQueryPolymorphism.*:TestQueryPortal.*:TestQueryPrepare.*:TestQuerySequence.*:TestCommonLib.*:TestToast.*:TestTransaction.*:TestCommand.*:TestCopy.*:TestHawqRegister.TestPartitionTableMultilevel:TestHawqRegister.TestUsage1ExpectSuccessDifferentSchema:TestHawqRegister.TestUsage1ExpectSuccess:TestHawqRegister.TestUsage1SingleHawqFile:TestHawqRegister.TestUsage1SingleHiveFile:TestHawqRegister.TestDataTypes:TestHawqRegister.TestUsage1EofSuccess:TestHawqRegister.TestUsage2Case1Expected:TestHawqRegister.TestUsage2Case2Expected

-SERIAL=TestHawqRanger.*:TestExternalOid.TestExternalOidAll:TestExternalTable.TestExternalTableAll:TestTemp.BasicTest:TestRowTypes.*

+PARALLEL=TestErrorTable.*:TestPreparedStatement.*:TestUDF.*:TestAOSnappy.*:TestAlterOwner.*:TestAlterTable.*:TestCreateTable.*:TestGuc.*:TestType.*:TestDatabase.*:TestParquet.*:TestPartition.*:TestSubplan.*:TestAggregate.*:TestCreateTypeComposite.*:TestGpDistRandom.*:TestInformationSchema.*:TestQueryInsert.*:TestQueryNestedCaseNull.*:TestQueryPolymorphism.*:TestQueryPortal.*:TestQueryPrepare.*:TestQuerySequence.*:TestCommonLib.*:TestToast.*:TestTransaction.*:TestCommand.*:TestCopy.*:TestHawqRegister.TestPartitionTableMultilevel:TestHawqRegister.TestUsage1ExpectSuccessDifferentSchema:TestHawqRegister.TestUsage1ExpectSuccess:TestHawqRegister.TestUsage1SingleHawqFile:TestHawqRegister.TestUsage1SingleHiveFile:TestHawqRegister.TestDataTypes:TestHawqRegister.TestUsage1EofSuccess:TestHawqRegister.TestUsage2Case1Expected:TestHawqRegister.TestUsage2Case2Expected:TestHawqRanger.FallbackTest:TestHawqRanger.PXF*

+SERIAL=TestHawqRanger.BasicTest:TestHawqRanger.Allow*:TestHawqRanger.Resource*:TestHawqRanger.Deny*:TestExternalOid.TestExternalOidAll:TestExternalTable.TestExternalTableAll:TestTemp.BasicTest:TestRowTypes.*
--- End diff --

I'm wondering whether we could use a more human-readable syntax, e.g.

SERIAL=TestHawqRanger.a:TestHawqRanger.b
SERIAL+=TestFeatureB.c
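
A rough sketch of how such "+=" accumulation could be parsed, assuming the
file keeps its KEY=value / "#" comment shape and ":" stays the separator;
this is not the existing feature-test parser, just an illustration:

{code}
def parse_test_list(path):
    """Return e.g. {'PARALLEL': 'a.*:b.*', 'SERIAL': 'c.*:d.*'}, where a
    KEY+=value line appends to an earlier KEY=value with a ':' separator."""
    values = {}
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith("#"):
                continue
            if "+=" in line:
                key, _, val = line.partition("+=")
                key, val = key.strip(), val.strip()
                values[key] = values[key] + ":" + val if key in values else val
            elif "=" in line:
                key, _, val = line.partition("=")
                values[key.strip()] = val.strip()
    return values
{code}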


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq pull request #1211: HAWQ-1425. Print error message if ssh con...

2017-04-06 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1211#discussion_r110319372
  
--- Diff: tools/bin/hawqpylib/hawqlib.py ---
@@ -203,6 +203,17 @@ def check_hostname_equal(remote_host, user = ""):
 cmd = "hostname"
 result_local, local_hostname, stderr_remote  = local_ssh_output(cmd)
 result_remote, remote_hostname, stderr_remote = remote_ssh_output(cmd, 
remote_host, user)
+# SSH return 255 error code when having connection issues, otherwise 
return code is in [0, 255).
+if result_remote == 255:
+print "Create ssh connection to %s failed." % remote_host
+print "Please check the ssh connection and make sure passwordless 
ssh enabled."
+sys.exit(result_remote)
+elif result_remote > 0:
+print "Execute remote command failed."
+sys.exit(result_remote)
+else:
+pass
+
--- End diff --

Judging by 255 alone seems wrong, since the remote command itself could also
exit with 255, e.g.
[pguo@host67:/home/pguo]$ ssh 127.0.0.1 'exit 255'
[pguo@host67:/home/pguo]$ echo $?
255

I'm wondering whether we could show the error info in the hawq utility output
or log file, still check result_remote > 0, and log something like this for
that condition:

ssh command returned an error: either the ssh connection failed or the remote
command exited with an error. See the log for details.
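
As a standalone sketch of that handling (using subprocess directly; this is
not the actual remote_ssh_output() helper in hawqpylib, and the wording of
the messages is only an example):

{code}
import subprocess


def run_remote(cmd, host, user=""):
    """Run cmd on host over ssh, surfacing stderr instead of guessing the
    failure kind from the exit code alone."""
    target = "%s@%s" % (user, host) if user else host
    proc = subprocess.Popen(["ssh", target, cmd],
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            universal_newlines=True)
    out, err = proc.communicate()
    if proc.returncode != 0:
        # 255 usually indicates an ssh-level failure, but the remote command
        # itself may also exit with 255, so the code alone is not definitive.
        print("ssh command returned %d: either the ssh connection failed or "
              "the remote command exited with an error." % proc.returncode)
        if err.strip():
            print("stderr: %s" % err.strip())
    return proc.returncode, out, err
{code}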



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (HAWQ-1423) cmock framework does not recognize __MAYBE_UNUSED.

2017-04-05 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo resolved HAWQ-1423.

   Resolution: Fixed
Fix Version/s: (was: backlog)
   2.3.0.0-incubating

> cmock framework does not recognize __MAYBE_UNUSED.
> --
>
> Key: HAWQ-1423
> URL: https://issues.apache.org/jira/browse/HAWQ-1423
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Ming LI
>Assignee: Paul Guo
> Fix For: 2.3.0.0-incubating
>
>
> This bug only exists on MacOS.
> Reproduce Steps: 
> {code}
> 1. ./configure 
> 2. make -j8 
> 3. cd src/backend
> 4. make unittest-check
> {code}
> Build log:
> {code}
> ../../../../../src/test/unit/mock/backend/libpq/be-secure_mock.c:174:2: 
> error: void function 'report_commerror'
>   should not return a value [-Wreturn-type]
> return (__MAYBE_UNUSED) mock();
> ^  ~~~
> 1 error generated.
> make[4]: *** 
> [../../../../../src/test/unit/mock/backend/libpq/be-secure_mock.o] Error 1
> make[3]: *** [mockup-phony] Error 2
> make[2]: *** [unittest-check] Error 2
> make[1]: *** [unittest-check] Error 2
> make: *** [unittest-check] Error 2
> {code}
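
For reference, the generated "return (__MAYBE_UNUSED) mock();" line suggests
the generator treated the attribute macro as part of the return type. A
minimal illustration of stripping such macros before deciding whether the
mocked function is void (this is not HAWQ's actual mock-generation script;
the names are hypothetical):

{code}
import re

ATTRIBUTE_MACROS = ("__MAYBE_UNUSED",)


def normalize_return_type(ret_type):
    """Drop attribute-like macros and collapse whitespace in a return type."""
    for macro in ATTRIBUTE_MACROS:
        ret_type = ret_type.replace(macro, "")
    return re.sub(r"\s+", " ", ret_type).strip()


def emit_return(ret_type):
    """Emit no value-returning statement for void functions."""
    clean = normalize_return_type(ret_type)
    if clean == "void":
        return "    mock();"
    return "    return (%s) mock();" % clean


if __name__ == "__main__":
    print(emit_return("__MAYBE_UNUSED void"))  # -> mock();
    print(emit_return("int"))                  # -> return (int) mock();
{code}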



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

