[GitHub] incubator-hawq pull request #1225: HAWQ-1446: Introduce vectorized profile f...

2017-05-24 Thread shivzone
Github user shivzone commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1225#discussion_r118358798
  
--- Diff: pxf/pxf-service/src/main/java/org/apache/hawq/pxf/service/ReadVectorizedBridge.java ---
@@ -0,0 +1,126 @@
+package org.apache.hawq.pxf.service;
--- End diff --

ReadVectorizedBridge looks very similar to ReadBridge except for the getNext()
function. Please refactor both classes to avoid duplication.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq pull request #1225: HAWQ-1446: Introduce vectorized profile f...

2017-05-24 Thread shivzone
Github user shivzone commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1225#discussion_r118332954
  
--- Diff: pxf/pxf-hive/src/main/java/org/apache/hawq/pxf/plugins/hive/HiveORCBatchResolver.java ---
@@ -0,0 +1,257 @@
+package org.apache.hawq.pxf.plugins.hive;
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+import static org.apache.hawq.pxf.api.io.DataType.BIGINT;
+import static org.apache.hawq.pxf.api.io.DataType.BOOLEAN;
+import static org.apache.hawq.pxf.api.io.DataType.BPCHAR;
+import static org.apache.hawq.pxf.api.io.DataType.BYTEA;
+import static org.apache.hawq.pxf.api.io.DataType.DATE;
+import static org.apache.hawq.pxf.api.io.DataType.FLOAT8;
+import static org.apache.hawq.pxf.api.io.DataType.INTEGER;
+import static org.apache.hawq.pxf.api.io.DataType.NUMERIC;
+import static org.apache.hawq.pxf.api.io.DataType.REAL;
+import static org.apache.hawq.pxf.api.io.DataType.SMALLINT;
+import static org.apache.hawq.pxf.api.io.DataType.TEXT;
+import static org.apache.hawq.pxf.api.io.DataType.TIMESTAMP;
+import static org.apache.hawq.pxf.api.io.DataType.VARCHAR;
+
+import java.math.BigDecimal;
+import java.util.ArrayList;
+import java.util.Calendar;
+import java.util.List;
+import java.sql.Timestamp;
+import java.sql.Date;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hive.common.type.HiveDecimal;
+import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.DoubleWritable;
+import org.apache.hadoop.io.FloatWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hawq.pxf.api.OneField;
+import org.apache.hawq.pxf.api.OneRow;
+import org.apache.hawq.pxf.api.ReadVectorizedResolver;
+import org.apache.hawq.pxf.api.UnsupportedTypeException;
+import org.apache.hawq.pxf.api.io.DataType;
+import org.apache.hawq.pxf.api.utilities.ColumnDescriptor;
+import org.apache.hawq.pxf.api.utilities.InputData;
+import org.apache.hawq.pxf.api.utilities.Plugin;
+import org.apache.hawq.pxf.plugins.hive.utilities.HiveUtilities;
+import org.apache.hadoop.hive.serde2.*;
+import org.apache.hadoop.hive.serde2.objectinspector.*;
+import org.apache.hadoop.hive.serde2.objectinspector.primitive.*;
+import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector.Category;
+import org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector.PrimitiveCategory;
+import org.apache.hadoop.hive.ql.exec.vector.*;
+
+@SuppressWarnings("deprecation")
+public class HiveORCBatchResolver extends Plugin implements ReadVectorizedResolver {
+
+    private static final Log LOG = LogFactory.getLog(HiveORCBatchResolver.class);
+
+    private List<List<OneField>> resolvedBatch;
+    private StructObjectInspector soi;
+
+    public HiveORCBatchResolver(InputData input) throws Exception {
+        super(input);
+        try {
+            soi = (StructObjectInspector) HiveUtilities.getOrcReader(input).getObjectInspector();
+        } catch (Exception e) {
+            LOG.error("Unable to create an object inspector.");
+            throw e;
+        }
+    }
+
+    @Override
+    public List<List<OneField>> getFieldsForBatch(OneRow batch) {
+
+        Writable writableObject = null;
+        Object fieldValue = null;
+        VectorizedRowBatch vectorizedBatch = (VectorizedRowBatch) batch.getData();
+
+        // Allocate empty result set
+        resolvedBatch = new ArrayList<List<OneField>>(vectorizedBatch.size);
+        for (int i = 0; i < vectorizedBatch.size; i++) {
+            ArrayList<OneField> row = new ArrayList<OneField>(inputData.getColumns());
+            resolvedBatch.add(row);
+            for (int j = 0; j < inputData.getColumns(); j++) {
+                row.add(null);
  

[GitHub] incubator-hawq pull request #1225: HAWQ-1446: Introduce vectorized profile f...

2017-05-24 Thread shivzone
Github user shivzone commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1225#discussion_r118339930
  
--- Diff: pxf/pxf-hive/src/main/java/org/apache/hawq/pxf/plugins/hive/HiveORCBatchResolver.java ---

[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-24 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118226956
  
--- Diff: src/backend/utils/misc/guc.c ---
@@ -6685,6 +6687,15 @@ static struct config_int ConfigureNamesInt[] =
_cache_max_hdfs_file_num,
524288, 32768, 8388608, NULL, NULL
},
+   {
+   {"share_input_scan_wait_lockfile_timeout", PGC_USERSET, DEVELOPER_OPTIONS,
+   gettext_noop("timeout for wait lock file."),
--- End diff --

The description is too short.
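For comparison, a fuller entry might name the unit and the consequence of expiry. The long-description wording below is purely illustrative (only the identifier and flags come from the diff above), and the remaining struct fields are elided:

```c
/* Hypothetical, more descriptive GUC entry -- wording is a sketch only */
{
    {"share_input_scan_wait_lockfile_timeout", PGC_USERSET, DEVELOPER_OPTIONS,
        gettext_noop("Sets the time (in seconds) a shared input scan reader "
                     "waits for the writer's lock file to appear."),
        gettext_noop("If the lock file is not seen before the timeout "
                     "expires, the reader stops waiting for the writer.")
    },
    &share_input_scan_wait_lockfile_timeout,
    /* default, min, max, and hooks as in the patch */
},
```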




[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-24 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118224289
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -709,6 +783,12 @@ shareinput_reader_waitready(int share_id, PlanGenerator planGen)
struct timeval tval;
int n;
char a;
+   int file_exists = -1;
+   int timeout_interval = 0;
+   bool flag = false; //A tag for file exists or not.
+   int fd_lock = -1;
--- End diff --

fd_lock -> lock_fd?






[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-24 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118221100
  
--- Diff: src/backend/executor/nodeMaterial.c ---
@@ -759,3 +764,30 @@ ExecEagerFreeMaterial(MaterialState *node)
}
 }
 
+
+/*
+ * mkLockFileForWriter
+ * 
+ * Create a unique lock file for writer. Then use flock's unblock way to 
lock the lock file.
+ * We can make sure the lock file will be locked forerver until the writer 
process has quit.
+ */
+void mkLockFileForWriter(int size, int share_id, char * name)
--- End diff --

mkLockFileForWriter() seems to be used in this file only. Why not make it static
in this file and remove the declaration in the header file below?




[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-24 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118222465
  
--- Diff: src/backend/executor/nodeMaterial.c ---
@@ -759,3 +764,30 @@ ExecEagerFreeMaterial(MaterialState *node)
}
 }
 
+
+/*
+ * mkLockFileForWriter
+ * 
+ * Create a unique lock file for writer. Then use flock's unblock way to 
lock the lock file.
+ * We can make sure the lock file will be locked forerver until the writer 
process has quit.
+ */
+void mkLockFileForWriter(int size, int share_id, char * name)
+{
+   char *lock_file;
+   int lock;
+
+   lock_file = (char *)palloc0(sizeof(char) * MAXPGPATH);
+   generate_lock_file_name(lock_file, size, share_id, name);
+   elog(DEBUG3, "The lock file for writer in SISC is %s", lock_file);
+   sisc_writer_lock_fd = open(lock_file, O_CREAT, S_IRWXU);
+   if(sisc_writer_lock_fd < 0)
+   {
+   elog(ERROR, "Could not create lock file %s for writer in SISC. 
The error number is %d", lock_file, errno);
+   }
+   lock = flock(sisc_writer_lock_fd, LOCK_EX | LOCK_NB);
+   if(lock == -1)
+   elog(DEBUG3, "Could not lock lock file  \"%s\" for writer in 
SISC : %m. The error number is %d", lock_file, errno);
+   else if(lock == 0)
+   elog(DEBUG3, "Successfully locked lock file  \"%s\" for writer 
in SISC: %m.The error number is %d", lock_file, errno);
+   pfree(lock_file);
--- End diff --

ditto.




[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-24 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118224476
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -764,7 +848,7 @@ shareinput_reader_waitready(int share_id, PlanGenerator planGen)
tval.tv_usec = 0;
 
n = select(pctxt->readyfd+1, (fd_set *) , NULL, NULL, 
);
-
+   
--- End diff --

Remove the trailing blank characters.




[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-24 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118220731
  
--- Diff: src/backend/executor/nodeMaterial.c ---
@@ -759,3 +764,30 @@ ExecEagerFreeMaterial(MaterialState *node)
}
 }
 
+
+/*
+ * mkLockFileForWriter
+ * 
+ * Create a unique lock file for writer. Then use flock's unblock way to 
lock the lock file.
+ * We can make sure the lock file will be locked forerver until the writer 
process has quit.
--- End diff --

until the writer process quits.




[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-24 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118222840
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -552,11 +551,59 @@ static void sisc_lockname(char* p, int size, int share_id, const char* name)
}
 }
 
+
+char *joint_lock_file_name(ShareInput_Lk_Context *lk_ctxt, char *name)
+{
+   char *lock_file = palloc0(sizeof(char) * MAXPGPATH);
+
+   if(strncmp("writer", name, 6) ==0 )
+   {
+   strncat(lock_file, lk_ctxt->lkname_ready, 
strlen(lk_ctxt->lkname_ready));
--- End diff --

The strncat usage is not correct. It should be
strncat(lock_file, lk_ctxt->lkname_ready, MAXPGPATH - strlen(lock_file) - 1);

From the manpage:
 strncat(char *restrict s1, const char *restrict s2, size_t n);
 The strncat() function appends not more than n characters from s2, and then adds a terminating `\0'.




[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-24 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118224402
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -738,6 +818,10 @@ shareinput_reader_waitready(int share_id, PlanGenerator planGen)
if(pctxt->donefd < 0)
elog(ERROR, "could not open fifo \"%s\": %m", 
pctxt->lkname_done);
 
+   char *writer_lock_file = NULL; //current path for lock file.
--- End diff --

I think moving the variable definition earlier in its scope would be better.




[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-24 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118226007
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -793,15 +877,72 @@ shareinput_reader_waitready(int share_id, PlanGenerator planGen)
}
else if(n==0)
{
-   elog(DEBUG1, "SISC READER (shareid=%d, slice=%d): Wait 
ready time out once",
-   share_id, currentSliceId);
+   file_exists = access(writer_lock_file, F_OK);   
+   if(file_exists != 0)
+   {
+   elog(DEBUG3, "Wait lock file for writer time 
out interval is %d", timeout_interval);
+   if(timeout_interval >= 
share_input_scan_wait_lockfile_timeout || flag == true) //If lock file never 
exists or disappeared, reader will no longer waiting for writer
+   {
+   elog(LOG, "SISC READER (shareid=%d, 
slice=%d): Wait ready time out and break",
+   share_id, currentSliceId);
+   pfree(writer_lock_file);
+   break;
+   }
+   timeout_interval += tval.tv_sec;
+   }
+   else
+   {
+   elog(LOG, "writer lock file of 
shareinput_reader_waitready() is %s", writer_lock_file);
+   flag = true;
+   fd_lock = open(writer_lock_file, O_RDONLY);
+   if(fd_lock < 0)
+   {
+   elog(DEBUG3, "Open writer's lock file 
%s failed!, error number is %d", writer_lock_file, errno);
+   }
+   lock = flock(fd_lock, LOCK_EX | LOCK_NB);
+   if(lock == -1)
+   {
--- End diff --

This is expected if I understand correctly, so the log should be friendlier
(e.g. tell users that this is expected).




[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-24 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118223887
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -666,6 +717,29 @@ static int retry_write(int fd, char *buf, int wsize)
return 0;
 }
 
+
+
+/*
+ * generate_lock_file_name
+ *
+ * Called by reader or writer to make the unique lock file name.
+ */
+void generate_lock_file_name(char* p, int size, int share_id, const char* name)
+{
+   if (strncmp(name , "writer", 6) == 0)
--- End diff --

My habit is:

strncmp(name, "writer", strlen("writer")) == 0

So you do not need to calculate the string length yourself (the compiler does this for you).

Anyway, up to you.




[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-24 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118222360
  
--- Diff: src/backend/executor/nodeMaterial.c ---
@@ -759,3 +764,30 @@ ExecEagerFreeMaterial(MaterialState *node)
}
 }
 
+
+/*
+ * mkLockFileForWriter
+ * 
+ * Create a unique lock file for writer. Then use flock's unblock way to 
lock the lock file.
+ * We can make sure the lock file will be locked forerver until the writer 
process has quit.
+ */
+void mkLockFileForWriter(int size, int share_id, char * name)
+{
+   char *lock_file;
+   int lock;
+
+   lock_file = (char *)palloc0(sizeof(char) * MAXPGPATH);
+   generate_lock_file_name(lock_file, size, share_id, name);
+   elog(DEBUG3, "The lock file for writer in SISC is %s", lock_file);
+   sisc_writer_lock_fd = open(lock_file, O_CREAT, S_IRWXU);
+   if(sisc_writer_lock_fd < 0)
+   {
+   elog(ERROR, "Could not create lock file %s for writer in SISC. 
The error number is %d", lock_file, errno);
+   }
+   lock = flock(sisc_writer_lock_fd, LOCK_EX | LOCK_NB);
+   if(lock == -1)
+   elog(DEBUG3, "Could not lock lock file  \"%s\" for writer in 
SISC : %m. The error number is %d", lock_file, errno);
--- End diff --

With "%m" we already get the error info:
   m  (Glibc extension.)  Prints the output of strerror(errno).  No
argument is required.

Does this code work fine on macOS? To avoid the duplication, I think we should use
errno or strerror() instead of "%m".




[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-24 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118220716
  
--- Diff: src/backend/executor/nodeMaterial.c ---
@@ -759,3 +764,30 @@ ExecEagerFreeMaterial(MaterialState *node)
}
 }
 
+
+/*
+ * mkLockFileForWriter
+ * 
+ * Create a unique lock file for writer. Then use flock's unblock way to 
lock the lock file.
--- End diff --

use flock() to lock/unlock the lock file.




[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-24 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118226308
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -793,15 +877,72 @@ shareinput_reader_waitready(int share_id, PlanGenerator planGen)
}
else if(n==0)
{
-   elog(DEBUG1, "SISC READER (shareid=%d, slice=%d): Wait 
ready time out once",
-   share_id, currentSliceId);
+   file_exists = access(writer_lock_file, F_OK);   
+   if(file_exists != 0)
+   {
+   elog(DEBUG3, "Wait lock file for writer time 
out interval is %d", timeout_interval);
+   if(timeout_interval >= 
share_input_scan_wait_lockfile_timeout || flag == true) //If lock file never 
exists or disappeared, reader will no longer waiting for writer
+   {
+   elog(LOG, "SISC READER (shareid=%d, 
slice=%d): Wait ready time out and break",
+   share_id, currentSliceId);
+   pfree(writer_lock_file);
+   break;
+   }
+   timeout_interval += tval.tv_sec;
+   }
+   else
+   {
+   elog(LOG, "writer lock file of 
shareinput_reader_waitready() is %s", writer_lock_file);
+   flag = true;
+   fd_lock = open(writer_lock_file, O_RDONLY);
+   if(fd_lock < 0)
+   {
+   elog(DEBUG3, "Open writer's lock file 
%s failed!, error number is %d", writer_lock_file, errno);
--- End diff --

Why does the code still continue if fd_lock < 0? Shouldn't it fail?




[jira] [Commented] (HAWQ-1471) "Build and Install" under Red Hat 6.X environment, gcc / gcc-c++ version 4.7.2 is too low to install hawq

2017-05-24 Thread fangpei (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16022711#comment-16022711
 ] 

fangpei commented on HAWQ-1471:
---

OK, thank you! After you modified the wiki page, the description of the GCC
version became clear, and this will help more users avoid this error. Cool!

> "Build and Install" under Red Hat 6.X environment, gcc / gcc-c++ version 
> 4.7.2 is too low to install hawq
> -
>
> Key: HAWQ-1471
> URL: https://issues.apache.org/jira/browse/HAWQ-1471
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Build
>Reporter: fangpei
>Assignee: Paul Guo
> Fix For: 2.3.0.0-incubating
>
>
> My OS environment is Red Hat 6.5. I referred to the website 
> "https://cwiki.apache.org/confluence/display/HAWQ/Build+and+Install" to 
> build and install hawq. I installed the GCC version specified on the website 
> (gcc 4.7.2 / gcc-c++ 4.7.2), but the error happened:
> "error: 'Hdfs::Internal::Once' cannot be thread-local because it has 
> non-trivial type 'std::once_flag'" 
> or
> "error : 'Yarn:Internal::Once' cannot be thread-local because it has 
> non-trivial type 'std::once_flag'"
> I found that GCC's support for C++11 is not good, leading to this error.
> So I installed gcc 4.8.5 / gcc-c++ 4.8.5, and the problem was resolved. 
> gcc / gcc-c++ version 4.7.2 is too low to install hawq, I suggest 
> updating the website about gcc / gcc-c++ version requirement.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] incubator-hawq issue #1243: HAWQ-1458. Fix share input scan bug for writer p...

2017-05-24 Thread huor
Github user huor commented on the issue:

https://github.com/apache/incubator-hawq/pull/1243
  
+1 for the improvement




[GitHub] incubator-hawq pull request #1243: HAWQ-1458. Fix share input scan bug for w...

2017-05-24 Thread huor
Github user huor commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1243#discussion_r118204222
  
--- Diff: src/backend/executor/nodeMaterial.c ---
@@ -41,14 +41,16 @@
 #include "postgres.h"
--- End diff --

lock would be better




[jira] [Resolved] (HAWQ-1471) "Build and Install" under Red Hat 6.X environment, gcc / gcc-c++ version 4.7.2 is too low to install hawq

2017-05-24 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo resolved HAWQ-1471.

   Resolution: Fixed
 Assignee: Paul Guo  (was: Ed Espino)
Fix Version/s: 2.3.0.0-incubating






[jira] [Comment Edited] (HAWQ-1471) "Build and Install" under Red Hat 6.X environment, gcc / gcc-c++ version 4.7.2 is too low to install hawq

2017-05-24 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16022567#comment-16022567
 ] 

Paul Guo edited comment on HAWQ-1471 at 5/24/17 8:59 AM:
-

Modified the wiki page. So we could close this issue.


was (Author: paul guo):
Modified. So we could close this issue.



