[GitHub] incubator-hawq issue #972: HAWQ-1108 Add JDBC PXF Plugin

2017-03-17 Thread jiadexin
Github user jiadexin commented on the issue:

https://github.com/apache/incubator-hawq/pull/972
  
closes #972




[GitHub] incubator-hawq pull request #972: HAWQ-1108 Add JDBC PXF Plugin

2017-03-17 Thread jiadexin
Github user jiadexin closed the pull request at:

https://github.com/apache/incubator-hawq/pull/972




[GitHub] incubator-hawq issue #972: HAWQ-1108 Add JDBC PXF Plugin

2016-11-02 Thread jiadexin
Github user jiadexin commented on the issue:

https://github.com/apache/incubator-hawq/pull/972
  
@shivzone If there are no partitions, there is only one fragment.
If there are partitions, scheduling support from the HAWQ master is required.

HDFS is distributed, and pxf-hdfs can allocate hosts for each fragment via
the HDFS file metadata.
pxf-jdbc, by contrast, mainly integrates the traditional relational databases
an enterprise already runs; these are generally stand-alone systems and may
not have the PXF engine deployed.
In the current HAWQ PXF engine implementation, a PXF instance cannot discover
all PXF hosts, so only the current host name can be assigned.
If the hosts are configured in the LOCATION clause of the DDL, the DDL
statement becomes extremely long when many PXF hosts are used, and it is not
flexible.
I think the HAWQ master should support a scheduling strategy: when a
fragment's host name is null, the HAWQ master automatically assigns a HAWQ
segment host.
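
To illustrate the limitation described above (a PXF instance that only knows
its own host), here is a minimal, hypothetical Java sketch; the plugin does
import java.net.InetAddress, but the class and method names below are
invented for illustration:

    import java.net.InetAddress;
    import java.net.UnknownHostException;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch: a fragmenter that can only discover its own host.
    public class LocalHostFragmentSketch {
        // One pseudo-fragment per partition, each pinned to the local host,
        // because a PXF instance cannot enumerate its peer hosts.
        public static List<String[]> fragmentsFor(int partitionCount)
                throws UnknownHostException {
            String localHost = InetAddress.getLocalHost().getHostName();
            List<String[]> fragments = new ArrayList<String[]>();
            int count = Math.max(partitionCount, 1); // no partitions => one fragment
            for (int i = 0; i < count; i++) {
                // A null host here is what a master-side scheduling strategy
                // could fill in with any HAWQ segment host; today only the
                // local host name is known.
                fragments.add(new String[]{"fragment-" + i, localHost});
            }
            return fragments;
        }

        public static void main(String[] args) throws UnknownHostException {
            for (String[] f : fragmentsFor(0)) {
                System.out.println(f[0] + " -> " + f[1]);
            }
        }
    }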




[GitHub] incubator-hawq pull request #972: HAWQ-1108 Add JDBC PXF Plugin

2016-11-02 Thread jiadexin
Github user jiadexin commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/972#discussion_r86278188
  
--- Diff: 
pxf/pxf-jdbc/src/main/java/org/apache/hawq/pxf/plugins/jdbc/JdbcPartitionFragmenter.java
 ---
@@ -0,0 +1,297 @@
+package org.apache.hawq.pxf.plugins.jdbc;
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+import org.apache.hawq.pxf.api.Fragmenter;
+import org.apache.hawq.pxf.api.FragmentsStats;
+import org.apache.hawq.pxf.api.UserDataException;
+import org.apache.hawq.pxf.plugins.jdbc.utils.DbProduct;
+import org.apache.hawq.pxf.plugins.jdbc.utils.ByteUtil;
+import org.apache.hawq.pxf.api.Fragment;
+import org.apache.hawq.pxf.api.utilities.InputData;
+
+import java.net.InetAddress;
+import java.text.SimpleDateFormat;
+import java.util.*;
+
+/**
+ * Fragmenter class for JDBC data resources.
+ *
+ * Extends the {@link Fragmenter} abstract class, with the purpose of
+ * transforming an input data path (a JDBC database table name and user
+ * request parameters) into a list of regions that belong to this table.
+ *
+ * The parameter patterns:
+ * There are three parameters, with the following format:
+ *
+ * PARTITION_BY=column_name:column_type&RANGE=start_value[:end_value]&INTERVAL=interval_num[:interval_unit]
+ *
+ * The PARTITION_BY parameter is split by a colon (':'); the currently
+ * supported column_type values are: date, int, enum.
+ * The date format is 'yyyy-MM-dd'.
+ * The RANGE parameter is split by a colon (':') and identifies the starting
+ * range of each fragment. The range is left-closed, i.e.
+ * '>= start_value AND < end_value'. If the column_type is int, the end_value
+ * can be empty. If the column_type is enum, the RANGE parameter can be empty.
+ * The INTERVAL parameter is split by a colon (':') and indicates the interval
+ * value of one fragment. When column_type is date, this parameter must be
+ * split by a colon, and interval_unit can be year, month, or day. When
+ * column_type is int, the interval_unit can be empty. When column_type is
+ * enum, the INTERVAL parameter can be empty.
+ *
+ * Syntax examples:
+ * PARTITION_BY=createdate:date&RANGE=2008-01-01:2010-01-01&INTERVAL=1:month
+ * PARTITION_BY=year:int&RANGE=2008:2010&INTERVAL=1
+ * PARTITION_BY=grade:enum&RANGE=excellent:good:general:bad
+ */
+public class JdbcPartitionFragmenter extends Fragmenter {
+    String[] partitionBy = null;
+    String[] range = null;
+    String[] interval = null;
+    PartitionType partitionType = null;
+    String partitionColumn = null;
+    IntervalType intervalType = null;
+    int intervalNum = 1;
+
+    enum PartitionType {
--- End diff --

At present these three commonly used kinds are supported; other types can
also be added in the future.
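
To make the parameter pattern in the javadoc above concrete, here is a small
self-contained sketch of splitting the three user properties on ':' as
documented; the class and sample values are illustrative only, not the
plugin's actual parser:

    // Minimal sketch of splitting the documented partition parameters.
    public class PartitionParamSketch {
        enum PartitionType { DATE, INT, ENUM }

        public static void main(String[] args) {
            // PARTITION_BY=createdate:date&RANGE=2008-01-01:2010-01-01&INTERVAL=1:month
            String partitionBy = "createdate:date";
            String range = "2008-01-01:2010-01-01";
            String interval = "1:month";

            String[] byParts = partitionBy.split(":");
            String column = byParts[0];
            PartitionType type = PartitionType.valueOf(byParts[1].toUpperCase());

            String[] rangeParts = range.split(":");       // [start, end]; end optional for int
            String[] intervalParts = interval.split(":"); // [num, unit]; unit only needed for date

            System.out.printf("column=%s type=%s start=%s end=%s num=%s unit=%s%n",
                    column, type,
                    rangeParts[0], rangeParts.length > 1 ? rangeParts[1] : "",
                    intervalParts[0], intervalParts.length > 1 ? intervalParts[1] : "");
        }
    }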




[GitHub] incubator-hawq pull request #972: HAWQ-1108 Add JDBC PXF Plugin

2016-10-31 Thread jiadexin
Github user jiadexin commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/972#discussion_r85867311
  
--- Diff: 
pxf/pxf-jdbc/src/main/java/org/apache/hawq/pxf/plugins/jdbc/utils/ByteUtil.java 
---
@@ -0,0 +1,86 @@
+package org.apache.hawq.pxf.plugins.jdbc.utils;
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+
+/**
+ * A tool class, used to deal with byte array merging, split and other 
methods.
+ */
+public class ByteUtil {
+
+public static byte[] mergeBytes(byte[] b1, byte[] b2) {
--- End diff --

This method is simple; I do not want to pull in a dependency just for it.
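
For context, a helper like this stays small without pulling in commons-lang
or Guava; a minimal sketch of what a mergeBytes of this shape might look like
(ByteUtilSketch is an invented name, assuming this is roughly the ByteUtil
behavior):

    import java.util.Arrays;

    // Sketch of a dependency-free byte-array merge, roughly the shape a
    // ByteUtil helper needs for packing fragment metadata.
    public class ByteUtilSketch {
        public static byte[] mergeBytes(byte[] b1, byte[] b2) {
            byte[] merged = Arrays.copyOf(b1, b1.length + b2.length);
            System.arraycopy(b2, 0, merged, b1.length, b2.length);
            return merged;
        }

        public static void main(String[] args) {
            byte[] merged = mergeBytes(new byte[]{1, 2}, new byte[]{3, 4});
            System.out.println(Arrays.toString(merged)); // [1, 2, 3, 4]
        }
    }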




[GitHub] incubator-hawq pull request #972: HAWQ-1108 Add JDBC PXF Plugin

2016-10-31 Thread jiadexin
Github user jiadexin commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/972#discussion_r85867986
  
--- Diff: 
pxf/pxf-jdbc/src/main/java/org/apache/hawq/pxf/plugins/jdbc/utils/ByteUtil.java 
---
@@ -0,0 +1,86 @@
+package org.apache.hawq.pxf.plugins.jdbc.utils;
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+
+/**
+ * A tool class, used to deal with byte array merging, split and other 
methods.
+ */
+public class ByteUtil {
+
+public static byte[] mergeBytes(byte[] b1, byte[] b2) {
--- End diff --

This method is simple; I do not want to pull in a dependency just for it.




[GitHub] incubator-hawq pull request #972: HAWQ-1108 Add JDBC PXF Plugin

2016-10-30 Thread jiadexin
Github user jiadexin commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/972#discussion_r85683989
  
--- Diff: 
pxf/pxf-jdbc/src/test/java/org/apache/hawq/pxf/plugins/jdbc/JdbcMySqlExtensionTest.java
 ---
@@ -0,0 +1,303 @@
+package org.apache.hawq.pxf.plugins.jdbc;
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+import com.sun.org.apache.xml.internal.utils.StringComparable;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hawq.pxf.api.FilterParser;
+import org.apache.hawq.pxf.api.Fragment;
+import org.apache.hawq.pxf.api.OneField;
+import org.apache.hawq.pxf.api.OneRow;
+import org.apache.hawq.pxf.api.utilities.ColumnDescriptor;
+import org.apache.hawq.pxf.api.utilities.InputData;
+import org.apache.hawq.pxf.api.io.DataType;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.text.SimpleDateFormat;
+import java.util.*;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+public class JdbcMySqlExtensionTest {
+    private static final Log LOG = LogFactory.getLog(JdbcMySqlExtensionTest.class);
+    static String MYSQL_URL = "jdbc:mysql://localhost:3306/demodb";
--- End diff --

I renamed JdbcMySqlExtensionTest to SqlBuilderTest.
It validates the SQL strings generated by the
JdbcPartitionFragmenter.buildFragmenterSql and WhereSQLBuilder.buildWhereSQL
methods.
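
A test in that style can assert directly on the generated SQL strings. The
sketch below is illustrative only; buildRangeClause is a local stand-in and
not the API from this pull request:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Illustrates the string-assertion style such a SQL-builder test can use.
    public class SqlBuilderStyleTest {
        // Stand-in for the fragmenter's range SQL; left-closed, as the
        // JdbcPartitionFragmenter javadoc describes.
        static String buildRangeClause(String column, String start, String end) {
            return column + " >= '" + start + "' AND " + column + " < '" + end + "'";
        }

        @Test
        public void rangeClauseIsLeftClosed() {
            assertEquals("createdate >= '2008-01-01' AND createdate < '2008-02-01'",
                    buildRangeClause("createdate", "2008-01-01", "2008-02-01"));
        }
    }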




[GitHub] incubator-hawq issue #972: HAWQ-1108 Add JDBC PXF Plugin

2016-10-30 Thread jiadexin
Github user jiadexin commented on the issue:

https://github.com/apache/incubator-hawq/pull/972
  
@hornn Thank you for your suggestions; the original code was somewhat careless
in places.
I have replied to some of the recommendations; the others have already been
changed in accordance with your recommendations.




[GitHub] incubator-hawq pull request #972: HAWQ-1108 Add JDBC PXF Plugin

2016-10-30 Thread jiadexin
Github user jiadexin commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/972#discussion_r85678597
  
--- Diff: 
pxf/pxf-jdbc/src/main/java/org/apache/hawq/pxf/plugins/jdbc/JdbcPartitionFragmenter.java
 ---
@@ -0,0 +1,284 @@
+package org.apache.hawq.pxf.plugins.jdbc;
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+import org.apache.hawq.pxf.api.Fragmenter;
+import org.apache.hawq.pxf.api.FragmentsStats;
+import org.apache.hawq.pxf.plugins.jdbc.utils.DbProduct;
+import org.apache.hawq.pxf.plugins.jdbc.utils.ByteUtil;
+import org.apache.hawq.pxf.api.Fragment;
+import org.apache.hawq.pxf.api.utilities.InputData;
+
+import java.net.InetAddress;
+import java.text.SimpleDateFormat;
+import java.util.*;
+
+
+/**
+ * Fragmenter class for JDBC data resources.
+ *
+ * Extends the {@link Fragmenter} abstract class, with the purpose of
+ * transforming an input data path (a JDBC database table name and user
+ * request parameters) into a list of regions that belong to this table.
+ *
+ * The parameter patterns:
+ * There are three parameters, with the following format:
+ *
+ * PARTITION_BY=column_name:column_type&RANGE=start_value[:end_value]&INTERVAL=interval_num[:interval_unit]
+ *
+ * The PARTITION_BY parameter is split by a colon (':'); the currently
+ * supported column_type values are: date, int, enum.
+ * The date format is 'yyyy-MM-dd'.
+ * The RANGE parameter is split by a colon (':') and identifies the starting
+ * range of each fragment. The range is left-closed, i.e.
+ * '>= start_value AND < end_value'. If the column_type is int, the end_value
+ * can be empty. If the column_type is enum, the RANGE parameter can be empty.
+ * The INTERVAL parameter is split by a colon (':') and indicates the interval
+ * value of one fragment. When column_type is date, this parameter must be
+ * split by a colon, and interval_unit can be year, month, or day. When
+ * column_type is int, the interval_unit can be empty. When column_type is
+ * enum, the INTERVAL parameter can be empty.
+ *
+ * Syntax examples:
+ * PARTITION_BY=createdate:date&RANGE=2008-01-01:2010-01-01&INTERVAL=1:month
+ * PARTITION_BY=year:int&RANGE=2008:2010&INTERVAL=1
+ * PARTITION_BY=grade:enum&RANGE=excellent:good:general:bad
+ */
+public class JdbcPartitionFragmenter extends Fragmenter {
+    String[] partition_by = null;
+    String[] range = null;
+    String[] interval = null;
+    PartitionType partitionType = null;
+    String partitionColumn = null;
+    IntervalType intervalType = null;
+    int intervalNum = 1;
+
+    enum PartitionType {
+        DATE,
+        INT,
+        ENUM;
+
+        public static PartitionType getType(String str) {
+            return valueOf(str.toUpperCase());
+        }
+    }
+
+    enum IntervalType {
+        DAY,
+        MONTH,
+        YEAR;
+
+        public static IntervalType type(String str) {
+            return valueOf(str.toUpperCase());
+        }
+    }
+
+    // The unit interval, in milliseconds, used to estimate the number of
+    // slices for the date partition type.
+    static Map<IntervalType, Long> intervals = new HashMap<IntervalType, Long>();
+
+    static {
+        intervals.put(IntervalType.DAY, (long) 24 * 60 * 60 * 1000);
+        intervals.put(IntervalType.MONTH, (long) 30 * 24 * 60 * 60 * 1000); // 30 days
+        intervals.put(IntervalType.YEAR, (long) 365 * 24 * 60 * 60 * 1000); // 365 days
+    }
+
+    /**
+     * Constructor for JdbcPartitionFragmenter.
+     *
+     * @param inConf input data such as which Jdbc table to scan
+     * @throws JdbcFragmentException
+     */
+    public JdbcPartitionFragmenter(InputData inConf) throws JdbcFragmentException {
+        super(inConf);
+        if (inConf.getUserProperty(&quo

[GitHub] incubator-hawq pull request #972: HAWQ-1108 Add JDBC PXF Plugin

2016-10-30 Thread jiadexin
Github user jiadexin commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/972#discussion_r85677162
  
--- Diff: 
pxf/pxf-jdbc/src/test/java/org/apache/hawq/pxf/plugins/jdbc/JdbcMySqlExtensionTest.java
 ---
@@ -0,0 +1,303 @@
+package org.apache.hawq.pxf.plugins.jdbc;
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+import com.sun.org.apache.xml.internal.utils.StringComparable;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hawq.pxf.api.FilterParser;
+import org.apache.hawq.pxf.api.Fragment;
+import org.apache.hawq.pxf.api.OneField;
+import org.apache.hawq.pxf.api.OneRow;
+import org.apache.hawq.pxf.api.utilities.ColumnDescriptor;
+import org.apache.hawq.pxf.api.utilities.InputData;
+import org.apache.hawq.pxf.api.io.DataType;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.text.SimpleDateFormat;
+import java.util.*;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+public class JdbcMySqlExtensionTest {
+    private static final Log LOG = LogFactory.getLog(JdbcMySqlExtensionTest.class);
+    static String MYSQL_URL = "jdbc:mysql://localhost:3306/demodb";
--- End diff --

This test verifies the correctness of the generated SQL, which is important.
If we want to keep it in the project, how should we do that?




[GitHub] incubator-hawq pull request #972: HAWQ-1108 Add JDBC PXF Plugin

2016-10-30 Thread jiadexin
Github user jiadexin commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/972#discussion_r85677094
  
--- Diff: 
pxf/pxf-jdbc/src/test/java/org/apache/hawq/pxf/plugins/jdbc/JdbcFilterBuilderTest.java
 ---
@@ -0,0 +1,81 @@
+package org.apache.hawq.pxf.plugins.jdbc;
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+
+import org.apache.hawq.pxf.api.BasicFilter;
+import org.apache.hawq.pxf.api.FilterParser.LogicalOperation;
+import org.apache.hawq.pxf.api.LogicalFilter;
+import org.junit.Test;
+
+import static org.apache.hawq.pxf.api.FilterParser.Operation.*;
+import static org.junit.Assert.assertEquals;
+
+public class JdbcFilterBuilderTest {
+    @Test
+    public void parseFilterWithThreeOperations() throws Exception {
+        // original sql => (cdate > '2008-02-01' and cdate < '2008-12-01') or amt > 1200
+        //filterstr="a1c\"first\"o5a2c2o2l0";//col_1=first and col_2=2
+        String filterstr = "a1c\"2008-02-01\"o2a1c\"2008-12-01\"o1l0a2c1200o2l1"; // col_1 > '2008-02-01' and col_1 < '2008-12-01' or col_2 > 1200
+        JdbcFilterBuilder builder = new JdbcFilterBuilder();
+
+        LogicalFilter filterList = (LogicalFilter) builder.getFilterObject(filterstr);
+        assertEquals(LogicalOperation.HDOP_OR, filterList.getOperator());
+        LogicalFilter l1_left = (LogicalFilter) filterList.getFilterList().get(0);
+        BasicFilter l1_right = (BasicFilter) filterList.getFilterList().get(1);
+        // column_2 > 1200
+        assertEquals(2, l1_right.getColumn().index());
+        assertEquals(HDOP_GT, l1_right.getOperation());
+        assertEquals(1200L, l1_right.getConstant().constant());
+
+        assertEquals(LogicalOperation.HDOP_AND, l1_left.getOperator());
+        BasicFilter l2_left = (BasicFilter) l1_left.getFilterList().get(0);
+        BasicFilter l2_right = (BasicFilter) l1_left.getFilterList().get(1);
+
+        // column_1 > '2008-02-01'
+        assertEquals(1, l2_left.getColumn().index());
+        assertEquals(HDOP_GT, l2_left.getOperation());
+        assertEquals("2008-02-01", l2_left.getConstant().constant());
+
+        // column_1 < '2008-12-01'
+        assertEquals(1, l2_right.getColumn().index());
+        assertEquals(HDOP_LT, l2_right.getOperation());
+        assertEquals("2008-12-01", l2_right.getConstant().constant());
+    }
+
+    @Test
+    public void parseFilterWithLogicalOperation() throws Exception {
+        WhereSQLBuilder builder = new WhereSQLBuilder(null);
--- End diff --

WhereSQLBuilder is used to build the SQL statement; it is tested through
JdbcMySqlExtensionTest.
Is there another way to test the correctness of the SQL statement?
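
One common alternative to requiring a local MySQL server is to run the
generated SQL against an embedded database. A sketch assuming the H2 driver
is on the test classpath (H2 is not currently a dependency of this project,
and the WHERE clause below is a stand-in for WhereSQLBuilder output):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Sketch: validate a generated WHERE clause against an in-memory H2
    // database instead of a live MySQL instance.
    public class EmbeddedSqlCheck {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demodb");
                 Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE sales (cdate DATE, amt INT)");
                st.execute("INSERT INTO sales VALUES ('2008-03-01', 1500)");
                // stand-in for a WhereSQLBuilder.buildWhereSQL result
                String where = "cdate > '2008-02-01' AND cdate < '2008-12-01' OR amt > 1200";
                try (ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM sales WHERE " + where)) {
                    rs.next();
                    System.out.println("matching rows: " + rs.getInt(1)); // expect 1
                }
            }
        }
    }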




[GitHub] incubator-hawq pull request #972: HAWQ-1108 Add JDBC PXF Plugin

2016-10-30 Thread jiadexin
Github user jiadexin commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/972#discussion_r85676673
  
--- Diff: 
pxf/pxf-jdbc/src/main/java/org/apache/hawq/pxf/plugins/jdbc/JdbcPartitionFragmenter.java
 ---
@@ -0,0 +1,284 @@
+package org.apache.hawq.pxf.plugins.jdbc;
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+import org.apache.hawq.pxf.api.Fragmenter;
+import org.apache.hawq.pxf.api.FragmentsStats;
+import org.apache.hawq.pxf.plugins.jdbc.utils.DbProduct;
+import org.apache.hawq.pxf.plugins.jdbc.utils.ByteUtil;
+import org.apache.hawq.pxf.api.Fragment;
+import org.apache.hawq.pxf.api.utilities.InputData;
+
+import java.net.InetAddress;
+import java.text.SimpleDateFormat;
+import java.util.*;
+
+
+/**
+ * Fragmenter class for JDBC data resources.
+ *
+ * Extends the {@link Fragmenter} abstract class, with the purpose of
+ * transforming an input data path (a JDBC database table name and user
+ * request parameters) into a list of regions that belong to this table.
+ *
+ * The parameter patterns:
+ * There are three parameters, with the following format:
+ *
+ * PARTITION_BY=column_name:column_type&RANGE=start_value[:end_value]&INTERVAL=interval_num[:interval_unit]
+ *
+ * The PARTITION_BY parameter is split by a colon (':'); the currently
+ * supported column_type values are: date, int, enum.
+ * The date format is 'yyyy-MM-dd'.
+ * The RANGE parameter is split by a colon (':') and identifies the starting
+ * range of each fragment. The range is left-closed, i.e.
+ * '>= start_value AND < end_value'. If the column_type is int, the end_value
+ * can be empty. If the column_type is enum, the RANGE parameter can be empty.
+ * The INTERVAL parameter is split by a colon (':') and indicates the interval
+ * value of one fragment. When column_type is date, this parameter must be
+ * split by a colon, and interval_unit can be year, month, or day. When
+ * column_type is int, the interval_unit can be empty. When column_type is
+ * enum, the INTERVAL parameter can be empty.
+ *
+ * Syntax examples:
+ * PARTITION_BY=createdate:date&RANGE=2008-01-01:2010-01-01&INTERVAL=1:month
+ * PARTITION_BY=year:int&RANGE=2008:2010&INTERVAL=1
+ * PARTITION_BY=grade:enum&RANGE=excellent:good:general:bad
+ */
+public class JdbcPartitionFragmenter extends Fragmenter {
+    String[] partition_by = null;
+    String[] range = null;
+    String[] interval = null;
+    PartitionType partitionType = null;
+    String partitionColumn = null;
+    IntervalType intervalType = null;
+    int intervalNum = 1;
+
+    enum PartitionType {
+        DATE,
+        INT,
+        ENUM;
+
+        public static PartitionType getType(String str) {
+            return valueOf(str.toUpperCase());
+        }
+    }
+
+    enum IntervalType {
+        DAY,
+        MONTH,
+        YEAR;
+
+        public static IntervalType type(String str) {
+            return valueOf(str.toUpperCase());
+        }
+    }
+
+    // The unit interval, in milliseconds, used to estimate the number of
+    // slices for the date partition type.
+    static Map<IntervalType, Long> intervals = new HashMap<IntervalType, Long>();
+
+    static {
+        intervals.put(IntervalType.DAY, (long) 24 * 60 * 60 * 1000);
+        intervals.put(IntervalType.MONTH, (long) 30 * 24 * 60 * 60 * 1000); // 30 days
+        intervals.put(IntervalType.YEAR, (long) 365 * 24 * 60 * 60 * 1000); // 365 days
+    }
+
+    /**
+     * Constructor for JdbcPartitionFragmenter.
+     *
+     * @param inConf input data such as which Jdbc table to scan
+     * @throws JdbcFragmentException
+     */
+    public JdbcPartitionFragmenter(InputData inConf) throws JdbcFragmentException {
+        super(inConf);
+        if (inConf.getUserProperty("PARTI

[GitHub] incubator-hawq pull request #972: HAWQ-1108 Add JDBC PXF Plugin

2016-10-25 Thread jiadexin
GitHub user jiadexin opened a pull request:

https://github.com/apache/incubator-hawq/pull/972

HAWQ-1108 Add JDBC PXF Plugin

The PXF JDBC plug-in reads data stored in traditional relational databases,
e.g. MySQL, Oracle, and PostgreSQL.
For more information, please refer to:
https://github.com/inspur-insight/incubator-hawq/blob/HAWQ-1108/pxf/pxf-jdbc/README.md

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/inspur-insight/incubator-hawq HAWQ-1108

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/972.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #972


commit 2cc75e672d01926ef97d7c50485f4979d4866b3c
Author: Devin Jia <ji...@inspur.com>
Date:   2016-10-18T08:08:50Z

Merge pull request #1 from apache/master

re fork

commit 10f68af5ade550b6c24abe371fff4a40349829b3
Author: Devin Jia <ji...@inspur.com>
Date:   2016-10-25T06:31:07Z

the first commit

commit 5a814211ecf8f8f1e7d1487bdda33c3b72f1b990
Author: Devin Jia <ji...@inspur.com>
Date:   2016-10-25T07:10:12Z

modify parent pxf build.gradle






[GitHub] incubator-hawq pull request #837: HAWQ-779 support pxf filter pushdown at th...

2016-09-18 Thread jiadexin
Github user jiadexin closed the pull request at:

https://github.com/apache/incubator-hawq/pull/837




[GitHub] incubator-hawq pull request #837: HAWQ-779 support pxf filter pushdown at th...

2016-09-17 Thread jiadexin
Github user jiadexin commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/837#discussion_r79295676
  
--- Diff: src/backend/optimizer/plan/createplan.c ---
@@ -1144,9 +1144,15 @@ static char** create_pxf_plan(char **segdb_file_map, RelOptInfo *rel, int total_
 
 	Relation relation = RelationIdGetRelation(planner_rt_fetch(scan_relid, ctx->root)->relid);
-	segdb_work_map = map_hddata_2gp_segments(uri_str,
+	if (pxf_enable_filter_pushdown){
--- End diff --

You are right.
My C skills are limited, so...




[GitHub] incubator-hawq pull request #837: HAWQ-779 support pxf filter pushdown at th...

2016-09-17 Thread jiadexin
Github user jiadexin commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/837#discussion_r79295630
  
--- Diff: src/backend/optimizer/plan/createplan.c ---
@@ -1144,9 +1144,15 @@ static char** create_pxf_plan(char **segdb_file_map, RelOptInfo *rel, int total_
 
 	Relation relation = RelationIdGetRelation(planner_rt_fetch(scan_relid, ctx->root)->relid);
-	segdb_work_map = map_hddata_2gp_segments(uri_str,
+	if (pxf_enable_filter_pushdown){
--- End diff --

You are right.
My C skills are limited, so...




[GitHub] incubator-hawq pull request #837: HAWQ-779 support pxf filter pushdown at th...

2016-08-28 Thread jiadexin
Github user jiadexin commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/837#discussion_r76547017
  
--- Diff: 
pxf/pxf-hbase/src/main/java/org/apache/hawq/pxf/plugins/hbase/HBaseFilterBuilder.java
 ---
@@ -165,6 +165,14 @@ private Filter handleSimpleOperations(FilterParser.Operation opId,
         ByteArrayComparable comparator = getComparator(hbaseColumn.columnTypeCode(),
                 constant.constant());
 
+        if (operatorsMap.get(opId) == null) {
+            // HBase does not support HDOP_LIKE; use a 'NOT NULL' comparator
--- End diff --

Should I develop an HBase LIKE filter?




[GitHub] incubator-hawq pull request #837: HAWQ-779 support pxf filter pushdown at th...

2016-08-28 Thread jiadexin
Github user jiadexin commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/837#discussion_r76546715
  
--- Diff: 
pxf/pxf-api/src/test/java/org/apache/hawq/pxf/api/FilterParserTest.java ---
@@ -215,6 +215,10 @@ public void parseColumnOnLeft() throws Exception {
         filter = "a1c2o7";
         op = Operation.HDOP_AND;
         runParseOneOperation("this filter was build from HDOP_AND", filter, op);
+
+        filter = "a1c2o8";
+        op = Operation.HDOP_LIKE;
+        runParseOneOperation("this filter was build from HDOP_LIKE", filter, op);
--- End diff --

This was a leftover reference to the previous code _`runParseOneOperation("this
filter was build from HDOP_AND", filter, op)`_; it has been corrected.




[GitHub] incubator-hawq pull request #837: HAWQ-779 support pxf filter pushdown at th...

2016-08-26 Thread jiadexin
Github user jiadexin commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/837#discussion_r76368428
  
--- Diff: src/backend/optimizer/plan/createplan.c ---
@@ -1146,7 +1146,7 @@ static char** create_pxf_plan(char **segdb_file_map, RelOptInfo *rel, int total_
 	Relation relation = RelationIdGetRelation(planner_rt_fetch(scan_relid, ctx->root)->relid);
 	segdb_work_map = map_hddata_2gp_segments(uri_str,
 	                                         total_segs, segs_participating,
-	                                         relation, NULL);
+	                                         relation, ctx->root->parse->jointree->quals);
--- End diff --

ok.




[GitHub] incubator-hawq pull request #837: HAWQ-779 support pxf filter pushdown at th...

2016-08-11 Thread jiadexin
Github user jiadexin commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/837#discussion_r74380940
  
--- Diff: src/backend/access/external/test/pxffilters_test.c ---
@@ -61,7 +62,7 @@ test__supported_filter_type(void **state)
 
/* go over pxf_supported_types array */
int nargs = sizeof(pxf_supported_types) / sizeof(Oid);
-   assert_int_equal(nargs, 12);
+   assert_int_equal(nargs, 13);
--- End diff --

This test checks the number of entries in pxf_supported_types; its old value
was also hard-coded (12), and after the addition of DATEOID it becomes 13.




[GitHub] incubator-hawq pull request #820: HAWQ-953 hawq pxf-hive support partition c...

2016-08-09 Thread jiadexin
Github user jiadexin closed the pull request at:

https://github.com/apache/incubator-hawq/pull/820




[GitHub] incubator-hawq pull request #837: HAWQ-779 support pxf filter pushdown at th...

2016-08-05 Thread jiadexin
GitHub user jiadexin opened a pull request:

https://github.com/apache/incubator-hawq/pull/837

HAWQ-779 support pxf filter pushdown at the 'CREATE PLAN' stage, and …

1. Support PXF filter pushdown at the 'CREATE PLAN' stage --
src/backend/optimizer/plan/createplan.c
2. Because change 1 triggers the HAWQ-953 error, modify HiveDataFragmenter.java.
3. Add a 'Date type' filter and the 'HDOP_LIKE' op -- pxffilters.h,
pxffilters.c, FilterParser.java -- and update the corresponding tests --
FilterParserTest.java, pxffilters_test.c.
4. Because of change '2', modify HBaseFilterBuilder.java to handle the
'HDOP_LIKE' op.
5. The 'Date filter' and 'HDOP_LIKE' are used in pxf-solr/pxf-jdbc
(https://github.com/inspur-insight/pxf-plugin).
6. Through this amendment, I think the 'PXF Filter' architecture is too
tightly coupled: I just wanted to add new types and ops but had to modify
other components. I hope the architecture can be improved.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/inspur-insight/incubator-hawq HAWQ-779

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/837.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #837


commit 3518b4af22af281909140bc011884420de540cc6
Author: Devin Jia <ji...@inspur.com>
Date:   2016-08-05T07:05:50Z

HAWQ-779 support pxf filter pushdown at the 'CREATE PLAN' stage, and more
filter types & ops






[GitHub] incubator-hawq pull request #819: HAWQ-953 pxf-hive only support partition c...

2016-07-27 Thread jiadexin
GitHub user jiadexin opened a pull request:

https://github.com/apache/incubator-hawq/pull/819

HAWQ-953 pxf-hive only supports partition column filter pushdown for columns
whose types are string

HAWQ-953 pxf-hive only supports partition column filter pushdown for columns
whose types are string

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/inspur-insight/incubator-hawq HAWQ-953

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/819.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #819


commit fb00bd27021fbabc98c1c940ebb890e974496500
Author: root <root@hmaster.(none)>
Date:   2016-06-07T01:16:06Z

HAWQ-779 Support more pxf filter pushdown

commit caa20039e73589112c48c20a3f78c4a8f7b1f2d6
Author: Devin Jia <ji...@inspur.com>
Date:   2016-06-08T01:04:08Z

HAWQ-779 support more pxf filter pushdown

commit cca9f8854a82783ca6afaa6530fd4004a6447c7d
Author: Devin Jia <ji...@inspur.com>
Date:   2016-06-08T01:04:08Z

HAWQ-779. support more pxf filter pushdown

commit 2df879d53dc29149077346190e4c38549ba6e72b
Author: Devin Jia <ji...@inspur.com>
Date:   2016-06-12T01:10:46Z

HAWQ-779. support more pxf filter pushdown (update FilterParserTest.java and
HBaseFilterBuilder.java to include HDOP_LIKE.)

commit 66717dccb05104c713ac30d4820c4d7005190f3a
Author: Devin Jia <ji...@inspur.com>
Date:   2016-06-12T01:23:58Z

Merge branch 'feature-pxf' of 
https://github.com/inspur-insight/incubator-hawq into feature-pxf

commit 6ed0e2b720057bd211dee0db2d13723171143738
Author: Devin Jia <ji...@inspur.com>
Date:   2016-06-12T01:32:57Z

HAWQ-779. support more pxf filter pushdown - update FilterParserTest.java
and HBaseFilterBuilder.java to include HDOP_LIKE.

commit 45eb5b8fcbda72aa6fc1b5e4dbce929c7f4f7501
Author: Devin Jia <ji...@inspur.com>
Date:   2016-06-12T01:39:47Z

HAWQ-779. support more pxf filter pushdown - update FilterParserTest.java
and HBaseFilterBuilder.java to include HDOP_LIKE.

commit 5fc6457408a239ff241f6a02d6739d232f210247
Author: Devin Jia <ji...@inspur.com>
Date:   2016-06-12T05:04:52Z

Merge remote branch 'upstream/master' into feature-pxf

commit 84cf8d268d6110c7120a82109f83febc0710b4fa
Author: Devin Jia <ji...@inspur.com>
Date:   2016-06-12T05:09:15Z

merge from origin/master.

commit a3cc461e5e1c71076f80c40195707f58dcb00377
Author: Devin Jia <ji...@inspur.com>
Date:   2016-06-12T05:13:13Z

Update hawq-site.xml

commit 0f0fd5ea92ffc6fcfec9023ff8c19d46c27b26d7
Author: Devin Jia <ji...@inspur.com>
Date:   2016-06-12T05:23:46Z

Merge pull request #1 from apache/master

merge from origin/master

commit 9e0f53ef3e208eee8f4ea2b6117f20a3b36e4f54
Author: Devin Jia <ji...@inspur.com>
Date:   2016-06-12T08:37:03Z

Merge pull request #2 from inspur-insight/master

Merge pull request #1 from apache/master

commit 1ffd32085265be2da912d551a0a37b871a4cdec3
Author: Devin Jia <ji...@inspur.com>
Date:   2016-06-12T08:38:43Z

Merge pull request #3 from inspur-insight/feature-pxf

Feature pxf

commit 134dc5752d150b688d6bf91372682f0fc15258a0
Author: Devin Jia <ji...@inspur.com>
Date:   2016-07-27T06:58:24Z

HAWQ-953 pxf-hive only supports partition column filter pushdown for columns
whose types are string






[GitHub] incubator-hawq pull request #695: HAWQ-779. support more pxf filter pushdown

2016-07-06 Thread jiadexin
Github user jiadexin closed the pull request at:

https://github.com/apache/incubator-hawq/pull/695




[GitHub] incubator-hawq issue #695: support more pxf filter pushdown

2016-06-12 Thread jiadexin
Github user jiadexin commented on the issue:

https://github.com/apache/incubator-hawq/pull/695
  
My local branch got into some confusion handling submodules marked for
deletion, and I do not know how to recover. This is the first PR I have
created on GitHub, so I am not familiar with the process.




[GitHub] incubator-hawq pull request #695: support more pxf filter pushdown

2016-06-11 Thread jiadexin
Github user jiadexin commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/695#discussion_r66714047
  
--- Diff: src/backend/access/external/pxffilters.c ---
@@ -125,7 +126,48 @@ dbop_pxfop_map pxf_supported_opr[] =
{1871 /* int82gt */, PXFOP_GT},
{1872 /* int82le */, PXFOP_LE},
{1873 /* int82ge */, PXFOP_GE},
-   {1869 /* int82ne */, PXFOP_NE}
+   {1869 /* int82ne */, PXFOP_NE},
+
+   /* FLOAT */
+   /* float4 */
+   {Float4EqualOperator  /* float4eq */, PXFOP_EQ},
+   {622  /* float4lt */, PXFOP_LT},
+   {623 /* float4gt */, PXFOP_GT},
+   {624 /* float4le */, PXFOP_LE},
+   {625 /* float4ge */, PXFOP_GE},
+   {621 /* float4ne */, PXFOP_NE},
+
+   /* float8 */
+   {Float8EqualOperator  /* float8eq */, PXFOP_EQ},
+   {672  /* float8lt */, PXFOP_LT},
+   {674 /* float8gt */, PXFOP_GT},
+   {673 /* float8le */, PXFOP_LE},
+   {675 /* float8ge */, PXFOP_GE},
+   {671 /* float8ne */, PXFOP_NE},
+
+   /* float48 */
--- End diff --

When using HAWQ PXF to read a relational database, FLOAT is a very common
type and is often used in conditional expressions, so I added the FLOAT
operators.




[GitHub] incubator-hawq pull request #695: support more pxf filter pushdown

2016-06-07 Thread jiadexin
GitHub user jiadexin opened a pull request:

https://github.com/apache/incubator-hawq/pull/695

support more pxf filter pushdown

https://issues.apache.org/jira/browse/HAWQ-779
Description
When I use the pxf hawq, I need to read a traditional relational database 
systems and solr by way of the external table. The project 
:https://github.com/Pivotal-Field-Engineering/pxf-field/tree/master/jdbc-pxf-ext,
 only "WriteAccessor ",so I developed 2 plug-ins, the projects: 
https://github.com/inspur-insight/pxf-plugin , But these two plug-ins need to 
modified HAWQ:
1. When get a list of fragment from pxf services, push down the 
'filterString'. modify the backend / optimizer / plan / createplan.c of 
create_pxf_plan methods:
segdb_work_map = map_hddata_2gp_segments (uri_str,
total_segs, segs_participating,
relation, ctx-> root-> parse-> jointree-> quals);
2. modify pxffilters.h and pxffilters.c, support TEXT types LIKE operation, 
Date type data operator, Float type operator.
3. Modify org.apache.hawq.pxf.api.FilterParser.java, support the LIKE 
operator.
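
As an illustration of point 3 (a sketch, not the actual patch), once the
parser recognizes a LIKE operation, a JDBC-side builder can translate it into
a SQL predicate directly; the class and method below are invented names:

    // Sketch: mapping a pushed-down LIKE operation onto a SQL predicate.
    public class LikePushdownSketch {
        static String toSql(String column, String op, String constant) {
            if ("HDOP_LIKE".equals(op)) {
                return column + " LIKE '" + constant + "'";
            }
            if ("HDOP_EQ".equals(op)) {
                return column + " = '" + constant + "'";
            }
            throw new IllegalArgumentException("unsupported op: " + op);
        }

        public static void main(String[] args) {
            System.out.println(toSql("name", "HDOP_LIKE", "dev%")); // name LIKE 'dev%'
        }
    }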

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/inspur-insight/incubator-hawq feature-pxf

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/695.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #695


commit fb00bd27021fbabc98c1c940ebb890e974496500
Author: root <root@hmaster.(none)>
Date:   2016-06-07T01:16:06Z

HAWQ-779 Support more pxf filter pushdown

commit caa20039e73589112c48c20a3f78c4a8f7b1f2d6
Author: Devin Jia <ji...@inspur.com>
Date:   2016-06-08T01:04:08Z

HAWQ-779 support more pxf filter pushdown






[GitHub] incubator-hawq pull request #:

2016-06-07 Thread jiadexin
Github user jiadexin commented on the pull request:


https://github.com/apache/incubator-hawq/commit/caa20039e73589112c48c20a3f78c4a8f7b1f2d6#commitcomment-17780830
  
HAWQ-779. support more pxf filter pushdown

