[GitHub] [flink] wanglijie95 commented on a change in pull request #12427: [FLINK-16681][jdbc] Fix the bug that jdbc lost connection after a lon…

2020-06-08 Thread GitBox


wanglijie95 commented on a change in pull request #12427:
URL: https://github.com/apache/flink/pull/12427#discussion_r436562574



##
File path: 
flink-connectors/flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/table/JdbcRowDataLookupFunctionTest.java
##
@@ -0,0 +1,232 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.jdbc.table;
+
+import org.apache.flink.connector.jdbc.JdbcTestFixture;
+import org.apache.flink.connector.jdbc.internal.options.JdbcLookupOptions;
+import org.apache.flink.connector.jdbc.internal.options.JdbcOptions;
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.RowType;
+import org.apache.flink.test.util.AbstractTestBase;
+import org.apache.flink.util.Collector;
+
+import org.apache.flink.shaded.guava18.com.google.common.collect.Lists;
+
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.stream.Collectors;
+
+import static org.junit.Assert.assertEquals;
+
+/**
+ * Test suite for {@link JdbcRowDataLookupFunction}.
+ */
+public class JdbcRowDataLookupFunctionTest extends AbstractTestBase {
+
+   public static final String DB_URL = "jdbc:derby:memory:lookup";
+   public static final String LOOKUP_TABLE = "lookup_table";
+   public static final String DB_DRIVER = "org.apache.derby.jdbc.EmbeddedDriver";
+
+   private static String[] fieldNames = new String[] {"id1", "id2", "comment1", "comment2"};
+   private static DataType[] fieldDataTypes = new DataType[] {
+   DataTypes.INT(),
+   DataTypes.STRING(),
+   DataTypes.STRING(),
+   DataTypes.STRING()
+   };
+
+   private static String[] lookupKeys = new String[] {"id1", "id2"};
+
+   @Before
+   public void before() throws ClassNotFoundException, SQLException {
+   System.setProperty("derby.stream.error.field", JdbcTestFixture.class.getCanonicalName() + ".DEV_NULL");
+
+   Class.forName(DB_DRIVER);
+   try (
+   Connection conn = DriverManager.getConnection(DB_URL + ";create=true");
+   Statement stat = conn.createStatement()) {
+   stat.executeUpdate("CREATE TABLE " + LOOKUP_TABLE + " (" +
+   "id1 INT NOT NULL DEFAULT 0," +
+   "id2 VARCHAR(20) NOT NULL," +
+   "comment1 VARCHAR(1000)," +
+   "comment2 VARCHAR(1000))");
+
+   Object[][] data = new Object[][] {
+   new Object[] {1, "1", "11-c1-v1", "11-c2-v1"},
+   new Object[] {1, "1", "11-c1-v2", "11-c2-v2"},
+   new Object[] {2, "3", null, "23-c2"},
+   new Object[] {2, "5", "25-c1", "25-c2"},
+   new Object[] {3, "8", "38-c1", "38-c2"}
+   };
+   boolean[] surroundedByQuotes = new boolean[] {
+   false, true, true, true
+   };
+
+   StringBuilder sqlQueryBuilder = new StringBuilder(
+   "INSERT INTO " + LOOKUP_TABLE + " (id1, id2, 
comment1, comment2) VALUES ");
+   for (int i = 0; i < data.length; i++) {
+   sqlQueryBuilder.append("(");
+   for (int j = 0; j < data[i].length; j++) {
+   if (data[i][j] == null) {
+   sqlQueryBuilder.append("null");
+   } else {
+  
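The quoted diff breaks off above. As a rough guide, here is a minimal sketch of how a multi-row INSERT builder like this is typically completed; the single-quoting of flagged columns and the final `executeUpdate` call are assumptions, not the PR's actual code:

```java
import java.sql.SQLException;
import java.sql.Statement;

/** Hypothetical helper showing how the truncated INSERT builder above is typically finished. */
final class InsertStatementSketch {

    // null becomes the SQL literal "null"; columns flagged in surroundedByQuotes
    // are wrapped in single quotes; all rows go into one multi-row INSERT.
    static void insertTestData(Statement stat, String table, String columns,
            Object[][] data, boolean[] surroundedByQuotes) throws SQLException {
        StringBuilder sql = new StringBuilder(
                "INSERT INTO " + table + " (" + columns + ") VALUES ");
        for (int i = 0; i < data.length; i++) {
            sql.append("(");
            for (int j = 0; j < data[i].length; j++) {
                if (data[i][j] == null) {
                    sql.append("null");
                } else if (surroundedByQuotes[j]) {
                    sql.append("'").append(data[i][j]).append("'");
                } else {
                    sql.append(data[i][j]);
                }
                if (j < data[i].length - 1) {
                    sql.append(", ");
                }
            }
            sql.append(i < data.length - 1 ? "), " : ")");
        }
        stat.executeUpdate(sql.toString());
    }
}
```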

[GitHub] [flink] wanglijie95 commented on a change in pull request #12427: [FLINK-16681][jdbc] Fix the bug that jdbc lost connection after a lon…

2020-06-08 Thread GitBox


wanglijie95 commented on a change in pull request #12427:
URL: https://github.com/apache/flink/pull/12427#discussion_r436560069



##
File path: 
flink-connectors/flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/JdbcLookupFunctionTest.java
##
@@ -0,0 +1,222 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.jdbc;
+
+import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.connector.jdbc.internal.options.JdbcLookupOptions;
+import org.apache.flink.connector.jdbc.internal.options.JdbcOptions;
+import org.apache.flink.connector.jdbc.table.JdbcLookupFunction;
+import org.apache.flink.table.functions.TableFunction;
+import org.apache.flink.test.util.AbstractTestBase;
+import org.apache.flink.types.Row;
+import org.apache.flink.util.Collector;
+
+import org.apache.flink.shaded.guava18.com.google.common.collect.Lists;
+
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.stream.Collectors;
+
+import static org.junit.Assert.assertEquals;
+
+/**
+ * Test suite for {@link JdbcLookupFunction}.
+ */
+public class JdbcLookupFunctionTest extends AbstractTestBase {
+
+   public static final String DB_URL = "jdbc:derby:memory:lookup";
+   public static final String LOOKUP_TABLE = "lookup_table";
+   public static final String DB_DRIVER = "org.apache.derby.jdbc.EmbeddedDriver";
+
+   private static String[] fieldNames = new String[] {"id1", "id2", "comment1", "comment2"};
+   private static TypeInformation[] fieldTypes = new TypeInformation[] {
+   BasicTypeInfo.INT_TYPE_INFO,
+   BasicTypeInfo.STRING_TYPE_INFO,
+   BasicTypeInfo.STRING_TYPE_INFO,
+   BasicTypeInfo.STRING_TYPE_INFO
+   };
+
+   private static String[] lookupKeys = new String[] {"id1", "id2"};
+
+   private TableFunction lookupFunction;
+
+   @Before
+   public void before() throws ClassNotFoundException, SQLException {
+   System.setProperty("derby.stream.error.field", JdbcTestFixture.class.getCanonicalName() + ".DEV_NULL");
+
+   Class.forName(DB_DRIVER);
+   try (
+   Connection conn = DriverManager.getConnection(DB_URL + ";create=true");
+   Statement stat = conn.createStatement()) {
+   stat.executeUpdate("CREATE TABLE " + LOOKUP_TABLE + " (" +
+   "id1 INT NOT NULL DEFAULT 0," +
+   "id2 VARCHAR(20) NOT NULL," +
+   "comment1 VARCHAR(1000)," +
+   "comment2 VARCHAR(1000))");
+
+   Object[][] data = new Object[][] {
+   new Object[] {1, "1", "11-c1-v1", "11-c2-v1"},
+   new Object[] {1, "1", "11-c1-v2", "11-c2-v2"},
+   new Object[] {2, "3", null, "23-c2"},
+   new Object[] {2, "5", "25-c1", "25-c2"},
+   new Object[] {3, "8", "38-c1", "38-c2"}
+   };
+   boolean[] surroundedByQuotes = new boolean[] {
+   false, true, true, true
+   };
+
+   StringBuilder sqlQueryBuilder = new StringBuilder(
+   "INSERT INTO " + LOOKUP_TABLE + " (id1, id2, 
comment1, comment2) VALUES ");
+   for (int i = 0; i < data.length; i++) {
+   sqlQueryBuilder.append("(");
+   for (int j = 0; j < data[i].length; j++) {
+   if (data[i][j] == null) {
+   sqlQueryBuilder.append("null");
+   } else {
+  

[GitHub] [flink] wanglijie95 commented on a change in pull request #12427: [FLINK-16681][jdbc] Fix the bug that jdbc lost connection after a lon…

2020-06-08 Thread GitBox


wanglijie95 commented on a change in pull request #12427:
URL: https://github.com/apache/flink/pull/12427#discussion_r436559207



##
File path: 
flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/internal/options/JdbcOptions.java
##
@@ -34,6 +34,8 @@
 
private static final long serialVersionUID = 1L;
 
+   public static final int CONNECTION_CHECK_TIMEOUT = 60;

Review comment:
   OK.
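   For context, the constant is the timeout, in seconds, passed to `java.sql.Connection#isValid(int)` when checking whether an idle connection is still alive. A minimal usage sketch (the wrapper class and method name below are illustrative only):

```java
import java.sql.Connection;
import java.sql.SQLException;

/** Illustrative wrapper; only the constant mirrors the diff above. */
final class ConnectionValiditySketch {

    // Timeout in seconds handed to Connection#isValid, as in the JdbcOptions diff.
    static final int CONNECTION_CHECK_TIMEOUT = 60;

    // True if the driver confirms the connection is still usable within the timeout.
    static boolean isUsable(Connection connection) throws SQLException {
        return connection != null && connection.isValid(CONNECTION_CHECK_TIMEOUT);
    }
}
```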





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wanglijie95 commented on a change in pull request #12427: [FLINK-16681][jdbc] Fix the bug that jdbc lost connection after a lon…

2020-06-08 Thread GitBox


wanglijie95 commented on a change in pull request #12427:
URL: https://github.com/apache/flink/pull/12427#discussion_r436558596



##
File path: 
flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/internal/executor/SimpleBatchStatementExecutor.java
##
@@ -30,24 +35,27 @@
 /**
 * A {@link JdbcBatchStatementExecutor} that executes supplied statement for given the records (without any pre-processing).
  */
-class SimpleBatchStatementExecutor implements JdbcBatchStatementExecutor {
+@Internal
+public class SimpleBatchStatementExecutor implements JdbcBatchStatementExecutor {

Review comment:
   Yes, you are right. I will change it.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wanglijie95 commented on a change in pull request #12427: [FLINK-16681][jdbc] Fix the bug that jdbc lost connection after a lon…

2020-06-08 Thread GitBox


wanglijie95 commented on a change in pull request #12427:
URL: https://github.com/apache/flink/pull/12427#discussion_r436558361



##
File path: 
flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/internal/executor/JdbcBatchStatementExecutor.java
##
@@ -34,7 +34,7 @@
/**
 * Open the writer by JDBC Connection. It can create Statement from Connection.

Review comment:
   Done.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wanglijie95 commented on a change in pull request #12427: [FLINK-16681][jdbc] Fix the bug that jdbc lost connection after a lon…

2020-06-08 Thread GitBox


wanglijie95 commented on a change in pull request #12427:
URL: https://github.com/apache/flink/pull/12427#discussion_r436521494



##
File path: 
flink-connectors/flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/table/JdbcRowDataLookupFunctionTest.java
##
@@ -0,0 +1,232 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.jdbc.table;
+
+import org.apache.flink.connector.jdbc.JdbcTestFixture;
+import org.apache.flink.connector.jdbc.internal.options.JdbcLookupOptions;
+import org.apache.flink.connector.jdbc.internal.options.JdbcOptions;
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.RowType;
+import org.apache.flink.test.util.AbstractTestBase;
+import org.apache.flink.util.Collector;
+
+import org.apache.flink.shaded.guava18.com.google.common.collect.Lists;
+
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.stream.Collectors;
+
+import static org.junit.Assert.assertEquals;
+
+/**
+ * Test suite for {@link JdbcRowDataLookupFunction}.
+ */
+public class JdbcRowDataLookupFunctionTest extends AbstractTestBase {
+
+   public static final String DB_URL = "jdbc:derby:memory:lookup";
+   public static final String LOOKUP_TABLE = "lookup_table";
+   public static final String DB_DRIVER = "org.apache.derby.jdbc.EmbeddedDriver";
+
+   private static String[] fieldNames = new String[] {"id1", "id2", "comment1", "comment2"};
+   private static DataType[] fieldDataTypes = new DataType[] {
+   DataTypes.INT(),
+   DataTypes.STRING(),
+   DataTypes.STRING(),
+   DataTypes.STRING()
+   };
+
+   private static String[] lookupKeys = new String[] {"id1", "id2"};
+
+   @Before
+   public void before() throws ClassNotFoundException, SQLException {

Review comment:
   I will add a base class named `JdbcLookupTestBase` for them.
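   A rough sketch of what such a base class might look like, assuming it only hoists the shared Derby setup and teardown out of the two tests; everything below beyond the name `JdbcLookupTestBase` is an assumption, not the eventual Flink code:

```java
import org.junit.After;
import org.junit.Before;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

/** Hypothetical shared fixture; the real refactoring in the PR may differ. */
public abstract class JdbcLookupTestBase {

    public static final String DB_URL = "jdbc:derby:memory:lookup";
    public static final String LOOKUP_TABLE = "lookup_table";
    public static final String DB_DRIVER = "org.apache.derby.jdbc.EmbeddedDriver";

    @Before
    public void before() throws ClassNotFoundException, SQLException {
        Class.forName(DB_DRIVER);
        try (Connection conn = DriverManager.getConnection(DB_URL + ";create=true");
                Statement stat = conn.createStatement()) {
            stat.executeUpdate("CREATE TABLE " + LOOKUP_TABLE + " (" +
                    "id1 INT NOT NULL DEFAULT 0," +
                    "id2 VARCHAR(20) NOT NULL," +
                    "comment1 VARCHAR(1000)," +
                    "comment2 VARCHAR(1000))");
            // The shared test rows from the two @Before methods quoted above would be inserted here.
        }
    }

    @After
    public void after() throws SQLException {
        try (Connection conn = DriverManager.getConnection(DB_URL);
                Statement stat = conn.createStatement()) {
            stat.execute("DROP TABLE " + LOOKUP_TABLE);
        }
    }
}
```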





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wanglijie95 commented on a change in pull request #12427: [FLINK-16681][jdbc] Fix the bug that jdbc lost connection after a lon…

2020-06-06 Thread GitBox


wanglijie95 commented on a change in pull request #12427:
URL: https://github.com/apache/flink/pull/12427#discussion_r436310785



##
File path: 
flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/internal/executor/InsertOrUpdateJdbcExecutor.java
##
@@ -83,6 +88,21 @@ public void open(Connection connection) throws SQLException {
updateStatement = connection.prepareStatement(updateSQL);
}
 
+   @Override
+   public void reopen(Connection connection) throws SQLException {
+   try {
+   existStatement.close();
+   insertStatement.close();
+   updateStatement.close();
+   } catch (SQLException e) {
+   LOG.info("PreparedStatement close failed.", e);
+   }
+
+   existStatement = connection.prepareStatement(existSQL);
+   insertStatement = connection.prepareStatement(insertSQL);
+   updateStatement = connection.prepareStatement(updateSQL);

Review comment:
   > Then I suggest to call them `prepareStatements(Connection)` and `closeStatements(Connection)`.
   
   OK, I will rename them.
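   A non-authoritative sketch of how the renamed helpers could be wired inside the executor. The class below is a stripped-down stand-in for `InsertOrUpdateJdbcExecutor`, and the no-argument `closeStatements()` is a simplification of the suggested `closeStatements(Connection)` signature:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

/** Illustrative-only executor fragment showing the suggested helper split. */
class StatementLifecycleSketch {

    private final String existSQL;
    private final String insertSQL;
    private final String updateSQL;

    private PreparedStatement existStatement;
    private PreparedStatement insertStatement;
    private PreparedStatement updateStatement;

    StatementLifecycleSketch(String existSQL, String insertSQL, String updateSQL) {
        this.existSQL = existSQL;
        this.insertSQL = insertSQL;
        this.updateSQL = updateSQL;
    }

    // open(Connection) and reopen(Connection) both delegate to the same helpers.
    void open(Connection connection) throws SQLException {
        prepareStatements(connection);
    }

    void reopen(Connection connection) throws SQLException {
        closeStatements();
        prepareStatements(connection);
    }

    private void prepareStatements(Connection connection) throws SQLException {
        existStatement = connection.prepareStatement(existSQL);
        insertStatement = connection.prepareStatement(insertSQL);
        updateStatement = connection.prepareStatement(updateSQL);
    }

    private void closeStatements() {
        try {
            existStatement.close();
            insertStatement.close();
            updateStatement.close();
        } catch (SQLException e) {
            // Best-effort close: a failure here should not prevent re-preparing
            // the statements on a fresh connection, mirroring the quoted diff.
        }
    }
}
```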





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wanglijie95 commented on a change in pull request #12427: [FLINK-16681][jdbc] Fix the bug that jdbc lost connection after a lon…

2020-06-04 Thread GitBox


wanglijie95 commented on a change in pull request #12427:
URL: https://github.com/apache/flink/pull/12427#discussion_r435676536



##
File path: 
flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/internal/executor/InsertOrUpdateJdbcExecutor.java
##
@@ -83,6 +88,21 @@ public void open(Connection connection) throws SQLException {
updateStatement = connection.prepareStatement(updateSQL);
}
 
+   @Override
+   public void reopen(Connection connection) throws SQLException {
+   try {
+   existStatement.close();
+   insertStatement.close();
+   updateStatement.close();
+   } catch (SQLException e) {
+   LOG.info("PreparedStatement close failed.", e);
+   }
+
+   existStatement = connection.prepareStatement(existSQL);
+   insertStatement = connection.prepareStatement(insertSQL);
+   updateStatement = connection.prepareStatement(updateSQL);

Review comment:
   Hi @wuchong, I have checked the implementations of `open` and `close`. `reopen` is not a simple combination of `close` and `open`: the difference is that `close` will clear the `batch` map, which is not what we want.

   So we need to introduce a new `reopen` method on the executor interface.
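   A minimal sketch of the difference being described, with a `batch` buffer standing in for the one in `InsertOrUpdateJdbcExecutor`; field names and the single-statement shape are illustrative:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;

/** Illustrative contrast between close()+open() and a dedicated reopen(). */
class ReopenVsCloseSketch {

    private final String upsertSQL;
    // Buffered records that must survive until they are flushed after a reconnect.
    private final Map<String, Object> batch = new HashMap<>();

    private PreparedStatement statement;

    ReopenVsCloseSketch(String upsertSQL) {
        this.upsertSQL = upsertSQL;
    }

    void open(Connection connection) throws SQLException {
        statement = connection.prepareStatement(upsertSQL);
    }

    // close() is a full shutdown: it drops the buffered batch as well.
    void close() throws SQLException {
        batch.clear();
        statement.close();
    }

    // reopen() only swaps the statement onto the new connection; the batch survives,
    // so the pending records can still be written once the retry succeeds.
    void reopen(Connection connection) throws SQLException {
        statement.close();
        statement = connection.prepareStatement(upsertSQL);
    }
}
```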





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wanglijie95 commented on a change in pull request #12427: [FLINK-16681][jdbc] Fix the bug that jdbc lost connection after a lon…

2020-06-03 Thread GitBox


wanglijie95 commented on a change in pull request #12427:
URL: https://github.com/apache/flink/pull/12427#discussion_r434644645



##
File path: 
flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/internal/JdbcBatchingOutputFormat.java
##
@@ -175,6 +176,15 @@ public synchronized void flush() throws IOException {
if (i >= executionOptions.getMaxRetries()) {
throw new IOException(e);
}
+   try {
+   if (!connection.isValid(JdbcLookupFunction.CONNECTION_CHECK_TIMEOUT)) {
+   connection = connectionProvider.reestablishConnection();
+   jdbcStatementExecutor.reopen(connection);
+   }
+   } catch (Exception exception) {
+   LOG.error("JDBC connection is not valid, and reestablish connection failed.", exception);
+   throw new RuntimeException("Reestablish JDBC connection failed", exception);

Review comment:
   OK, I will change it.
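   For context, a sketch of the retry pattern under discussion, written under the assumption that the request was to surface a failed reconnect as an `IOException` rather than a `RuntimeException`. The two small interfaces below are local stand-ins, not Flink's actual connection provider or executor API:

```java
import java.io.IOException;
import java.sql.Connection;
import java.sql.SQLException;

final class FlushRetrySketch {

    interface ConnectionProvider {
        Connection reestablishConnection() throws SQLException, ClassNotFoundException;
    }

    interface StatementExecutor {
        void executeBatch() throws SQLException;
        void reopen(Connection connection) throws SQLException;
    }

    private static final int CONNECTION_CHECK_TIMEOUT = 60; // seconds, as in the diff

    static void flush(StatementExecutor executor, ConnectionProvider provider,
            Connection connection, int maxRetries) throws IOException {
        for (int i = 1; i <= maxRetries; i++) {
            try {
                executor.executeBatch();
                return;
            } catch (SQLException e) {
                if (i >= maxRetries) {
                    throw new IOException(e);
                }
                // If the connection died during a long idle period, rebuild it
                // (and the prepared statements) before the next attempt.
                try {
                    if (!connection.isValid(CONNECTION_CHECK_TIMEOUT)) {
                        connection = provider.reestablishConnection();
                        executor.reopen(connection);
                    }
                } catch (Exception reestablishFailure) {
                    throw new IOException("Reestablish JDBC connection failed", reestablishFailure);
                }
            }
        }
    }
}
```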





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wanglijie95 commented on a change in pull request #12427: [FLINK-16681][jdbc] Fix the bug that jdbc lost connection after a lon…

2020-06-03 Thread GitBox


wanglijie95 commented on a change in pull request #12427:
URL: https://github.com/apache/flink/pull/12427#discussion_r434643665



##
File path: 
flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/internal/executor/InsertOrUpdateJdbcExecutor.java
##
@@ -83,6 +88,21 @@ public void open(Connection connection) throws SQLException {
updateStatement = connection.prepareStatement(updateSQL);
}
 
+   @Override
+   public void reopen(Connection connection) throws SQLException {
+   try {
+   existStatement.close();
+   insertStatement.close();
+   updateStatement.close();
+   } catch (SQLException e) {
+   LOG.info("PreparedStatement close failed.", e);
+   }
+
+   existStatement = connection.prepareStatement(existSQL);
+   insertStatement = connection.prepareStatement(insertSQL);
+   updateStatement = connection.prepareStatement(updateSQL);

Review comment:
   Thanks for the review, @wuchong. Yes, moving the `batch` map into the constructor looks better. However, I think it will confuse people if we change `open` to `openConnection`, because in `open` we just initialize the `PreparedStatement`s, and at that point the JDBC connection is already open. `openConnection` would be easily confused with "open a JDBC connection".
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



