cloud-fan commented on a change in pull request #35727:
URL: https://github.com/apache/spark/pull/35727#discussion_r819573984



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala
##########
@@ -99,6 +99,16 @@ abstract class JdbcDialect extends Serializable with Logging{
    */
   def getJDBCType(dt: DataType): Option[JdbcType] = None
 
+  /**
+   * Get the factory method for creating a JDBC connection.
+   * In general, creating a connection has nothing to do with the JDBC
+   * partition, but sometimes it does, e.g. for a database with multiple
+   * shard nodes.
+   * @param options JDBC options.
+   * @return The factory method for creating a JDBC connection.
+   */
+  def getConnection(options: JDBCOptions): JDBCPartition => Connection =
+    _ => JdbcUtils.createConnectionFactory(options)()

Review comment:
       > JdbcUtils.createConnectionFactory is used in other places
   
   This is the problem. It's quite confusing that in some places Spark decides how to create the connection, and in some places the JDBC dialect decides it.
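   
   To make the shard-node use case from the doc comment concrete: a dialect for a sharded database could override the proposed hook roughly as sketched below. This is a minimal sketch, assuming `JDBCPartition` is visible to dialect implementations as the new signature implies; `ShardedDialect`, the `shardUrls` option, and `shardUrlFor` are hypothetical and not part of this PR.
   
   ```scala
   import java.sql.{Connection, DriverManager}
   
   import org.apache.spark.sql.execution.datasources.jdbc.{JDBCOptions, JDBCPartition}
   import org.apache.spark.sql.jdbc.JdbcDialect
   
   // Hypothetical dialect for a database whose data lives on several shard
   // nodes: each partition must connect to the shard that owns its rows,
   // so connection creation needs to see the JDBCPartition.
   case object ShardedDialect extends JdbcDialect {
   
     override def canHandle(url: String): Boolean = url.startsWith("jdbc:sharded:")
   
     // Hypothetical routing helper: pick one shard URL per partition index,
     // reading a comma-separated "shardUrls" entry from the JDBC options.
     private def shardUrlFor(options: JDBCOptions, idx: Int): String = {
       val shards = options.parameters("shardUrls").split(",")
       shards(idx % shards.length)
     }
   
     override def getConnection(options: JDBCOptions): JDBCPartition => Connection =
       partition => DriverManager.getConnection(shardUrlFor(options, partition.idx))
   }
   ```
   
   Note that the default implementation in the diff ignores the partition entirely, so dialects that don't care about sharding keep today's behavior.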




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


