moomindani commented on a change in pull request #28953:
URL: https://github.com/apache/spark/pull/28953#discussion_r448065129
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
##########
@@ -122,6 +122,37 @@ object JdbcUtils extends Logging {
}
}
+ /**
+ * Runs a custom query against a table from the JDBC database.
+ */
+ def runQuery(conn: Connection, actions: String, options: JDBCOptions): Unit = {
+ val autoCommit = conn.getAutoCommit
+ conn.setAutoCommit(false)
+ val queries = actions.split(";")
+ try {
+ queries.foreach { query =>
+ val queryString = query.trim()
+ val statement = conn.prepareStatement(queryString)
+ try {
+ statement.setQueryTimeout(options.queryTimeout)
+ val hasResultSet = statement.execute()
Review comment:
This is to support stored procedures (related to
https://issues.apache.org/jira/browse/SPARK-32014).
For example, MySQL stored procedures do not return result sets; in that case we
could use `executeUpdate()`. However, some other databases do return result
sets from stored procedures. To support both use cases, we need to cover both
`executeQuery()` and `executeUpdate()` semantics, which is why I used
`execute()` here.
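
For reference, here is a minimal sketch (not the PR's code) of how `execute()`
lets a single code path handle both kinds of first results; the SQL string and
the row handling below are hypothetical, only the JDBC calls are real:

```scala
import java.sql.Connection

object ExecuteSketch {
  // Runs one statement and handles either a ResultSet or an update count.
  def runOne(conn: Connection, sql: String): Unit = {
    val stmt = conn.prepareStatement(sql)
    try {
      // execute() returns true if the first result is a ResultSet,
      // false if it is an update count (or there is no result).
      if (stmt.execute()) {
        val rs = stmt.getResultSet
        try {
          while (rs.next()) {
            // Hypothetical handling of rows produced by a stored procedure.
            println(rs.getString(1))
          }
        } finally {
          rs.close()
        }
      } else {
        // e.g. a MySQL procedure that only modifies data.
        println(s"update count: ${stmt.getUpdateCount}")
      }
    } finally {
      stmt.close()
    }
  }
}
```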