EnricoMi commented on code in PR #41518:
URL: https://github.com/apache/spark/pull/41518#discussion_r1230713275
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCOptions.scala:
##########
@@ -284,6 +288,8 @@ object JDBCOptions {
val JDBC_BATCH_FETCH_SIZE = newOption("fetchsize")
val JDBC_TRUNCATE = newOption("truncate")
val JDBC_CASCADE_TRUNCATE = newOption("cascadeTruncate")
+ val JDBC_UPSERT = newOption("upsert")
Review Comment:
I'd go for `upsert`, as in upsert mode.
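   For illustration, a minimal sketch of how the option might surface on the write path, assuming the PR lands with the name suggested here. The connection URL and table name are placeholders; `upsert` is the proposed option under review, not a released Spark option:
   ```scala
   import org.apache.spark.sql.{DataFrame, SaveMode}

   // Hypothetical usage if this option lands as proposed in this PR.
   def upsertExample(df: DataFrame): Unit = {
     df.write
       .format("jdbc")
       .option("url", "jdbc:postgresql://localhost/db") // assumed connection URL
       .option("dbtable", "target_table")               // assumed target table
       .option("upsert", "true")                        // proposed option under review
       .mode(SaveMode.Append)                           // upsert only applies to Append
       .save()
   }
   ```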
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala:
##########
@@ -878,6 +898,7 @@ object JdbcUtils extends Logging with SQLConfHelper {
df: DataFrame,
tableSchema: Option[StructType],
isCaseSensitive: Boolean,
+ upsert: Boolean,
Review Comment:
   This argument is needed because `saveTable` is called for all save modes, but we only want the upsert statement when appending to an existing table (`Append` mode); every other code path should use a plain insert. We could reduce code complexity by dropping this `upsert` argument and reading `options.isUpsert` instead, but then upsert statements would also be issued in situations where no upsert is needed.
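   A minimal sketch of that caller-side decision, assuming the PR's proposed `options.isUpsert` flag (passed in here as a plain `Boolean`) and a `tableExists` check from the JDBC relation provider:
   ```scala
   import org.apache.spark.sql.SaveMode

   // Use the upsert statement only when appending into an existing table;
   // every other code path falls back to a plain INSERT.
   // `isUpsert` stands for this PR's proposed options.isUpsert flag;
   // `tableExists` is assumed context from the JDBC relation provider.
   def shouldUpsert(isUpsert: Boolean, mode: SaveMode, tableExists: Boolean): Boolean =
     isUpsert && mode == SaveMode.Append && tableExists
   ```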