cloud-fan commented on code in PR #41855:
URL: https://github.com/apache/spark/pull/41855#discussion_r1314508461
##########
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala:
##########
@@ -180,6 +180,38 @@ abstract class JdbcDialect extends Serializable with Logging {
    statement.executeUpdate(s"CREATE TABLE $tableName ($strSchema) $createTableOptions")
}
+  /**
+   * Returns an INSERT SQL statement for inserting a row into the target table via a JDBC
+   * connection.
+   *
+   * @param table The name of the table.
+   * @param rddSchema The schema of the row that will be inserted.
+   * @param tableSchema The schema of the table.
Review Comment:
Since we are making it an API, we should make sure it's reasonable. As a JDBC dialect, why do I need to care about the RDD schema? Shouldn't Spark transform the input query to match the table schema?
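
For reference, a minimal sketch of the kind of dialect hook the Scaladoc above describes, built only from the three parameters shown. The method name `insertStatement` and the body are assumptions for illustration, not the PR's actual implementation; `quoteIdentifier` is an existing method on `JdbcDialect`, so the sketch assumes it lives inside that class:

    import org.apache.spark.sql.types.StructType

    // Hypothetical sketch (names assumed, not the PR's actual API).
    // Builds a parameterized INSERT from the row schema, quoting column
    // names via the dialect. The review above asks whether the dialect
    // should need rddSchema at all, or whether Spark should first
    // transform the input to match tableSchema.
    def insertStatement(
        table: String,
        rddSchema: StructType,
        tableSchema: Option[StructType]): String = {
      val columns = rddSchema.fields.map(f => quoteIdentifier(f.name)).mkString(", ")
      val placeholders = rddSchema.fields.map(_ => "?").mkString(", ")
      s"INSERT INTO $table ($columns) VALUES ($placeholders)"
    }

Under the reviewer's suggestion, Spark would reorder and cast the input columns to match the table schema before calling into the dialect, so the hook would only need the table name and the table schema.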
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]