Github user CK50 commented on a diff in the pull request:
https://github.com/apache/spark/pull/10312#discussion_r47749421
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala ---
@@ -108,6 +108,34 @@ abstract class JdbcDialect extends Serializable {
   def beforeFetch(connection: Connection, properties: Map[String, String]): Unit = {
   }
+  /**
+   * Get the SQL statement that should be used to insert new records into the table.
+   * Dialects can override this method to return a statement that works best in a
+   * particular database.
+   * @param table The name of the table.
+   * @param rddSchema The schema of the DataFrame to be inserted.
+   * @param columnMapping An optional mapping from DataFrame field names to database
+   *                      column names.
+   * @return The SQL statement to use for inserting into the table.
+   */
+  def getInsertStatement(table: String,
+      rddSchema: StructType,
+      columnMapping: Map[String, String] = null): String = {
+    if (columnMapping == null) {
+      return rddSchema.fields.map(field => "?")
--- End diff --
Yes, the dialect returns the SQL INSERT statement string, which is then
turned into a prepared statement in JdbcUtils.insertStatement.
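
To illustrate the flow being discussed, here is a minimal sketch of how a
dialect-generated INSERT string could be turned into a prepared statement.
This is not the actual Spark JdbcUtils code; the helper name and signature
below are hypothetical, and only the general shape (build the SQL string,
then call Connection.prepareStatement) follows the comment above.

```scala
import java.sql.{Connection, PreparedStatement}

// Hypothetical helper: builds an INSERT string with one "?" placeholder per
// column, then asks the JDBC driver to prepare it. In Spark the SQL string
// would come from the dialect's getInsertStatement instead.
def prepareInsert(conn: Connection,
    table: String,
    columns: Seq[String]): PreparedStatement = {
  val cols = columns.mkString(", ")
  val placeholders = columns.map(_ => "?").mkString(", ")
  val sql = s"INSERT INTO $table ($cols) VALUES ($placeholders)"
  conn.prepareStatement(sql)
}
```

A caller would then bind values positionally (setString, setInt, ...) and
invoke executeUpdate or addBatch on the returned statement.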