cloud-fan commented on code in PR #38192:
URL: https://github.com/apache/spark/pull/38192#discussion_r997113086
##########
connector/connect/src/main/scala/org/apache/spark/sql/connect/command/SparkConnectCommandPlanner.scala:
##########
@@ -74,4 +77,49 @@ class SparkConnectCommandPlanner(session: SparkSession,
command: proto.Command)
session.udf.registerPython(cf.getPartsList.asScala.head, udf)
}
+ /**
+ * Transforms the write operation and executes it.
+ *
+ * The input write operation contains a reference to the input plan, which is
+ * transformed into the corresponding logical plan. Afterwards, a DataFrameWriter
+ * is created and the parameters of the WriteOperation are translated into the
+ * corresponding method calls.
+ *
+ * @param writeOperation
+ */
+ def handleWriteOperation(writeOperation: WriteOperation): Unit = {
Review Comment:
This is more than a planned write. We need to create a logical plan for the DF
write, instead of putting implementation code in the DF write APIs.
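One way to read the suggestion above (a hypothetical, self-contained sketch — the type names below are illustrative and are not Spark's actual classes): instead of invoking DataFrameWriter methods inside `handleWriteOperation`, the planner could translate the proto `WriteOperation` into a logical command node, so the write is represented as plan data that downstream planning rules can match on.

```scala
// Hypothetical sketch of modeling a DataFrame write as a logical plan node.
// LogicalPlan, Relation, and WriteCommand here are stand-ins, not Spark's
// catalyst classes.

sealed trait LogicalPlan
case class Relation(name: String) extends LogicalPlan

// The write becomes a plan node carrying its parameters; execution logic
// would live in a separate planning/execution rule, not in the planner.
case class WriteCommand(
    child: LogicalPlan,
    source: String,
    mode: String,
    options: Map[String, String]) extends LogicalPlan

object Example extends App {
  val plan = WriteCommand(
    Relation("events"), "parquet", "overwrite",
    Map("path" -> "/tmp/events"))
  println(plan)
}
```

Keeping the command planner to pure plan construction like this leaves the actual write side effects to the query execution layer, which is the separation the review asks for.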
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]