hvanhovell commented on code in PR #38192:
URL: https://github.com/apache/spark/pull/38192#discussion_r991558435


##########
connector/connect/src/main/scala/org/apache/spark/sql/connect/command/SparkConnectCommandPlanner.scala:
##########
@@ -74,4 +77,49 @@ class SparkConnectCommandPlanner(session: SparkSession, command: proto.Command)
     session.udf.registerPython(cf.getPartsList.asScala.head, udf)
   }
 
+  /**
+   * Transforms the write operation and executes it.
+   *
+   * The input write operation contains a reference to the input plan, which is first
+   * transformed into the corresponding logical plan. Afterwards, a DataFrameWriter is
+   * created and the parameters of the WriteOperation are translated into the
+   * corresponding method calls.
+   *
+   * @param writeOperation the write operation to transform and execute
+   */
+  def handleWriteOperation(writeOperation: WriteOperation): Unit = {

Review Comment:
   It is a bit weird to have this in the SparkPlanner node, but I guess this is the consequence of the builder() API we have in the DataFrameWriter.
   
   @cloud-fan AFAIK you have been working on making writes more declarative (i.e. planned writes). Do you see a way to improve this?
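The awkwardness the reviewer points at comes from translating a declarative message into an imperative builder chain: `handleWriteOperation` has to unfold the fields of the `WriteOperation` proto into a sequence of `DataFrameWriter` calls. A self-contained sketch of that translation pattern, using simplified stand-in types (the real Spark Connect proto fields and the real `DataFrameWriter` API differ):

```scala
// Hypothetical, simplified stand-ins for the Connect proto message and the
// builder-style DataFrameWriter. Names are illustrative, not Spark's actual API.
final case class WriteOperation(
    source: String,                 // e.g. "parquet"
    mode: String,                   // e.g. "overwrite"
    options: Map[String, String],   // writer options
    path: Option[String])           // save target

// Records the builder calls instead of writing data, to make the
// message-to-method-calls translation visible.
final class FakeDataFrameWriter {
  private var calls = Vector.empty[String]
  def format(s: String): this.type = { calls :+= s"format($s)"; this }
  def mode(m: String): this.type = { calls :+= s"mode($m)"; this }
  def option(k: String, v: String): this.type = { calls :+= s"option($k,$v)"; this }
  def save(path: String): Vector[String] = { calls :+= s"save($path)"; calls }
}

// The planner must turn the declarative message into imperative builder
// calls -- exactly the coupling the review comment questions.
def handleWriteOperation(w: WriteOperation): Vector[String] = {
  val writer = new FakeDataFrameWriter
  writer.format(w.source).mode(w.mode)
  w.options.foreach { case (k, v) => writer.option(k, v) }
  val path = w.path.getOrElse(throw new IllegalArgumentException("path required"))
  writer.save(path)
}
```

A fully planned (declarative) write would instead map the message straight to a logical write plan node, skipping the builder round-trip entirely, which is presumably what the question to @cloud-fan is getting at.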



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

