EnricoMi commented on code in PR #41000:
URL: https://github.com/apache/spark/pull/41000#discussion_r1191158325


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormatWriter.scala:
##########
@@ -159,6 +159,17 @@ object FileFormatWriter extends Logging {
       statsTrackers = statsTrackers
     )
 
+    SQLExecution.checkSQLExecutionId(sparkSession)
+
+    // propagate the description UUID into the jobs, so that committers
+    // get an ID guaranteed to be unique.
+    job.getConfiguration.set("spark.sql.sources.writeJobUUID", description.uuid)
+
+    // This call shouldn't be put into the `try` block below because it only initializes and
+    // prepares the job, any exception thrown from here shouldn't cause abortJob() to be called.
+    // It must be run before `materializeAdaptiveSparkPlan()`

Review Comment:
   Maybe a similar comment above the line below would also be helpful:
   
       val materializedPlan = materializeAdaptiveSparkPlan(empty2NullPlan)
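
   For context, here is a minimal self-contained sketch of how a committer could read back the propagated UUID. It uses `java.util.Properties` as a stand-in for Hadoop's `Configuration`, and the helper name `uniqueJobId` is hypothetical, not Spark API; only the key `"spark.sql.sources.writeJobUUID"` comes from the diff above.

   ```scala
   import java.util.Properties

   object WriteJobUUIDExample {
     // Hypothetical helper: read the write-job UUID that FileFormatWriter
     // propagates via the job configuration. `Properties` stands in here
     // for Hadoop's `Configuration`.
     def uniqueJobId(conf: Properties): Option[String] =
       Option(conf.getProperty("spark.sql.sources.writeJobUUID"))

     def main(args: Array[String]): Unit = {
       val conf = new Properties()
       // Illustrative value; in Spark this would be description.uuid.
       conf.setProperty("spark.sql.sources.writeJobUUID", "0f1e2d3c-demo")
       println(uniqueJobId(conf).getOrElse("missing"))  // prints "0f1e2d3c-demo"
     }
   }
   ```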



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

