rdblue commented on a change in pull request #30806:
URL: https://github.com/apache/spark/pull/30806#discussion_r544469508



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/V1FallbackWriters.scala
##########
@@ -38,10 +38,11 @@ case class AppendDataExecV1(
     table: SupportsWrite,
     writeOptions: CaseInsensitiveStringMap,
     plan: LogicalPlan,
-    refreshCache: () => Unit) extends V1FallbackWriters {
+    refreshCache: () => Unit,
+    override val write: Option[V1Write] = None) extends V1FallbackWriters {
 
-  override protected def run(): Seq[InternalRow] = {
-    writeWithV1(newWriteBuilder().buildForV1Write(), refreshCache = refreshCache)
+  override protected def buildAndRun(): Seq[InternalRow] = {

Review comment:
       I'm interested to hear what @dongjoon-hyun thinks about this.
   
   I think we should have a different physical node for each write so that the 
explain plan shows what is actually happening. Beyond that, the approach of 
supporting both building the batch write here and building it in the optimizer 
was mainly so we could turn this behavior on and off in our environment; I 
doubt that is needed in other situations.
   
   I think I would be in favor of removing all of the `buildAndRun` methods and 
always building the write in the optimizer.
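   
   For illustration only, a rough sketch of what this node could look like if 
`buildAndRun` were removed and the write were always built by the optimizer 
rule. This is not part of the patch; the `toInsertableRelation` call and the 
exact `writeWithV1` signature are assumptions based on the diff above:
   
```scala
// Sketch only: the optimizer rule builds the V1Write and passes it in,
// so the node no longer needs buildAndRun() or newWriteBuilder().
case class AppendDataExecV1(
    table: SupportsWrite,
    writeOptions: CaseInsensitiveStringMap,
    plan: LogicalPlan,
    refreshCache: () => Unit,
    write: V1Write) extends V1FallbackWriters {

  override protected def run(): Seq[InternalRow] = {
    // Assumes V1Write exposes the fallback InsertableRelation and that
    // writeWithV1 keeps the (relation, refreshCache) signature shown above.
    writeWithV1(write.toInsertableRelation, refreshCache = refreshCache)
  }
}
```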



