aokolnychyi commented on a change in pull request #31700:
URL: https://github.com/apache/spark/pull/31700#discussion_r588010745



##########
File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/v2Commands.scala
##########
@@ -166,6 +166,52 @@ object OverwritePartitionsDynamic {
   }
 }
 
+case class AppendMicroBatch(

Review comment:
       Thanks for the context, @HeartSaVioR!
   
   I think we have two ways to proceed:
   
   Option 1: Just adapt `WriteToMicroBatchDataSource` to use the `Write` 
abstraction and handle it in `V2Writes`.
   Option 2: Define specific plans where we have clarity. For example, `append` 
and `complete` seem well defined. We could define plans like 
`AppendStreamingData` and `TruncateAndAppendStreamingData` (or similar) and 
have something intermediate for `update`.
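   For illustration, Option 2 might look roughly like the sketch below. The 
plan names are the ones proposed above; their fields, and the stand-in 
`LogicalPlan`/`NamedRelation` traits used here instead of Spark's real 
catalyst types, are assumptions only.
   
   ```scala
   // Stand-ins for Spark's catalyst types, to keep the sketch self-contained.
   trait LogicalPlan
   trait NamedRelation extends LogicalPlan { def name: String }
   
   // Hypothetical plan for streaming `append` output mode:
   // append each micro-batch to the table.
   case class AppendStreamingData(
       table: NamedRelation,
       query: LogicalPlan) extends LogicalPlan
   
   // Hypothetical plan for streaming `complete` output mode:
   // truncate the table, then append the micro-batch.
   case class TruncateAndAppendStreamingData(
       table: NamedRelation,
       query: LogicalPlan) extends LogicalPlan
   ```
   
   Having dedicated nodes would let `V2Writes` match on the intended write 
semantics directly, at the cost of leaving `update` without a clear node.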
   
   I am fine either way, but Option 1 seems easier for this PR. The rest can 
be covered by SPARK-27484.
   
   To start with, we should all agree this feature is useful for micro-batch 
streaming.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


