viirya commented on a change in pull request #32039:
URL: https://github.com/apache/spark/pull/32039#discussion_r609944372
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/WriteToDataSourceV2Exec.scala
##########
@@ -250,10 +252,13 @@ case class OverwritePartitionsDynamicExec(
 case class WriteToDataSourceV2Exec(
     batchWrite: BatchWrite,
+    refreshCache: () => Unit,
     query: SparkPlan) extends V2TableWriteExec {
   override protected def run(): Seq[InternalRow] = {
-    writeWithV2(batchWrite)
+    val writtenRows = writeWithV2(batchWrite)
+    refreshCache()
+    writtenRows
Review comment:
Instead of refreshing/invalidating the table cache on every trigger, why don't
we just invalidate the cache once, before starting the streaming query that
writes to the table?
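
A minimal sketch of that alternative, in case it helps. Only
`spark.catalog.refreshTable` is a real API here; the table name, source, and
sink are illustrative stand-ins, not the actual wiring in this PR:

    import org.apache.spark.sql.SparkSession

    object InvalidateOnceExample {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("invalidate-once").getOrCreate()

        // Invalidate any cached plans over the target table once, up front,
        // rather than calling refreshCache() after every microbatch write.
        spark.catalog.refreshTable("target_table")  // hypothetical table name

        // Then start the streaming write; no per-trigger refresh involved.
        val query = spark.readStream
          .format("rate")                           // stand-in source
          .load()
          .writeStream
          .format("console")                        // stand-in for the v2 sink
          .start()
        query.awaitTermination()
      }
    }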
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]