rdblue commented on pull request #30429:
URL: https://github.com/apache/spark/pull/30429#issuecomment-732498815
Sorry I'm late to help review, but overall this looks good to me.
The one minor issue I have is with passing the Spark session and logical plan
through the physical plans as new parameters. I would rather use a refresh
callback instead, so that `DataSourceV2Strategy` passes in what to do after the commit:
```scala
case AppendData(r: DataSourceV2Relation, query, writeOptions, _) =>
  // Capture the relation in a callback so the exec nodes don't need the
  // session or logical plan as parameters.
  val refresh = () =>
    session.sharedState.cacheManager.recacheByPlan(session, r)
  r.table.asWritable match {
    case v1 if v1.supports(TableCapability.V1_BATCH_WRITE) =>
      AppendDataExecV1(v1, writeOptions.asOptions, query,
        afterWrite = refresh) :: Nil
    case v2 =>
      AppendDataExec(v2, r, writeOptions.asOptions, planLater(query),
        afterWrite = refresh) :: Nil
  }
```
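
On the exec side, the node would just invoke the callback after a successful commit. A minimal sketch, assuming an `afterWrite` parameter with a no-op default (the surrounding types and the `writeWithV1` helper are illustrative, not the exact signatures in this PR):

```scala
// Hypothetical sketch: the exec node receives the callback and runs it
// only after the write has committed successfully.
case class AppendDataExecV1(
    table: SupportsWrite,
    writeOptions: CaseInsensitiveStringMap,
    plan: LogicalPlan,
    afterWrite: () => Unit = () => ()) extends V1FallbackWriters {

  override protected def run(): Seq[InternalRow] = {
    val result = writeWithV1(newWriteBuilder().buildForV1Write())
    // Refresh (e.g. recache the relation) after the commit; if the write
    // throws, the callback is never invoked.
    afterWrite()
    result
  }
}
```

This keeps cache maintenance out of the write nodes entirely: they only know "run this after commit", and the strategy decides what that means.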