rdblue commented on a change in pull request #2116:
URL: https://github.com/apache/iceberg/pull/2116#discussion_r560550601
##########
File path: spark3-extensions/src/main/scala/org/apache/spark/sql/catalyst/optimizer/RewriteMergeInto.scala
##########
@@ -58,53 +61,138 @@ case class RewriteMergeInto(conf: SQLConf) extends Rule[LogicalPlan] with Rewrit
   override def apply(plan: LogicalPlan): LogicalPlan = {
     plan resolveOperators {
+      case MergeIntoTable(target: DataSourceV2Relation, source: LogicalPlan, cond, matchedActions, notMatchedActions)
+          if matchedActions.isEmpty =>
+
+        val mergeBuilder = target.table.asMergeable.newMergeBuilder("merge", newWriteInfo(target.schema))
+        val targetTableScan = buildSimpleScanPlan(target.table, target.output, mergeBuilder, cond)
+
+        // when there are no matched actions, use a left anti join to remove any matching rows and rewrite to use
+        // append instead of replace. only unmatched source rows are passed to the merge and actions are all inserts.
+        val joinPlan = Join(source, targetTableScan, LeftAnti, Some(cond), JoinHint.NONE)
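For context on the comment in the added lines above: a MERGE whose only actions are WHEN NOT MATCHED inserts is equivalent to a left anti join of the source against the target on the merge condition, followed by an append. A minimal DataFrame-level sketch of that equivalence (the table names and the `id` join key are illustrative assumptions, not taken from this PR):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().getOrCreate()

// placeholder tables standing in for the MERGE source plan and the target relation
val source = spark.table("db.source")
val target = spark.table("db.target")

// the left anti join keeps only source rows with no match in the target,
// so every surviving row corresponds to a WHEN NOT MATCHED insert
val unmatched = source.join(target, source("id") === target("id"), "left_anti")

// appending the unmatched rows gives the same result as the insert-only merge,
// without replacing any existing data files
unmatched.writeTo("db.target").append()
```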
Review comment:
Yeah, I was thinking about that, too. I used the merge builder here
because that automatically projects `_file` and `_pos`, but it also ignores
residuals so we don't get pushdown to data file filters. I'll switch over to a
normal scan.
I'll add a filter with the condition as well.
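To make the follow-up concrete, here is a rough sketch of the general DataSourceV2 pushdown path a normal scan goes through, which is what lets data file filters be applied; this is not the PR's final code, and the `pushCondition` helper and `targetOnlyPredicates` parameter are hypothetical names. Only the part of the merge condition that references target columns can be pushed this way; the full condition is still applied by the join itself.

```scala
import org.apache.spark.sql.catalyst.expressions.Expression
import org.apache.spark.sql.connector.read.{Scan, ScanBuilder, SupportsPushDownFilters}
import org.apache.spark.sql.execution.datasources.DataSourceStrategy

// hypothetical helper: push predicates derived from the merge condition into the
// scan builder so data files can be pruned, then build the scan
def pushCondition(builder: ScanBuilder, targetOnlyPredicates: Seq[Expression]): Scan = {
  builder match {
    case withPushdown: SupportsPushDownFilters =>
      // translate catalyst expressions to source filters; untranslatable ones are skipped
      val filters = targetOnlyPredicates.flatMap { expr =>
        DataSourceStrategy.translateFilter(expr, true /* supportNestedPredicatePushdown */)
      }
      withPushdown.pushFilters(filters.toArray)
    case _ =>
      // the builder does not support filter pushdown; the scan reads everything
  }
  builder.build()
}
```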