aokolnychyi commented on a change in pull request #1862:
URL: https://github.com/apache/iceberg/pull/1862#discussion_r535046623
##########
File path: spark3/src/main/java/org/apache/iceberg/spark/source/SparkTable.java
##########
@@ -160,6 +167,43 @@ public WriteBuilder newWriteBuilder(LogicalWriteInfo info) {
     return new SparkWriteBuilder(sparkSession(), icebergTable, info);
   }
+  @Override
+  public MergeBuilder newMergeBuilder(LogicalWriteInfo info) {
+    String mode = icebergTable.properties().getOrDefault(WRITE_ROW_LEVEL_MODE, WRITE_ROW_LEVEL_MODE_DEFAULT);
+    ValidationException.check(mode.equals("copy-on-write"), "Unsupported row operations mode: %s", mode);
+    return new SparkMergeBuilder(sparkSession(), icebergTable, info);
+  }
+
+  @Override
+  public boolean canDeleteWhere(Filter[] filters) {
+    if (table().specs().size() > 1) {
+      // cannot guarantee a metadata delete will be successful if we have multiple specs
+      return false;
+    }
+
+    Set<Integer> identitySourceIds = table().spec().identitySourceIds();
+    Schema schema = table().schema();
+
+    for (Filter filter : filters) {
+      // return false if the filter requires rewrite or if we cannot translate the filter
+      if (requiresRewrite(filter, schema, identitySourceIds) || SparkFilters.convert(filter) == null) {
+        return false;
+      }
+    }
+
+    return true;
+  }
+
+  private boolean requiresRewrite(Filter filter, Schema schema, Set<Integer> identitySourceIds) {
+    // TODO: handle dots correctly via v2references
Review comment:
Yeah, they are already supported.
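For readers following the diff, the eligibility rule in `canDeleteWhere` can be sketched in isolation. This is an illustrative simplification, not Iceberg's actual implementation: the real code works with Spark `Filter` objects, Iceberg's `Schema`, and partition-field source IDs, whereas this sketch (class `MetadataDeleteCheck` and its string-based helpers are hypothetical names) stands in plain column names for the filter references.

```java
import java.util.Set;

public class MetadataDeleteCheck {

  // A filter forces a row-level rewrite unless it references an
  // identity-partitioned column (simplified stand-in for requiresRewrite).
  static boolean requiresRewrite(String filterColumn, Set<String> identityPartitionColumns) {
    return !identityPartitionColumns.contains(filterColumn);
  }

  // A delete can be satisfied by dropping whole files via metadata only when
  // the table has a single partition spec and every filter touches an
  // identity-partitioned column.
  static boolean canDeleteWhere(String[] filterColumns,
                                Set<String> identityPartitionColumns,
                                int specCount) {
    if (specCount > 1) {
      // files written under older specs may be partitioned differently,
      // so a metadata-only delete cannot be guaranteed
      return false;
    }
    for (String column : filterColumns) {
      if (requiresRewrite(column, identityPartitionColumns)) {
        return false;
      }
    }
    return true;
  }

  public static void main(String[] args) {
    Set<String> identityCols = Set.of("dt");
    System.out.println(canDeleteWhere(new String[] {"dt"}, identityCols, 1)); // true
    System.out.println(canDeleteWhere(new String[] {"id"}, identityCols, 1)); // false
    System.out.println(canDeleteWhere(new String[] {"dt"}, identityCols, 2)); // false
  }
}
```

The same shape appears in the diff: one guard on the number of specs, then a per-filter check that fails fast on anything that cannot be answered from partition metadata alone.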
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]