szehon-ho commented on code in PR #4812:
URL: https://github.com/apache/iceberg/pull/4812#discussion_r923950980
##########
core/src/main/java/org/apache/iceberg/MetadataColumns.java:
##########
@@ -53,6 +53,8 @@ private MetadataColumns() {
public static final String DELETE_FILE_ROW_FIELD_NAME = "row";
public static final int DELETE_FILE_ROW_FIELD_ID = Integer.MAX_VALUE - 103;
public static final String DELETE_FILE_ROW_DOC = "Deleted row values";
+ public static final int POSITION_DELETE_TABLE_PARTITION_FIELD_ID =
Integer.MAX_VALUE - 104;
Review Comment:
Hi guys, I hit the first issue: since StaticDataTask lives in the core module, it
currently has no way to access Parquet, ORC, or the other file readers. So maybe
engines like Spark can pass the reader in? That would seem to require an API
change on DataTask in any case.
So something like adding a new method, `DataTask::rows(readerFunction)`, where
the reader function takes a `DeleteFile` (plus a `Schema` and a filter
`Expression`) and returns the rows?
Such a callback could also take care of encryption, which is likewise handled
via a callback today (for example DeleteFilter::inputFile(), which is
implemented by Spark/Flink).
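To make the idea concrete, here is a minimal, self-contained sketch of what such an engine-supplied reader callback might look like. All type and method names here (`ReadRequest`, the `DataTask`/`DeleteFile`/`StructLike` stand-ins) are illustrative assumptions for discussion, not the actual Iceberg interfaces:

```java
import java.util.List;
import java.util.function.Function;

// Hypothetical sketch: a reader callback that engines (Spark, Flink) supply
// so a core-module DataTask can read delete files without depending on
// Parquet/ORC directly. All names below are illustrative, not Iceberg's API.
public class DataTaskSketch {

  // Stand-ins for Iceberg types, so the sketch compiles on its own.
  interface DeleteFile { String path(); }
  interface StructLike {}

  // Everything the engine needs to open a delete file: the file itself,
  // the projection schema, and a residual filter expression.
  record ReadRequest(DeleteFile file, Object schema, Object filter) {}

  // The proposed shape of the new method: the core task hands a ReadRequest
  // to an engine-provided function and gets rows back. Encryption can be
  // handled inside that function, analogous to DeleteFilter::inputFile().
  interface DataTask {
    Iterable<StructLike> rows(Function<ReadRequest, Iterable<StructLike>> readerFunction);
  }

  public static void main(String[] args) {
    DeleteFile file = () -> "s3://bucket/deletes.parquet";
    DataTask task = reader -> reader.apply(new ReadRequest(file, null, null));
    // An engine-side reader would normally open Parquet/ORC here; this
    // placeholder just returns no rows.
    Iterable<StructLike> rows = task.rows(req -> List.of());
    System.out.println(rows.iterator().hasNext()); // prints false
  }
}
```

This keeps the core module free of any Parquet/ORC dependency: the engine owns the file-opening logic (including decryption), and core code only sees an iterable of rows.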
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]