rdblue commented on code in PR #4812:
URL: https://github.com/apache/iceberg/pull/4812#discussion_r921638925


##########
core/src/main/java/org/apache/iceberg/MetadataColumns.java:
##########
@@ -53,6 +53,8 @@ private MetadataColumns() {
   public static final String DELETE_FILE_ROW_FIELD_NAME = "row";
   public static final int DELETE_FILE_ROW_FIELD_ID = Integer.MAX_VALUE - 103;
   public static final String DELETE_FILE_ROW_DOC = "Deleted row values";
+  public static final int POSITION_DELETE_TABLE_PARTITION_FIELD_ID = Integer.MAX_VALUE - 104;

Review Comment:
   My concern with the marker `FileScanTask` is that if an engine implements metadata tables like normal reads, we've introduced a correctness problem: the engine doesn't know to read the new task type differently. I think the cleanest way is probably to use `DataTask`.
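   
   Roughly, the engine-side dispatch I'm worried about looks like this (a hypothetical sketch, not code from this PR; the class and method names are made up):
   
   ```java
   import org.apache.iceberg.DataTask;
   import org.apache.iceberg.FileScanTask;
   
   class ReadPathSketch {
     void readTask(FileScanTask task) {
       if (task instanceof DataTask) {
         // Engines that follow the existing contract already special-case DataTask
         // and read its rows() directly.
         readRows((DataTask) task);
       } else {
         // Everything else is assumed to be a plain data file scan. A new marker
         // subtype of FileScanTask that the engine has never heard of would
         // silently fall through to this branch and be read incorrectly.
         readDataFile(task);
       }
     }
   
     private void readRows(DataTask task) { /* row-based read of task.rows() */ }
   
     private void readDataFile(FileScanTask task) { /* normal file-based read */ }
   }
   ```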
   
   I think the concern about using `DataTask` is valid, since it exposes a row-based interface and isn't intended for large scans that benefit from vectorization. It was originally intended for small tables, like `snapshots`.
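   
   For reference, `DataTask` boils down to a row-producing task; a simplified sketch of the relevant part of the existing interface:
   
   ```java
   import org.apache.iceberg.FileScanTask;
   import org.apache.iceberg.StructLike;
   import org.apache.iceberg.io.CloseableIterable;
   
   // Simplified shape of org.apache.iceberg.DataTask: a FileScanTask that carries
   // its own rows instead of pointing at a data file for the engine to read.
   public interface DataTask extends FileScanTask {
     // One StructLike per row, consumed a row at a time (no vectorized path).
     CloseableIterable<StructLike> rows();
   }
   ```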
   
   However, I've been thinking for a while that a significant improvement would be to adapt Arrow record batches into rows, so we can take advantage of vectorized reads in all cases, not just when the engine supports a vectorized format. That is probably way faster, so we could explore doing that here and using a joined row.
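   
   As a rough illustration of the idea (a hypothetical adapter over Arrow's `VectorSchemaRoot`; nothing like this exists in Iceberg yet, and the names below are made up), the read stays columnar while consumers that only understand rows see a reusable row view:
   
   ```java
   import java.util.Iterator;
   import org.apache.arrow.vector.VectorSchemaRoot;
   
   // Exposes one Arrow batch as rows without copying: the underlying read is
   // vectorized, and the "row" is just a movable pointer into the batch.
   class ArrowBatchRows implements Iterable<ArrowBatchRows.RowView> {
     private final VectorSchemaRoot batch;
   
     ArrowBatchRows(VectorSchemaRoot batch) {
       this.batch = batch;
     }
   
     @Override
     public Iterator<RowView> iterator() {
       RowView reused = new RowView();
       return new Iterator<RowView>() {
         private int nextRow = 0;
   
         @Override
         public boolean hasNext() {
           return nextRow < batch.getRowCount();
         }
   
         @Override
         public RowView next() {
           // Reposition the same view over the next row instead of materializing it.
           reused.rowIndex = nextRow++;
           return reused;
         }
       };
     }
   
     // A row view that reads values out of the columnar vectors on demand.
     class RowView {
       private int rowIndex;
   
       Object get(int column) {
         return batch.getFieldVectors().get(column).getObject(rowIndex);
       }
     }
   }
   ```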




