openinx commented on a change in pull request #3240:
URL: https://github.com/apache/iceberg/pull/3240#discussion_r739995001



##########
File path: flink/src/main/java/org/apache/iceberg/flink/source/RowDataFileScanTaskReader.java
##########
@@ -70,9 +71,17 @@ public RowDataFileScanTaskReader(Schema tableSchema, Schema projectedSchema,
         PartitionUtil.constantsMap(task, RowDataUtil::convertConstant);
 
     FlinkDeleteFilter deletes = new FlinkDeleteFilter(task, tableSchema, projectedSchema, inputFilesDecryptor);
-    return deletes
-        .filter(newIterable(task, deletes.requiredSchema(), idToConstant, inputFilesDecryptor))
-        .iterator();
+    CloseableIterable<RowData> iterable = deletes.filter(
+        newIterable(task, deletes.requiredSchema(), idToConstant, inputFilesDecryptor)
+    );
+
+    // Project the RowData to remove the extra meta columns.
+    if (!projectedSchema.sameSchema(deletes.requiredSchema())) {
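
For context, the quoted hunk is truncated at the new `if` branch. A minimal sketch of how that branch might continue, assuming the Flink module's `RowDataProjection` wrapper and Iceberg's `CloseableIterable.transform` helper; the exact `RowDataProjection.create(...)` arguments are an assumption, not quoted from the PR:

```java
// Assumed continuation of the truncated hunk (not quoted from the PR):
// wrap each RowData so the extra meta columns that the delete filter
// required are projected back out before rows are returned.
if (!projectedSchema.sameSchema(deletes.requiredSchema())) {
  // The exact RowDataProjection.create(...) arguments are an assumption.
  RowDataProjection projection = RowDataProjection.create(deletes.requiredSchema(), projectedSchema);
  iterable = CloseableIterable.transform(iterable, projection::wrap);
}

return iterable.iterator();
```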

Review comment:
       The right way to cover these cases in unit tests is to create a separate test case with an Iceberg table, which can be either a v1 or a v2 table. For the v1 table, we just write a few records into it, then use `FlinkInputFormat` or `DataIterator` to open the table and assert that the returned `RowData`s are not of the `RowDataProjection` type. For the v2 table, the only difference is that the returned `RowData`s are of the `RowDataProjection` type.
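
       A minimal sketch of the suggested test, assuming a JUnit 4 harness; `v1Table()`, `v2Table()`, and `readRows(...)` are hypothetical placeholders for creating the tables and reading them back through `FlinkInputFormat` or `DataIterator`:

```java
import java.util.List;
import org.apache.flink.table.data.RowData;
import org.apache.iceberg.Table;
import org.apache.iceberg.flink.data.RowDataProjection;
import org.junit.Assert;
import org.junit.Test;

public class TestRowDataReaderProjection {

  @Test
  public void testV1ReadsAreNotProjected() {
    // v1 table: no delete files, so the reader should hand back the
    // underlying RowData without a projection wrapper.
    for (RowData row : readRows(v1Table())) {
      Assert.assertFalse("v1 reads should not be RowDataProjection",
          row instanceof RowDataProjection);
    }
  }

  @Test
  public void testV2ReadsAreProjected() {
    // v2 table: the delete filter widens the read schema with extra meta
    // columns, so the reader must project them back out.
    for (RowData row : readRows(v2Table())) {
      Assert.assertTrue("v2 reads should be RowDataProjection",
          row instanceof RowDataProjection);
    }
  }

  // Hypothetical helpers: the real test would create the tables (format
  // version 1 and 2) and read them via FlinkInputFormat or DataIterator.
  private Table v1Table() {
    throw new UnsupportedOperationException("create a format-version=1 table and write a few records");
  }

  private Table v2Table() {
    throw new UnsupportedOperationException("create a format-version=2 table with delete files");
  }

  private List<RowData> readRows(Table table) {
    throw new UnsupportedOperationException("open the table with FlinkInputFormat or DataIterator");
  }
}
```

       The only assertion that differs between the two cases is whether the returned rows are wrapped in `RowDataProjection`.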




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


