coolderli commented on a change in pull request #4005:
URL: https://github.com/apache/iceberg/pull/4005#discussion_r836351384
##########
File path: core/src/test/java/org/apache/iceberg/TestMetadataTableScans.java
##########
@@ -516,6 +522,101 @@ public void testPartitionColumnNamedPartition() throws Exception {
validateIncludesPartitionScan(tasksAndEq, 0);
}
+ @Test
+ public void testPartitionsTableScanWithDeleteFilesInFilter() throws IOException {
+ Assume.assumeTrue(formatVersion == 2);
+ Configuration conf = new Configuration();
+ HadoopTables tables = new HadoopTables(conf);
Review comment:
Yes, I checked again. I think the problem is that TestTableOperations does not write a metadata.json file: https://github.com/apache/iceberg/blob/master/core/src/test/java/org/apache/iceberg/TestTables.java#L216. So the metadata location is always null. When reading the records, the scan has to plan tasks, and the metadata.json is used [here](https://github.com/apache/iceberg/blob/master/core/src/main/java/org/apache/iceberg/PartitionsTable.java#L78). So I create a HadoopTable instead, because HadoopTableOperations will write the metadata.json [here](https://github.com/apache/iceberg/blob/master/core/src/main/java/org/apache/iceberg/hadoop/HadoopTableOperations.java#L151).
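
To make the reasoning concrete, here is a minimal sketch (not the exact test from this PR) of why creating the table through `HadoopTables` helps: commits go through `HadoopTableOperations`, which writes a metadata.json file, so the metadata location that `PartitionsTable` task planning relies on is populated. The schema, partition spec, and table location below are purely illustrative, and the partitions metadata table is obtained via `MetadataTableUtils` rather than the package-private constructor used in the test.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.MetadataTableType;
import org.apache.iceberg.MetadataTableUtils;
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.Table;
import org.apache.iceberg.hadoop.HadoopTables;
import org.apache.iceberg.types.Types;

public class PartitionsScanSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    HadoopTables tables = new HadoopTables(conf);

    // Hypothetical schema, spec, and location, used only for illustration.
    Schema schema = new Schema(
        Types.NestedField.required(1, "id", Types.LongType.get()),
        Types.NestedField.required(2, "data", Types.StringType.get()));
    PartitionSpec spec = PartitionSpec.builderFor(schema).identity("data").build();
    String location = "/tmp/partitions_scan_sketch";

    // Creating the table through HadoopTables goes through HadoopTableOperations,
    // which writes a versioned metadata.json under <location>/metadata, so the
    // current metadata file location is non-null (unlike TestTables.TestTableOperations).
    Table table = tables.create(schema, spec, location);

    // The partitions metadata table can then plan tasks against that metadata.
    Table partitionsTable =
        MetadataTableUtils.createMetadataTableInstance(table, MetadataTableType.PARTITIONS);
    partitionsTable.newScan().planFiles()
        .forEach(task -> System.out.println(task.file().path()));
  }
}
```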