kbendick commented on a change in pull request #2081:
URL: https://github.com/apache/iceberg/pull/2081#discussion_r558631817



##########
File path: spark3/src/test/java/org/apache/iceberg/spark/source/TestFilteredScan.java
##########
@@ -529,7 +529,7 @@ private Table buildPartitionedTable(String desc, PartitionSpec spec, String udf,
     return Lists.newArrayList(
         record(schema, 0L, parse("2017-12-22T09:20:44.294658+00:00"), "junction"),
         record(schema, 1L, parse("2017-12-22T07:15:34.582910+00:00"), "alligator"),
-        record(schema, 2L, parse("2017-12-22T06:02:09.243857+00:00"), "forrest"),
+        record(schema, 2L, parse("2017-12-22T06:02:09.243857+00:00"), ""),

Review comment:
       Yes, unfortunately: adding a new record to the list returned by this function would mean changing many of the existing tests, since many of them hard-code the assumption that there are exactly 10 records.
   
   I can add a new test in another (possibly new) file if we'd prefer. But this seems to be one of the more comprehensive test suites for hitting the various evaluators in various scenarios (it tests against tables partitioned on the field containing the empty-string record, tables partitioned on other fields, and unpartitioned tables). So to place the test here, we'd have to either change an existing record or update the tests to expect 11 records.
   
   In particular, we need to test a scan of a table with a filter, which will exercise the various evaluators that can return `ROWS_MIGHT_MATCH`.
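   To illustrate the kind of check those evaluators perform, here's a minimal, self-contained sketch (not Iceberg's actual `InclusiveMetricsEvaluator`; the class and method names are made up for illustration). An inclusive evaluator answers "rows might match" unless a file's min/max column bounds prove that no row can match the predicate, and the empty string is an interesting case because it sorts before every non-empty string:

```java
// Hypothetical sketch of an inclusive min/max evaluator for an equality
// predicate; NOT Iceberg's real implementation.
public class InclusiveEvaluatorSketch {

  // Returns true if a row with col == value MIGHT exist in a file whose
  // column values span the inclusive range [lower, upper].
  static boolean rowsMightMatchEquals(String lower, String upper, String value) {
    // If value falls outside the bounds, no row in the file can match.
    if (value.compareTo(lower) < 0 || value.compareTo(upper) > 0) {
      return false; // rows cannot match
    }
    return true; // rows might match
  }

  public static void main(String[] args) {
    // "" sorts before every non-empty string, so a file whose minimum
    // is "alligator" can be safely skipped for col == "".
    System.out.println(rowsMightMatchEquals("alligator", "junction", ""));
    System.out.println(rowsMightMatchEquals("", "junction", ""));
    System.out.println(rowsMightMatchEquals("alligator", "junction", "forrest"));
  }
}
```

   The point of the sketch is that filtering on the empty string only exercises the interesting bound comparisons when an empty-string value is actually present in the test data, which is why the diff swaps one into an existing record.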
   
   I'll take another look to see if there's an existing test suite where a new 
test can be easily added to cover this scenario without affecting other tests.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
