thinkharderdev commented on code in PR #2170:
URL: https://github.com/apache/arrow-datafusion/pull/2170#discussion_r844308448
##########
datafusion/core/src/physical_plan/file_format/parquet.rs:
##########
@@ -878,6 +880,39 @@ mod tests {
assert_batches_sorted_eq!(expected, &read);
}
+ #[tokio::test]
+ async fn evolved_schema_intersection_filter() {
+ let c1: ArrayRef =
+ Arc::new(StringArray::from(vec![Some("Foo"), None, Some("bar")]));
+
+ let c2: ArrayRef = Arc::new(Int64Array::from(vec![Some(1), Some(2), None]));
+
+ let c3: ArrayRef = Arc::new(Int8Array::from(vec![Some(10), Some(20), None]));
+
+ // batch1: c1(string), c3(int8)
+ let batch1 = create_batch(vec![("c1", c1), ("c3", c3.clone())]);
+
+ // batch2: c3(int8), c2(int64)
+ let batch2 = create_batch(vec![("c3", c3), ("c2", c2)]);
+
+ let filter = col("c2").eq(lit(0_i64));
+
+ // write the batches to parquet files and read them back with the filter:
+ let read = round_trip_to_parquet(vec![batch1, batch2], None, None, Some(filter))
+ .await
+ .unwrap();
+ let expected = vec![
+ "+-----+----+----+",
+ "| c1 | c3 | c2 |",
Review Comment:
Yeah, this looked wrong to me as well. What I think is happening is that the min/max statistics aren't set, so the pruning predicates aren't applied. In a "real" query where this predicate was pushed down from a filter stage, the output would still get piped into a `FilterExec`. I think we would have to special-case the scenario where we fill in a null column to conform to a merged schema, which may be worth doing. I can double check though and make sure there's not a bug here.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]