huaxingao commented on pull request #35669: URL: https://github.com/apache/spark/pull/35669#issuecomment-1064699267
```
== Physical Plan ==
*(1) Filter (((a#40L < 10) AND (c#42 = 0)) OR (((a#40L >= 10) AND (c#42 >= 1)) AND (c#42 < 3)))
+- *(1) ColumnarToRow
   +- BatchScan[a#40L, b#41L, c#42, d#43] ParquetScan DataFilters: [(((a#40L < 10) AND (c#42 = 0)) OR (((a#40L >= 10) AND (c#42 >= 1)) AND (c#42 < 3)))], Format: parquet, Location: InMemoryFileIndex(1 paths)[path, PartitionFilters: [((c#42 = 0) OR ((c#42 >= 1) AND (c#42 < 3)))], PushedAggregation: [], PushedFilters: [Or(And(LessThan(a,10),EqualTo(c,0)),And(And(GreaterThanOrEqual(a,10),GreaterThanOrEqual(c,1)),Le..., PushedGroupBy: [], ReadSchema: struct<a:bigint,b:bigint>, PushedFilters: [Or(And(LessThan(a,10),EqualTo(c,0)),And(And(GreaterThanOrEqual(a,10),GreaterThanOrEqual(c,1)),LessThan(c,3)))], PushedAggregation: [], PushedGroupBy: [] RuntimeFilters: []
```

This plan seems a bit misleading to me. The data filters are actually `[((id#9L > 0) OR (id#9L = 2))]`, and the predicate pushed down to Parquet is also `[((id#9L > 0) OR (id#9L = 2))]`. With the pushed filter displayed as `[Or(And(LessThan(a,10),EqualTo(c,0)),And(And(GreaterThanOrEqual(a,10),GreaterThanOrEqual(c,1)),LessThan(c,3)))]`, it gives the impression that we are constructing that full predicate and pushing it down to Parquet.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at: [email protected]
