[ https://issues.apache.org/jira/browse/SPARK-31811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Yuming Wang reassigned SPARK-31811:
-----------------------------------

    Assignee:     (was: Yuming Wang)

> Pushdown IsNotNull to file scan if possible
> -------------------------------------------
>
>                 Key: SPARK-31811
>                 URL: https://issues.apache.org/jira/browse/SPARK-31811
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 3.1.0
>            Reporter: Yuming Wang
>            Priority: Major
>         Attachments: default.png, pushdown.png
>
> We should push down {{IsNotNull}} to the file scan if possible. For example:
> {code:sql}
> CREATE TABLE t1(c1 string, c2 string) USING parquet;
> EXPLAIN SELECT t1.* FROM t1 WHERE coalesce(t1.c1, t1.c2) IS NOT NULL;
> {code}
> {noformat}
> == Physical Plan ==
> *(1) Filter isnotnull(coalesce(c1#43, c2#44))
> +- *(1) ColumnarToRow
>    +- FileScan parquet default.t1[c1#43,c2#44] Batched: true, DataFilters: [isnotnull(coalesce(c1#43, c2#44))], Format: Parquet, Location: InMemoryFileIndex[file:/root/spark-3.0.0-bin-hadoop2.7/spark-warehouse/t1], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<c1:string,c2:string>
> {noformat}
> Note that {{PushedFilters}} is empty. If the predicate is instead written as a disjunction of plain null checks, it can be pushed down:
> {code:sql}
> EXPLAIN SELECT t1.* FROM t1 WHERE t1.c1 IS NOT NULL OR t1.c2 IS NOT NULL;
> {code}
> {noformat}
> == Physical Plan ==
> *(1) Filter (isnotnull(c1#43) OR isnotnull(c2#44))
> +- *(1) ColumnarToRow
>    +- FileScan parquet default.t1[c1#43,c2#44] Batched: true, DataFilters: [(isnotnull(c1#43) OR isnotnull(c2#44))], Format: Parquet, Location: InMemoryFileIndex[file:/root/spark-3.0.0-bin-hadoop2.7/spark-warehouse/t1], PartitionFilters: [], PushedFilters: [Or(IsNotNull(c1),IsNotNull(c2))], ReadSchema: struct<c1:string,c2:string>
> {noformat}
> Real performance test case:
> !default.png! !pushdown.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
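The improvement described above amounts to an optimizer rewrite: {{coalesce(c1, c2) IS NOT NULL}} holds exactly when {{c1 IS NOT NULL OR c2 IS NOT NULL}}, and only the latter form maps onto a data-source filter. A minimal sketch of that rewrite over a toy expression tree (plain Python classes for illustration only; Spark's actual Catalyst expressions and rule API differ):

```python
from dataclasses import dataclass

# Toy expression nodes standing in for Catalyst expressions (hypothetical).
@dataclass(frozen=True)
class Column:
    name: str

@dataclass(frozen=True)
class Coalesce:
    children: tuple  # evaluates to the first non-null child

@dataclass(frozen=True)
class IsNotNull:
    child: object

@dataclass(frozen=True)
class Or:
    left: object
    right: object

def push_is_not_null(expr):
    """Rewrite IsNotNull(Coalesce(a, b, ...)) into
    IsNotNull(a) OR IsNotNull(b) OR ...

    This is semantically equivalent because coalesce is non-null
    iff at least one of its children is non-null, and the rewritten
    form is expressible as a pushable source filter. Any other
    expression passes through unchanged.
    """
    if isinstance(expr, IsNotNull) and isinstance(expr.child, Coalesce):
        disjuncts = [IsNotNull(c) for c in expr.child.children]
        result = disjuncts[0]
        for d in disjuncts[1:]:
            result = Or(result, d)
        return result
    return expr

pred = IsNotNull(Coalesce((Column("c1"), Column("c2"))))
rewritten = push_is_not_null(pred)
print(rewritten)
# Or(left=IsNotNull(child=Column(name='c1')), right=IsNotNull(child=Column(name='c2')))
```

The rewritten predicate matches the second EXPLAIN above, where the planner reports {{PushedFilters: [Or(IsNotNull(c1),IsNotNull(c2))]}} instead of an empty list.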