Yuming Wang created SPARK-33673:
-----------------------------------

             Summary: Do not push down partition filters to ParquetScan for DataSourceV2
                 Key: SPARK-33673
                 URL: https://issues.apache.org/jira/browse/SPARK-33673
             Project: Spark
          Issue Type: Sub-task
          Components: SQL
    Affects Versions: 3.2.0
            Reporter: Yuming Wang
{noformat}
- Spark vectorized reader - with partition data column - select a single complex field array and its parent struct array *** FAILED ***
- Non-vectorized reader - with partition data column - select a single complex field array and its parent struct array *** FAILED ***
- Spark vectorized reader - with partition data column - select a single complex field from a map entry and its parent map entry *** FAILED ***
- Non-vectorized reader - with partition data column - select a single complex field from a map entry and its parent map entry *** FAILED ***
- Spark vectorized reader - with partition data column - partial schema intersection - select missing subfield *** FAILED ***
- Non-vectorized reader - with partition data column - partial schema intersection - select missing subfield *** FAILED ***
- Spark vectorized reader - with partition data column - no unnecessary schema pruning *** FAILED ***
- Non-vectorized reader - with partition data column - no unnecessary schema pruning *** FAILED ***
- Spark vectorized reader - with partition data column - empty schema intersection *** FAILED ***
- Non-vectorized reader - with partition data column - empty schema intersection *** FAILED ***
- Spark vectorized reader - with partition data column - select a single complex field and in where clause *** FAILED ***
- Non-vectorized reader - with partition data column - select a single complex field and in where clause *** FAILED ***
- Spark vectorized reader - with partition data column - select nullable complex field and having is not null predicate *** FAILED ***
- Non-vectorized reader - with partition data column - select nullable complex field and having is not null predicate *** FAILED ***
{noformat}
These tests fail because, since PARQUET-1765, Parquet returns empty results for filters on non-existent columns. Partition column values live in the directory layout rather than in the Parquet data files, so a partition filter pushed down to ParquetScan references a column the files do not contain.
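A minimal, hypothetical sketch (plain Python, not Spark code) of the split the title describes: filters on partition columns should be evaluated during partition pruning, while only filters on data columns reach the Parquet reader. The names `split_filters` and `partition_cols` are illustrative, not Spark APIs.

```python
def split_filters(filters, partition_cols):
    """Separate pushed-down filters into partition filters and data filters.

    `filters` is a list of (column, op, value) triples; `partition_cols`
    is the set of partition column names from the directory layout.
    """
    partition_filters = [f for f in filters if f[0] in partition_cols]
    data_filters = [f for f in filters if f[0] not in partition_cols]
    return partition_filters, data_filters


# Example: `p` is a partition column, `value` is a data column.
filters = [("p", "=", 1), ("value", ">", 10)]
partition_filters, data_filters = split_filters(filters, partition_cols={"p"})
# Only `data_filters` would be handed to ParquetScan; `p = 1` is applied
# during partition pruning, so the reader never sees a filter on a column
# that is absent from the data files.
```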
--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org