Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/15049#discussion_r147554533
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -258,6 +258,11 @@ object SQLConf {
       .booleanConf
       .createWithDefault(false)

+  val PARQUET_RECORD_FILTER_ENABLED =
+    buildConf("spark.sql.parquet.recordFilter")
+      .doc("Whether to enable Parquet's record-level filtering using the pushed-down filters.")
+      .booleanConf
+      .createWithDefault(true)
--- End diff --
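
For context, the proposed flag would be toggled like any other runtime SQL conf. A minimal sketch, assuming the spark.sql.parquet.recordFilter name from the diff above; the path and predicate are placeholders, not from the PR:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("record-filter-sketch")
      .master("local[*]")
      .getOrCreate()

    // Hand per-record evaluation back to Spark. Row groups are still
    // pruned by Parquet min/max statistics while
    // spark.sql.parquet.filterPushdown is enabled.
    spark.conf.set("spark.sql.parquet.recordFilter", "false")

    // "/tmp/events" is a placeholder path.
    spark.read.parquet("/tmp/events").filter("id > 100").show()
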
I'm curious whether ORC filter pushdown shows a similar pattern, i.e., is
Spark-side filtering faster there too?
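
One way to check would be to time the same scan with the existing spark.sql.orc.filterPushdown switch flipped both ways. A rough sketch; the ORC path and predicate are placeholders, and the naive wall-clock timing ignores page-cache warm-up between runs:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().getOrCreate()

    // Times one filtered ORC scan with pushdown toggled on or off.
    def timeOrcScan(pushdown: Boolean): Long = {
      spark.conf.set("spark.sql.orc.filterPushdown", pushdown.toString)
      val start = System.nanoTime()
      // count() forces a full evaluation of the filtered scan.
      spark.read.orc("/tmp/events_orc").filter("id > 100").count()
      (System.nanoTime() - start) / 1000000  // elapsed milliseconds
    }

    println(s"ORC pushdown on:  ${timeOrcScan(pushdown = true)} ms")
    println(s"ORC pushdown off: ${timeOrcScan(pushdown = false)} ms")
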
---