Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/20851#discussion_r176897485
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -353,6 +353,13 @@ object SQLConf {
     .booleanConf
     .createWithDefault(true)
+  val PARQUET_FILTER_PUSHDOWN_DATE_ENABLED = buildConf("spark.sql.parquet.filterPushdown.date")
+    .doc("If true, enables Parquet filter push-down optimization for Date. " +
+      "This configuration only has an effect when 'spark.sql.parquet.filterPushdown' is enabled.")
+    .internal()
+    .booleanConf
+    .createWithDefault(false)
--- End diff --
I think it's common that we turn on a new feature by default if there is no
known regression, and turn it off if we find a regression later.
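
To make that concrete, the suggestion amounts to a one-line change in the entry
above. A minimal sketch (the `parquetFilterPushDownDate` accessor is a
hypothetical name following the pattern SQLConf uses for similar flags, not
something this diff adds):

  val PARQUET_FILTER_PUSHDOWN_DATE_ENABLED = buildConf("spark.sql.parquet.filterPushdown.date")
    .doc("If true, enables Parquet filter push-down optimization for Date. " +
      "This configuration only has an effect when 'spark.sql.parquet.filterPushdown' is enabled.")
    .internal()
    .booleanConf
    // On by default, since there is no known regression; flip to false if one is found.
    .createWithDefault(true)

  // Hypothetical accessor, mirroring how SQLConf exposes other boolean flags:
  def parquetFilterPushDownDate: Boolean = getConf(PARQUET_FILTER_PUSHDOWN_DATE_ENABLED)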
---