Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/21741#discussion_r202423258
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -378,6 +378,15 @@ object SQLConf {
.booleanConf
.createWithDefault(true)
+  val PARQUET_FILTER_PUSHDOWN_TIMESTAMP_ENABLED =
+    buildConf("spark.sql.parquet.filterPushdown.timestamp")
+      .doc("If true, enables Parquet filter push-down optimization for Timestamp. " +
+        "This configuration only has an effect when 'spark.sql.parquet.filterPushdown' is " +
+        "enabled and the Timestamp is stored as TIMESTAMP_MICROS or TIMESTAMP_MILLIS type.")
--- End diff ---
You need to explain how to use `spark.sql.parquet.outputTimestampType` to
control the Parquet timestamp type Spark uses when writing Parquet files.
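For example, the doc could point users at a combination like the following (a sketch of a `spark-defaults.conf` fragment; the first two keys are existing Spark SQL configs, the third is the one added in this diff):

```
# INT96 (the default for spark.sql.parquet.outputTimestampType) cannot be
# pushed down, so write timestamps as TIMESTAMP_MICROS or TIMESTAMP_MILLIS:
spark.sql.parquet.outputTimestampType       TIMESTAMP_MICROS
# Timestamp push-down only takes effect when general Parquet push-down is on:
spark.sql.parquet.filterPushdown            true
spark.sql.parquet.filterPushdown.timestamp  true
```

With this, timestamp predicates on files written by Spark can be pushed down to the Parquet reader; files written with INT96 timestamps would still be read correctly but without push-down.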
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]