c21 commented on a change in pull request #34298:
URL: https://github.com/apache/spark/pull/34298#discussion_r738040552



##########
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
##########
@@ -960,6 +960,14 @@ object SQLConf {
     .booleanConf
     .createWithDefault(true)
 
+  val ORC_AGGREGATE_PUSHDOWN_ENABLED = buildConf("spark.sql.orc.aggregatePushdown")
+    .doc("If true, aggregates will be pushed down to ORC for optimization. Support MIN, MAX and " +
+      "COUNT as aggregate expression. For MIN/MAX, support boolean, integer, float and date " +
+      "type. For COUNT, support all data types.")

Review comment:
       I thought to just use integer to represent all integer types (byte, short, int, long) and float to represent all float types (float and double), to keep the doc less verbose. We will update the Spark doc on the website with a more detailed explanation of this aggregate push down feature anyway (ideally as a table).
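       For context, here is a minimal sketch of how this config would be exercised. The config key comes from the diff above; the file path, view name, and column are hypothetical, and whether the pushdown actually fires also depends on which ORC reader implementation is in use:

       ```scala
       import org.apache.spark.sql.SparkSession

       val spark = SparkSession.builder()
         .appName("orc-agg-pushdown-sketch")
         .master("local[*]")
         // Enable the new config so MIN/MAX/COUNT may be answered from ORC
         // file statistics instead of a full row scan.
         .config("spark.sql.orc.aggregatePushdown", "true")
         .getOrCreate()

       // Hypothetical ORC data set; `id` stands in for any supported
       // integer column.
       val df = spark.read.orc("/tmp/events.orc")
       df.createOrReplaceTempView("events")

       // MIN/MAX on boolean, integer, float, and date columns and COUNT on
       // any type are pushdown candidates; explain() shows whether the
       // aggregates were pushed into the scan.
       spark.sql("SELECT MIN(id), MAX(id), COUNT(*) FROM events").explain()
       ```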




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


