amoghmargoor commented on a change in pull request #28804:
URL: https://github.com/apache/spark/pull/28804#discussion_r447158417



##########
File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
##########
@@ -2196,6 +2196,13 @@ object SQLConf {
       .checkValue(bit => bit >= 10 && bit <= 30, "The bit value must be in 
[10, 30].")
       .createWithDefault(16)
 
+  val SKIP_PARTIAL_AGGREGATE_ENABLED =
+    buildConf("spark.sql.aggregate.partialaggregate.skip.enabled")
+      .internal()
+      .doc("Avoid sort/spill to disk during partial aggregation")
+      .booleanConf
+      .createWithDefault(true)

Review comment:
       @maropu This is a very useful suggestion. One issue is that column 
stats are rarely computed. We came across similar work in Hive: 
https://issues.apache.org/jira/browse/HIVE-291. There, map-side aggregation 
(i.e., partial aggregation becomes a pass-through) is turned off in the 
physical Group-By operator if it does not reduce the number of entries by at 
least half, judged on the first 100000 rows (ref: patch 
https://issues.apache.org/jira/secure/attachment/12400257/291.1.txt). Should 
we do something similar in HashAggregateExec here? Any thoughts on this?
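For illustration, the Hive-style heuristic could look roughly like the sketch 
below: observe the first N rows during partial aggregation and fall back to 
pass-through mode when the hash map is not reducing the input enough. All 
names (`PartialAggSkipHeuristic`, `shouldSkip`, the thresholds) are 
hypothetical and not part of Spark's or Hive's actual API.

```scala
// Hypothetical sketch of the heuristic discussed above; not Spark code.
object PartialAggSkipHeuristic {
  // Number of rows to observe before deciding (Hive looks at 100000).
  val sampleSize: Long = 100000L
  // Keep partial aggregation only if it reduces entries by at least half,
  // i.e. skip it when distinctGroups / rowsSeen exceeds this ratio.
  val minReductionRatio: Double = 0.5

  /**
   * Decide whether partial (map-side) aggregation should be skipped.
   *
   * @param rowsSeen       input rows processed so far
   * @param distinctGroups distinct grouping keys in the hash map so far
   * @return true if the operator should switch to pass-through mode
   */
  def shouldSkip(rowsSeen: Long, distinctGroups: Long): Boolean =
    rowsSeen >= sampleSize &&
      distinctGroups.toDouble / rowsSeen > minReductionRatio
}
```

In HashAggregateExec this check would run once, after the sample threshold is 
reached, so steady-state processing pays no extra per-row cost.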




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


