karuppayya commented on pull request #28804:
URL: https://github.com/apache/spark/pull/28804#issuecomment-650585266


   > > it is more of a manual step and can be used only if the user knows the 
nature of the data upfront, like in my benchmark, where we expect all but a 
few grouping keys to be different.
   > > A user can find the nature of the data by looking at the metrics 
in the Spark UI, where the number of output rows from the previous stage is the 
same/almost the same as the number of output rows from HashAggregate. If the user 
expects new data to have this nature in subsequent runs (say, a partitioned table 
with new data every hour/day), they can enable the config.
   > 
   > hm...., if `SKIP_PARTIAL_AGGREGATE_ENABLED=true` and the cardinality is 
**not** the same as the number of rows, does a query return a wrong aggregated 
answer?
   
   No, the final aggregation will take care of producing the right results.
   This is more like setting the aggregation mode to 
`org.apache.spark.sql.catalyst.expressions.aggregate.Complete`
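   To illustrate why skipping the partial (map-side) step cannot change the answer, here is a small standalone sketch (plain Python, no Spark; the partition layout and helper names are illustrative, not Spark internals). Because sum is a commutative, associative merge, aggregating per partition first and then merging gives the same result as a single complete pass over all rows:

   ```python
   # Illustrative sketch: skipping the partial (map-side) aggregation does
   # not change the result, because the final aggregation still merges all
   # values per grouping key.
   from collections import Counter

   def partial_then_final(partitions):
       """Two-phase: partial sum per partition, then a final merge."""
       partials = []
       for part in partitions:
           c = Counter()
           for key, value in part:
               c[key] += value  # map-side (partial) aggregation
           partials.append(c)
       final = Counter()
       for c in partials:
           final.update(c)      # reduce-side (final) merge
       return dict(final)

   def complete(partitions):
       """Single pass over all rows -- effectively what happens when the
       partial step is skipped and only the final aggregation runs."""
       result = Counter()
       for part in partitions:
           for key, value in part:
               result[key] += value
       return dict(result)

   data = [
       [("a", 1), ("b", 2), ("a", 3)],
       [("b", 4), ("c", 5)],
   ]
   assert partial_then_final(data) == complete(data) == {"a": 4, "b": 6, "c": 5}
   ```

   The config only changes *where* the work happens: when nearly every grouping key is distinct, the partial step buffers many single-row groups for no reduction benefit, so pushing everything to the final stage avoids that overhead without affecting correctness.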
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
