karuppayya commented on pull request #28804:
URL: https://github.com/apache/spark/pull/28804#issuecomment-647051698
> Ur, one more question; how do we know that the cardinality is close to the
> #records being processed when processing aggregates?
@maropu
- It is more of a manual step and can be used only if the user knows the
nature of the data upfront, as in my benchmark, where we expect all but a
few grouping keys to be distinct.
- A user can discover the nature of the data from the metrics in the Spark
UI: the number of output rows from the previous stage is the same (or almost
the same) as the number of output rows from HashAggregate. If the user
expects new data to have the same shape in subsequent runs (say, a
partitioned table with new data arriving every hour/day), they can enable
the config, as in the sketch below.
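For illustration, here is a minimal sketch of that scenario. The data is
built so that almost every grouping key is distinct, which is the shape
where partial aggregation adds overhead without reducing rows. The config
key used here is a placeholder, not necessarily the exact name this PR
introduces; check the PR diff for the actual flag:

```scala
import org.apache.spark.sql.SparkSession

object SkipPartialAggExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("SkipPartialAggExample")
      .master("local[*]")
      .getOrCreate()

    // Nearly every grouping key is distinct, so the map-side (partial)
    // HashAggregate barely reduces the row count: in the Spark UI, its
    // "number of output rows" stays close to the previous stage's.
    val df = spark.range(0L, 10000000L)
      .selectExpr("id AS key", "id % 10 AS value")

    // Placeholder config key (assumption, for illustration only); the
    // actual flag name added by this PR may differ.
    spark.conf.set("spark.sql.aggregate.partialAggregate.skip.enabled", "true")

    // High-cardinality group-by; the "noop" sink just forces execution
    // so the aggregate metrics show up in the UI.
    df.groupBy("key").count()
      .write.format("noop").mode("overwrite").save()

    spark.stop()
  }
}
```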
If this could be done at runtime, without any config, that would be the
ideal solution; this PR is a first step towards it.