GitHub user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/21931#discussion_r207802603
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -1437,6 +1437,15 @@ object SQLConf {
     .intConf
     .createWithDefault(20)

+  val FAST_HASH_AGGREGATE_MAX_ROWS_CAPACITY_BIT =
+    buildConf("spark.sql.fast.hash.aggregate.row.max.capacity.bit")
+      .internal()
+      .doc("Capacity for the max number of rows to be held in memory by the fast hash " +
+        "aggregate product operator (e.g., a bit value of 16 means a capacity of 65536 rows).")
+      .intConf
+      .checkValue(bit => bit >= 1 && bit <= 30, "The bit value must be in [1, 30].")
--- End diff ---
Do we need to accept such small values, e.g., 2^1, 2^2, ...? I think these are meaningless...
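
For context, the doc's "16 means 65536" example implies the capacity is derived as 2^bit, i.e. `1 << bit`. Below is a minimal sketch of that mapping (the object and method names are illustrative, not the actual Spark generator code), showing why the low end of the [1, 30] range looks questionable:

    // Illustrative only: how a capacity-bit config could map to a row capacity,
    // assuming capacity = 1 << bit (as the 16 -> 65536 example in the doc implies).
    object CapacityBitExample {
      def capacityFor(bit: Int): Int = {
        // Same bound as the checkValue in the proposed conf.
        require(bit >= 1 && bit <= 30, "The bit value must be in [1, 30].")
        1 << bit
      }

      def main(args: Array[String]): Unit = {
        // Small bit values yield fast maps that hold only a handful of rows,
        // which is the concern about accepting 2^1, 2^2, ...
        Seq(1, 2, 16, 30).foreach { bit =>
          println(s"bit=$bit -> capacity=${capacityFor(bit)} rows")
        }
      }
    }

At bit = 1 or 2 the map would hold only 2 or 4 rows, which presumably defeats the point of a fast in-memory aggregate map, so a larger lower bound may make more sense.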