Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16677#discussion_r218630324
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -204,6 +204,13 @@ object SQLConf {
.intConf
.createWithDefault(4)
+  val LIMIT_FLAT_GLOBAL_LIMIT = buildConf("spark.sql.limit.flatGlobalLimit")
+    .internal()
+    .doc("During global limit, try to evenly distribute limited rows across data " +
+      "partitions. If disabled, scanning data partitions sequentially until reaching limit number.")
+    .booleanConf
+    .createWithDefault(true)
--- End diff --
so i read this config doc five times, and i still couldn't figure out what
it does, until i went ahead and read the implementation.
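For what the doc string appears to be describing, here is a minimal sketch in plain Python (no Spark; the helper names `sequential_take` and `even_take` are invented for illustration and do not correspond to anything in the implementation):

```python
# Hedged sketch of the two behaviors the config toggles.
# Suppose three partitions each hold 10 rows and we apply a global LIMIT of 6.
partitions = [10, 10, 10]
limit = 6

def sequential_take(parts, n):
    """flatGlobalLimit = false: scan partitions in order until n rows are taken."""
    taken, out = [], 0
    for size in parts:
        take = max(0, min(size, n - out))
        taken.append(take)
        out += take
    return taken

def even_take(parts, n):
    """flatGlobalLimit = true: spread the limit evenly across partitions
    (remainder handling omitted for simplicity)."""
    per_part = n // len(parts)
    return [min(size, per_part) for size in parts]

print(sequential_take(partitions, limit))  # [6, 0, 0]
print(even_take(partitions, limit))        # [2, 2, 2]
```

With the sequential strategy all 6 rows come from the first partition; with the even strategy each partition contributes 2 rows.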