aokolnychyi commented on a change in pull request #23171: [SPARK-26205][SQL]
Optimize InSet Expression for bytes, shorts, ints, dates
URL: https://github.com/apache/spark/pull/23171#discussion_r261253221
##########
File path:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
##########
@@ -171,6 +171,14 @@ object SQLConf {
.intConf
.createWithDefault(10)
+ val OPTIMIZER_INSET_SWITCH_THRESHOLD =
+ buildConf("spark.sql.optimizer.inSetSwitchThreshold")
+ .internal()
+    .doc("Configures the max set size in InSet for which Spark will generate code with " +
+      "switch statements. This is applicable only to bytes, shorts, ints, dates.")
+ .intConf
Review comment:
What about the default and max values then? The switch logic was faster than
`HashSet` with 500 elements for every data type and on every machine I tested. In
some cases, `HashSet` started to outperform at 550+ elements. Also, I had to
generate a set of 6000+ elements to hit the 64KB limit on JVM method bytecode. My
proposal is to have 400 as the default and 600 as the max. Then we should be safe.
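To make the trade-off concrete, here is a minimal, hypothetical Java sketch of the two code shapes the threshold chooses between (the class name, method names, and the values 1, 5, 9 are illustrative, not Spark's actual generated code). Small sets get a switch, which javac compiles to a tableswitch/lookupswitch; very large sets must fall back to a hash lookup, since a switch with 6000+ cases can exceed the 64KB bytecode limit for a single JVM method.

```java
import java.util.HashSet;
import java.util.Set;

public class InSetShapes {
    // Below the threshold: a switch over the literal set values. The JVM
    // compiles this to a jump table (tableswitch) or a binary search over
    // sorted constants (lookupswitch), both faster than hashing for small sets.
    static boolean switchContains(int value) {
        switch (value) {
            case 1:
            case 5:
            case 9:
                return true;
            default:
                return false;
        }
    }

    // Above the threshold: a hash-set lookup. Slower per probe for tiny sets,
    // but its method size stays constant, so it never hits the 64KB limit.
    static final Set<Integer> VALUES = new HashSet<>();
    static {
        VALUES.add(1);
        VALUES.add(5);
        VALUES.add(9);
    }

    static boolean hashContains(int value) {
        return VALUES.contains(value);
    }

    public static void main(String[] args) {
        System.out.println(switchContains(5));   // true
        System.out.println(hashContains(7));     // false
    }
}
```

Both paths return identical results; the threshold only decides which shape is emitted, which is why a conservative default (400) with a hard cap (600) keeps the switch path in the regime where it measured faster.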
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]