cloud-fan commented on a change in pull request #31984:
URL: https://github.com/apache/spark/pull/31984#discussion_r603438655
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/dynamicpruning/PartitionPruning.scala
##########
@@ -48,6 +49,10 @@ import org.apache.spark.sql.execution.datasources.{HadoopFsRelation, LogicalRela
*/
object PartitionPruning extends Rule[LogicalPlan] with PredicateHelper with JoinSelectionHelper {
+  private val buildBroadcastThreshold = math.max(
+    AUTO_BROADCASTJOIN_THRESHOLD.defaultValue.getOrElse(conf.autoBroadcastJoinThreshold),
Review comment:
> To avoid disabling DPP by setting autoBroadcastJoinThreshold to a small value.

Do we really need to consider that? In practice, no one does it AFAIK. It's hard to define `canBroadcast` when `autoBroadcastJoinThreshold` is -1. I think it's more reasonable to simply use `conf.autoBroadcastJoinThreshold`.
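
For illustration, here is a minimal standalone Scala sketch (not Spark's actual code; the names `defaultThreshold`, `flooredThreshold`, and `canBroadcast` are hypothetical, and the 10 MB default for AUTO_BROADCASTJOIN_THRESHOLD is an assumption) contrasting the floored threshold from the diff with using the session value directly, where -1 conventionally disables broadcasting:

```scala
// Sketch only: contrasts the two options discussed above.
object BroadcastThresholdSketch {
  // Assumed stand-in for AUTO_BROADCASTJOIN_THRESHOLD.defaultValue (10 MB).
  val defaultThreshold: Long = 10L * 1024 * 1024

  // Option in the diff: floor the session setting at the default, so a tiny
  // (or negative) autoBroadcastJoinThreshold cannot effectively disable DPP.
  def flooredThreshold(confThreshold: Long): Long =
    math.max(defaultThreshold, confThreshold)

  // Option suggested here: use the session value directly. With -1 the
  // broadcast path is simply off, so the size check never passes.
  def canBroadcast(sizeInBytes: Long, confThreshold: Long): Boolean =
    confThreshold >= 0 && sizeInBytes <= confThreshold

  def main(args: Array[String]): Unit = {
    println(flooredThreshold(1L))     // 10485760: the floor wins
    println(canBroadcast(1024L, -1L)) // false: -1 disables broadcasting
  }
}
```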