cloud-fan commented on a change in pull request #32816:
URL: https://github.com/apache/spark/pull/32816#discussion_r692179974
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/AdaptiveSparkPlanExec.scala
##########
@@ -656,13 +687,54 @@ case class AdaptiveSparkPlanExec(
     // node to prevent the loss of the `BroadcastExchangeExec` node in DPP subquery.
     // Here, we also need to avoid to insert the `BroadcastExchangeExec` node when the newPlan
     // is already the `BroadcastExchangeExec` plan after apply the `LogicalQueryStageStrategy` rule.
-    val finalPlan = currentPhysicalPlan match {
+    def updateBroadcastExchange(plan: SparkPlan): SparkPlan = currentPhysicalPlan match {
       case b: BroadcastExchangeLike
-        if (!newPlan.isInstanceOf[BroadcastExchangeLike]) => b.withNewChildren(Seq(newPlan))
-      case _ => newPlan
+        if (!plan.isInstanceOf[BroadcastExchangeLike]) => b.withNewChildren(Seq(plan))
+      case _ => plan
     }
-    (finalPlan, optimized)
+    val optimizedWithSkewedJoin = applyPhysicalRules(
+      optimizedPhysicalPlan,
+      optimizeSkewedJoinWithExtraShuffleRules,
Review comment:
I find this a bit hard to reason about. In general, we have different ways to optimize the query: we produce multiple candidate physical plans and pick the one with the least cost.
Here, by contrast, we apply extra rules to the plan in place to get a new plan, which doesn't match that general idea. That's why I proposed
https://github.com/apache/spark/pull/32816/files#r691844920
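To make the suggested pattern concrete, here is a minimal, hypothetical Scala sketch of cost-based candidate selection. The `Plan` case class, the `cost` model, and the candidate names are illustrative assumptions for this sketch only, not Spark's actual AQE API:

```scala
// Hypothetical sketch of "enumerate candidates, pick the least cost" --
// Plan and cost are toy stand-ins, not Spark's real physical plan or cost model.
case class Plan(name: String, numShuffles: Int, handlesSkew: Boolean)

// A toy cost model: each shuffle costs 10, leaving skew unhandled costs 25.
def cost(p: Plan): Int = p.numShuffles * 10 + (if (p.handlesSkew) 0 else 25)

val original      = Plan("original", numShuffles = 2, handlesSkew = false)
val skewOptimized = Plan("skew-optimized", numShuffles = 3, handlesSkew = true)

// Instead of mutating one plan with extra rules, build both candidate
// plans and select the cheaper one in a single, uniform step.
val chosen = Seq(original, skewOptimized).minBy(cost)
```

Under this toy model the skew-optimized plan wins despite its extra shuffle, which mirrors the trade-off AQE weighs when deciding whether an extra shuffle is worth the skew mitigation.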
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]