Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/7959#discussion_r36659313
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/Exchange.scala ---
@@ -208,6 +208,19 @@ private[sql] case class EnsureRequirements(sqlContext: SQLContext) extends Rule[
}
}
+  private def removeRedundantShuffle(childPlan: SparkPlan): SparkPlan = {
+    // For a query like df1.repartition(1000).join(df2, "_1"), we will repartition df1
+    // with a shuffle. Then we perform a shuffle again as part of the Exchange added here.
+    // To avoid the extra shuffle, we should remove repartition operators if they are
+    // the child of an Exchange and shuffle = true.
+    val optimized = childPlan match {
--- End diff ---
I think you could just omit the `optimized` here and simply return the
result of the `match` instead of assigning it to a `val`.
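Something like the following sketch, for example (the match arms are elided in this hunk, so the `Repartition` case below is only my assumption about what the body matches on):

```scala
private def removeRedundantShuffle(childPlan: SparkPlan): SparkPlan = childPlan match {
  // Assumed arm: a shuffle-inducing Repartition directly under the Exchange is
  // redundant, because the Exchange will shuffle its input anyway.
  case Repartition(_, true, child) => child
  // Anything else is left untouched.
  case other => other
}
```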