Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/5208#discussion_r28198929
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/Exchange.scala ---
@@ -162,6 +161,11 @@ private[sql] case class AddExchange(sqlContext: SQLContext) extends Rule[SparkPl
   def addExchangeIfNecessary(partitioning: Partitioning, child: SparkPlan): SparkPlan =
     if (child.outputPartitioning != partitioning) Exchange(partitioning, child) else child
+
+  // Check if the ordering we want to ensure is the same as the child's output
+  // ordering. If so, we do not need to add the Sort operator.
+  def addSortIfNecessary(ordering: Seq[SortOrder], child: SparkPlan): SparkPlan =
+    if (child.outputOrdering != ordering) Sort(ordering, global = false, child) else child
--- End diff ---
The problem with doing it this way is that you are no longer taking advantage of the external sort that Spark can do as part of the shuffle anyway. I think we need to consider holistically whether a sort is needed at the same point we decide whether a shuffle is needed. For now, to simplify things, I think it would even be okay to always shuffle whenever a sort is needed.
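To make the suggestion concrete, here is a minimal, self-contained sketch (not Spark's actual planner; the `Plan`, `Leaf`, `Exchange`, and `Sort` types below are simplified stand-ins) of deciding shuffle and sort together: if either a repartition or a reorder is required, fold the sort into a single `Exchange` rather than stacking a separate `Sort` on top, so the shuffle's external sort is reused.

```scala
// Toy plan model: partitioning as a string, ordering as a list of column names.
sealed trait Plan
case class Leaf(partitioning: String, ordering: Seq[String]) extends Plan
// Exchange here carries an optional sort order, standing in for a
// shuffle that also sorts its output (the "external sort" in the comment).
case class Exchange(partitioning: String, sortOrder: Seq[String], child: Plan) extends Plan
case class Sort(ordering: Seq[String], child: Plan) extends Plan

def outputPartitioning(p: Plan): String = p match {
  case Leaf(part, _)        => part
  case Exchange(part, _, _) => part
  case Sort(_, c)           => outputPartitioning(c)
}

def outputOrdering(p: Plan): Seq[String] = p match {
  case Leaf(_, ord)      => ord
  case Exchange(_, o, _) => o
  case Sort(o, _)        => o
}

// Decide holistically: per the comment's simplification, always shuffle
// when a sort is needed, and let the shuffle perform the sort.
def ensure(partitioning: String, ordering: Seq[String], child: Plan): Plan = {
  val needsShuffle = outputPartitioning(child) != partitioning
  val needsSort    = ordering.nonEmpty && outputOrdering(child) != ordering
  if (needsShuffle || needsSort) Exchange(partitioning, ordering, child)
  else child
}
```

With this shape there is never an `Exchange` feeding a separate `Sort`: the one operator satisfies both requirements, at the cost of a possibly unnecessary shuffle when only the ordering is wrong, which is the trade-off the comment says is acceptable for now.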