maropu commented on a change in pull request #32210:
URL: https://github.com/apache/spark/pull/32210#discussion_r619987605



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/joins/ShuffledHashJoinExec.scala
##########
@@ -81,11 +83,22 @@ case class ShuffledHashJoinExec(
 
   protected override def doExecute(): RDD[InternalRow] = {
     val numOutputRows = longMetric("numOutputRows")
+    val spillThreshold = getSpillThreshold
+    val inMemoryThreshold = getInMemoryThreshold
+    val streamSortPlan = getStreamSortPlan
+    val buildSortPlan = getBuildSortPlan
+    val fallbackSMJPlan = SortMergeJoinExec(leftKeys, rightKeys, joinType, condition, left, right)
+
     streamedPlan.execute().zipPartitions(buildPlan.execute()) { (streamIter, buildIter) =>
-      val hashed = buildHashedRelation(buildIter)
-      joinType match {
-        case FullOuter => fullOuterJoin(streamIter, hashed, numOutputRows)
-        case _ => join(streamIter, hashed, numOutputRows)
+      buildHashedRelation(buildIter) match {
+        case r: UnfinishedUnsafeHashedRelation =>
+          joinWithSortFallback(streamIter, buildIter, r.destructiveValues(), streamSortPlan,

Review comment:
       > For runtime, yes. The total query run-time is dominated by the last finished task runtime. Just to point it out in case, without this change, this would be task and query failure.
   
   Yea, I basically agree with making the shuffle hash join more robust (since a user may use inappropriate join hints in some cases). What I'm interested in is whether there is any faster fallback logic than the current approach.
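   For context, the fallback pattern the diff is discussing can be sketched in plain Scala. This is only an illustrative model, not Spark's actual API: `joinWithFallback`, `hashJoin`, `sortMergeJoin`, and the row representation are all hypothetical names, and the real implementation operates on iterators of `InternalRow` with spill support rather than in-memory `Seq`s.

```scala
// Illustrative sketch of the hash-join-with-sort-fallback idea:
// try to build an in-memory hash map for the build side; if it exceeds
// a threshold, fall back to a sort-based merge of both sides.
// All names here are hypothetical, not Spark's actual classes.
object JoinFallbackSketch {
  type Row = (Int, String) // (join key, payload)

  // Inner hash join: probe the stream side against a prebuilt map.
  def hashJoin(stream: Seq[Row], build: Map[Int, Seq[Row]]): Seq[(Row, Row)] =
    for {
      s <- stream
      b <- build.getOrElse(s._1, Nil)
    } yield (s, b)

  // Sort-based merge, used only when the build side is too large to hash.
  def sortMergeJoin(stream: Seq[Row], build: Seq[Row]): Seq[(Row, Row)] = {
    val sortedBuild = build.sortBy(_._1).groupBy(_._1)
    stream.sortBy(_._1).flatMap { s =>
      sortedBuild.getOrElse(s._1, Nil).map(b => (s, b))
    }
  }

  // Choose the strategy based on an in-memory row-count threshold.
  def joinWithFallback(stream: Seq[Row], build: Seq[Row],
                       inMemoryThreshold: Int): Seq[(Row, Row)] =
    if (build.size <= inMemoryThreshold)
      hashJoin(stream, build.groupBy(_._1))
    else
      sortMergeJoin(stream, build)
}
```

   Both paths produce the same join output; the point of the PR's approach is that the decision can be made at runtime per partition, so an oversized build side degrades to a sort instead of failing the task.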




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
