c21 commented on a change in pull request #32476:
URL: https://github.com/apache/spark/pull/32476#discussion_r630026354
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/joins/SortMergeJoinExec.scala
##########
@@ -431,6 +433,66 @@ case class SortMergeJoinExec(
     // Copy the streamed keys as class members so they could be used in next function call.
     val matchedKeyVars = copyKeys(ctx, streamedKeyVars)
+    // Handle the case when streamed rows has any NULL keys.
+    val handleStreamedAnyNull = joinType match {
+      case _: InnerLike =>
+        // Skip streamed row.
+        s"""
+           |$streamedRow = null;
+           |continue;
+         """.stripMargin
+      case LeftOuter | RightOuter =>
+        // Eagerly return streamed row. Only call `matches.clear()` when `matches.isEmpty()` is
+        // false, to reduce unnecessary computation.
+        s"""
+           |if (!$matches.isEmpty()) {
+           |  $matches.clear();
+           |}
+           |return false;
+         """.stripMargin
+      case x =>
+        throw new IllegalArgumentException(
+          s"SortMergeJoin.genScanner should not take $x as the JoinType")
+    }
+
+    // Handle the case when streamed keys has no match with buffered side.
+    val handleStreamedWithoutMatch = joinType match {
+      case _: InnerLike =>
+        // Skip streamed row.
+        s"$streamedRow = null;"
+      case LeftOuter | RightOuter =>
+        // Eagerly return with streamed row.
+        "return false;"
+      case x =>
+        throw new IllegalArgumentException(
+          s"SortMergeJoin.genScanner should not take $x as the JoinType")
+    }
+
+    // Generate a function to scan both streamed and buffered sides to find a match.
+    // Return whether a match is found.
+    //
+    // `streamedIter`: the iterator for streamed side.
+    // `bufferedIter`: the iterator for buffered side.
+    // `streamedRow`: the current row from streamed side.
+    //                When `streamedIter` is empty, `streamedRow` is null.
+    // `matches`: the rows from buffered side already matched with `streamedRow`.
+    //            `matches` is buffered and reused for all `streamedRow`s having same join keys.
+    //            If there is no match with `streamedRow`, `matches` is empty.
+    // `bufferedRow`: the current matched row from buffered side.
+    //
+    // The function has the following step:
+    //  - Step 1: Find the next `streamedRow` with non-null join keys.
+    //            For `streamedRow` with null join keys (`handleStreamedAnyNull`):
+    //            1. Inner join: skip the row.
+    //            2. Left/Right Outer join: clear the previous `matches` if needed, keep the row,

Review comment:
   @cloud-fan - here is my theory:

   1. Previously, some `streamedRow` (say `row_a`) has a non-empty `matches`.
   2. The current `streamedRow` (say `row_b`) has null join keys.

   `row_b` does not match `row_a` (because `row_b` has a null key and `row_a` does not), so `row_b` does not match `matches` either. It's safe to clear `matches` now, as there won't be a future `streamedRow` (say `row_c`) that matches `matches`. If such a `row_c` existed, then [`row_a`, `row_b`, `row_c`] would not be validly sorted on the join keys (the correct sort order would be [`row_a`, `row_c`, `row_b`]). So there is no such `row_c`, and it's safe to clear now.
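   To make the sort-ordering argument concrete, here is a minimal standalone Scala sketch (not part of the PR; the object and helper names are made up for illustration). It checks that, in any input sorted on the join key, rows with equal keys form a single contiguous run, so the impossible [`row_a`, `row_b`, `row_c`] order from the comment fails the check, and clearing `matches` once a differently-keyed (e.g. null-keyed) row appears is safe:

```scala
// Hypothetical illustration of the contiguity argument; None stands for a null join key.
object ContiguousKeyRuns {
  // Returns true if every key value occurs in exactly one contiguous run.
  def keysAreContiguous[K](keys: Seq[K]): Boolean = {
    // Collect the key that starts each run of equal keys.
    val runStarts = keys.foldLeft(List.empty[K]) {
      case (acc, k) if acc.headOption.contains(k) => acc // same run continues
      case (acc, k) => k :: acc                          // new run starts
    }
    // If some key starts two different runs, the input was not sorted on the key.
    runStarts.distinct.size == runStarts.size
  }

  def main(args: Array[String]): Unit = {
    // row_a (key 1) followed by row_b (null key): a valid sorted order,
    // and key 1 can never reappear, so `matches` for key 1 can be cleared.
    assert(keysAreContiguous(Seq(Some(1), None)))
    // row_a (key 1), row_b (null key), row_c (key 1 again): key 1 occurs in two
    // separate runs, so this order cannot come from a sort on the join key --
    // exactly the impossible [row_a, row_b, row_c] case described above.
    assert(!keysAreContiguous(Seq(Some(1), None, Some(1))))
  }
}
```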