cloud-fan commented on a change in pull request #35139:
URL: https://github.com/apache/spark/pull/35139#discussion_r786183789



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala
##########
@@ -1173,8 +1173,20 @@ class Dataset[T] private[sql](
       joined = resolveSelfJoinCondition(joined)
     }
 
-    implicit val tuple2Encoder: Encoder[(T, U)] =
-      ExpressionEncoder.tuple(this.exprEnc, other.exprEnc)
+    // SPARK-37829: an outer-join requires the null semantics to represent missing keys.
+    // As we might be running on DataFrames, we need a custom encoder that will properly
+    // handle null top-level Rows.
+    def nullSafe[V](exprEnc: ExpressionEncoder[V]): ExpressionEncoder[V] = {
+      if (exprEnc.clsTag.runtimeClass != classOf[Row]) {

Review comment:
       This looks a bit ugly.
   
   > I've tried simply wrapping CreateExternalRow with a null check and a number of tests started failing as they were assuming top-level rows couldn't be null.
   
   Are they UTs or end-to-end tests? If they are UTs, we can simply update the tests, because we have changed the assumption.
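For context, a short spark-shell sketch of why the Row encoder needs null handling here: a full outer `joinWith` on DataFrames produces `null` on the unmatched side, so the tuple encoder must tolerate a null top-level `Row`. The APIs below are ordinary Spark 3.x Dataset APIs; the sample data is made up for illustration.

```scala
// Assumes a running spark-shell (a SparkSession named `spark` in scope).
import spark.implicits._

val left  = Seq((1, "a"), (2, "b")).toDF("id", "l")
val right = Seq((2, "x"), (3, "y")).toDF("id", "r")

// Full outer join as a Dataset[(Row, Row)]: for id = 1 the right side is
// missing, and for id = 3 the left side is missing. Without null handling
// in the top-level Row encoder (the nullSafe wrapper in the diff above),
// deserializing those unmatched sides fails; with it, they come back as null.
val joined = left.joinWith(right, left("id") === right("id"), "full_outer")
joined.collect().foreach(println)
```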




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


