Github user bdrillard commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20010#discussion_r158724861

    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala ---
    @@ -158,11 +213,8 @@ object TypeCoercion {
         findTightestCommonType(t1, t2)
           .orElse(findWiderTypeForDecimal(t1, t2))
           .orElse(stringPromotion(t1, t2))
    -      .orElse((t1, t2) match {
    -        case (ArrayType(et1, containsNull1), ArrayType(et2, containsNull2)) =>
    -          findWiderTypeForTwo(et1, et2).map(ArrayType(_, containsNull1 || containsNull2))
    -        case _ => None
    -      })
    +      .orElse(findWiderTypeForTwoComplex(t1, t2, findWiderTypeForTwo))
    --- End diff --

It should not. `findWiderTypeForTwoComplex` will only be called when we operate over "complex" types (i.e. arrays, maps, and structs), and it will only recurse (calling `findWiderTypeForTwo`) over the child types of a complex type, so the recursive computation gets strictly narrower as it descends, eventually terminating at the leaf level of the schema.
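The termination argument above can be sketched with a simplified, hypothetical model of the type-widening recursion. Note this is not the actual Catalyst code: the `DataType` hierarchy here is a toy ADT, and the function names only mirror the ones discussed in the diff. The point it illustrates is that the "complex" step recurses only into child types, so every recursive call sees a strictly smaller schema and the recursion bottoms out at leaf (atomic) types.

```scala
// Toy model (assumed names, not Spark's real API) of the widening recursion.
sealed trait DataType
case object IntType extends DataType
case object LongType extends DataType
case class ArrayType(elementType: DataType, containsNull: Boolean) extends DataType

// Leaf-level widening: the base case of the recursion.
def findTightest(t1: DataType, t2: DataType): Option[DataType] = (t1, t2) match {
  case (a, b) if a == b                          => Some(a)
  case (IntType, LongType) | (LongType, IntType) => Some(LongType)
  case _                                         => None
}

// The complex-type step: it never calls itself on t1/t2 directly, only
// on their child types, so the input shrinks on every recursive call.
def findWiderForTwoComplex(
    t1: DataType,
    t2: DataType,
    recur: (DataType, DataType) => Option[DataType]): Option[DataType] =
  (t1, t2) match {
    case (ArrayType(et1, n1), ArrayType(et2, n2)) =>
      recur(et1, et2).map(ArrayType(_, n1 || n2))
    case _ => None
  }

def findWiderForTwo(t1: DataType, t2: DataType): Option[DataType] =
  findTightest(t1, t2).orElse(findWiderForTwoComplex(t1, t2, findWiderForTwo))
```

For example, widening `ArrayType(IntType, false)` with `ArrayType(LongType, true)` recurses once into the element types and yields `Some(ArrayType(LongType, true))`, merging the `containsNull` flags as the diff does.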