Github user mgaido91 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20010#discussion_r157794266

    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala ---
    @@ -158,11 +169,6 @@ object TypeCoercion {
         findTightestCommonType(t1, t2)
           .orElse(findWiderTypeForDecimal(t1, t2))
           .orElse(stringPromotion(t1, t2))
    -      .orElse((t1, t2) match {
    -        case (ArrayType(et1, containsNull1), ArrayType(et2, containsNull2)) =>
    -          findWiderTypeForTwo(et1, et2).map(ArrayType(_, containsNull1 || containsNull2))
    -        case _ => None
    -      })
    --- End diff --

    It makes sense, but I'd love it if your implementation in `findTightestCommonType` replicated this logic, i.e. removed the `sameType` guard and used `findWiderTypeForTwo`, in order to allow casting an array of int to an array of long. What do you think?
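For context, a minimal sketch of the recursion the comment is suggesting. This is not Spark's actual code: the `DataType`/`ArrayType` hierarchy below is a simplified stand-in (in Spark these live in `org.apache.spark.sql.types`), and `tightestAtomic` is a hypothetical placeholder for the atomic-type half of `findTightestCommonType`. The point it illustrates is recursing into array element types with the wider-type function instead of guarding on `sameType`, so that an array of int widens to an array of long:

```scala
// Simplified stand-in types (NOT Spark's real org.apache.spark.sql.types).
sealed trait DataType
case object IntType extends DataType
case object LongType extends DataType
case class ArrayType(elementType: DataType, containsNull: Boolean) extends DataType

object WideningSketch {
  // Hypothetical stand-in for the atomic-type part of findTightestCommonType.
  private def tightestAtomic(t1: DataType, t2: DataType): Option[DataType] =
    (t1, t2) match {
      case (a, b) if a == b                          => Some(a)
      case (IntType, LongType) | (LongType, IntType) => Some(LongType)
      case _                                         => None
    }

  // The suggested shape: recurse into array elements with the wider-type
  // function (no sameType guard), combining nullability with ||.
  def findWiderTypeForTwo(t1: DataType, t2: DataType): Option[DataType] =
    (t1, t2) match {
      case (ArrayType(e1, n1), ArrayType(e2, n2)) =>
        findWiderTypeForTwo(e1, e2).map(ArrayType(_, n1 || n2))
      case _ =>
        tightestAtomic(t1, t2)
    }
}
```

Under this sketch, `Array[Int]` and `Array[Long]` widen to `Array[Long]` rather than failing the `sameType` check.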