Github user mgaido91 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21028#discussion_r186953487
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala ---
    @@ -18,15 +18,50 @@ package org.apache.spark.sql.catalyst.expressions
     
     import java.util.Comparator
     
    +import scala.collection.mutable
    +
     import org.apache.spark.sql.catalyst.InternalRow
    -import org.apache.spark.sql.catalyst.analysis.TypeCheckResult
    +import org.apache.spark.sql.catalyst.analysis.{TypeCheckResult, TypeCoercion}
     import org.apache.spark.sql.catalyst.expressions.ArraySortLike.NullOrder
     import org.apache.spark.sql.catalyst.expressions.codegen._
     import org.apache.spark.sql.catalyst.util.{ArrayData, GenericArrayData, MapData, TypeUtils}
     import org.apache.spark.sql.types._
     import org.apache.spark.unsafe.array.ByteArrayMethods
     import org.apache.spark.unsafe.types.{ByteArray, UTF8String}
     
    +/**
    + * Base trait for [[BinaryExpression]]s with two arrays of the same element type and implicit
    + * casting.
    + */
    +trait BinaryArrayExpressionWithImplicitCast extends BinaryExpression
    +  with ImplicitCastInputTypes {
    +
    +  @transient protected lazy val elementType: DataType =
    +    inputTypes.head.asInstanceOf[ArrayType].elementType
    +
    +  override def inputTypes: Seq[AbstractDataType] = {
    +    (left.dataType, right.dataType) match {
    +      case (ArrayType(e1, hasNull1), ArrayType(e2, hasNull2)) =>
    +        TypeCoercion.findTightestCommonType(e1, e2) match {
    --- End diff ---
    
    Yes, I think so. What we are not supporting here are nested arrays with different data types, which is coherent with the rest of Spark's casting model. The other option is to make `findTightestCommonType` less strict about complex data type comparisons, but I think that is a more thorough change, since it would (slightly) alter the whole Spark casting model, and I am not sure this is the best place to do it, given that the main goal here is introducing a new function. If we decide to go that way, I would propose a separate PR (which I am happy to create if we agree it should be done).
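    For concreteness, here is a rough sketch of what I mean (it assumes the `TypeCoercion.findTightestCommonType(DataType, DataType): Option[DataType]` signature used in the diff; the expected outputs are what I would assume on current master, not verified results):
    
    ```scala
    import org.apache.spark.sql.catalyst.analysis.TypeCoercion
    import org.apache.spark.sql.types._
    
    object TightestCommonTypeSketch extends App {
      // Element types of two top-level arrays: numeric widening applies, so
      // ArrayType(IntegerType) vs ArrayType(LongType) coerces element-wise.
      println(TypeCoercion.findTightestCommonType(IntegerType, LongType))
      // expected: Some(LongType)
    
      // Element types of two *nested* arrays: complex types must match exactly
      // (modulo nullability), so no common type is found and the implicit cast
      // in `inputTypes` cannot resolve -- the unsupported case discussed above.
      println(TypeCoercion.findTightestCommonType(
        ArrayType(IntegerType), ArrayType(LongType)))
      // expected: None
    }
    ```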
    What do you think?

