Github user marmbrus commented on a diff in the pull request:

    https://github.com/apache/spark/pull/10435#discussion_r48500881
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/misc.scala ---
    @@ -176,3 +178,221 @@ case class Crc32(child: Expression) extends UnaryExpression with ImplicitCastInp
         })
       }
     }
    +
    +/**
    + * A function that calculates a hash value for a group of expressions.
    + *
    + * The hash value for an expression depends on its type:
    + *  - null:               0
    + *  - boolean:            1 for true, 0 for false.
    + *  - byte, short, int:   the input itself.
    + *  - long:               input XOR (input >>> 32)
    + *  - float:              java.lang.Float.floatToIntBits(input)
    + *  - double:             l = java.lang.Double.doubleToLongBits(input); l XOR (l >>> 32)
    + *  - binary:             java.util.Arrays.hashCode(input)
    + *  - array:              recursively calculate hash value for each element, and aggregate them by
    + *                        `result = result * 31 + elementHash` with an initial value `result = 0`.
    + *  - map:                recursively calculate hash value for each key-value pair, and aggregate
    + *                        them by `result += keyHash XOR valueHash`.
    + *  - struct:             similar to array, calculate hash value for each field and aggregate them.
    + *  - other type:         input.hashCode().
    + *                        e.g. calculate hash value for string type by `UTF8String.hashCode()`.
    + * Finally we aggregate the hash values for each expression by `result = result * 31 + exprHash`.
    + *
    + * This hash algorithm follows Hive's bucketing hash function, so that our bucketing function
    + * can be compatible with Hive's, e.g. we can benefit from bucketing even when the data sources
    + * are mixed with Hive tables.
    + */
    +case class Hash(children: Seq[Expression]) extends Expression {
    +
    +  override def dataType: DataType = IntegerType
    +
    +  override def foldable: Boolean = children.forall(_.foldable)
    +
    +  override def nullable: Boolean = false
    +
    +  override def eval(input: InternalRow): Any = {
    +    var result = 0
    +    for (e <- children) {
    +      val hashValue = computeHash(e.eval(input), e.dataType)
    +      result = result * 31 + hashValue
    +    }
    +    result
    +  }
    +
    +  private def computeHash(v: Any, dataType: DataType): Int = v match {
    +    case null => 0
    +    case b: Boolean => if (b) 1 else 0
    +    case b: Byte => b.toInt
    +    case s: Short => s.toInt
    +    case i: Int => i
    +    case l: Long => (l ^ (l >>> 32)).toInt
    +    case f: Float => java.lang.Float.floatToIntBits(f)
    +    case d: Double =>
    +      val b = java.lang.Double.doubleToLongBits(d)
    +      (b ^ (b >>> 32)).toInt
    +    case a: Array[Byte] => java.util.Arrays.hashCode(a)
    +    case s: UTF8String => s.toString.hashCode
    --- End diff --
    
    We have to match Hive's hashcode if we want to be able to join data Hive
    has bucketed with our own data.
    
    +1 to avoiding toString.  We should also avoid boxing and runtime type
    reflection for the hash code (which this implementation currently does).
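    
    For illustration only, here is a minimal sketch (my own, not part of this
    patch) of one way to do that: resolve a specialized hasher for each child
    from its static `dataType` once, so the per-row path no longer
    pattern-matches on the runtime class of every boxed value. Fully
    eliminating boxing would still need the codegen path, and the
    `TypeSpecializedHash`, `hasherFor`, and `hashAll` names below are
    hypothetical:
    
        import org.apache.spark.sql.types._
        import org.apache.spark.unsafe.types.UTF8String
        
        object TypeSpecializedHash {
          // Chosen once per expression from its static DataType, not per row.
          // (Recursion into array/map/struct is omitted for brevity.)
          def hasherFor(dt: DataType): Any => Int = dt match {
            case BooleanType => v => if (v.asInstanceOf[Boolean]) 1 else 0
            case ByteType    => v => v.asInstanceOf[Byte].toInt
            case ShortType   => v => v.asInstanceOf[Short].toInt
            case IntegerType => v => v.asInstanceOf[Int]
            case LongType    => v => {
              val l = v.asInstanceOf[Long]
              (l ^ (l >>> 32)).toInt
            }
            case FloatType   => v => java.lang.Float.floatToIntBits(v.asInstanceOf[Float])
            case DoubleType  => v => {
              val b = java.lang.Double.doubleToLongBits(v.asInstanceOf[Double])
              (b ^ (b >>> 32)).toInt
            }
            case BinaryType  => v => java.util.Arrays.hashCode(v.asInstanceOf[Array[Byte]])
            // Follows the patch's doc (`UTF8String.hashCode()`); whether that
            // matches Hive's string hash is exactly the open question above.
            case StringType  => v => v.asInstanceOf[UTF8String].hashCode
            case _           => v => v.hashCode
          }
        
          // Fold per-expression hashes as the doc comment describes:
          // result = result * 31 + exprHash, starting from 0; null hashes to 0.
          def hashAll(values: Seq[Any], hashers: Seq[Any => Int]): Int =
            values.zip(hashers).foldLeft(0) { case (result, (v, h)) =>
              result * 31 + (if (v == null) 0 else h(v))
            }
        }
    
    For example, `hashAll(Seq(3, true), Seq(hasherFor(IntegerType),
    hasherFor(BooleanType)))` gives `(0 * 31 + 3) * 31 + 1 = 94`, the same
    folding the patch's `eval` performs per row.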

