GitHub user nongli commented on a diff in the pull request:

    https://github.com/apache/spark/pull/10435#discussion_r48778819
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/misc.scala ---
    @@ -177,3 +179,44 @@ case class Crc32(child: Expression) extends UnaryExpression with ImplicitCastInp
         })
       }
     }
    +
    +/**
    + * A function that calculates a hash value for a group of expressions.
    + *
    + * Internally this function will write the arguments into an [[UnsafeRow]], and calculate the
    + * hash code of the unsafe row using a murmur3 hasher with a seed.
    + * We should use this hash function for both shuffle and bucketing, so that we can guarantee
    + * that shuffle and bucketing have the same data distribution.
    + */
    +case class Murmur3Hash(children: Seq[Expression], seed: Int) extends Expression {
    +  def this(arguments: Seq[Expression]) = this(arguments, 42)
    --- End diff ---
    
    I think this is fine.
    
    Can you file a follow-up JIRA to look at this again? I think we want to
    remove the projection to unsafe row soon (before we ship this and persist
    metadata that way). Ideally this should be decoupled from unsafe row. For
    example, if the row is (int, double, string), the generated hash function
    should be something like:
    
    int hash = seed;
    hash = murmur3(getInt(0), hash);
    hash = murmur3(getDouble(1), hash);
    hash = murmur3(getString(2), hash);
    return hash;
    
    This is likely not the same hash value as the one currently computed, so we
    can't defer this for too long.

