Github user sethah commented on a diff in the pull request:

    https://github.com/apache/spark/pull/15874#discussion_r88753014

--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/MinHashLSH.scala ---
@@ -31,36 +31,40 @@ import org.apache.spark.sql.types.StructType
 /**
  * :: Experimental ::
  *
- * Model produced by [[MinHash]], where multiple hash functions are stored. Each hash function is
- * a perfect hash function:
- *   `h_i(x) = (x * k_i mod prime) mod numEntries`
- * where `k_i` is the i-th coefficient, and both `x` and `k_i` are from `Z_prime^*`
+ * Model produced by [[MinHashLSH]], where multiple hash functions are stored. Each hash function is
+ * picked from a hash family for a specific set `S` with cardinality equal to `numEntries`:
+ *   `h_i(x) = ((x \cdot a_i + b_i) \mod prime) \mod numEntries`
+ *
+ * This hash family is approximately min-wise independent according to the reference.
  *
  * Reference:
- * [[https://en.wikipedia.org/wiki/Perfect_hash_function Wikipedia on Perfect Hash Function]]
+ * [[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.121.8215&rep=rep1&type=pdf Min-wise
+ * independent permutations]]
  *
  * @param numEntries The number of entries of the hash functions.
- * @param randCoefficients An array of random coefficients, each used by one hash function.
+ * @param randCoefficients Pairs of random coefficients. Each pair is used by one hash function.
 */
 @Experimental
 @Since("2.1.0")
-class MinHashModel private[ml] (
+class MinHashLSHModel private[ml](
     override val uid: String,
-    @Since("2.1.0") val numEntries: Int,
-    @Since("2.1.0") val randCoefficients: Array[Int])
-  extends LSHModel[MinHashModel] {
+    private[ml] val numEntries: Int,
+    private[ml] val randCoefficients: Array[(Int, Int)])
+  extends LSHModel[MinHashLSHModel] {

   @Since("2.1.0")
-  override protected[ml] val hashFunction: Vector => Vector = {
-    elems: Vector =>
+  override protected[ml] val hashFunction: Vector => Array[Vector] = {
+    elems: Vector => {
       require(elems.numNonzeros > 0, "Must have at least 1 non zero entry.")
       val elemsList = elems.toSparse.indices.toList
-      val hashValues = randCoefficients.map({ randCoefficient: Int =>
-        elemsList.map({elem: Int =>
-          (1 + elem) * randCoefficient.toLong % MinHash.prime % numEntries
-        }).min.toDouble
+      val hashValues = randCoefficients.map({ case (a: Int, b: Int) =>
+        elemsList.map { elem: Int =>
+          ((1 + elem) * a + b) % MinHashLSH.HASH_PRIME % numEntries
--- End diff --

I'm still looking at it, but I don't think this is correct. Why do we tack on `% numEntries` here? Could you point me to a resource? The paper linked above (and many other references that I've seen) use `(ax + b) mod p`, where `p` is a large prime. The formula given in the wiki article on [perfect hashing functions](https://en.wikipedia.org/wiki/Perfect_hash_function) is `(kx mod p) mod n`, but that's not the full picture: it references a paper which uses that formula only as the first stage of a multilevel scheme.

If it's helpful, [this](http://cs.brown.edu/courses/cs253/papers/nearduplicate.pdf) seems to be the original paper on MinHash. The author mentions that

````
This is further explored in [5] where it is shown that random linear transformations are likely to suffice in practice.
````

Reference 5 is [here](http://www.combinatorics.org/ojs/index.php/eljc/article/download/v7i1r26/pdf), which seems to be a more concise version of your reference. In that paper, they describe `(ax + b) mod p`.
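For comparison, here is a minimal sketch of the two formulations being discussed. This is not the Spark implementation: the function name, the coefficient handling, and the `PRIME` constant are all illustrative. It computes one MinHash value per `(a, b)` pair with the plain `(ax + b) mod p` family from the papers, with an optional extra `mod numEntries` matching the diff:

```python
import random

PRIME = 2038074743  # an illustrative large prime, not Spark's actual constant


def min_hash(indices, coefficients, num_entries=None):
    """Return one MinHash value per (a, b) coefficient pair.

    `indices` are the non-zero indices of the sparse input vector,
    shifted by 1 as in the diff so that index 0 does not hash to 0.
    """
    values = []
    for a, b in coefficients:
        hashed = [((1 + x) * a + b) % PRIME for x in indices]
        if num_entries is not None:
            # The variant questioned above: reduce into [0, num_entries).
            hashed = [h % num_entries for h in hashed]
        values.append(min(hashed))
    return values


random.seed(42)
coeffs = [(random.randint(1, PRIME - 1), random.randint(0, PRIME - 1))
          for _ in range(3)]
print(min_hash([0, 2, 5], coeffs))                  # plain (ax + b) mod p
print(min_hash([0, 2, 5], coeffs, num_entries=16))  # extra mod numEntries
```

Either way the signature is the per-function minimum over the hashed indices; the question is only whether the final `mod numEntries` reduction preserves the min-wise independence argument from the cited papers.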