Github user Yunni commented on a diff in the pull request:
https://github.com/apache/spark/pull/15795#discussion_r90736839
--- Diff: docs/ml-features.md ---
@@ -1478,3 +1478,139 @@ for more details on the API.
{% include_example python/ml/chisq_selector_example.py %}
</div>
</div>
+
+# Locality Sensitive Hashing
+[Locality Sensitive Hashing (LSH)](https://en.wikipedia.org/wiki/Locality-sensitive_hashing) is an important class of hashing techniques, which is commonly used in clustering, approximate nearest neighbor search, and outlier detection with large datasets.
+
+The general idea of LSH is to use a family of functions ("LSH families") to hash data points into buckets, so that data points which are close to each other are in the same buckets with high probability, while data points that are far away from each other are very likely in different buckets. A formal definition of an LSH family is as follows:
+
+In a metric space `(M, d)`, where `M` is a set and `d` is a distance
function on `M`, an LSH family is a family of functions `h` that satisfy the
following properties:
+`\[
+\forall p, q \in M,\\
+d(p,q) < r1 \Rightarrow Pr(h(p)=h(q)) \geq p1\\
+d(p,q) > r2 \Rightarrow Pr(h(p)=h(q)) \leq p2
+\]`
+This LSH family is called `(r1, r2, p1, p2)`-sensitive.
+
+In this section, we call a pair of input features a false positive if the two features are hashed into the same bucket but are far away in distance, and a false negative if the two features are close in distance but are hashed into different buckets.
+
+## Bucketed Random Projection for Euclidean Distance
+
+[Bucketed Random Projection](https://en.wikipedia.org/wiki/Locality-sensitive_hashing#Stable_distributions) is the LSH family in `spark.ml` for Euclidean distance. The Euclidean distance is defined as follows:
+`\[
+d(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_i (x_i - y_i)^2}
+\]`
+Its LSH family projects feature vectors onto a random unit vector and divides the projected results into hash buckets:
+`\[
+h(\mathbf{x}) = \lfloor \frac{\mathbf{x} \cdot \mathbf{v}}{r} \rfloor
+\]`
+where `v` is a normalized random unit vector and `r` is a user-defined bucket length. The bucket length can be used to control the average size of hash buckets: a larger bucket length means a higher probability for features to be hashed into the same bucket.
+
+Bucketed Random Projection accepts arbitrary vectors as input features, and supports both sparse and dense vectors.
+
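The hash function above can be sketched in a few lines of plain Scala. This is an illustrative sketch with assumed values (the unit vector `v` and bucket length are fixed here, whereas `spark.ml` draws `v` at random), not the `spark.ml` implementation:

```scala
// Minimal sketch of a bucketed random projection hash (illustrative only):
// project x onto a unit vector v, then floor-divide by the bucket length r.
def brpHash(x: Array[Double], v: Array[Double], r: Double): Long = {
  val dot = x.zip(v).map { case (xi, vi) => xi * vi }.sum
  math.floor(dot / r).toLong
}

// A fixed 2-D unit vector; in spark.ml this would be drawn at random.
val v = Array(1.0 / math.sqrt(2.0), 1.0 / math.sqrt(2.0))
val bucketA = brpHash(Array(1.0, 1.0), v, 1.0)
val bucketB = brpHash(Array(1.1, 0.9), v, 1.0)
// Nearby points tend to land in the same bucket: here both hash to bucket 1.
```

With a larger `r`, more distant points would also share buckets, which is the trade-off the bucket length controls.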
+<div class="codetabs">
+<div data-lang="scala" markdown="1">
+
+Refer to the [BucketedRandomProjectionLSH Scala docs](api/scala/index.html#org.apache.spark.ml.feature.BucketedRandomProjectionLSH) for more details on the API.
+
+{% include_example scala/org/apache/spark/examples/ml/BucketedRandomProjectionLSHExample.scala %}
+</div>
+
+<div data-lang="java" markdown="1">
+
+Refer to the [BucketedRandomProjectionLSH Java docs](api/java/org/apache/spark/ml/feature/BucketedRandomProjectionLSH.html) for more details on the API.
+
+{% include_example java/org/apache/spark/examples/ml/JavaBucketedRandomProjectionLSHExample.java %}
+</div>
+</div>
+
+## MinHash for Jaccard Distance
+[MinHash](https://en.wikipedia.org/wiki/MinHash) is the LSH family in `spark.ml` for Jaccard distance, where input features are sets of natural numbers. The Jaccard distance of two sets is defined by the cardinality of their intersection and union:
+`\[
+d(\mathbf{A}, \mathbf{B}) = 1 - \frac{|\mathbf{A} \cap \mathbf{B}|}{|\mathbf{A} \cup \mathbf{B}|}
+\]`
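As a quick check of the formula above, here is an illustrative plain-Scala computation (the sets are made up for the example):

```scala
// Jaccard distance of two small sets: d = 1 - |A ∩ B| / |A ∪ B|.
def jaccardDistance(a: Set[Int], b: Set[Int]): Double =
  1.0 - a.intersect(b).size.toDouble / a.union(b).size.toDouble

// Intersection {2, 3} has size 2, union {1, 2, 3, 4} has size 4: d = 0.5.
val d = jaccardDistance(Set(1, 2, 3), Set(2, 3, 4))
```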
+As its LSH family, MinHash applies a random hash function `g` to each element in the set and takes the minimum of all hashed values:
+`\[
+h(\mathbf{A}) = \min_{a \in \mathbf{A}}(g(a))
+\]`
+
+The input sets for MinHash are represented as binary vectors, where the vector indices represent the elements themselves and the non-zero values in the vector represent the presence of that element in the set. While both dense and sparse vectors are supported, sparse vectors are typically recommended for efficiency. For example, `Vectors.sparse(10, Seq((2, 1.0), (3, 1.0), (5, 1.0)))` means there are 10 elements in the space, and this set contains elements 2, 3, and 5. All non-zero values are treated as binary "1" values.
+
+**Note:** Empty sets cannot be transformed by MinHash, which means any input vector must have at least one non-zero entry.
+
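The MinHash function above can be sketched in plain Scala. The affine hash `g` and its constants below are assumptions for illustration, not the constants `spark.ml` uses:

```scala
// Minimal MinHash sketch (illustrative only):
// g(a) = (a * coeffA + coeffB) mod prime, h(A) = min over a in A of g(a).
val prime = 2038074743L  // an arbitrary large prime, assumed for the example
def minHash(set: Set[Int], coeffA: Long, coeffB: Long): Long =
  set.map(a => (a * coeffA + coeffB) % prime).min

// The set {2, 3, 5} from the sparse-vector example above.
val sig = minHash(Set(2, 3, 5), coeffA = 1718471L, coeffB = 42L)
```

Two sets with large Jaccard similarity share many elements, so with high probability the same element attains the minimum, giving equal hash values.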
+<div class="codetabs">
+<div data-lang="scala" markdown="1">
+
+Refer to the [MinHashLSH Scala docs](api/scala/index.html#org.apache.spark.ml.feature.MinHashLSH) for more details on the API.
+
+{% include_example scala/org/apache/spark/examples/ml/MinHashLSHExample.scala %}
+</div>
+
+<div data-lang="java" markdown="1">
+
+Refer to the [MinHashLSH Java docs](api/java/org/apache/spark/ml/feature/MinHashLSH.html) for more details on the API.
+
+{% include_example java/org/apache/spark/examples/ml/JavaMinHashLSHExample.java %}
+</div>
+</div>
+
+## Feature Transformation
+Feature transformation is the basic functionality of LSH: it adds the hash values as a new column. Users can specify the input and output column names by setting `inputCol` and `outputCol`. LSH in `spark.ml` also supports multiple LSH hash tables; users can specify the number of hash tables by setting `numHashTables`.
+
+The output column has type `Array[Vector]`, where the dimension of the array equals `numHashTables` and the dimensions of the vectors are currently set to 1.
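The output shape can be illustrated with a hedged plain-Scala sketch (NOT the `spark.ml` API; the hash function and its constants are assumptions): each hash table applies its own hash function, so the result for one input is an array of `numHashTables` one-dimensional vectors.

```scala
// Each (a, b) pair parameterizes one hash table's function; the output is
// one 1-dimensional "vector" (modeled here as Array[Double]) per table.
val prime = 2038074743L  // assumed constant for the illustrative hash
def hashTables(set: Set[Int], tableParams: Seq[(Long, Long)]): Array[Array[Double]] =
  tableParams.map { case (a, b) =>
    Array(set.map(x => (x * a + b) % prime).min.toDouble)
  }.toArray

// Two hash tables, i.e. numHashTables = 2.
val out = hashTables(Set(2, 3, 5), Seq((1718471L, 42L), (988297L, 7L)))
// out.length == 2, and each out(i) has dimension 1.
```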
--- End diff --
Done.