Github user Yunni commented on a diff in the pull request:

    https://github.com/apache/spark/pull/15795#discussion_r89711233
  
    --- Diff: docs/ml-features.md ---
    @@ -1396,3 +1396,149 @@ for more details on the API.
     {% include_example python/ml/chisq_selector_example.py %}
     </div>
     </div>
    +
    +# Locality Sensitive Hashing
    +[Locality Sensitive Hashing (LSH)](https://en.wikipedia.org/wiki/Locality-sensitive_hashing) is an important class of hashing techniques, which is commonly used in clustering and outlier detection with large datasets.
    +
    +The general idea of LSH is to use a family of functions (we call them LSH families) to hash data points into buckets, so that data points that are close to each other are in the same buckets with high probability, while data points that are far away from each other are very likely in different buckets. A formal definition of an LSH family is as follows:
    +
    +In a metric space `(M, d)`, an LSH family is a family of functions `h` 
that satisfy the following properties:
    +`\[
    +\forall p, q \in M,\\
    +d(p,q) < r1 \Rightarrow Pr(h(p)=h(q)) \geq p1\\
    +d(p,q) > r2 \Rightarrow Pr(h(p)=h(q)) \leq p2
    +\]`
    +This LSH family is called `(r1, r2, p1, p2)`-sensitive.
    +
    +In this section, we call a pair of input features a false positive if the two features are hashed into the same bucket even though they are far apart in distance, and we call a pair of features a false negative if they are close in distance but are not hashed into the same bucket.
    +
    +## Random Projection for Euclidean Distance
    +**Note:** This is different from the [Random Projection for cosine distance](https://en.wikipedia.org/wiki/Locality-sensitive_hashing#Random_projection).
    +
    +[Random 
Projection](https://en.wikipedia.org/wiki/Locality-sensitive_hashing#Stable_distributions)
 is the LSH family in `spark.ml` for Euclidean distance. The Euclidean distance 
is defined as follows:
    +`\[
    +d(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_i (x_i - y_i)^2}
    +\]`
    +Its LSH family projects feature vectors onto a random unit vector and divides the projected results into hash buckets:
    +`\[
    +h(\mathbf{x}) = \lfloor \frac{\mathbf{x} \cdot \mathbf{v}}{r} \rfloor
    +\]`
    +where `v` is a random unit vector and `r` is a user-defined bucket length. The bucket length can be used to control the average size of the hash buckets. A larger bucket length means a higher probability for two features to be hashed into the same bucket.
    +
    +The input features in Euclidean space are represented as vectors. Both sparse and dense vectors are supported.
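    +
    +As a small illustration of the formula above (not the `spark.ml` API), the following sketch computes the bucket index by hand for one feature vector; the unit vector `v` and bucket length `r` are hypothetical values chosen for the example:
    +
    +```scala
    +import org.apache.spark.ml.linalg.{Vector, Vectors}
    +
    +// Illustrative only: compute h(x) = floor((x . v) / r) by hand.
    +// In spark.ml the random unit vector is drawn internally and `r` is a
    +// user-set parameter; the values below are made up for this sketch.
    +val v: Vector = Vectors.dense(0.6, 0.8)   // a random unit vector
    +val r: Double = 2.0                       // bucket length
    +val x: Vector = Vectors.dense(1.0, 3.0)   // an input feature vector
    +
    +val dot = (0 until x.size).map(i => x(i) * v(i)).sum   // x . v = 3.0
    +val bucket = math.floor(dot / r).toLong                // floor(3.0 / 2.0) = 1
    +```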
    +
    +<div class="codetabs">
    +<div data-lang="scala" markdown="1">
    +
    +Refer to the [RandomProjection Scala 
docs](api/scala/index.html#org.apache.spark.ml.feature.RandomProjection)
    +for more details on the API.
    +
    +{% include_example 
scala/org/apache/spark/examples/ml/RandomProjectionExample.scala %}
    +</div>
    +
    +<div data-lang="java" markdown="1">
    +
    +Refer to the [RandomProjection Java 
docs](api/java/org/apache/spark/ml/feature/RandomProjection.html)
    +for more details on the API.
    +
    +{% include_example 
java/org/apache/spark/examples/ml/JavaRandomProjectionExample.java %}
    +</div>
    +</div>
    +
    +## MinHash for Jaccard Distance
    +[MinHash](https://en.wikipedia.org/wiki/MinHash) is the LSH family in `spark.ml` for Jaccard distance, where input features are sets of natural numbers. The Jaccard distance of two sets is defined by the cardinality of their intersection and union:
    +`\[
    +d(\mathbf{A}, \mathbf{B}) = 1 - \frac{|\mathbf{A} \cap 
\mathbf{B}|}{|\mathbf{A} \cup \mathbf{B}|}
    +\]`
    +As its LSH family, MinHash applies a random perfect hash function `g` to each element in the set and takes the minimum of all hashed values:
    +`\[
    +h(\mathbf{A}) = \min_{a \in \mathbf{A}}(g(a))
    +\]`
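    +
    +As a small worked example of the two formulas above (not the `spark.ml` implementation), the sketch below computes the Jaccard distance and one MinHash value for two small sets; the hash function `g` is a hypothetical stand-in for the random perfect hash function:
    +
    +```scala
    +// Illustrative only: Jaccard distance and a single MinHash value.
    +val A = Set(2, 3, 5)
    +val B = Set(3, 5, 7, 9)
    +
    +// |A intersect B| = 2, |A union B| = 5  =>  d(A, B) = 1 - 2/5 = 0.6
    +val jaccardDistance = 1.0 - A.intersect(B).size.toDouble / A.union(B).size
    +
    +def g(a: Int): Int = (13 * a + 7) % 101   // hypothetical perfect hash on this small space
    +val minHashA = A.map(g).min               // h(A): minimum of the hashed elements
    +val minHashB = B.map(g).min               // h(B)
    +```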
    +
    +Input sets for MinHash are represented as vectors whose dimension equals the total number of elements in the space. Each dimension of the vector represents the status of one element: a zero value means the element is not in the set; a non-zero value means the set contains the corresponding element. For example, `Vectors.sparse(10, Seq((2, 1.0), (3, 1.0), (5, 1.0)))` means there are 10 elements in the space, and this set contains elements 2, 3 and 5.
    +
    +**Note:** Empty sets cannot be transformed by MinHash, which means any input vector must have at least 1 non-zero entry.
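    +
    +The following sketch shows how such input vectors could be assembled into a DataFrame for MinHash; it is only an illustration, and the column names "id" and "features" are made up for the example:
    +
    +```scala
    +import org.apache.spark.ml.linalg.Vectors
    +import org.apache.spark.sql.SparkSession
    +
    +val spark = SparkSession.builder().appName("MinHashInputSketch").getOrCreate()
    +
    +// Two sets over a space of 10 elements, encoded as sparse vectors:
    +// {2, 3, 5} and {0, 2, 9}.
    +val df = spark.createDataFrame(Seq(
    +  (0, Vectors.sparse(10, Seq((2, 1.0), (3, 1.0), (5, 1.0)))),
    +  (1, Vectors.sparse(10, Seq((0, 1.0), (2, 1.0), (9, 1.0))))
    +)).toDF("id", "features")
    +```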
    +
    +<div class="codetabs">
    +<div data-lang="scala" markdown="1">
    +
    +Refer to the [MinHash Scala 
docs](api/scala/index.html#org.apache.spark.ml.feature.MinHash)
    +for more details on the API.
    +
    +{% include_example scala/org/apache/spark/examples/ml/MinHashExample.scala 
%}
    +</div>
    +
    +<div data-lang="java" markdown="1">
    +
    +Refer to the [MinHash Java 
docs](api/java/org/apache/spark/ml/feature/MinHash.html)
    +for more details on the API.
    +
    +{% include_example 
java/org/apache/spark/examples/ml/JavaMinHashExample.java %}
    +</div>
    +</div>
    +
    +## Feature Transformation
    +Feature transformation is the basic functionality for adding the hash results as a new column. Users can specify the input and output column names by setting `inputCol` and `outputCol`. In `spark.ml`, all LSH families can also use multiple LSH hash functions; the number of hash functions is set via `outputDim`.
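    +
    +For example, these parameters might be configured as follows. This is only a sketch: the setters are assumed to follow the usual `spark.ml` `set<Param>` convention, and `setBucketLength` is assumed to set the bucket length `r` of `RandomProjection`:
    +
    +```scala
    +import org.apache.spark.ml.feature.RandomProjection
    +
    +// A sketch, assuming standard spark.ml setter names for the parameters
    +// described above.
    +val rp = new RandomProjection()
    +  .setInputCol("features")     // input column of feature vectors
    +  .setOutputCol("hashes")      // output column holding the hash values
    +  .setOutputDim(3)             // number of LSH hash functions
    +  .setBucketLength(2.0)        // bucket length r for Euclidean LSH
    +
    +// Given a DataFrame `df` with a "features" column of vectors:
    +// val model = rp.fit(df)            // fit to draw the random vectors
    +// model.transform(df).show()        // adds the "hashes" column to df
    +```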
    +
    +<div class="codetabs">
    +<div data-lang="scala" markdown="1">
    +
    +{% include_example 
scala/org/apache/spark/examples/ml/LSHTransformationExample.scala %}
    +</div>
    +
    +<div data-lang="java" markdown="1">
    +
    +{% include_example 
java/org/apache/spark/examples/ml/JavaLSHTransformationExample.java %}
    +</div>
    +</div>
    +
    +When multiple hash functions are picked, users can apply [amplification](https://en.wikipedia.org/wiki/Locality-sensitive_hashing#Amplification) to trade off between the false positive and false negative rates (the two modes are illustrated in the sketch below):
    +* AND-amplification: Two input vectors are defined to be in the same bucket only if ALL of their hash values match. This decreases the false positive rate but increases the false negative rate.
    +* OR-amplification: Two input vectors are defined to be in the same bucket as long as ANY of their hash values match. This increases the false positive rate but decreases the false negative rate.
    +
    +Approximate nearest neighbor and approximate similarity join use 
OR-amplification.
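    +
    +The sketch below illustrates the difference on precomputed hash values; it is not the `spark.ml` implementation, just the two matching rules applied by hand:
    +
    +```scala
    +// Illustrative only: AND- vs OR-amplification over three hash values.
    +val hashesA = Array(1L, 4L, 7L)
    +val hashesB = Array(1L, 5L, 7L)
    +
    +// AND-amplification: same bucket only if ALL hash values match.
    +val sameBucketAnd = hashesA.zip(hashesB).forall { case (a, b) => a == b }  // false
    +
    +// OR-amplification: same bucket if ANY hash value matches.
    +val sameBucketOr = hashesA.zip(hashesB).exists { case (a, b) => a == b }   // true
    +```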
    +
    +## Approximate Similarity Join
    +Approximate similarity join takes two datasets and approximately returns pairs of rows whose distance is smaller than a user-defined threshold. Approximate similarity join supports both joining two different datasets and self-joining.
    +
    +Approximate similarity join allows users to cache the transformed columns when necessary: if the `outputCol` is missing, the method will transform the data; if the `outputCol` exists, it will be used directly.
    +
    +A distance column is added to the output dataset of approximate similarity join to show the distance between each pair of returned rows.
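    +
    +A minimal sketch of how the join might be invoked is shown below. The column names, the example data, and the assumption that the join is exposed as `approxSimilarityJoin(datasetA, datasetB, threshold)` on a fitted LSH model are for illustration only; refer to the examples below for the actual usage:
    +
    +```scala
    +import org.apache.spark.ml.feature.RandomProjection
    +import org.apache.spark.ml.linalg.Vectors
    +import org.apache.spark.sql.SparkSession
    +
    +val spark = SparkSession.builder().appName("ApproxSimilarityJoinSketch").getOrCreate()
    +
    +val dfA = spark.createDataFrame(Seq(
    +  (0, Vectors.dense(1.0, 1.0)), (1, Vectors.dense(1.0, -1.0))
    +)).toDF("id", "features")
    +val dfB = spark.createDataFrame(Seq(
    +  (2, Vectors.dense(1.0, 0.9)), (3, Vectors.dense(-1.0, -1.0))
    +)).toDF("id", "features")
    +
    +val model = new RandomProjection()
    +  .setInputCol("features").setOutputCol("hashes")
    +  .setOutputDim(2).setBucketLength(2.0)
    +  .fit(dfA)
    +
    +// Row pairs from dfA x dfB with Euclidean distance below 1.5,
    +// plus a distance column.
    +model.approxSimilarityJoin(dfA, dfB, 1.5).show()
    +
    +// Self-join: match dfA against itself with the same threshold.
    +model.approxSimilarityJoin(dfA, dfA, 1.5).show()
    +```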
    +
    +<div class="codetabs">
    +<div data-lang="scala" markdown="1">
    +
    +{% include_example 
scala/org/apache/spark/examples/ml/ApproxSimilarityJoinExample.scala %}
    +</div>
    +
    +<div data-lang="java" markdown="1">
    +
    +{% include_example 
java/org/apache/spark/examples/ml/JavaApproxSimilarityJoinExample.java %}
    +</div>
    +</div>
    +
    +## Approximate Nearest Neighbor Search
    +Approximate nearest neighbor search takes a dataset and a key, and approximately returns a number of rows in the dataset that are closest to the key. The number of rows to return is defined by the user.
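    +
    +A minimal sketch, continuing from the similarity-join sketch above (reusing the fitted `model` and the DataFrame `dfA`); the method name `approxNearestNeighbors(dataset, key, numNearestNeighbors)` is an assumption for illustration:
    +
    +```scala
    +import org.apache.spark.ml.linalg.Vectors
    +
    +// The key is a feature vector in the same space as dfA.
    +val key = Vectors.dense(1.0, 0.0)
    +
    +// Approximately return the 2 rows of dfA closest to `key` in Euclidean
    +// distance, together with a distance column.
    +model.approxNearestNeighbors(dfA, key, 2).show()
    +```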
    --- End diff --
    
    Done.

