[
https://issues.apache.org/jira/browse/FLINK-2131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14607907#comment-14607907
]
ASF GitHub Bot commented on FLINK-2131:
---------------------------------------
Github user sachingoel0101 commented on the pull request:
https://github.com/apache/flink/pull/757#issuecomment-117047575
Hi @thvasilo, thanks for taking the time to go through it.
Consider for example a probability distribution P(X_0) = 0.2, P(X_1) = 0.3,
P(X_2) = 0.5
To sample an element from X_0, X_1 and X_2, we can generate a random
number, but we need to map intervals of real numbers to the values X_0, X_1
and X_2. This is what the discreteSampler does.
It forms the cumulative distribution [0.2, 0.5, 1.0] and then, if the
generated random number falls in [0, 0.2), we pick X_0, and so on.
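The interval-mapping idea described above can be sketched roughly as follows (an illustrative sketch in Python, not the actual Flink discreteSampler; the function name and signature are hypothetical):

```python
import random
from bisect import bisect_right

def discrete_sample(values, probabilities, rng=random.random):
    """Sample one element according to `probabilities` by inverting the CDF.

    A hypothetical sketch of the interval-to-value mapping, not the
    Flink implementation.
    """
    # Build the cumulative distribution, e.g. [0.2, 0.3, 0.5] -> [0.2, 0.5, 1.0]
    cdf = []
    total = 0.0
    for p in probabilities:
        total += p
        cdf.append(total)
    # Draw a uniform number in [0, 1) and locate the interval it falls into:
    # [0, 0.2) -> values[0], [0.2, 0.5) -> values[1], [0.5, 1.0) -> values[2]
    u = rng()
    return values[bisect_right(cdf, u)]
```

For instance, a draw of 0.25 lands in [0.2, 0.5) and so selects X_1.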
> Add Initialization schemes for K-means clustering
> -------------------------------------------------
>
> Key: FLINK-2131
> URL: https://issues.apache.org/jira/browse/FLINK-2131
> Project: Flink
> Issue Type: Task
> Components: Machine Learning Library
> Reporter: Sachin Goel
> Assignee: Sachin Goel
>
> The Lloyd's [KMeans] algorithm takes initial centroids as its input. However,
> if the user doesn't provide the initial centers, they may ask for a
> particular initialization scheme to be followed. The most commonly used
> schemes are:
> 1. Random initialization: Self-explanatory
> 2. kmeans++ initialization: http://ilpubs.stanford.edu:8090/778/1/2006-13.pdf
> 3. kmeans|| : http://theory.stanford.edu/~sergei/papers/vldb12-kmpar.pdf
> For very large data sets, or for large values of k, the kmeans|| method is
> preferred, as it provides the same approximation guarantees as kmeans++
> while requiring fewer passes over the input data.
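The kmeans++ seeding referenced in the quoted issue picks the first center uniformly at random and each subsequent center with probability proportional to its squared distance from the nearest already-chosen center (D^2 weighting). A minimal sketch of that idea, assuming points given as coordinate tuples (the function name is hypothetical; this is not the Flink ML implementation):

```python
import random

def kmeans_pp_init(points, k, rng=random.Random(0)):
    """kmeans++ seeding sketch (Arthur & Vassilvitskii):
    D^2-weighted sampling of the next center. Illustrative only."""
    centers = [rng.choice(points)]
    while len(centers) < k:
        # Squared distance from each point to its nearest chosen center
        d2 = [min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers)
              for p in points]
        total = sum(d2)
        # Sample the next center with probability proportional to d2
        u = rng.random() * total
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if u <= acc:
                centers.append(p)
                break
    return centers
```

kmeans|| follows the same weighting but oversamples several candidates per pass, which is what lets it get by with far fewer passes over a distributed data set.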
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)