Hi Carter,

In Spark 1.0 there will be an implementation of k-means available as part
of MLlib.  You can see the documentation for it below (at this staging link
until 1.0 is fully released).

https://people.apache.org/~pwendell/spark-1.0.0-rc9-docs/mllib-clustering.html

Maybe diving into the source here will help get you started?
https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/mllib/clustering/KMeans.scala
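For a feel of the API, training and using a model looks roughly like this
(a minimal sketch against the 1.0 MLlib API; it assumes sc is your
SparkContext, and the input path, k, and iteration count are just
placeholder values):

import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

// Load and parse whitespace-separated numeric features, one point per line.
val data = sc.textFile("data/kmeans_data.txt")
val parsedData = data.map(s => Vectors.dense(s.split(' ').map(_.toDouble)))

// Cluster the data into 2 classes using at most 20 iterations.
val clusters = KMeans.train(parsedData, 2, 20)

// Evaluate the clustering via the within-set sum of squared errors.
val WSSSE = clusters.computeCost(parsedData)
println("Within Set Sum of Squared Errors = " + WSSSE)

// Assign a new point (here assumed 2-dimensional) to its nearest center.
val clusterId = clusters.predict(Vectors.dense(0.5, 0.5))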

Cheers,
Andrew


On Tue, May 27, 2014 at 4:10 AM, Carter <gyz...@hotmail.com> wrote:

> Any suggestion is very much appreciated.
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/K-nearest-neighbors-search-in-Spark-tp6393p6421.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>