Github user mengxr commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1311#discussion_r14696372
  
    --- Diff: docs/mllib-clustering.md ---
    @@ -69,7 +69,55 @@ println("Within Set Sum of Squared Errors = " + WSSSE)
     All of MLlib's methods use Java-friendly types, so you can import and call them there the same
     way you do in Scala. The only caveat is that the methods take Scala RDD objects, while the
     Spark Java API uses a separate `JavaRDD` class. You can convert a Java RDD to a Scala one by
    -calling `.rdd()` on your `JavaRDD` object.
    +calling `.rdd()` on your `JavaRDD` object. A standalone application example
    +equivalent to the provided example in Scala is given below:
    +
    +{% highlight java %}
    +import org.apache.spark.api.java.*;
    +import org.apache.spark.SparkConf;
    +import org.apache.spark.api.java.function.Function;
    +import org.apache.spark.mllib.clustering.KMeans;
    +import org.apache.spark.mllib.clustering.KMeansModel;
    +import org.apache.spark.mllib.linalg.Vectors;
    +import org.apache.spark.mllib.linalg.Vector;
    +
    +public class Classifier {
    +    public static void main( String[] args ) {
    +        SparkConf conf = new SparkConf().setAppName("K-means Example");
    +        JavaSparkContext sc = new JavaSparkContext(conf);
    +
    +        // Load and parse data
    +        String path = "{SPARK_HOME}/data/kmeans_data.txt";
    +        JavaRDD<String> data = sc.textFile(path);
    +        JavaRDD<Vector> parsedData = data.map(
    +            new Function<String, Vector>() {
    +                public Vector call( String s ) {
    +                    String[] sarray = s.split(" ");
    +                    double[] values = new double[sarray.length];
    +                    for (int i = 0; i < sarray.length; i++)
    +                        values[i] = Double.parseDouble(sarray[i]);
    +                    return Vectors.dense(values);
    +                }
    +            }
    +        );
    +
    +        // Cluster the data into two classes using KMeans
    +        int numClusters = 2;
    +        int numIterations = 20;
    +        KMeansModel clusters = KMeans.train(JavaRDD.toRDD(parsedData), numClusters, numIterations);
    +
    +        // Evaluate clustering by computing Within Set Sum of Squared Errors
    +        double WSSSE = clusters.computeCost(JavaRDD.toRDD(parsedData));
    +        System.out.println("Within Set Sum of Squared Errors = " + WSSSE);
    +    }
    +}
    +{% endhighlight %}
    +
    +To run the above standalone application using the Spark framework, make
    +sure that you follow the instructions provided in the [Standalone
    +Applications](quick-start.html) section of the quick-start guide. In addition, you
    +should include in your **pom.xml** file both *spark-core_2.10* and
    --- End diff --
    
    pom is specific to Maven; we also support sbt for builds. It should be 
sufficient to say "include `spark-mllib` as a dependency". spark-mllib depends on 
spark-core, so it is not necessary to include spark-core explicitly. Also, it is 
better not to mention the Scala version, since we would then need to update it if 
we upgrade to Scala 2.11.
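
    For illustration, a single dependency line should be enough, e.g. in sbt (just a sketch; the `1.0.0` version string below is an assumption, use whichever release the docs target):

    ```scala
    // Declaring spark-mllib alone is sufficient: it pulls in spark-core transitively.
    // The %% operator appends the Scala binary version automatically, so the docs
    // never need to hardcode _2.10.
    libraryDependencies += "org.apache.spark" %% "spark-mllib" % "1.0.0"
    ```

    The same point applies to a Maven pom: a single spark-mllib dependency is enough.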

