Github user mengxr commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1311#discussion_r14863059
  
    --- Diff: docs/mllib-linear-methods.md ---
    @@ -242,7 +242,96 @@ Similarly, you can use replace `SVMWithSGD` by
     All of MLlib's methods use Java-friendly types, so you can import and call them there the same
     way you do in Scala. The only caveat is that the methods take Scala RDD objects, while the
     Spark Java API uses a separate `JavaRDD` class. You can convert a Java RDD to a Scala one by
    -calling `.rdd()` on your `JavaRDD` object.
    +calling `.rdd()` on your `JavaRDD` object. A standalone application example
    +that is equivalent to the Scala example above is given below:
    +
    +{% highlight java %}
    +import java.util.Random;
    +
    +import scala.Tuple2;
    +
    +import org.apache.spark.api.java.*;
    +import org.apache.spark.SparkConf;
    +import org.apache.spark.api.java.function.Function;
    +import org.apache.spark.SparkContext;
    +import org.apache.spark.mllib.regression.LabeledPoint;
    +import org.apache.spark.mllib.util.MLUtils;
    +import org.apache.spark.mllib.classification.*;
    +import org.apache.spark.mllib.linalg.Vector;
    +import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics;
    +
    +public class SVMClassifier {
    +  public static void main(String[] args) {
    +    SparkConf conf = new SparkConf().setAppName("SVM Classifier Example");
    +    SparkContext sc = new SparkContext(conf);
    +    String path = "{SPARK_HOME}/mllib/data/sample_libsvm_data.txt";
    +    JavaRDD<LabeledPoint> data = MLUtils.loadLibSVMFile(sc, path).toJavaRDD();
    +
    +    // Split initial RDD into two... [60% training data, 40% testing data].
    +    JavaRDD<LabeledPoint> training = data.filter(
    +      new Function<LabeledPoint, Boolean>() {
    +        private final Random random = new Random(11L);
    +        public Boolean call(LabeledPoint p) {
    +          return random.nextDouble() <= 0.6;
    +        }
    +      }
    +    );
    +    training.cache();
    +    JavaRDD<LabeledPoint> test = data.subtract(training);
    +    
    +    // Run training algorithm to build the model.
    +    int numIterations = 100;
    +    final SVMModel model = SVMWithSGD.train(training.rdd(), numIterations);
    +    
    +    // Clear the default threshold so that predict() returns raw scores
    +    // rather than 0/1 labels; raw scores are needed for the ROC metric below.
    +    model.clearThreshold();
    +
    +    // Compute raw scores on the test set.
    +    JavaRDD<Tuple2<Object, Object>> scoreAndLabels = test.map(
    +      new Function<LabeledPoint, Tuple2<Object, Object>>() {
    +        public Tuple2<Object, Object> call(LabeledPoint p) {
    +          Double score = model.predict(p.features());
    +          return new Tuple2<Object, Object>(score, p.label());
    +        }
    +      }
    +    );
    +    
    +    // Get evaluation metrics.
    +    BinaryClassificationMetrics metrics = 
    +      new BinaryClassificationMetrics(scoreAndLabels.rdd());
    +    double auROC = metrics.areaUnderROC();
    +    
    +    System.out.println("Area under ROC = " + auROC);
    +  }
    +}
    +{% endhighlight %}
    +
    +The `SVMWithSGD.train()` method by default performs L2 regularization with the
    +regularization parameter set to 1.0. If we want to configure this algorithm, we
    +can customize `SVMWithSGD` further by creating a new object directly and
    +calling setter methods. All other MLlib algorithms support customization in
    +this way as well. For example, the following code produces an L1 regularized
    +variant of SVMs with regularization parameter set to 0.1, and runs the training
    +algorithm for 200 iterations.
    +
    +{% highlight java %}
    +import org.apache.spark.mllib.optimization.L1Updater;
    +
    +SVMWithSGD svmAlg = new SVMWithSGD();
    +svmAlg.optimizer().setNumIterations(200);
    +svmAlg.optimizer().setRegParam(0.1);
    --- End diff ---
    
    use the builder pattern:
    
    ~~~
    svmAlg.optimizer()
      .setNumIterations(200)
      .setRegParam(0.1)
      .setUpdater(new L1Updater());
    ~~~
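    
    For completeness, a minimal sketch of how the customized snippet could read
    with the chained setters applied end to end. It reuses the `training` JavaRDD
    from the example above; the final `run()` call and the `modelL1` name are
    illustrative, mirroring the Scala version of this section:
    
    ~~~
    import org.apache.spark.mllib.optimization.L1Updater;
    
    SVMWithSGD svmAlg = new SVMWithSGD();
    svmAlg.optimizer()
      .setNumIterations(200)
      .setRegParam(0.1)
      .setUpdater(new L1Updater());
    // run() takes a Scala RDD, so convert the JavaRDD with .rdd()
    final SVMModel modelL1 = svmAlg.run(training.rdd());
    ~~~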

