Github user mengxr commented on a diff in the pull request:

    https://github.com/apache/spark/pull/8293#discussion_r37463265
  
    --- Diff: docs/ml-features.md ---
    @@ -1387,5 +1387,107 @@ print(output.select("features", "clicked").first())
     </div>
     </div>
     
    +## RFormula
    +
    +`RFormula` selects columns specified by an [R model formula](https://stat.ethz.ch/R-manual/R-devel/library/stats/html/formula.html). It produces a vector column of features and a double column of labels. As when formulas are used in R for linear regression, string input columns are one-hot encoded and numeric columns are cast to doubles. If the label column is not already present in the DataFrame, it is created from the response variable specified in the formula.
    +
    +**Examples**
    +
    +Assume that we have a DataFrame with the columns `id`, `country`, `hour`, and `clicked`:
    +
    +~~~
    +id | country | hour | clicked
    +---|---------|------|---------
    + 7 | "US"    | 18   | 1.0
    + 8 | "CA"    | 12   | 0.0
    +~~~
    +
    +If we use `RFormula` with a formula string of `clicked ~ country + hour`, which indicates that we want to
    +predict `clicked` based on `country` and `hour`, after transformation we should get the following DataFrame:
    +
    +~~~
    +id | country | hour | clicked | label | features
    +---|---------|------|---------|-------|-----------------------------
    + 7 | "US"    | 18   | 1.0     | 1.0   | [0.0, 1.0, 18.0]
    + 8 | "CA"    | 12   | 0.0     | 0.0   | [1.0, 0.0, 12.0]
    +~~~
    +
    +<div class="codetabs">
    +<div data-lang="scala" markdown="1">
    +
    +[`RFormula`](api/scala/index.html#org.apache.spark.ml.feature.RFormula) takes an R formula string, and optional parameters for the names of its output columns.
    +
    +{% highlight scala %}
    +import org.apache.spark.ml.feature.RFormula
    +
    +val dataset = sqlContext.createDataFrame(Seq(
    +  (7, "US", 18, 1.0),
    +  (8, "CA", 12, 0.0)
    +)).toDF("id", "country", "hour", "clicked")
    +val formula = new RFormula()
    +  .setFormula("clicked ~ country + hour")
    +  .setFeaturesCol("features")
    +  .setLabelCol("label")
    +val output = formula.fit(dataset).transform(dataset)
    +println(output.select("features", "label").first())
    +{% endhighlight %}
    +</div>
    +
    +<div data-lang="java" markdown="1">
    +
    +[`RFormula`](api/java/org/apache/spark/ml/feature/RFormula.html) takes an R formula string, and optional parameters for the names of its output columns.
    +
    +{% highlight java %}
    +import java.util.Arrays;
    +
    +import org.apache.spark.api.java.JavaRDD;
    +import org.apache.spark.ml.feature.RFormula;
    +import org.apache.spark.sql.DataFrame;
    +import org.apache.spark.sql.Row;
    +import org.apache.spark.sql.RowFactory;
    +import org.apache.spark.sql.types.*;
    +import static org.apache.spark.sql.types.DataTypes.*;
    +
    +StructType schema = createStructType(new StructField[] {
    +  createStructField("id", IntegerType, false),
    +  createStructField("country", StringType, false),
    +  createStructField("hour", IntegerType, false),
    +  createStructField("clicked", DoubleType, false)
    +});
    +Row row1 = RowFactory.create(7, "US", 18, 1.0);
    +Row row2 = RowFactory.create(8, "CA", 12, 0.0);
    +JavaRDD<Row> rdd = jsc.parallelize(Arrays.asList(row1, row2));
    +DataFrame dataset = sqlContext.createDataFrame(rdd, schema);
    +
    +RFormula formula = new RFormula()
    +  .setFormula("clicked ~ country + hour")
    +  .setFeaturesCol("features")
    +  .setLabelCol("label");
    +
    +DataFrame output = formula.fit(dataset).transform(dataset);
    +System.out.println(output.select("features", "label").first());
    +{% endhighlight %}
    +</div>
    +
    +<div data-lang="python" markdown="1">
    +
    +[`RFormula`](api/python/pyspark.ml.html#pyspark.ml.feature.RFormula) takes an R formula string, and optional parameters for the names of its output columns.
    +
    +{% highlight python %}
    +from pyspark.ml.feature import RFormula
    +
    +dataset = sqlContext.createDataFrame(
    +    [(7, "US", 18, 1.0)],
    --- End diff --
    
    only one `[...]`
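    
    For illustration only (not part of the quoted diff): a minimal sketch of the Python snippet this comment points at, assuming the two example rows belong in a single list and that the column names from the table above are passed as the schema argument:
    
    ~~~python
    # Hypothetical corrected snippet: both rows in one list, column names supplied separately.
    dataset = sqlContext.createDataFrame(
        [(7, "US", 18, 1.0),
         (8, "CA", 12, 0.0)],
        ["id", "country", "hour", "clicked"])
    ~~~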

