Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/6093#discussion_r30177197
--- Diff: docs/ml-features.md ---
@@ -0,0 +1,181 @@
+---
+layout: global
+title: Feature Extraction, Transformation, and Selection - Spark ML
+displayTitle: <a href="ml-guide.html">ML</a> - Features
+---
+
+This section covers algorithms for working with features, roughly divided into these groups:
+* Extraction: Extracting features from "raw" data
+* Transformation: Scaling, converting, or modifying features
+* Selection: Selecting a subset from a larger set of features
+
+**Table of Contents**
+
+* This will become a table of contents (this text will be scraped).
+{:toc}
+
+
+# Feature Extractors
+
+## Hashing Term-Frequency (HashingTF)
+
+`HashingTF` is a `Transformer` which takes sets of terms (e.g., sets of words, where each term is a `String`) and converts those sets into fixed-length feature vectors.
+The algorithm combines [Term Frequency (TF)](http://en.wikipedia.org/wiki/Tf%E2%80%93idf) counts with the [hashing trick](http://en.wikipedia.org/wiki/Feature_hashing) for dimensionality reduction. Please refer to the [MLlib user guide on TF-IDF](mllib-feature-extraction.html#tf-idf) for more details on Term Frequency.
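+
+As a rough, hypothetical sketch (for illustration only, not MLlib's actual implementation, which hashes on the JVM and stores sparse vectors), the hashing trick maps each term to a column index via a hash function and counts occurrences per index:
+
+{% highlight python %}
+# Hypothetical sketch of the hashing trick; num_features and the helper
+# names below are illustrative, not MLlib API.
+num_features = 20
+
+def term_index(term):
+    # Python's % is non-negative for a positive modulus
+    return hash(term) % num_features
+
+def hashing_tf(words):
+    counts = [0.0] * num_features
+    for w in words:
+        counts[term_index(w)] += 1.0
+    return counts  # dense here for clarity; MLlib returns a sparse vector
+
+print(hashing_tf("Hi I heard about Spark".split(" ")))
+{% endhighlight %}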
+
+`HashingTF` is implemented in
+[HashingTF](api/scala/index.html#org.apache.spark.ml.feature.HashingTF).
+In the following code segment, we start with a set of sentences. We split each sentence into words using `Tokenizer`. For each sentence (bag of words), we use `HashingTF` to hash the sentence into a feature vector. This feature vector could then be passed to a learning algorithm.
+
+<div class="codetabs">
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+import org.apache.spark.ml.feature.{HashingTF, Tokenizer}
+case class LabeledSentence(label: Double, sentence: String)
+val sentenceDataFrame = sqlContext.createDataFrame(Array(
+ LabeledSentence(0.0, "Hi I heard about Spark"),
+ LabeledSentence(0.0, "I wish Java could use case classes"),
+ LabeledSentence(1.0, "Logistic regression models are neat")
+))
+val tokenizer = new Tokenizer().setInputCol("sentence").setOutputCol("words")
+val wordsDataFrame = tokenizer.transform(sentenceDataFrame)
+val hashingTF = new HashingTF()
+  .setInputCol("words")
+  .setOutputCol("features")
+  .setNumFeatures(20)
+val featurized = hashingTF.transform(wordsDataFrame)
+featurized.select("features", "label").take(3).foreach(println)
+{% endhighlight %}
+</div>
+
+<div data-lang="java" markdown="1">
+{% highlight java %}
+import com.google.common.collect.Lists;
+import org.apache.spark.api.java.JavaRDD;
+import org.apache.spark.ml.feature.HashingTF;
+import org.apache.spark.ml.feature.Tokenizer;
+import org.apache.spark.mllib.linalg.Vector;
+import org.apache.spark.sql.DataFrame;
+import org.apache.spark.sql.Row;
+import org.apache.spark.sql.RowFactory;
+import org.apache.spark.sql.types.DataTypes;
+import org.apache.spark.sql.types.Metadata;
+import org.apache.spark.sql.types.StructField;
+import org.apache.spark.sql.types.StructType;
+JavaRDD<Row> jrdd = jsc.parallelize(Lists.newArrayList(
+ RowFactory.create(0.0, "Hi I heard about Spark"),
+ RowFactory.create(0.0, "I wish Java could use case classes"),
+ RowFactory.create(1.0, "Logistic regression models are neat")
+));
+StructType schema = new StructType(new StructField[]{
+ new StructField("label", DataTypes.DoubleType, false, Metadata.empty()),
+ new StructField("sentence", DataTypes.StringType, false,
Metadata.empty())
+});
+DataFrame sentenceDataFrame = sqlContext.createDataFrame(jrdd, schema);
+Tokenizer tokenizer = new Tokenizer().setInputCol("sentence").setOutputCol("words");
+DataFrame wordsDataFrame = tokenizer.transform(sentenceDataFrame);
+int numFeatures = 20;
+HashingTF hashingTF = new HashingTF()
+ .setInputCol("words")
+ .setOutputCol("features")
+ .setNumFeatures(numFeatures);
+DataFrame featurized = hashingTF.transform(wordsDataFrame);
+for (Row r : featurized.select("features", "label").take(3)) {
+ Vector features = r.getAs(0);
+ Double label = r.getDouble(1);
+ System.out.println(features + ", " + label);
+}
+{% endhighlight %}
+</div>
+
+<div data-lang="python" markdown="1">
+{% highlight python %}
+from pyspark.ml.feature import HashingTF, Tokenizer
+sentenceDataFrame = sqlContext.createDataFrame([
+ (0, "Hi I heard about Spark"),
+ (0, "I wish Java could use case classes"),
+ (1, "Logistic regression models are neat")
+], ["label", "sentence"])
+tokenizer = Tokenizer().setInputCol("sentence").setOutputCol("words")
--- End diff ---
use keyword arguments in Python:
~~~python
tokenizer = Tokenizer(inputCol="sentence", outputCol="words")
~~~
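
Presumably the `HashingTF` constructed later in this snippet (below the cut in this diff) could take keyword arguments the same way, e.g.:
~~~python
hashingTF = HashingTF(inputCol="words", outputCol="features", numFeatures=20)
~~~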