Github user mengxr commented on a diff in the pull request:

    https://github.com/apache/spark/pull/6093#discussion_r30177188
  
    --- Diff: docs/ml-features.md ---
    @@ -0,0 +1,181 @@
    +---
    +layout: global
    +title: Feature Extraction, Transformation, and Selection - SparkML
    +displayTitle: <a href="ml-guide.html">ML</a> - Features
    +---
    +
    +This section covers algorithms for working with features, roughly divided into these groups:
    +* Extraction: Extracting features from "raw" data
    +* Transformation: Scaling, converting, or modifying features
    +* Selection: Selecting a subset from a larger set of features
    +
    +**Table of Contents**
    +
    +* This will become a table of contents (this text will be scraped).
    +{:toc}
    +
    +
    +# Feature Extractors
    +
    +## Hashing Term-Frequency (HashingTF)
    +
    +`HashingTF` is a `Transformer` which takes sets of terms (e.g., sets of words, where each term is a `String`) and converts those sets into fixed-length feature vectors.
    +The algorithm combines [Term Frequency (TF)](http://en.wikipedia.org/wiki/Tf%E2%80%93idf) counts with the [hashing trick](http://en.wikipedia.org/wiki/Feature_hashing) for dimensionality reduction.  Please refer to the [MLlib user guide on TF-IDF](mllib-feature-extraction.html#tf-idf) for more details on Term-Frequency.
    +
    +HashingTF is implemented in
    +[HashingTF](api/scala/index.html#org.apache.spark.ml.feature.HashingTF).
    +In the following code segment, we start with a set of sentences.  We split each sentence into words using `Tokenizer`.  We then hash each sentence (bag of words) into a feature vector, which could be passed to a learning algorithm.
    +
    +<div class="codetabs">
    +<div data-lang="scala" markdown="1">
    +{% highlight scala %}
    +import org.apache.spark.ml.feature.{HashingTF, Tokenizer}
    +case class LabeledSentence(label: Double, sentence: String)
    --- End diff --
    
    It would be easier to read if we followed the same Spark code style in the example code. There should be empty lines before and after `case class ...`. Also, the case class is not necessary to create a DataFrame.
    
    ~~~scala
    val sentences = sqlContext.createDataFrame(Seq(
      (0, "Hi ..."),
      ...
    )).toDF("label", "sentence")
    ~~~
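    
    For instance, the full example could look something like the sketch below. The sample sentences, column names, and `setNumFeatures(1000)` are just illustrative placeholders, not required values:
    
    ~~~scala
    import org.apache.spark.ml.feature.{HashingTF, Tokenizer}
    
    val sentenceData = sqlContext.createDataFrame(Seq(
      (0.0, "Hi I heard about Spark"),
      (1.0, "Logistic regression models are neat")
    )).toDF("label", "sentence")
    
    // Split each sentence into words.
    val tokenizer = new Tokenizer()
      .setInputCol("sentence")
      .setOutputCol("words")
    val wordsData = tokenizer.transform(sentenceData)
    
    // Hash each bag of words into a fixed-length feature vector.
    val hashingTF = new HashingTF()
      .setInputCol("words")
      .setOutputCol("features")
      .setNumFeatures(1000) // illustrative; the default also works
    val featurizedData = hashingTF.transform(wordsData)
    
    featurizedData.select("features", "label").show()
    ~~~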