GitHub user jkbradley commented on a diff in the pull request:

    https://github.com/apache/spark/pull/6127#discussion_r30863325
  
    --- Diff: docs/ml-features.md ---
    @@ -18,30 +18,38 @@ This section covers algorithms for working with 
features, roughly divided into t
     
     # Feature Extractors
     
    -## Hashing Term-Frequency (HashingTF)
    +## TF-IDF (HashingTF and IDF)
     
    -`HashingTF` is a `Transformer` which takes sets of terms (e.g., `String` terms can be sets of words) and converts those sets into fixed-length feature vectors.
    -The algorithm combines [Term Frequency (TF)](http://en.wikipedia.org/wiki/Tf%E2%80%93idf) counts with the [hashing trick](http://en.wikipedia.org/wiki/Feature_hashing) for dimensionality reduction.  Please refer to the [MLlib user guide on TF-IDF](mllib-feature-extraction.html#tf-idf) for more details on Term-Frequency.
    +[Term Frequency-Inverse Document Frequency (TF-IDF)](http://en.wikipedia.org/wiki/Tf%E2%80%93idf) is a common text pre-processing step.  In Spark ML, TF-IDF is separated into two parts: TF (+hashing) and IDF.
     
    -HashingTF is implemented in
    -[HashingTF](api/scala/index.html#org.apache.spark.ml.feature.HashingTF).
    -In the following code segment, we start with a set of sentences.  We split each sentence into words using `Tokenizer`.  For each sentence (bag of words), we hash it into a feature vector.  This feature vector could then be passed to a learning algorithm.
    +**TF**: `HashingTF` is a `Transformer` which takes sets of terms and converts those sets into fixed-length feature vectors.  In text processing, a "set of terms" might be a bag of words.
    +The algorithm combines Term Frequency (TF) counts with the [hashing trick](http://en.wikipedia.org/wiki/Feature_hashing) for dimensionality reduction.
    +
    +**IDF**: `IDF` is an `Estimator` which fits on a dataset and produces an `IDFModel`.  The `IDFModel` takes feature vectors (generally created from `HashingTF`) and scales each column.  Intuitively, it down-weights columns which appear frequently in a corpus.
    +
    +Please refer to the [MLlib user guide on TF-IDF](mllib-feature-extraction.html#tf-idf) for more details on Term Frequency and Inverse Document Frequency.
    +For API details, refer to the [HashingTF API docs](api/scala/index.html#org.apache.spark.ml.feature.HashingTF) and the [IDF API docs](api/scala/index.html#org.apache.spark.ml.feature.IDF).
    +
    +In the following code segment, we start with a set of sentences.  We split each sentence into words using `Tokenizer`.  For each sentence (bag of words), we use `HashingTF` to hash the sentence into a feature vector.  We use `IDF` to rescale the feature vectors; this generally improves performance when using text as features.  Our feature vectors could then be passed to a learning algorithm.
     
     <div class="codetabs">
     <div data-lang="scala" markdown="1">
     {% highlight scala %}
    -import org.apache.spark.ml.feature.{HashingTF, Tokenizer}
    +import org.apache.spark.ml.feature.{HashingTF, IDF, Tokenizer}
     
    -val sentenceDataFrame = sqlContext.createDataFrame(Seq(
    +val sentenceData = sqlContext.createDataFrame(Seq(
       (0, "Hi I heard about Spark"),
       (0, "I wish Java could use case classes"),
       (1, "Logistic regression models are neat")
     )).toDF("label", "sentence")
     val tokenizer = new Tokenizer().setInputCol("sentence").setOutputCol("words")
    -val wordsDataFrame = tokenizer.transform(sentenceDataFrame)
    -val hashingTF = new HashingTF().setInputCol("words").setOutputCol("features").setNumFeatures(20)
    -val featurized = hashingTF.transform(wordsDataFrame)
    -featurized.select("features", "label").take(3).foreach(println)
    +val wordsData = tokenizer.transform(sentenceData)
    +val hashingTF = new HashingTF().setInputCol("words").setOutputCol("rawFeatures").setNumFeatures(20)
    +val featurizedData = hashingTF.transform(wordsData)
    +val idf = new IDF().setInputCol("rawFeatures").setOutputCol("features")
    +val idfModel = idf.fit(featurizedData)
    +val rescaledData = idfModel.transform(featurizedData)
    +rescaledData.select("features", "label").take(3).foreach(println)
    --- End diff --
    
    I thought about that, but was worried users would copy this into their own 
code and try to print out a huge dataset.  What do you think?
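    To illustrate the concern with a toy sketch (plain Scala, no Spark; `records` here is a made-up stand-in for a large dataset, not the DataFrame from the example):
    
    ```scala
    // Stand-in for a large dataset; with Spark this could be millions of
    // distributed rows.
    val records = (1 to 1000000).map(i => (i % 2, s"sentence $i"))
    
    // Printing everything would flood the console (and, with a real
    // DataFrame, collect the whole dataset to the driver).  Taking a
    // fixed-size prefix keeps the output bounded regardless of data size:
    records.take(3).foreach(println)
    ```
    
    With an actual DataFrame, `show(3)` would be the closest built-in alternative, though it truncates long cell values by default.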

