Github user atalwalkar commented on a diff in the pull request:

    https://github.com/apache/spark/pull/2061#discussion_r16501444
  
    --- Diff: docs/mllib-feature-extraction.md ---
    @@ -7,9 +7,87 @@ displayTitle: <a href="mllib-guide.html">MLlib</a> - Feature Extraction
     * Table of contents
     {:toc}
     
    +
    +## TF-IDF
    +
    +[Term frequency-inverse document frequency (TF-IDF)](http://en.wikipedia.org/wiki/Tf%E2%80%93idf) is a feature
    +vectorization method widely used in text mining to reflect the importance of a term to a document in the corpus.
    +Denote a term by `$t$`, a document by `$d$`, and the corpus by `$D$`.
    +Term frequency `$TF(t, d)$` is the number of times that term `$t$` appears in document `$d$`,
    +while document frequency `$DF(t, D)$` is the number of documents that contain term `$t$`.
    +If we use only term frequency to measure importance, it is very easy to over-emphasize terms that
    +appear very often but carry little information about the document, e.g., "a", "the", and "of".
    +If a term appears very often across the corpus, it carries little special information about
    +a particular document.
    +Inverse document frequency is a numerical measure of how much information a term provides:
    +`\[
    +IDF(t, D) = \log \frac{|D| + 1}{DF(t, D) + 1},
    +\]`
    +where `$|D|$` is the total number of documents in the corpus.
    +Since the logarithm is used, if a term appears in all documents, its IDF value becomes 0.
    +Note that a smoothing term is applied to avoid dividing by zero for terms outside the corpus.
    +The TF-IDF measure is simply the product of TF and IDF:
    +`\[
    +TFIDF(t, d, D) = TF(t, d) \cdot IDF(t, D).
    +\]`
    +There are several variants of the definitions of term frequency and document frequency.
    +In MLlib, we separate TF and IDF to make them flexible.
    +
    +Our implementation of term frequency utilizes the
    +[hashing trick](http://en.wikipedia.org/wiki/Feature_hashing).
    +A raw feature is mapped into an index (term) by applying a hash function.
    +Then term frequencies are calculated based on the mapped indices.
    +This approach avoids the need to compute a global term-to-index map, which can be expensive for a large corpus,
    +but it suffers from potential hash collisions, where different raw features may become the same term after hashing.
    +To reduce the chance of collision, we can increase the target feature dimension, i.e.,
    +the number of buckets of the hash table.
    --- End diff ---
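
    To make the TF, DF, and IDF definitions above concrete, here is a small standalone
    Scala sketch (not part of the patch; the toy corpus and helper names are invented
    for illustration) that applies the formulas to a three-document corpus in the Scala REPL:

    ```scala
    // Toy corpus: each document is a sequence of terms.
    val corpus: Seq[Seq[String]] = Seq(
      Seq("the", "cat", "sat"),
      Seq("the", "dog", "sat", "sat"),
      Seq("the", "dog", "barked"))

    val numDocs = corpus.size  // |D| = 3

    // TF(t, d): number of times term t appears in document d.
    def tf(term: String, doc: Seq[String]): Int = doc.count(_ == term)

    // DF(t, D): number of documents that contain term t.
    def df(term: String): Int = corpus.count(_.contains(term))

    // IDF(t, D) = log((|D| + 1) / (DF(t, D) + 1)), matching the formula above.
    def idf(term: String): Double = math.log((numDocs + 1.0) / (df(term) + 1.0))

    // "the" appears in every document, so its IDF (and hence its TF-IDF weight) is 0.
    for (term <- Seq("the", "sat", "barked")) {
      val weight = tf(term, corpus(1)) * idf(term)
      println(f"$term%-7s TF=${tf(term, corpus(1))} IDF=${idf(term)}%.4f TF-IDF=$weight%.4f")
    }
    ```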
    
    Is there a default value that we use for the number of hash buckets? In VW, the default is 2^18 = 262K.
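
    For reference, the documented API can be exercised from the spark-shell roughly as
    sketched below; the input path and the explicit bucket count are illustrative (2^18
    mirrors the VW default mentioned above), and whatever default `HashingTF` uses when
    no size is given would need to be confirmed in the code and stated in the doc:

    ```scala
    import org.apache.spark.mllib.feature.{HashingTF, IDF}
    import org.apache.spark.mllib.linalg.Vector
    import org.apache.spark.rdd.RDD

    // Each document is a sequence of terms; the path is a placeholder.
    val documents: RDD[Seq[String]] =
      sc.textFile("data/documents.txt").map(_.split(" ").toSeq)

    // Set the number of hash buckets explicitly instead of relying on the default.
    val hashingTF = new HashingTF(numFeatures = 1 << 18)  // 262,144 buckets
    val tf: RDD[Vector] = hashingTF.transform(documents)

    // IDF takes two passes: fit() computes document frequencies over the corpus,
    // and transform() scales each term frequency vector by the IDF weights.
    tf.cache()
    val idfModel = new IDF().fit(tf)
    val tfidf: RDD[Vector] = idfModel.transform(tf)

    tfidf.take(3).foreach(println)
    ```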

