Github user Ishiihara commented on a diff in the pull request:

    https://github.com/apache/spark/pull/2494#discussion_r17886499
  
    --- Diff: mllib/src/main/scala/org/apache/spark/mllib/feature/IDF.scala ---
    @@ -30,9 +30,20 @@ import org.apache.spark.rdd.RDD
      * Inverse document frequency (IDF).
      * The standard formulation is used: `idf = log((m + 1) / (d(t) + 1))`, where `m` is the total
      * number of documents and `d(t)` is the number of documents that contain term `t`.
    + *
    + * This implementation supports filtering out terms which do not appear in a minimum number
    + * of documents (controlled by the variable minimumOccurence). For terms that are not in
    + * at least `minimumOccurence` documents, the IDF is found as 0, resulting in TF-IDFs of 0.
    + *
    + * @param minimumOccurence minimum of documents in which a term
    + *                         should appear for filtering
    + *
    + *
      */
     @Experimental
    -class IDF {
    +class IDF(minimumOccurence: Long) {
    --- End diff --
    
    You can add a `val` before `minimumOccurence`. Alternatively, if you want
    to set `minimumOccurence` after `new IDF()`, you can define a private
    field and use a setter to set the value.
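
    For reference, a minimal sketch of the two alternatives (the `IDFWithSetter`
    name, the default value of 0, and the `setMinimumOccurence` method are
    illustrative, not from the PR):

    ```scala
    // Option 1: mark the constructor parameter as a val so it becomes a public field.
    class IDF(val minimumOccurence: Long) {
      // hypothetical auxiliary constructor: default of 0 means no filtering
      def this() = this(0L)
    }

    // Option 2: keep a private var and expose a builder-style setter,
    // so the value can be set after `new IDF()`.
    class IDFWithSetter {
      private var minimumOccurence: Long = 0L

      /** Sets the minimum number of documents a term must appear in; returns this for chaining. */
      def setMinimumOccurence(value: Long): this.type = {
        this.minimumOccurence = value
        this
      }
    }

    // Usage of option 2:
    // val idf = new IDFWithSetter().setMinimumOccurence(2L)
    ```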

