Github user feynmanliang commented on a diff in the pull request:

    https://github.com/apache/spark/pull/7307#discussion_r34323279
  
    --- Diff: mllib/src/main/scala/org/apache/spark/mllib/clustering/LDA.scala ---
    @@ -77,37 +89,47 @@ class LDA private (
        * Concentration parameter (commonly named "alpha") for the prior placed on documents'
        * distributions over topics ("theta").
        *
    -   * This is the parameter to a symmetric Dirichlet distribution.
    +   * This is the parameter to a Dirichlet distribution.
        */
    -  def getDocConcentration: Double = this.docConcentration
    +  def getDocConcentration: Vector = this.docConcentration
     
       /**
        * Concentration parameter (commonly named "alpha") for the prior placed on documents'
        * distributions over topics ("theta").
        *
    -   * This is the parameter to a symmetric Dirichlet distribution, where larger values
    -   * mean more smoothing (more regularization).
    +   * This is the parameter to a Dirichlet distribution, where larger values mean more smoothing
    +   * (more regularization).
        *
    -   * If set to -1, then docConcentration is set automatically.
    -   *  (default = -1 = automatic)
    +   * If set to a vector of -1, then docConcentration is set automatically.
    +   *  (default = a vector of -1 = automatic)
        *
        * Optimizer-specific parameter settings:
        *  - EM
    +   *     - Currently only supports symmetric distributions, so values in the vector must be the same
        *     - Value should be > 1.0
        *     - default = (50 / k) + 1, where 50/k is common in LDA libraries and +1 follows
        *       Asuncion et al. (2009), who recommend a +1 adjustment for EM.
        *  - Online
    -   *     - Value should be >= 0
    -   *     - default = (1.0 / k), following the implementation from
    +   *     - Values should be >= 0
    +   *     - default = uniformly (1.0 / k), following the implementation from
        *       [[https://github.com/Blei-Lab/onlineldavb]].
        */
    -  def setDocConcentration(docConcentration: Double): this.type = {
    +  def setDocConcentration(docConcentration: Vector): this.type = {
         this.docConcentration = docConcentration
         this
       }
     
    +  /** Replicates Double to create a symmetric prior */
    +  def setDocConcentration(docConcentration: Double): this.type = {
    --- End diff --
    
    @jkbradley Synced offline:
    
 * We will validate the value of `k` and the length of `alpha` during the call to `LDAOptimizer#initialize`
     * `docConcentration` will have the following semantics
        * `Vector(-1)` => fill with default values
    * `Vector(x: Double)` of length 1 => will repeat `x` to a length-`k` Vector during `LDAOptimizer#initialize`
    * `Vector` of length > 1 => validated and used as `alpha` during `initialize`

