[
https://issues.apache.org/jira/browse/SPARK-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14623032#comment-14623032
]
Feynman Liang commented on SPARK-8986:
--------------------------------------
I don't think there is a well-defined notion of smoothing for EM-based
inference:
[EM-based GMM in R
|http://cran.r-project.org/web/packages/EMCluster/EMCluster.pdf] has no
smoothing parameter.
Neither does [sklearn's EM-based
inference|http://scikit-learn.org/stable/modules/generated/sklearn.mixture.GMM.html].
I did find that sklearn allows [using a prior to regularize variational GMM
inference|http://scikit-learn.org/stable/modules/dp-derivation.html] and avoid
degenerate cases. However, this would first require supporting VB inference for
GMMs. I took a look at the [VB algorithm cited by
sklearn|http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.4467&rep=rep1&type=pdf],
and it is very similar to what is already done in the LDA model. So
perhaps we should consider generalizing the existing work on variational
inference ([this, for
example|https://amplab.cs.berkeley.edu/publication/streaming-variational-bayes/]).
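To make the degeneracy issue concrete: here is a minimal sketch (my own illustration, not Spark or sklearn code; function name and `reg_covar` parameter are hypothetical) of an EM M-step covariance update that adds a small diagonal term to each component's covariance, so that an empty or collapsed cluster cannot produce a singular matrix. This is the simplest form the "smoothing parameter" discussed above could take; a full prior-based (MAP/VB) treatment would replace the constant with terms from an inverse-Wishart prior.

```python
import numpy as np

def regularized_covariances(X, resp, reg_covar=1e-6):
    """M-step covariance update with a diagonal regularizer.

    X    : (n_samples, n_features) data matrix
    resp : (n_samples, n_components) responsibilities from the E-step
    """
    n_samples, n_features = X.shape
    n_components = resp.shape[1]
    # Effective counts per component; tiny epsilon guards empty clusters.
    nk = resp.sum(axis=0) + 10 * np.finfo(float).eps
    means = resp.T @ X / nk[:, None]
    covs = np.empty((n_components, n_features, n_features))
    for k in range(n_components):
        diff = X - means[k]
        # Responsibility-weighted scatter matrix, normalized by nk.
        covs[k] = (resp[:, k] * diff.T) @ diff / nk[k]
        # Add reg_covar to the diagonal: keeps the matrix positive
        # definite even when a cluster has collapsed or is empty.
        covs[k].flat[:: n_features + 1] += reg_covar
    return covs
```

Even in the fully degenerate case (a component with zero total responsibility), the returned covariance is `reg_covar * I`, so the next E-step's density evaluations remain finite.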
> GaussianMixture should take smoothing param
> -------------------------------------------
>
> Key: SPARK-8986
> URL: https://issues.apache.org/jira/browse/SPARK-8986
> Project: Spark
> Issue Type: New Feature
> Components: MLlib
> Reporter: Joseph K. Bradley
> Original Estimate: 144h
> Remaining Estimate: 144h
>
> Gaussian mixture models should take a smoothing parameter which makes the
> algorithm robust against degenerate data or bad initializations.
> Whoever takes this JIRA should look at other libraries (sklearn, R packages,
> Weka, etc.) to see how they do smoothing and what their API looks like.
> Please summarize your findings here.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)