[ https://issues.apache.org/jira/browse/MAHOUT-1272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13696932#comment-13696932 ]
Peng Cheng commented on MAHOUT-1272:
------------------------------------
It looks like the 1/n learning-rate schedule doesn't work at all in the SGD
factorizer; maybe the standard convergence results for stochastic optimization
don't carry over to the non-convex matrix factorization problem. Can someone
point me to a paper discussing convergence bounds for this kind of problem?
Much appreciated.
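For reference, a minimal, self-contained sketch (plain Java, not the Mahout
factorizer API; the class name, eta0, lambda, and the synthetic ratings are all
illustrative assumptions) of the kind of SGD update and 1/n schedule discussed
above. With eta = eta0 / n the step size shrinks very quickly, which is one
plausible reason the factorizer stalls; a slower decay such as eta0 / sqrt(n),
or a constant rate with periodic decay, is a common alternative to try.

import java.util.Random;

public class SgdFactorizerSketch {
    public static void main(String[] args) {
        int numUsers = 100, numItems = 100, k = 8;
        double eta0 = 0.05, lambda = 0.02;
        double[][] userFactors = init(numUsers, k);
        double[][] itemFactors = init(numItems, k);
        Random rnd = new Random(42);

        long n = 0;
        for (int epoch = 0; epoch < 20; epoch++) {
            for (int s = 0; s < 10_000; s++) {
                int u = rnd.nextInt(numUsers);
                int i = rnd.nextInt(numItems);
                double r = 3.0 + rnd.nextGaussian();   // synthetic rating, for illustration only
                n++;
                double eta = eta0 / n;                 // 1/n decay; compare eta0 / Math.sqrt(n)
                double err = r - dot(userFactors[u], itemFactors[i]);
                for (int f = 0; f < k; f++) {
                    double uf = userFactors[u][f], vf = itemFactors[i][f];
                    userFactors[u][f] += eta * (err * vf - lambda * uf);  // regularized SGD step
                    itemFactors[i][f] += eta * (err * uf - lambda * vf);
                }
            }
        }
    }

    static double[][] init(int rows, int k) {
        Random rnd = new Random(1);
        double[][] m = new double[rows][k];
        for (double[] row : m) for (int f = 0; f < k; f++) row[f] = 0.1 * rnd.nextGaussian();
        return m;
    }

    static double dot(double[] a, double[] b) {
        double s = 0;
        for (int f = 0; f < a.length; f++) s += a[f] * b[f];
        return s;
    }
}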
> Parallel SGD matrix factorizer for SVDrecommender
> -------------------------------------------------
>
> Key: MAHOUT-1272
> URL: https://issues.apache.org/jira/browse/MAHOUT-1272
> Project: Mahout
> Issue Type: New Feature
> Components: Collaborative Filtering
> Reporter: Peng Cheng
> Assignee: Sean Owen
> Original Estimate: 336h
> Remaining Estimate: 336h
>
> A parallel factorizer based on MAHOUT-1089 may achieve better performance on
> multicore processors.
> The existing code is single-threaded and may still be outperformed by the
> default ALS-WR.
> In addition, its hardcoded online-to-batch conversion prevents it from being
> used by an online recommender. An online SGD implementation could help build a
> high-performance online recommender as a replacement for the outdated
> slope-one.
> The new factorizer can implement either DSGD
> (http://www.mpi-inf.mpg.de/~rgemulla/publications/gemulla11dsgd.pdf) or
> Hogwild! (www.cs.wisc.edu/~brecht/papers/hogwildTR.pdf); a minimal lock-free
> sketch follows below.
> Related discussion has been going on for a while but remains inconclusive:
> http://web.archiveorange.com/archive/v/z6zxQUSahofuPKEzZkzl
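To make the Hogwild!-style option above concrete, here is a minimal lock-free
sketch in plain Java (not the Mahout or paper implementation; the class name,
constants, and synthetic ratings are illustrative assumptions): each worker
thread runs plain SGD against shared factor arrays with no locking, accepting
occasional lost or stale updates, which is the core idea of the linked paper.

import java.util.Random;
import java.util.concurrent.*;

public class HogwildSketch {
    static final int USERS = 1000, ITEMS = 1000, K = 8;
    static final double ETA = 0.01, LAMBDA = 0.02;
    static final double[][] U = init(USERS), V = init(ITEMS);  // shared, unsynchronized

    public static void main(String[] args) throws InterruptedException {
        int threads = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            final long seed = t;
            pool.execute(() -> {
                Random rnd = new Random(seed);
                for (int s = 0; s < 200_000; s++) {
                    int u = rnd.nextInt(USERS), i = rnd.nextInt(ITEMS);
                    double r = 3.0 + rnd.nextGaussian();  // synthetic rating, for illustration only
                    double err = r - dot(U[u], V[i]);
                    for (int f = 0; f < K; f++) {         // unsynchronized updates, Hogwild! style
                        double uf = U[u][f], vf = V[i][f];
                        U[u][f] += ETA * (err * vf - LAMBDA * uf);
                        V[i][f] += ETA * (err * uf - LAMBDA * vf);
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }

    static double[][] init(int rows) {
        Random rnd = new Random(1);
        double[][] m = new double[rows][K];
        for (double[] row : m) for (int f = 0; f < K; f++) row[f] = 0.1 * rnd.nextGaussian();
        return m;
    }

    static double dot(double[] a, double[] b) {
        double s = 0;
        for (int f = 0; f < a.length; f++) s += a[f] * b[f];
        return s;
    }
}

A DSGD-style alternative would instead partition the rating matrix into
disjoint blocks and have each thread sweep a block that shares no rows or
columns with the others, avoiding conflicting updates by scheduling rather than
by tolerating races.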