[
https://issues.apache.org/jira/browse/MAHOUT-1272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13696155#comment-13696155
]
Peng Cheng commented on MAHOUT-1272:
------------------------------------
The learning rate/step size is set to be identical to that in package
~.classifier.sgd. The old learning rate decays exponentially with a constant
decay factor; that schedule seems to work only for smooth functions (proved by
Nesterov?), and I'm not sure it holds in CF. Otherwise, use 1/sqrt(n) for a
convex f or 1/n for a strongly convex f.
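For concreteness, a minimal sketch of the three schedules above in plain Java
(class and method names are illustrative, not Mahout API):

// Illustrative step-size schedules; names and constants are hypothetical.
public final class StepSizes {

  /** Old schedule: exponential decay by a constant factor per step n. */
  public static double exponential(double eta0, double decay, long n) {
    return eta0 * Math.pow(decay, n); // e.g. decay = 0.999
  }

  /** O(1/sqrt(n)) schedule, the standard rate for general convex f. */
  public static double convex(double eta0, long n) {
    return eta0 / Math.sqrt(n + 1); // +1 avoids division by zero at n = 0
  }

  /** O(1/n) schedule, the standard rate for strongly convex f. */
  public static double stronglyConvex(double eta0, long n) {
    return eta0 / (n + 1);
  }
}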
> Parallel SGD matrix factorizer for SVDrecommender
> -------------------------------------------------
>
> Key: MAHOUT-1272
> URL: https://issues.apache.org/jira/browse/MAHOUT-1272
> Project: Mahout
> Issue Type: New Feature
> Components: Collaborative Filtering
> Reporter: Peng Cheng
> Assignee: Sean Owen
> Original Estimate: 336h
> Remaining Estimate: 336h
>
> A parallel factorizer based on MAHOUT-1089 may achieve better performance on
> multicore processors.
> The existing code is single-threaded and may still be outperformed by the
> default ALS-WR.
> In addition, its hardcoded online-to-batch conversion prevents it from being
> used by an online recommender. An online SGD implementation could help build
> a high-performance online recommender as a replacement for the outdated
> slope-one.
> The new factorizer can implement either DSGD
> (http://www.mpi-inf.mpg.de/~rgemulla/publications/gemulla11dsgd.pdf) or
> Hogwild! (www.cs.wisc.edu/~brecht/papers/hogwildTR.pdf); see the sketch
> after this description.
> Related discussion has been carried on for a while but remains inconclusive:
> http://web.archiveorange.com/archive/v/z6zxQUSahofuPKEzZkzl
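> For reference, a minimal Hogwild!-style inner loop (lock-free updates to the
> shared factor matrices; all names here are illustrative, not the actual
> factorizer API):
>
> import java.util.concurrent.ThreadLocalRandom;
>
> // Each worker samples ratings and updates shared factors without locks,
> // relying on sparse, mostly non-overlapping updates (the Hogwild! premise).
> final class HogwildWorker implements Runnable {
>   private final double[][] userFactors;  // shared, written without locks
>   private final double[][] itemFactors;  // shared, written without locks
>   private final int[] userIds;           // parallel arrays of rating triples
>   private final int[] itemIds;
>   private final double[] ratings;
>   private final double eta;              // step size
>   private final double lambda;           // regularization strength
>   private final int steps;
>
>   HogwildWorker(double[][] userFactors, double[][] itemFactors,
>                 int[] userIds, int[] itemIds, double[] ratings,
>                 double eta, double lambda, int steps) {
>     this.userFactors = userFactors; this.itemFactors = itemFactors;
>     this.userIds = userIds; this.itemIds = itemIds; this.ratings = ratings;
>     this.eta = eta; this.lambda = lambda; this.steps = steps;
>   }
>
>   @Override public void run() {
>     ThreadLocalRandom rnd = ThreadLocalRandom.current();
>     for (int s = 0; s < steps; s++) {
>       int i = rnd.nextInt(ratings.length);      // sample one rating
>       double[] u = userFactors[userIds[i]];
>       double[] v = itemFactors[itemIds[i]];
>       double pred = 0.0;
>       for (int k = 0; k < u.length; k++) {
>         pred += u[k] * v[k];
>       }
>       double err = ratings[i] - pred;
>       for (int k = 0; k < u.length; k++) {      // lock-free SGD update
>         double uk = u[k];
>         u[k] += eta * (err * v[k] - lambda * uk);
>         v[k] += eta * (err * uk - lambda * v[k]);
>       }
>     }
>   }
> }
>
> Each worker would be submitted to a fixed thread pool over the same shared
> arrays; DSGD differs mainly in partitioning the rating matrix into blocks so
> that concurrent workers never touch the same rows.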
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira