[ https://issues.apache.org/jira/browse/MAHOUT-1365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13906747#comment-13906747 ]

Sean Owen commented on MAHOUT-1365:
-----------------------------------

It's the exact same implementation you are talking about, as far as I
understand what you're describing. It's just the standard algorithm from the
paper. The inputs are 'preferences' that are translated to confidences with
1 + a*R. You can attach any confidence to any 0/1 target. It will also weight
towards 0, not 1, if the input value is negative (my own touch).
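
For illustration only (this is not the Mahout code), a minimal Scala sketch of
the mapping described above. The helper name is hypothetical and alpha is the
usual scaling constant; using |R| for negative input is one plausible reading
of "weight towards 0":

    // Translate a raw preference R into a (0/1 target, confidence) pair.
    // Confidence is 1 + a*|R|; a negative input weights towards the 0 target.
    def toTargetAndConfidence(r: Double, alpha: Double): (Double, Double) = {
      val target = if (r > 0) 1.0 else 0.0         // 0/1 target
      val confidence = 1.0 + alpha * math.abs(r)   // 1 + a*R in magnitude
      (target, confidence)
    }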

> Weighted ALS-WR iterator for Spark
> ----------------------------------
>
>                 Key: MAHOUT-1365
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-1365
>             Project: Mahout
>          Issue Type: Task
>            Reporter: Dmitriy Lyubimov
>            Assignee: Dmitriy Lyubimov
>             Fix For: 1.0
>
>         Attachments: distributed-als-with-confidence.pdf
>
>
> Given preference P and confidence C distributed sparse matrices, compute the
> ALS-WR solution for implicit feedback (Spark Bagel version),
> following the Hu-Koren-Volinsky method (stripping off any concrete
> methodology for building the C matrix), with a parameterized test for
> convergence.
> The computational scheme follows the ALS-WR method (which should be slightly
> more efficient for sparser inputs).
> The best performance will be achieved if non-sparse anomalies are prefiltered
> (eliminated), such as an anomalously active user that doesn't represent a
> typical user anyway.
> The work is going on here:
> https://github.com/dlyubimov/mahout-commits/tree/dev-0.9.x-scala. I am
> porting away our (A1) implementation, so there are a few issues associated
> with that.
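
For reference (and not taken from the attached PDF), the per-user update in
the Hu-Koren-Volinsky formulation referenced above is, with Y the item-factor
matrix, C^u the diagonal confidence matrix for user u, p(u) the 0/1 preference
vector, and \lambda the regularization:

    x_u = (Y^T C^u Y + \lambda I)^{-1} Y^T C^u p(u)

    Y^T C^u Y = Y^T Y + Y^T (C^u - I) Y

Only the items a user actually interacted with contribute to the second
(sparse) correction term, which is why the scheme stays efficient for sparser
inputs.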



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
