alpha matters little by itself. It is used to create a confidence
weight of 1 + alpha * rating in the cost function. Except for the "1
+", it would not matter at all, since it would just scale all the
weights in the first half of the cost function proportionally.
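To make that concrete, here is a tiny sketch of the confidence weighting (the names and values are illustrative, not Mahout's API):

```python
import numpy as np

# Hypothetical observed interaction strengths r_ui for three user-item pairs.
alpha = 40.0
ratings = np.array([0.0, 1.0, 5.0])

# Confidence weight per observation: c_ui = 1 + alpha * r_ui.
# Without the "1 +", every weight would be alpha * r_ui, a uniform rescaling
# of the data-fit half of the cost, which (regularization aside) would not
# change the optimum.
confidence = 1.0 + alpha * ratings
print(confidence.tolist())
```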

Where it does matter is in relation to lambda, because lambda scales
the other half of the cost function (the regularization term). I would
fix alpha (I've just used 40, the value in the paper, for lack of a
better one) and then figure out what lambda to use. It depends on your
data, but I use a default in the range 0.1 - 1.0.
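One generic way to "figure out what lambda to use" is a small held-out search. This is only a sketch of that idea; train_and_score here is a placeholder for whatever factorization-plus-evaluation pipeline you actually run, not anything in Mahout:

```python
# Placeholder scoring function: pretend held-out error happens to be
# minimized near lambda = 0.5. In reality this would train a model with the
# given lambda and return its error on held-out data.
def train_and_score(lam):
    return (lam - 0.5) ** 2

# Fix alpha, then sweep a few lambda candidates and keep the best.
candidates = [0.01, 0.05, 0.1, 0.5, 1.0, 5.0]
best = min(candidates, key=train_and_score)
print(best)
```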

Making recommendations is just a matter of multiplying a user's
feature row by the item-feature matrix. I don't know if there is code
for that (probably not), but conceptually that is all it involves.
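That multiply-and-rank step can be sketched as follows. The matrices here are made up; in practice you would load the user- and item-feature output of the factorization job:

```python
import numpy as np

# One user's latent feature row (hypothetical values).
user_features = np.array([0.2, 1.1, -0.4])

# Item-feature matrix: one row of latent features per item.
item_features = np.array([[0.5, 0.3, 0.1],    # item 0
                          [0.0, 1.0, 0.2],    # item 1
                          [0.9, -0.2, 0.7]])  # item 2

# Predicted preference for every item is a dot product with the user row;
# recommending is just taking the highest-scoring items.
scores = item_features @ user_features
top = np.argsort(scores)[::-1]
print(top.tolist())
```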

On Wed, Oct 17, 2012 at 12:47 PM, Kris Jack <[email protected]> wrote:
> Hi all,
>
> I'm giving one of the distributed matrix factorisation implementations
> (code org.apache.mahout.cf.taste.hadoop.als.ParallelALSFactorizationJob) a
> try and have a few basic questions.  I can't find much documentation about
> how to run it so can someone please point me in the right direction?
>
> I'd like to know:
> 1) is there any code that helps selecting appropriate values for lambda and
> alpha?
> 2) is there any code that uses the output of the MF, generating recommended
> items for users?
>
> Best,
> Kris
>
>
>
> --
> Dr Kris Jack,
> http://www.mendeley.com/profiles/kris-jack/
