Restricted Boltzmann machines are of real interest, but again, I repeat the
obligatory warning about replicating everything from the Netflix
competition.

To take a few concrete examples,

- user biases were a huge advance in terms of RMS error, but a per-user bias
shifts every prediction for that user by the same constant, so it doesn't
affect the ordering of the results presented to a user and is thus of no
interest for almost all production recommender applications (see the sketch
after this list)

- temporal dynamics, in the sense of a time-varying user bias, are just like
the first case.

- temporal dynamics in the sense of blockbusters that decay quickly are
often of little interest in a production recommender because blockbusters
are typically presented outside the context of recommendations, which are
instead used to help users find back-catalog items of interest *in spite of*
current popularity trends.  This means that tracking this kind of temporal
dynamics is good in the Netflix challenge, but neutral or bad in most
recommendation applications.  An exception is magical navigation links that
populate themselves using recommendation technology.
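
To make the first point concrete, here is a tiny sketch (plain Java, not
tied to any particular library, and the numbers are made up) of why a
per-user bias can't change the order of a user's recommendations: it adds
the same constant to every item's predicted rating, so the sort is
unaffected.

    import java.util.*;

    public class BiasDemo {
      public static void main(String[] args) {
        double[] itemScores = {3.2, 4.1, 2.8, 3.9}; // hypothetical predictions for one user
        double userBias = 0.7;                      // hypothetical per-user bias

        // Same ordering with or without the bias, because the bias is the
        // same constant for every item.
        System.out.println(Arrays.equals(rank(itemScores, 0.0),
                                         rank(itemScores, userBias))); // prints true
      }

      static Integer[] rank(double[] scores, double bias) {
        Integer[] idx = new Integer[scores.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        Arrays.sort(idx, (a, b) -> Double.compare(scores[b] + bias, scores[a] + bias));
        return idx;
      }
    }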

On the other side,

- portfolio approaches that increase the diversity of results presented to
the user increase the probability of the user clicking, but make the RMS
error worse

- dithering of results to give a changing set of recommendations increases
users' click rates, but makes the RMS error worse (sketch below)
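
To illustrate the dithering point, here is a small sketch of one plausible
way to do it (the recipe and the noise scale are assumptions made for the
example, not a recommendation): compute a synthetic score as the log of the
original rank plus a bit of Gaussian noise and re-sort on that, so the top
of the list mostly holds while the tail gets shuffled on each request.

    import java.util.*;

    public class DitherDemo {
      public static void main(String[] args) {
        // Hypothetical top-5 recommendations, best first.
        List<String> ranked = Arrays.asList("itemA", "itemB", "itemC", "itemD", "itemE");
        double epsilon = Math.log(1.5);   // noise scale; bigger means more shuffling
        Random rnd = new Random();

        Map<String, Double> score = new HashMap<>();
        for (int i = 0; i < ranked.size(); i++) {
          // synthetic score = log(rank) + Gaussian noise
          score.put(ranked.get(i), Math.log(i + 1) + epsilon * rnd.nextGaussian());
        }
        List<String> dithered = new ArrayList<>(ranked);
        dithered.sort(Comparator.comparing(score::get));
        System.out.println(dithered);
      }
    }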

The take-away is that the Netflix results can't be used as a blueprint for
all recommendation needs.

On Sat, Nov 28, 2009 at 8:31 AM, Jake Mannix <jake.man...@gmail.com> wrote:

> Machine based recommender, because this makes the final leap from linear
> and quasi-linear decompositions to the truly nonlinear case (my friend on
> the executive team over at Netflix tells me that it was pretty apparent
> that the
> winners were going to be blendings of the RBM and SVD-based approaches
> pretty early on - and he was right!)
>



-- 
Ted Dunning, CTO
DeepDyve
