Thank you Ted!

Do you plan to do any talks in Sweden soon?

Best, Niklas


2014-04-07 14:52 GMT+02:00 Ted Dunning <[email protected]>:

> That book is a fine beginning, but doesn't have a lot of detail.
>
> Check out Pat's very nice demo site for more information.  I have also
> given a ton of talks on the subject.
>
> And, to answer your question, cooccurrence recommendation works great with
> diverse sources of behavior.
>
>
>
> On Sun, Apr 6, 2014 at 8:40 PM, Niklas Ekvall <[email protected]> wrote:
>
> > Thanks Pat!
> >
> > I did find a book by Ted Dunning and Ellen Friedman (Practical Machine
> > Learning: Innovations in Recommendation). I guess I can use it to read
> > more about the co-occurrence recommender or co-occurrence analysis.
> >
> > Best, Niklas
> >
> >
> >
> > 2014-04-06 19:37 GMT+02:00 Pat Ferrel <[email protected]>:
> >
> > > >
> > > > On Apr 6, 2014, at 2:48 AM, Niklas Ekvall <[email protected]> wrote:
> > > >
> > > > Hi Pat and Ted!
> > > >
> > > > Yes, I agree about the rank and MAP. But in this case, what is a good
> > > > initial guess for the parameters *number of features* and *lambda*?
> > >
> > > 20 or 30 features, depending on the variance in your data; more is
> > > theoretically better but usually gives rapidly diminishing returns. I
> > > forget what lambdas we tried.
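> > >
> > > For concreteness, the shape of the sweep I'd run (the scoring function
> > > is a stand-in for whatever trains the model and returns MAP@n on your
> > > hold-out set; the values are just illustrative):
> > >
> > >     # hypothetical grid over rank (number of features) and lambda
> > >     def map_at_n_for(rank, lam):
> > >         # stand-in: train ALS with these settings and measure MAP@n on
> > >         # the held-out data; returns a dummy value here
> > >         return 0.0
> > >
> > >     best = max((map_at_n_for(rank, lam), rank, lam)
> > >                for rank in (20, 30, 50)
> > >                for lam in (0.01, 0.05, 0.1))
> > >     print("best MAP@n %.4f at rank=%d, lambda=%g" % best)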
> > >
> > > >
> > > > Where can I find the best article about the cooccurrence recommender?
> > > > And can one use this approach for different types of data, e.g.,
> > > > ratings, purchase histories, or click histories?
> > >
> > > Absolutely, but remember that the data you train on is what you are
> > > recommending. So if you train on detail views (click paths), the
> > > recommender will return items to look at, which are not necessarily the
> > > same as items to purchase. If you train on what you want to recommend,
> > > then all of the above will work.
> > >
> > > If you want to train on click paths and recommend purchases, you
> > > probably need a cross-recommender, which is another discussion
> > > altogether.
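> > >
> > > As a toy illustration of the plain cooccurrence idea (raw counts only;
> > > Mahout's version also applies a log-likelihood ratio test to keep only
> > > the significant cooccurrences, which this sketch leaves out):
> > >
> > >     from collections import defaultdict
> > >     from itertools import combinations
> > >
> > >     # purchase histories; user and item names are made up for the example
> > >     histories = {
> > >         "u1": {"apple", "bread", "milk"},
> > >         "u2": {"apple", "milk"},
> > >         "u3": {"bread", "milk", "eggs"},
> > >     }
> > >
> > >     # item-item cooccurrence counts
> > >     cooc = defaultdict(lambda: defaultdict(int))
> > >     for items in histories.values():
> > >         for a, b in combinations(items, 2):
> > >             cooc[a][b] += 1
> > >             cooc[b][a] += 1
> > >
> > >     def recommend(user_items, n=3):
> > >         # score unseen items by how often they cooccur with the user's items
> > >         scores = defaultdict(int)
> > >         for item in user_items:
> > >             for other, count in cooc[item].items():
> > >                 if other not in user_items:
> > >                     scores[other] += count
> > >         return sorted(scores, key=scores.get, reverse=True)[:n]
> > >
> > >     print(recommend({"apple"}))   # ['milk', 'bread'] for this toy data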
> > >
> > > >
> > > > Best, Niklas
> > > >
> > > >
> > > > 2014-03-31 7:53 GMT+02:00 Ted Dunning <[email protected]>:
> > > >
> > > >> Yeah... what Pat said.
> > > >>
> > > >> Off-line evaluations are difficult.  At most, they provide
> > > >> directional guidance to be refined using live A/B testing. Of course,
> > > >> A/B testing of recommenders comes with a new set of tricky issues,
> > > >> like different recommenders learning from each other.
> > > >>
> > > >> On Sun, Mar 30, 2014 at 4:54 PM, Pat Ferrel <[email protected]> wrote:
> > > >>
> > > >>> Seems like most people agree that ranking is more important than
> > > >>> rating in most recommender deployments. RMSE was used for a long
> > > >>> time with cross-validation (partly because it was the choice of
> > > >>> Netflix during the competition) but it is really a measure of total
> > > >>> rating error. In the past we've used mean-average-precision as a
> > > >>> good measure of ranking quality. We chose hold-out tests based on
> > > >>> time, so something like 10% of the most recent data was held out
> > > >>> for cross-validation and we measured MAP@n for tuning parameters.
> > > >>>
> > > >>> http://en.wikipedia.org/wiki/Information_retrieval#Mean_average_precision
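> > > >>>
> > > >>> A bare-bones version of what we computed (average precision over the
> > > >>> top n per user, averaged across users; the names and toy data are
> > > >>> just for illustration):
> > > >>>
> > > >>>     def average_precision(recommended, held_out, n=10):
> > > >>>         """Average precision of the top-n recommendations for one user."""
> > > >>>         hits, score = 0, 0.0
> > > >>>         for k, item in enumerate(recommended[:n], start=1):
> > > >>>             if item in held_out:
> > > >>>                 hits += 1
> > > >>>                 score += hits / k   # precision at each hit position
> > > >>>         return score / min(len(held_out), n) if held_out else 0.0
> > > >>>
> > > >>>     def map_at_n(all_recs, all_held_out, n=10):
> > > >>>         """MAP@n across all users that have held-out data."""
> > > >>>         users = list(all_held_out)
> > > >>>         return sum(average_precision(all_recs.get(u, []),
> > > >>>                                      all_held_out[u], n)
> > > >>>                    for u in users) / len(users)
> > > >>>
> > > >>>     recs = {"u1": ["milk", "eggs", "bread"], "u2": ["apple", "milk"]}
> > > >>>     held = {"u1": {"milk", "bread"}, "u2": {"juice"}}
> > > >>>     print(map_at_n(recs, held, n=3))   # 0.4167 for this toy data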
> > > >>>
> > > >>> For our data (ecommerce shopping data) most of the ALS tuning
> > > >>> parameters had very little effect on MAP. However, cooccurrence
> > > >>> recommenders performed much better using the same data.
> > > >>> Unfortunately, comparing two algorithms with offline tests is of
> > > >>> questionable value. Still, with nothing else to go on, we went with
> > > >>> the cooccurrence recommender.
> > > >>>
> > > >>>
> > > >>
> > > >
> > >
> >
>
