Thanks for the list... as a non-native speaker, I have trouble understanding
what "dithering" means here.

I have the feeling that somewhere between a) and d) there is also room for
diversification of the items in the recommendation list, i.e. increasing the
distance between list items according to some metric like tf/idf over the
item information. I've never tried that, but with Lucene / Solr it should be
possible to use this information during scoring; a rough sketch of such a
re-ranking step is below.
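
Something like a greedy "maximal marginal relevance" re-rank is what I have
in mind. Just a sketch, untested; ScoredItem and the tf/idf maps are made-up
placeholders for whatever the recommender and the index actually return:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Greedy diversification: trade the original recommendation score off
// against similarity to the items already picked (lambda = 1.0 keeps
// the original ranking, smaller values favour diversity).
public class DiversityReranker {

    private final double lambda;

    public DiversityReranker(double lambda) { this.lambda = lambda; }

    public List<ScoredItem> rerank(List<ScoredItem> candidates, int k) {
        List<ScoredItem> selected = new ArrayList<ScoredItem>();
        List<ScoredItem> remaining = new ArrayList<ScoredItem>(candidates);
        while (selected.size() < k && !remaining.isEmpty()) {
            ScoredItem best = null;
            double bestScore = Double.NEGATIVE_INFINITY;
            for (ScoredItem item : remaining) {
                // similarity to the list built so far
                double maxSim = 0.0;
                for (ScoredItem s : selected) {
                    maxSim = Math.max(maxSim, cosine(item.tfidf, s.tfidf));
                }
                double score = lambda * item.score - (1.0 - lambda) * maxSim;
                if (score > bestScore) { bestScore = score; best = item; }
            }
            selected.add(best);
            remaining.remove(best);
        }
        return selected;
    }

    // cosine similarity on sparse tf/idf vectors (term -> weight)
    private static double cosine(Map<String, Double> a, Map<String, Double> b) {
        double dot = 0, na = 0, nb = 0;
        for (Map.Entry<String, Double> e : a.entrySet()) {
            na += e.getValue() * e.getValue();
            Double w = b.get(e.getKey());
            if (w != null) dot += e.getValue() * w;
        }
        for (double w : b.values()) nb += w * w;
        return (na == 0 || nb == 0) ? 0 : dot / Math.sqrt(na * nb);
    }

    public static class ScoredItem {
        final String id;
        final double score;               // score coming back from Solr
        final Map<String, Double> tfidf;  // tf/idf vector over item text
        ScoredItem(String id, double score, Map<String, Double> tfidf) {
            this.id = id; this.score = score; this.tfidf = tfidf;
        }
    }
}

I'd guess a lambda around 0.7 keeps the list mostly relevance-ordered while
pushing near-duplicates down, but I'd have to try it.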

Have a nice day

On Wed, May 22, 2013 at 2:30 AM, Ted Dunning <[email protected]> wrote:

> I have so far just used the weights that Solr applies natively.
>
> In my experience, what makes a recommendation engine work better is, in
> order of importance,
>
> a) dithering so that you gather wider data
>
> b) using multiple sources of input
>
> c) returning results quickly and reliably
>
> d) the actual algorithm or weighting scheme
>
> If you can cover items (a)-(c) in a real business, you are very lucky.  The
> search engine approach handles (b) and (c) by nature, which massively
> improves the likelihood of ever getting to examine (d).
>
>
> On Tue, May 21, 2013 at 1:13 AM, Johannes Schulte <
> [email protected]> wrote:
>
> > Thanks! Could you also add how to learn the weights you talked about, or
> > at least a hint? Learning weights for search engine query terms always
> > sounds like "learning to rank" to me, but that always seemed pretty
> > complicated and I never managed to try it out.
> >
> >
>
