http://www.dcc.fc.up.pt/~pribeiro/aulas/na1516/slides/na1516-slides-ir.pdf

  See the relevant sections for good info on deriving relevance judgments from click data.
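
To make the click -> relevance-grade step a bit more concrete, below is a rough
Python sketch. The click-log shape, the "wins to grade" scaling, and the output
columns are my own assumptions for illustration, not anything from the Solr
docs; it just shows the Joachims-style idea of treating a clicked doc as
preferred over the unclicked docs ranked above it, then aggregating those
preferences per query/doc into a graded relevance value:

# Sketch: derive graded relevance labels from click logs for LTR training data.
# Assumed input (not from the Solr docs): (query, ranked_doc_ids, clicked_doc_ids)
# tuples; adjust the output columns to whatever the "Assemble training data"
# step in contrib/ltr actually expects.
from collections import defaultdict

def pairwise_preferences(ranked_doc_ids, clicked_doc_ids):
    """Joachims-style 'click > skip above': a clicked doc is preferred over
    every unclicked doc that was ranked above it."""
    prefs = []
    clicked = set(clicked_doc_ids)
    for pos, doc in enumerate(ranked_doc_ids):
        if doc in clicked:
            for above in ranked_doc_ids[:pos]:
                if above not in clicked:
                    prefs.append((doc, above))  # doc preferred over 'above'
    return prefs

def grades_from_clicks(click_log, max_grade=4):
    """Count pairwise 'wins' per (query, doc) and rescale them to an integer
    grade in [0, max_grade]. This is one naive pointwise mapping; a pairwise
    learner could consume the preference pairs directly instead."""
    wins = defaultdict(int)
    seen = defaultdict(set)
    for query, ranked, clicked in click_log:
        seen[query].update(ranked)
        for winner, _loser in pairwise_preferences(ranked, clicked):
            wins[(query, winner)] += 1
    rows = []
    for query, docs in seen.items():
        top = max(wins[(query, d)] for d in docs) or 1
        for d in docs:
            rows.append((query, d, round(max_grade * wins[(query, d)] / top)))
    return rows

# Example: one logged query where doc3 was clicked after skipping doc1 and doc2.
log = [("solr ltr", ["doc1", "doc2", "doc3"], ["doc3"])]
for query, doc, grade in grades_from_clicks(log):
    print(f"{query}\t{doc}\t{grade}")

However you derive the grades, the main thing is that the columns line up with
what the "Assemble training data" step expects.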


On 1/5/2017 3:02 AM, Jeffery Yuan wrote:
> Thanks very much for integrating machine learning into Solr.
> https://github.com/apache/lucene-solr/blob/f62874e47a0c790b9e396f58ef6f14ea04e2280b/solr/contrib/ltr/README.md
>
> In the "Assemble training data" part, the third column indicates the relative
> importance or relevance of that doc.
> Could you please give more info about how to assign a score based on what a
> user clicks?
>
> I have read
> https://static.aminer.org/pdf/PDF/000/472/865/optimizing_search_engines_using_clickthrough_data.pdf
> http://www.cs.cornell.edu/people/tj/publications/joachims_etal_05a.pdf
> http://alexbenedetti.blogspot.com/2016/07/solr-is-learning-to-rank-better-part-1.html
>
> But I still have no clue how to translate the partial pairwise feedback into
> the importance or relevance of that doc.
>
>  From a user's perspective, steps such as setting up the features and the
> model in Solr are simple, but collecting the feedback data and
> training/updating the model is much more complex.
>
> It would be great if Solr could provide some detailed instructions or sample
> code about how to translate the partial pairwise feedback and use it to train
> and update the model.
>
> Thanks again for your help.
