In the "Assemble training data" part, the third column indicates the relative
importance or relevance of that doc.
Could you please give more info about how to derive that score from user
clicks?
Hi Jeffery,
Just a suggestion: give your questions more detail and you may get more
feedback.
Regarding the above, here are some examples of assigning "relative" weights
to training data from gathered user-click info (all assumed, but similar to
Omniture monitoring):
- position in the result list
- above/below the fold
- result page number
As an information engineer, you might see two attributes here: a) user
perseverance, and b) effort to find the result.
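
To make that concrete, here is a minimal Python sketch of turning those three
raw signals into the two attributes. Every function name, weight, and
threshold below is my own assumption for illustration; none of it comes from
the LTR module itself:

def effort_to_find(position, above_fold, page_number):
    """Effort the user spent reaching the clicked result.

    Low values mean the result was easy to find (high on page 1,
    above the fold); high values mean the user had to dig.
    """
    effort = (position - 1) / 10.0           # rank within the page
    effort += 0.0 if above_fold else 0.5     # scrolling costs a little
    effort += (page_number - 1) * 1.0        # paging costs a lot
    return effort

def perseverance(position, page_number):
    """Persistence the click implies: only a persevering user ever
    reaches a result deep on page 3, so deeper clicks score higher."""
    return (page_number - 1) * 10 + position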
From there, I think the two attributes are correlated, but not in a linear,
directly proportional way: ease of finding outweighs user perseverance every
time, because an easy-to-find result removes the need for extensive
perseverance. A click from page #3, for example, doesn't mitigate the effort;
it pushes the (effort, perseverance) pair toward values that only the most
persevering users will ever reach.
OK, that is damn confusing. But it's what I would want to do: use the
(perseverance, effort) pair to rerank a document as if the two were balanced
and positioned "relative" to the other training data. Working out the exact
equation will take some more effort; one possible form is sketched below.
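
For what it's worth, here is one assumed form, building on the
effort_to_find/perseverance sketch above. The shape of the formula (ease
dominating, perseverance only softening the discount) and the 0.7/0.3 split
are guesses of mine, not anything the LTR module prescribes:

def relevance_grade(effort, persev):
    """Collapse the (effort, perseverance) pair into one grade in [0, 1].

    Ease of finding dominates: a low-effort click scores near 1.0, while
    a heavily paged-for click is discounted even when the user clearly
    persevered, matching the intuition above.
    """
    ease = 1.0 / (1.0 + effort)               # effort dominates the grade
    persistence = 1.0 - 1.0 / (1.0 + persev)  # perseverance softens the discount
    return round(ease * (0.7 + 0.3 * persistence), 4)

# e.g. a click on result 2, above the fold, page 1:
#   relevance_grade(effort_to_find(2, True, 1), perseverance(2, 1))   -> ~0.82
# versus result 5, below the fold, page 3:
#   relevance_grade(effort_to_find(5, False, 3), perseverance(5, 3))  -> ~0.25
# The grade would then go in the third column of the training-data row.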
I'm not sure this response is helpful at all, but I'm going to go with it
because I recognize all of it from AOL, Microsoft, and Comcast work, before
the days of ML in search.
On 1/5/2017 3:33 PM, Jeffery Yuan wrote:
Thanks, Will Martin.
I checked the PDF and it's great, but it doesn't seem very useful for my
question: how to train the model using user clicks with the LTR (learning to
rank) module. I know the concepts after reading these papers, but I'm still
not sure how to code them.