alessandrobenedetti commented on a change in pull request #1571:
URL: https://github.com/apache/lucene-solr/pull/1571#discussion_r520515596
##########
File path: solr/solr-ref-guide/src/learning-to-rank.adoc
##########

@@ -247,6 +254,81 @@ The output XML will include feature values as a comma-separated list, resembling
 }}
 ----
+=== Running a Rerank Query Interleaving Two Models
+
+To rerank the results of a query, interleaving two models (myModelA, myModelB) add the `rq` parameter to your search, passing two models in input, for example:
+
+[source,text]
+http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=myModelA model=myModelB reRankDocs=100}&fl=id,score
+
+To obtain the model that interleaving picked for a search result, computed during reranking, add `[interleaving]` to the `fl` parameter, for example:

Review comment:
       1 is what is currently implemented, and it aligns with the Team Draft Interleaving papers and evaluation methods. Your observation is interesting, but to implement it we would need to invent a new type of interleaving algorithm that behaves that way when interleaving the results and then evaluates the user clicks accordingly later on. Your observation on the features to log applies as well. So far, in my opinion, no change is needed in this regard.
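       For anyone reading along, here is a rough sketch of the Team Draft selection loop I am referring to (a hypothetical `TeamDraftSketch` class, not the code in this PR): in each round the team with fewer picks goes next (coin flip on ties), each team picks its highest-ranked document not yet selected, and every pick is attributed to the contributing model, which is the kind of per-document information the `[interleaving]` transformer exposes.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.Set;

// Hypothetical illustration of Team Draft Interleaving; not the Solr implementation.
public class TeamDraftSketch {

  /**
   * Merges two ranked lists of document ids. The team with fewer picks goes next
   * (coin flip on ties); each team picks its highest-ranked document that is not
   * yet in the interleaved list. pickedBy records which model contributed each
   * document, i.e. the attribution later used to compare clicks per model.
   */
  public static List<String> interleave(List<String> rankedA, List<String> rankedB,
                                        Map<String, String> pickedBy, Random random) {
    List<String> interleaved = new ArrayList<>();
    Set<String> chosen = new HashSet<>();
    int countA = 0, countB = 0;
    int i = 0, j = 0;
    while (i < rankedA.size() || j < rankedB.size()) {
      // A goes when it still has candidates and either B is exhausted,
      // A has fewer picks, or the picks are tied and the coin flip says A.
      boolean turnA = i < rankedA.size()
          && (j >= rankedB.size()
              || countA < countB
              || (countA == countB && random.nextBoolean()));
      int before = interleaved.size();
      if (turnA) {
        i = pick(rankedA, i, chosen, interleaved, pickedBy, "myModelA");
        if (interleaved.size() > before) countA++;
      } else {
        j = pick(rankedB, j, chosen, interleaved, pickedBy, "myModelB");
        if (interleaved.size() > before) countB++;
      }
    }
    return interleaved;
  }

  // Skips documents already selected by the other team, then takes the next one.
  private static int pick(List<String> ranked, int idx, Set<String> chosen,
                          List<String> interleaved, Map<String, String> pickedBy,
                          String model) {
    while (idx < ranked.size() && chosen.contains(ranked.get(idx))) {
      idx++;
    }
    if (idx < ranked.size()) {
      String doc = ranked.get(idx++);
      chosen.add(doc);
      interleaved.add(doc);
      pickedBy.put(doc, model);
    }
    return idx;
  }
}
```

       Because every result in the interleaved list is attributed to exactly one model, the later click evaluation can simply compare clicks credited to each team, which is why option 1 matches what the papers assume.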