[
https://issues.apache.org/jira/browse/LUCENE-2091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12786087#action_12786087
]
Joaquin Perez-Iglesias edited comment on LUCENE-2091 at 12/4/09 7:17 PM:
-------------------------------------------------------------------------
Yes, you are right: what I meant was related to multifield queries. If you
search a:F1^F2, the right approach would be to compute IDF with docFreq(a,F1^F2),
which in my understanding cannot be done.
If I'm right, Lucene does weight(a)*idf(a,F1) + weight(a)*idf(a,F2), whereas the
correct approach would be weight(a)*idf(a,F1^F2).
That's the reason why Uwe (and I) suggested using IDF per field in the
previous case and, if the query is executed on each field, using a kind of
catch-all field to compute docFreq across all fields.
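The difference between the two IDF computations can be illustrated with a small
sketch (Python for illustration only; the field names and counts below are
hypothetical, not taken from the patch):

```python
import math

# Hypothetical toy index: the documents in which term "a" occurs, per field.
docs_with_a = {
    "title": {1, 2},        # "a" appears in the title of docs 1 and 2
    "body":  {2, 3, 4},     # "a" appears in the body of docs 2, 3 and 4
}
N = 10  # total number of documents in the collection

def idf(df, n):
    # Classic IDF; the exact formula varies by scheme, this is the plain log ratio.
    return math.log(n / df)

# What Lucene effectively does for a multifield query (per this comment):
# one IDF per field, summed.
per_field = idf(len(docs_with_a["title"]), N) + idf(len(docs_with_a["body"]), N)

# What BM25F wants: a single IDF from docFreq over the union of fields,
# i.e. the "catch-all field" approach.
union_df = len(docs_with_a["title"] | docs_with_a["body"])
catch_all = idf(union_df, N)

print(per_field, catch_all)
```

Note that summing per-field IDFs counts doc 2 twice and inflates the weight of
the term relative to the single document-level IDF.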
(Michael)
In summary it would be nice to have:
1. docFreq at document level, something like "int docFreq(term, doc_id)",
returning the number of documents in which the term occurs; if that is not
possible, a catch-all field would be enough.
2. The collection average document length and the collection average field
length (for each field).
I don't think we need "how many times does term T occur in all fields for
doc D"; frequency is needed per field, not per document.
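Putting the pieces together (per-field frequencies, per-field average lengths,
and a single catch-all IDF), a BM25F score for one term could be sketched as
follows. This is a toy illustration of the standard BM25F formula; the field
weights, k1/b values and statistics are hypothetical, not values from the patch:

```python
import math

# Toy per-field statistics for one document and one term.
tf = {"title": 1, "body": 3}             # term frequency per field
field_len = {"title": 4, "body": 120}    # this document's field lengths
avg_len = {"title": 5.0, "body": 100.0}  # collection average field length (point 2 above)
boost = {"title": 2.0, "body": 1.0}      # per-field weights
N, union_df = 10, 4                      # collection size and catch-all docFreq (point 1 above)
k1, b = 1.2, 0.75

# BM25F: length-normalize and weight the term frequency per field,
# then sum the contributions into one pseudo-frequency...
pseudo_tf = sum(
    boost[f] * tf[f] / (1.0 - b + b * field_len[f] / avg_len[f])
    for f in tf
)

# ...and apply saturation once, with a single document-level IDF
# (BM25's usual smoothed IDF form).
idf = math.log(1.0 + (N - union_df + 0.5) / (union_df + 0.5))
score = idf * pseudo_tf / (k1 + pseudo_tf)
print(score)
```

The key design point is that saturation (the k1 term) is applied once, after
combining the fields, rather than once per field as naive per-field BM25 would do.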
I don't know much about the implementation of PhraseQuery, but I think it
should be possible to implement BM25F for it (and for any other query type),
as long as the frequency and docFreq of the phrase/terms are available.
At this point it is not supported in the patch, but I don't see any reason why
it couldn't be implemented; what I don't really know is how to do it
:-).
> Add BM25 Scoring to Lucene
> --------------------------
>
> Key: LUCENE-2091
> URL: https://issues.apache.org/jira/browse/LUCENE-2091
> Project: Lucene - Java
> Issue Type: New Feature
> Components: contrib/*
> Reporter: Yuval Feinstein
> Priority: Minor
> Fix For: 3.1
>
> Attachments: LUCENE-2091.patch, persianlucene.jpg
>
> Original Estimate: 48h
> Remaining Estimate: 48h
>
> http://nlp.uned.es/~jperezi/Lucene-BM25/ describes an implementation of
> Okapi-BM25 scoring in the Lucene framework,
> as an alternative to the standard Lucene scoring (which is a version of mixed
> boolean/TFIDF).
> I have refactored this a bit, added unit tests and improved the runtime
> somewhat.
> I would like to contribute the code to Lucene under contrib.
--
This message is automatically generated by JIRA.