On 12/1/10, Robert Muir wrote:
>
> you fill the topics file with a list of queries, like the LIA2 example
> that has a single query for "apache source":
>
>
> Number: 0
> apache source
> Description:
> Narrative:
>
>
> then you populate the qrels file with the "answers" for your document
> collection.
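For concreteness: a qrels file in the standard TREC format (which, if I
recall correctly, is what the benchmark's TrecJudge parses) has one
judgement per line: query number, an unused iteration field, the document
name, and a 0/1 relevance flag. For the "apache source" topic above it
could look like this, where "doc-17" and "doc-42" are hypothetical
document names:

  0 0 doc-17 1
  0 0 doc-42 1

The document names must match exactly whatever doc-name field your index
stores, or nothing will ever be counted as relevant.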
On Wed, Dec 1, 2010 at 7:25 AM, Yakob wrote:
> can you give me an example of how to populate the topics file and
> the qrels file, other than those in the LIA2 sample code? I still
> don't understand how these two text files work anyway. :-)
>
> let me get this straight. I need to fill the topics file with …
On 12/1/10, Robert Muir wrote:
>
> well you can't use those files with your own document collection.
> you need to populate the topics file with queries that you care about
> measuring.
> then you need to populate the qrels file with judgements for each
> query, *for your collection*. you are saying …
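As a sketch of what "queries that you care about" means in file form,
assuming the TREC topic format that
org.apache.lucene.benchmark.quality.trec.TrecTopicsReader parses, a
topics file with two queries of your own might look like this ("lucene
scoring" is just a placeholder query):

  <top>
  <num> Number: 0
  <title> apache source
  <desc> Description:
  <narr> Narrative:
  </top>

  <top>
  <num> Number: 1
  <title> lucene scoring
  <desc> Description:
  <narr> Narrative:
  </top>

The <desc> and <narr> sections can stay empty if you only search by the
<title> text.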
On Wed, Dec 1, 2010 at 5:53 AM, Yakob wrote:
>
> well yes, your information is really helpful. I did find a topics and
> qrels file that come in /src/lia/benchmark in the LIA2 sample code,
> and the result did change slightly, but the precision and recall
> values are still zero. I did also happen to u…
On 11/30/10, Robert Muir wrote:
> On Tue, Nov 30, 2010 at 10:46 AM, Yakob wrote:
>
>> can you tell me what went wrong? what is the difference between
>> topicsFile and qrelsFile anyway?
>>
>
> well it's hard to tell what you are supplying as topics and qrels.
> have a look at /src/lia/benchmark in the LIA2 sample code: it has an
> example topics and qrels file.
On Tue, Nov 30, 2010 at 10:46 AM, Yakob wrote:
> can you tell me what went wrong? what is the difference between
> topicsFile and qrelsFile anyway?
>
well it's hard to tell what you are supplying as topics and qrels.
have a look at /src/lia/benchmark in the LIA2 sample code: it has an
example topics and qrels file.
On 11/30/10, Robert Muir wrote:
>
> Have a look at contrib/benchmark under the
> org.apache.lucene.benchmark.quality package.
> There is code (for example
> org.apache.lucene.benchmark.quality.trec.QueryDriver) that can run an
> experiment and output what you need for trec_eval.exe
> I think there…
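If memory serves, QueryDriver can also be run straight from the command
line, roughly like this (argument order is from the Lucene 3.x era, so
double-check it against your version; the jar and file names here are
placeholders):

  java -cp lucene-core.jar:lucene-benchmark.jar \
      org.apache.lucene.benchmark.quality.trec.QueryDriver \
      topics.txt qrels.txt submission.txt /path/to/index

  trec_eval qrels.txt submission.txt

The submission file it writes is the ranked run that trec_eval then
scores against the qrels.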
On Mon, Nov 29, 2010 at 8:01 AM, Yakob wrote:
> hello all
> I was wondering, if I want to measure precision and recall in Lucene,
> then what's the best way for me to do it? is there any sample source
> code that I can use?
>
Have a look at contrib/benchmark under the
org.apache.lucene.benchmark.quality package.
There is code (for example
org.apache.lucene.benchmark.quality.trec.QueryDriver) that can run an
experiment and output what you need for trec_eval.exe.
I think there…
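Putting the pieces together, a minimal driver in the spirit of the LIA2
PrecisionRecall example might look like the sketch below. It uses the
Lucene 3.x-era benchmark API; the field names "title", "contents" and
"filename", and the file paths, are assumptions about your index, so
adjust them to match:

  import java.io.*;
  import org.apache.lucene.benchmark.quality.*;
  import org.apache.lucene.benchmark.quality.trec.*;
  import org.apache.lucene.benchmark.quality.utils.*;
  import org.apache.lucene.search.IndexSearcher;
  import org.apache.lucene.store.*;

  public class PrecisionRecall {
    public static void main(String[] args) throws Throwable {
      File topicsFile = new File("topics.txt");
      File qrelsFile = new File("qrels.txt");
      Directory dir = FSDirectory.open(new File("index"));
      IndexSearcher searcher = new IndexSearcher(dir, true);
      PrintWriter logger = new PrintWriter(System.out, true);

      // read the queries to run and the relevance judgements for them
      TrecTopicsReader qReader = new TrecTopicsReader();
      QualityQuery[] qqs = qReader.readQueries(
          new BufferedReader(new FileReader(topicsFile)));
      Judge judge = new TrecJudge(
          new BufferedReader(new FileReader(qrelsFile)));
      judge.validateData(qqs, logger);

      // turn each topic's <title> into a query on the "contents" field
      QualityQueryParser qqParser = new SimpleQQParser("title", "contents");

      // run the queries and score the hits against the judgements;
      // "filename" is the stored field holding each doc's name
      QualityBenchmark qrun =
          new QualityBenchmark(qqs, qqParser, searcher, "filename");
      QualityStats[] stats = qrun.execute(judge, null, logger);

      // average precision/recall over all queries and print a summary
      QualityStats avg = QualityStats.average(stats);
      avg.log("SUMMARY", 2, logger, "  ");
    }
  }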
Well, I guess I can answer your original question with "no". There's
no Lucene method that will give you these because they aren't
defined. If you can answer the question "given a corpus and a set
of queries and the correct ordering of the relevant documents, how
close does Lucene come to that ordering?" …
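In code terms, once you do have judgements for a query, the two numbers
are plain set arithmetic over the ranked hits. A minimal sketch (a
hypothetical helper, not a Lucene API):

  import java.util.*;

  class Measures {
    // precision@k: fraction of the returned top-k docs that are relevant
    static double precisionAtK(Set<String> relevant, List<String> ranked, int k) {
      int n = Math.min(k, ranked.size()), hits = 0;
      for (int i = 0; i < n; i++)
        if (relevant.contains(ranked.get(i))) hits++;
      return n == 0 ? 0.0 : (double) hits / n;
    }

    // recall@k: fraction of all relevant docs that show up in the top k
    static double recallAtK(Set<String> relevant, List<String> ranked, int k) {
      int n = Math.min(k, ranked.size()), hits = 0;
      for (int i = 0; i < n; i++)
        if (relevant.contains(ranked.get(i))) hits++;
      return relevant.isEmpty() ? 0.0 : (double) hits / relevant.size();
    }
  }

The hard part Erick is pointing at is producing the "relevant" set in the
first place, not computing the ratios.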
On 11/29/10, Erick Erickson wrote:
> Define precision. Define recall. Define measure.
>
> Sorry to give in to my impulses, but this question is so broad it's
> unanswerable. Try looking at the Text REtrieval Conference for instance.
> Lots of very bright people spend significant amounts of their careers
> trying to just define what these mean…
Define precision. Define recall. Define measure.
Sorry to give in to my impulses, but this question is so broad it's
unanswerable. Try looking at the Text REtrieval Conference for instance.
Lots of very bright people spend significant amounts of their careers
trying to just define what these mean.
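For the record, the textbook definitions are: precision =
|relevant ∩ retrieved| / |retrieved| (how much of what you returned is
relevant) and recall = |relevant ∩ retrieved| / |relevant| (how much of
what is relevant you returned). All the ambiguity lives in deciding what
counts as "relevant" and at which cutoff you measure.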