Grant brought up good points on
the users' information needs and the precision/recall tuning of the system.
Thanks,
Ivan
--- On Thu, 1/28/10, Robert Muir wrote:
> From: Robert Muir
> Subject: Re: Average Precision - TREC-3
> To: java-user@lucene.apache.org
> Date: Thursday, January 28, 2010
also perform the judgments ourselves. This
could be a very time-consuming process.
Thank you,
Ivan
--- On Thu, 1/28/10, Grant Ingersoll wrote:
> From: Grant Ingersoll
> Subject: Re: Average Precision - TREC-3
> To: java-user@lucene.apache.org
> Date: Thursday, January 28, 2010
Right, but the problem is when something is currently ranked as doc 20 but
should be in the top 1, 5, or 10, and you aren't seeing it.
So I think if you are judging the top-N docs from an existing system, you should
look a little farther ahead than the top-N you care about.
I think you should also ind
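For illustration, a minimal sketch of judging deeper than the cutoff you report on, using Lucene's search API; the class name, the poolDepth idea of "a little farther than top-N", and the "docid" stored field are assumptions for this sketch, not details from the thread.

import java.io.IOException;
import org.apache.lucene.document.Document;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;

public class JudgmentPool {
    // Report metrics at reportCutoff, but collect relevance judgments down to
    // poolDepth, so a document currently ranked (say) 20th still gets judged.
    static void printPool(IndexSearcher searcher, Query query,
                          int reportCutoff, int poolDepth) throws IOException {
        TopDocs hits = searcher.search(query, poolDepth);
        for (int i = 0; i < hits.scoreDocs.length; i++) {
            ScoreDoc sd = hits.scoreDocs[i];
            Document doc = searcher.doc(sd.doc);  // fetch stored fields for the judging UI
            System.out.printf("rank=%d docid=%s %s%n", i + 1, doc.get("docid"),
                    i < reportCutoff ? "(reported)" : "(judge anyway)");
        }
    }
}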
On Jan 28, 2010, at 11:00 AM, Robert Muir wrote:
> in addition to what Grant said, even if your documents are similar, what
> about queries?
>
> For example, if only a few trec queries contain proper names, acronyms,
> abbreviations, or whatever, but your users frequently input things like
> thi
in addition to what Grant said, even if your documents are similar, what
about queries?
For example, if only a few trec queries contain proper names, acronyms,
abbreviations, or whatever, but your users frequently input things like
this, it won't be representative.
I will disagree with him on a f
On Jan 27, 2010, at 1:36 PM, Ivan Provalov wrote:
> Robert, Grant:
>
> Thank you for your replies.
>
> Our goal is to fine-tune our existing system to perform better on relevance.
What kind of documents do you have? Are they very similar to the TREC docs
(i.e. news articles)? There can be
Robert,
Thank you for this great information. Let me look into these suggestions.
Ivan
--- On Wed, 1/27/10, Robert Muir wrote:
> From: Robert Muir
> Subject: Re: Average Precision - TREC-3
> To: java-user@lucene.apache.org
> Date: Wednesday, January 27, 2010, 2:52 PM
> Hi Iva
ensure that our overall system
> doesn't introduce the relevance issues (content pre-processing steps, query
> parsing steps, etc...).
>
> Thank you,
>
> Ivan Provalov
>
> --- On Wed, 1/27/10, Robert Muir wrote:
>
> > From: Robert Muir
> > Subject: Re: Av
Thank you, Jose.
-----Original Message-----
From: José Ramón Pérez Agüera [mailto:jose.agu...@gmail.com]
Sent: Wednesday, January 27, 2010 1:42 PM
To: java-user@lucene.apache.org
Subject: Re: Average Precision - TREC-3
Hi Ivan,
you might want to use the Lucene BM25 implementation. Results should
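For reference, a minimal sketch of switching an IndexSearcher to BM25 scoring. It assumes a Lucene version that ships org.apache.lucene.search.similarities.BM25Similarity (BM25 was not yet in Lucene core when this thread was written), and the index path is invented.

import java.nio.file.Paths;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.similarities.BM25Similarity;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class Bm25SearcherFactory {
    static IndexSearcher open(String indexPath) throws Exception {
        Directory dir = FSDirectory.open(Paths.get(indexPath));
        IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(dir));
        // k1 and b are BM25's free parameters; 1.2 and 0.75 are the usual defaults.
        searcher.setSimilarity(new BM25Similarity(1.2f, 0.75f));
        // For fully consistent scoring, set the same Similarity on the
        // IndexWriterConfig when building the index as well.
        return searcher;
    }
}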
to ensure that our overall system
> doesn't introduce the relevance issues (content pre-processing steps, query
> parsing steps, etc...).
>
> Thank you,
>
> Ivan Provalov
>
> --- On Wed, 1/27/10, Robert Muir wrote:
>
>> From: Robert Muir
>> Subject: Re: Aver
doesn't introduce the relevance issues (content pre-processing steps, query parsing
steps, etc...).
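One cheap way to check the query-parsing step in such a pipeline is to print what the parser and analysis chain actually produce. This is only a hedged sketch: it assumes a recent Lucene queryparser module, a field named "body", and StandardAnalyzer, none of which come from this thread.

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.Query;

public class QueryParseCheck {
    public static void main(String[] args) throws Exception {
        QueryParser parser = new QueryParser("body", new StandardAnalyzer());
        // Print the parsed query to see what will actually be searched for,
        // e.g. how acronyms, punctuation, and phrases come out of analysis.
        Query q = parser.parse("\"information retrieval\" TREC-3 U.S.");
        System.out.println(q);
    }
}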
Thank you,
Ivan Provalov
--- On Wed, 1/27/10, Robert Muir wrote:
> From: Robert Muir
> Subject: Re: Average Precision - TREC-3
> To: java-user@lucene.apache.org
> Date: Wednesday, January 27, 2010
Hello, forgive my ignorance here (I have not worked with these English TREC
collections), but is the TREC-3 test collection the same as the test
collection used in the 2007 paper you referenced?
It looks like that is a different collection, so it's not really possible to
compare these relevance scores
On Jan 26, 2010, at 8:28 AM, Ivan Provalov wrote:
> We are looking into making some improvements to the relevance ranking of our
> search platform based on Lucene. We started by running the Ad Hoc TREC task
> on the TREC-3 data using "out-of-the-box" Lucene. The reason to run this old
> TREC-3 (
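Since the whole thread is about average precision numbers, a small self-contained sketch of the metric itself may help; the document IDs below are invented. Mean average precision (MAP), the usual TREC summary figure, is just this value averaged over all topics.

import java.util.List;
import java.util.Set;

public class AveragePrecision {
    // AP = sum over relevant ranks r of (relevant docs seen in top r / r),
    // divided by the total number of relevant documents for the topic.
    static double averagePrecision(List<String> ranking, Set<String> relevant) {
        int relevantSeen = 0;
        double sum = 0.0;
        for (int i = 0; i < ranking.size(); i++) {
            if (relevant.contains(ranking.get(i))) {
                relevantSeen++;
                sum += (double) relevantSeen / (i + 1);
            }
        }
        return relevant.isEmpty() ? 0.0 : sum / relevant.size();
    }

    public static void main(String[] args) {
        List<String> ranking = List.of("d3", "d7", "d1", "d9", "d2");
        Set<String> relevant = Set.of("d3", "d1", "d2");
        // Relevant docs at ranks 1, 3, 5: (1/1 + 2/3 + 3/5) / 3 ≈ 0.756
        System.out.println(averagePrecision(ranking, relevant));
    }
}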