Back papers available at: http://trec.nist.gov/pubs.html. Lots of interesting 
Echelon-relevant things happening here..."Semantic Forests" was only the tip of 
the iceberg...

CALL FOR PARTICIPATION

TEXT RETRIEVAL CONFERENCE (TREC)

February 2001 - November 2001 

Conducted by: 
National Institute of Standards and Technology (NIST) 

With support from: 
Defense Advanced Research Projects Agency (DARPA) 
Advanced Research and Development Activity (ARDA) 

The Text Retrieval Conference (TREC) workshop series encourages research in 
information retrieval and related text processing applications by providing a 
large test collection, uniform scoring procedures, and a forum for 
organizations interested in comparing their results. Now in its tenth year, the 
conference has become the major experimental effort in the field. Participants 
in the previous TREC conferences have examined a wide variety of retrieval 
techniques and retrieval environments, including cross-language retrieval, 
retrieval of web documents, retrieval of recorded speech, and question 
answering. Details about TREC can be found at the TREC web site, 
http://trec.nist.gov . 

You are invited to participate in TREC 2001. TREC 2001 will consist of a set of 
tasks known as "tracks". Each track focuses on a particular subproblem or 
variant of the retrieval task as described below. Organizations may choose to 
participate in any or all of the tracks. For most tracks, training and test 
materials are available from NIST; a few tracks will use special collections 
that are available from other organizations for a nominal fee. For all tracks, 
NIST will collect and analyze the retrieval results. 

Dissemination of TREC work and results other than in the (publicly available) 
conference proceedings is welcomed, but the conditions of participation 
preclude specific advertising claims based on TREC results. All retrieval 
results submitted to TREC are published in the Proceedings and are archived on 
the TREC web site. As before, the workshop in November will be open only to 
participating groups that submit results and to selected government personnel 
from sponsoring agencies. 


Schedule:
TREC 2001 has now started, but we are still accepting applications. 
Beginning February 21 -- document disks distributed to new participants who 
have returned the required forms. There are a total of 5 CD-ROMs containing 
about 5 gigabytes of text. In addition, 450 training topics (questions) and 
relevance judgments are available from NIST. Please note that no disks will be 
shipped before February 21. 
August 1 -- earliest results submission deadline. 
August 31 -- latest results submission deadline. (Results deadlines vary by 
track; the specific deadline for each track will be included in the track 
guidelines, which should be finalized by May.) 
September 7 -- speaker proposals due at NIST. 
October 6 -- relevance judgments and individual evaluation scores due back to 
participants. 
November 13-16 -- TREC 2001 conference at NIST in Gaithersburg, Md. 


Task Description:
Below is a brief summary of the tasks. Complete descriptions of tasks performed 
in previous years are included in the Overview papers in each of the TREC 
proceedings (in the Publications section of the web site). 

The exact definition of the tasks to be performed in each track for TREC 2001 
is still being formulated. Track discussion takes place on the track mailing 
list. To be added to a track mailing list, follow the instructions for 
contacting that mailing list as given below. For questions about the track, 
send mail to the track coordinator (or post the question to the track mailing 
list once you join). 


--------------------------------------------------------------------------------

Cross-Language Track 
A track that investigates the ability of retrieval systems to find documents 
that pertain to a topic regardless of the language in which the document is 
written. The main task in the TREC 2001 Cross-Language track will entail 
retrieving Arabic documents using English topics. Other topic languages (e.g., 
French) may also be used. 

[Cross-language retrieval among European languages is evaluated in CLEF (Cross-
Language Evaluation Forum); see http://www.clef-campaign.org/ . 

Cross-language retrieval for Asian languages, including Chinese in the next 
iteration, is evaluated in the NTCIR workshops; see 
http://www.rd.nacsis.ac.jp/~ntcadm/index-en.html .] 

Track coordinators: Fred Gey, [EMAIL PROTECTED] and Doug Oard, 
[EMAIL PROTECTED] 
Mailing list: send a mail message to [EMAIL PROTECTED] whose body consists of 
the single line: subscribe xlingual <FirstName> <LastName> 
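
For concreteness, here is a minimal sketch of one common cross-language 
approach, dictionary-based query translation: English topic terms are replaced 
by their dictionary translations and the translated query is run against the 
Arabic collection. The tiny dictionary, documents, and overlap scorer below 
are hypothetical illustrations, not TREC-supplied resources or the track's 
prescribed method.

    from collections import Counter

    # Hypothetical English -> Arabic translation dictionary
    # (term -> candidate translations).
    EN_AR_DICT = {
        "oil": ["نفط", "زيت"],
        "prices": ["أسعار"],
    }

    def translate_query(english_terms):
        """Replace each English term with all of its dictionary translations."""
        arabic_terms = []
        for term in english_terms:
            arabic_terms.extend(EN_AR_DICT.get(term, []))
        return arabic_terms

    def score(doc_tokens, query_terms):
        """Toy scorer: count occurrences of query terms in the document."""
        counts = Counter(doc_tokens)
        return sum(counts[t] for t in query_terms)

    query = translate_query(["oil", "prices"])
    docs = {
        "doc1": ["أسعار", "نفط", "في", "السوق"],
        "doc2": ["طقس", "اليوم"],
    }
    ranking = sorted(docs, key=lambda d: score(docs[d], query), reverse=True)
    print(ranking)  # doc1 (an oil-prices story) ranks first

Real entries typically add stemming, translation weighting, and sense 
disambiguation on top of this skeleton.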


Filtering Track 
A task in which the user's information need is stable (and some relevant 
documents are known) but there is a stream of new documents. For each document, 
the system must make a binary decision as to whether the document should be 
retrieved (as opposed to forming a ranked list). 
Track coordinators: Jamie Callan, [EMAIL PROTECTED] and Steve Robertson, 
[EMAIL PROTECTED] 
Mailing list: Contact [EMAIL PROTECTED] to be added to the list. 
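
As a concrete illustration of the binary decision the Filtering task requires, 
the sketch below scores each arriving document against a fixed profile and 
retrieves it when the score clears a threshold. The overlap scorer, threshold 
value, and sample stream are hypothetical; real adaptive-filtering systems 
also update the profile and threshold from relevance feedback.

    def filter_stream(documents, score_fn, threshold):
        """Yield a (doc_id, retrieved?) decision for each document in arrival order."""
        for doc_id, text in documents:
            # Binary decision per document -- no ranked list is formed.
            yield doc_id, score_fn(text) >= threshold

    profile = {"trade", "tariff"}

    def overlap_score(text):
        """Fraction of profile terms present in the document."""
        tokens = set(text.lower().split())
        return len(tokens & profile) / len(profile)

    stream = [
        ("d1", "New tariff announced on steel trade"),
        ("d2", "Local sports results from the weekend"),
    ]
    for doc_id, keep in filter_stream(stream, overlap_score, threshold=0.5):
        print(doc_id, "retrieved" if keep else "skipped")  # d1 retrieved, d2 skipped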


Interactive Track 
A track studying user interaction with text retrieval systems. Participating 
groups develop a consensus experimental protocol and carry out studies with 
real users using a common collection and set of user queries. 
Track coordinator: Bill Hersh, [EMAIL PROTECTED] 
Mailing list: Contact [EMAIL PROTECTED] to be added to the list. 


Question Answering Track 
A track designed to take a step closer to *information* retrieval rather than 
*document* retrieval. Participants are given a large document set and a set of 
questions. For each question, the system returns a text snippet containing the 
answer and a document ID that supports the answer. 
Track coordinator: Ellen Voorhees, [EMAIL PROTECTED] 
Mailing list: send a mail message to [EMAIL PROTECTED] whose body consists of 
the single line: subscribe trec-qa <FirstName> <LastName> 
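
To make the required output concrete: for each question the system returns a 
short answer snippet plus the ID of a document supporting it. The sketch below 
uses a naive question-word-overlap scorer over fixed-size windows; the scorer, 
window size, and sample document are hypothetical baselines, not the track's 
evaluation method.

    def answer(question, docs, window=50):
        """Return (snippet, doc_id) for the best-scoring window of `window` bytes."""
        q_terms = set(question.lower().split())
        best_snippet, best_doc, best_score = "", None, -1
        for doc_id, text in docs.items():
            for start in range(0, max(len(text) - window, 1), 10):
                snippet = text[start:start + window]
                overlap = len(q_terms & set(snippet.lower().split()))
                if overlap > best_score:
                    best_snippet, best_doc, best_score = snippet, doc_id, overlap
        return best_snippet, best_doc

    docs = {"AP890101-0001": "The capital of France is Paris, a city on the Seine."}
    print(answer("What is the capital of France?", docs))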


Video Track 
A track designed to investigate content-based retrieval of digital video. 
Track coordinator: Alan Smeaton, [EMAIL PROTECTED] 
Mailing list: Contact [EMAIL PROTECTED] to be added to the list. 
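
Since the track is new, here is a toy sketch of one common building block in 
content-based video work, shot-boundary detection, which flags a cut when the 
grayscale histograms of consecutive frames differ sharply. The synthetic 
frames, bin count, and threshold are illustrative assumptions only, not a 
statement of the track's task definition.

    def histogram(frame, bins=8):
        """Histogram of a frame given as a flat list of 0-255 grayscale values."""
        h = [0] * bins
        for pixel in frame:
            h[pixel * bins // 256] += 1
        return h

    def shot_boundaries(frames, threshold=0.5):
        """Return frame indices where the normalized histogram difference is large."""
        cuts = []
        for i in range(1, len(frames)):
            h1, h2 = histogram(frames[i - 1]), histogram(frames[i])
            diff = sum(abs(a - b) for a, b in zip(h1, h2)) / (2 * len(frames[i]))
            if diff > threshold:
                cuts.append(i)
        return cuts

    dark, bright = [10] * 100, [240] * 100
    print(shot_boundaries([dark, dark, bright, bright]))  # [2]: a cut at frame 2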


Web Track 
A track featuring ad hoc search tasks on a document set that is a snapshot of 
the World Wide Web. The main focus of the track will be to form a Web test 
collection using pooled relevance judgments. In addition to the 
typical "informational" topics that have been used in previous TRECs, a set 
of "navigational" topics will also be included. 
Track coordinator: David Hawking, [EMAIL PROTECTED] 
Mailing list: Contact [EMAIL PROTECTED] to be added to the list. 
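
Pooled relevance judging, mentioned above, works as follows: for each topic, 
the union of the top-k documents from every submitted run forms the pool, and 
only pooled documents are judged by assessors. A minimal sketch (the run names 
and depth k below are hypothetical):

    def build_pool(runs, k=100):
        """runs: {run_name: [doc_ids, ranked best first]} -> set of doc_ids to judge."""
        pool = set()
        for ranked_docs in runs.values():
            pool.update(ranked_docs[:k])
        return pool

    runs = {
        "runA": ["d3", "d1", "d7"],
        "runB": ["d1", "d9", "d2"],
    }
    print(sorted(build_pool(runs, k=2)))  # ['d1', 'd3', 'd9']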


Depending on data availability, there may also be a Structured Data track, a 
track designed to explore retrieval performance when documents are semantically 
tagged. There is not currently a mailing list for this track, but you may 
contact Ellen Voorhees, [EMAIL PROTECTED] if you wish to be kept informed 
of the status of the track. 
