The first workshop on evaluating IR systems with Large Language Models
(LLMs) is accepting submissions that describe original research findings,
preliminary research results, proposals for new work, and recent relevant
studies already published in high-quality venues.

Topics of interest

We welcome both full papers and extended abstract submissions on the
following topics, including but not limited to:

   - LLM-based evaluation metrics for traditional IR and generative IR.
   - Agreement between human and LLM labels.
   - Effectiveness and/or efficiency of LLMs to produce robust relevance
   labels.
   - Investigating LLM-based relevance estimators for potential systemic
   biases.
   - Automated evaluation of text generation systems.
   - End-to-end evaluation of Retrieval Augmented Generation systems.
   - Trustworthiness of LLM-based evaluation.
   - Prompt engineering for LLM-based evaluation.
   - Effectiveness and/or efficiency of LLMs as ranking models.
   - LLMs in specific IR tasks such as personalized search, conversational
   search, and multimodal retrieval.
   - Challenges and future directions in LLM-based IR evaluation.

Submission guidelines

We welcome the following submissions:

   - Previously unpublished manuscripts will be accepted as extended
   abstracts or full papers (between 1 and 9 pages) with unlimited
   references, formatted according to the latest ACM SIG proceedings template
   available at http://www.acm.org/publications/proceedings-template.
   - Published manuscripts can be submitted in their original format.

All submissions should be made through Easychair:
https://easychair.org/conferences/?conf=llm4eval

All papers will be peer-reviewed (single-blind) by the program committee
and judged on their relevance to the workshop, especially to the main
themes identified above, and on their potential to generate discussion.
Already published studies may be submitted in their original format and
will be reviewed only for their relevance to this workshop. All
submissions must be in English (PDF format).

All accepted papers will be presented as posters, with a few selected for
spotlight talks. The workshop is non-archival: accepted papers may be
uploaded to arXiv.org and submitted elsewhere. The workshop’s website will
maintain links to the arXiv versions of the papers.

Important Dates

   - Submission Deadline: April 25th, 2024 (AoE time)
   - Acceptance Notifications: May 31st, 2024 (AoE time)
   - Workshop date: July 18, 2024

Website
For more information, visit the workshop website:
https://llm4eval.github.io/

Contact

For any questions about paper submission, you may contact the workshop
organizers at [email protected]