Second CFP: CHOMPS – Confabulation, Hallucinations, & Overgeneration in 
Multilingual & Precision-critical Settings 
(with our apologies for cross-posting) 

Venue: IJCNLP-AACL 2025 (https://2025.aaclnet.org/), Mumbai, India 
Date: 23-24 December 2025 (TBC) 
Workshop website: https://chomps2025.github.io/ 

* Description * 
Despite rapid advances, LLMs continue to "make things up": a phenomenon that 
manifests as hallucination, confabulation, and overgeneration, i.e., the 
production of unsupported and unverifiable text that sounds deceptively 
plausible. These outputs pose real risks in settings where accuracy and 
accountability are non-negotiable, including healthcare, legal systems, and 
education. The aim of the CHOMPS workshop is to find ways to mitigate this 
tendency to hallucinate, one of the major hurdles that currently prevent the 
adoption of Large Language Models in real-world scenarios. 

The workshop will explore hallucination mitigation in practical situations, 
where this mitigation is crucial: in particular, precision-critical 
applications (such as those in the medical, legal and biotech domains), as well 
as multilingual settings (given the lack of resources available to reproduce 
what can be done for English in other linguistic contexts). In practice, we 
invite work on the following (non-exhaustive) list of topics: 

* Workshop topics * 
- Metrics, benchmarks and tools for hallucination detection 
- Factuality challenges in mission-critical & domain-specific settings (e.g., 
medical, legal, biotech) and their consequences 
- Mitigation strategies during inference or model training 
- Studies of hallucinatory and confabulatory behaviors of LLMs in cross-lingual 
and multilingual scenarios 
- Confabulations in language & multimodal (vision, text, speech) models 
- Perspectives and case studies from other disciplines 
- … 

* Invited speakers * 
- Anna ROGERS, IT University of Copenhagen 
- Danish PRUTHI, IISc Bangalore 
- Abhilasha RAVICHANDER, University of Washington 

* Panel Discussion * 
- Preslav Nakov, MBZUAI 
- Sunayana Sitaram, Microsoft Research 
- Chung-Chi Chen, AIST, Japan 

* Shared Task * 
SHROOM-CAP, a cross-lingual scientific hallucination detection shared task. 
More info: https://helsinki-nlp.github.io/shroom/2025a 

* Submission details * 
The workshop is designed with a widely inclusive submission policy so as to 
foster as vibrant a discussion as possible. 
Archival or non-archival submissions may consist of up to 8 pages (long) or 4 
pages (short) of content. Dissemination submissions may consist of up to 1 
page of content. Upon acceptance, authors may add one additional page to 
accommodate changes suggested by the reviewers. 

Please use the ACL style templates available here: 
https://github.com/acl-org/acl-style-files 
Submissions must be in PDF format, via either: 
(a) Direct submission 
(https://openreview.net/group?id=aclweb.org/AACL-IJCNLP/2025/Workshop/CHOMPS) 
(b) ARR commitment 
(https://openreview.net/group?id=aclweb.org/AACL-IJCNLP/2025/Workshop/CHOMPS_ARR_Commitment)
 

* Important dates * 
Paper submission deadline: September 29, 2025 
Direct ARR commitment: October 27, 2025 
Author notification: November 3, 2025 
Camera-Ready due: November 11, 2025 
Workshop date: December 23-24, 2025 (TBC) 

* Contact * 
For questions, please send an email to chomps-aacl2...@googlegroups.com or 
contact one of the workshop chairs: 
- Aman Sinha, Université de Lorraine, aman.si...@univ-lorraine.fr 
- Raúl Vázquez, University of Helsinki, raul.vazq...@helsinki.fi 
- Timothee Mickus, University of Helsinki, timothee.mic...@helsinki.fi 

_______________________________________________
Corpora mailing list -- corpora@list.elra.info
https://list.elra.info/mailman3/postorius/lists/corpora.list.elra.info/
To unsubscribe send an email to corpora-le...@list.elra.info