We invite you to submit your ongoing, published, or pre-reviewed work to our 
workshop on Large Language Models for Cross-Temporal Research (XTempLLMs) at 
COLM 2025.

Our workshop website is available at https://xtempllms.github.io/2025/ 

Workshop Description:
Large language models (LLMs) have been used for a variety of time-sensitive 
applications such as temporal reasoning, forecasting, and planning. In addition, 
a growing number of interdisciplinary works use LLMs for cross-temporal research 
in domains including social science, psychology, cognitive science, 
environmental science, and clinical studies. However, LLMs' understanding of 
time is hindered for several reasons, including temporal biases and knowledge 
conflicts in pretraining and RAG data, as well as a fundamental limitation of 
LLM tokenization, which fragments a date into several meaningless subtokens. 
Such an inadequate understanding of time can lead to inaccurate reasoning, 
forecasting, and planning, and to time-sensitive findings that are potentially 
misleading.

Our workshop welcomes (i) cross-temporal work in the NLP community and (ii) 
interdisciplinary work that relies on LLMs for cross-temporal studies.

Cross-temporal work in the NLP community:
* Novel benchmarks for evaluating the temporal abilities of LLMs across diverse 
date and time formats, culturally grounded time systems, and generalization to 
future contexts;
* Novel methods (e.g., neuro-symbolic approaches) for developing temporally 
robust, unbiased, and reliable LLMs;
* Data analysis such as the distribution of pretraining data over time and 
conflicting knowledge in pretraining and RAG data;
* Interpretability regarding how temporal information is processed from 
tokenization to embedding across different layers, and finally to model output;
* Temporal applications such as reasoning, forecasting and planning;
* Consideration of cross-lingual and cross-cultural perspectives for linguistic 
and cultural inclusion over time.

Interdisciplinary work that relies on LLMs for cross-temporal studies:
* Time-sensitive discoveries, such as social biases over time and personality 
testing over time;
* Assessment of time-sensitive discoveries to identify misleading findings if 
any;
* Interdisciplinary evaluation benchmarks for LLMs’ temporal abilities, e.g., 
psychological time perception and episodic memory evaluation.

Submission Modes:
* Standard submissions: We invite papers that will receive up to three 
double-blind reviews from the XTempLLMs committee and a final acceptance 
decision from the workshop chairs.
* Pre-reviewed submissions: We invite unpublished papers that have already been 
reviewed, either through ACL ARR or at recent AACL/EACL/ACL/EMNLP/COLING venues. 
These papers will not receive new reviews; instead, they will be judged together 
with their existing reviews via a meta-review from the workshop chairs.
* Published papers: We invite recently published papers from other venues to be 
presented at XTempLLMs. Please send the details of your paper (paper title, 
authors, publication venue, abstract, and a link to download the paper) 
directly to [email protected]. This gives such papers more visibility 
with the workshop audience.

All deadlines are 11:59 pm UTC-12 (“Anywhere on Earth”):
* June 26, 2025: Submission deadline (standard and published papers)
* July 18, 2025: Submission deadline for papers with ARR reviews
* July 24, 2025: Notification of acceptance
* October 10, 2025: Workshop day

Invited Speakers:
* Jose Camacho Collados, Cardiff University, United Kingdom
* Ali Emami, Brock University, Canada
* Alexis Huet, Huawei Technologies, France

Organizing Committee:
* Wei Zhao, University of Aberdeen, United Kingdom
* Maxime Peyrard, Université Grenoble Alpes & CNRS, France
* Katja Markert, Heidelberg University, Germany