First International Workshop on LLMs and KRR for Trustworthy AI (LMKR-TrustAI 2025)

Held in conjunction with KR 2025: https://kr.org/KR2025/
Half day, 11-13 November 2025 (TBD)
Paper submission: August 4, 2025

Workshop web site: 
https://sites.google.com/view/lmkr-trustai-2025/home

Call for Papers

Overview
The emergence of large language models (LLMs) has made it significantly easier 
to develop scalable and generalisable AI applications. Compared to knowledge 
representation and reasoning (KRR) methods, LLMs 
demonstrate remarkable capability in encoding linguistic knowledge, enabling 
them to generate human-like text and generalise across diverse tasks with 
minimal domain-specific training. However, LLMs’ reliance on statistical 
patterns rather than explicit reasoning mechanisms raises concerns about 
factual consistency, logical coherence, vulnerability to hallucinations, bias 
and misalignment with human values. This workshop focuses on an emerging 
research paradigm: the integration of LLMs with KRR techniques to enhance 
transparency, verifiability and robustness in AI systems. We explore approaches 
that incorporate structured knowledge (ontologies, knowledge graphs, symbolic 
logic, etc.), neuro-symbolic methods, formal reasoning frameworks and 
explainability techniques to improve the trustworthiness of LLM-driven 
decision-making.

The workshop will feature invited talks from leading experts, research paper 
presentations, and interactive discussions on bridging probabilistic learning 
with symbolic reasoning for trustworthy AI. By bringing together researchers 
from KRR and deep learning, this workshop aims to foster new collaborations and 
technical insights to develop AI systems that are both powerful and trustworthy.

Topics of interest include but are not limited to:

  *   Knowledge-grounded language models
  *   Hybrid neuro-symbolic architectures
  *   Reasoning-aware prompt engineering
  *   Logical consistency checks in LLM outputs
  *   Uncertainty and automated verification
  *   Causality and reasoning
  *   Explainability and controllability
  *   Commonsense reasoning integrating LLMs and KRR
  *   Reinforcement learning for ensuring safety and trustworthiness
  *   Alignment and preference-guided LLMs
  *   Multi-agent AI frameworks
  *   Benchmarks, datasets and quantitative evaluation metrics
  *   Evaluation and user studies in real-world applications

Organising Committee

  *   Maurice Pagnucco, UNSW, Australia
  *   Yang Song, UNSW, Australia

Program Committee

  *   Professor Tony Cohn, University of Leeds, UK
  *   Dr Mingming Gong, University of Melbourne, Australia
  *   Professor Gerhard Lakemeyer, RWTH Aachen, Germany
  *   Professor Fangzhen Lin, HKUST, China
  *   Professor Tim Miller, University of Queensland, Australia
  *   Dr Nina Narodytska, VMware Research, USA
  *   Associate Professor Abhaya Nayak, Macquarie University, Australia
  *   Professor Ken Satoh, National Institute of Informatics, Japan
  *   Professor Michael Thielscher, University of New South Wales, Australia
  *   Professor Guy Van den Broeck, UCLA, USA

Important Dates

Paper submission: August 4, 2025
Paper notification: August 25, 2025
Workshop date and time: Half-day during November 11-13, 2025 (TBD)

Submissions
Contributions may be regular papers (up to 9 pages) or short/position papers 
(up to 5 pages); page limits are all-inclusive. Submissions should follow the 
KR 2025 formatting guidelines and be submitted through the submission page. 
Each submission will be reviewed by at least two program committee members. We 
also welcome submissions that have recently been accepted at top AI 
conferences. At least one author of each accepted paper will be required to 
attend the workshop to present the contribution.
Submission link: 
https://openreview.net/group?id=kr.org/KR/2025/Workshop/LMKR-TrustAI

Best regards,
Maurice Pagnucco, Yang Song
Organisers, KR 2025 Workshop on LLMs and KRR for Trustworthy AI




