๐—™๐˜‚๐—น๐—น๐˜†-๐—ณ๐˜‚๐—ป๐—ฑ๐—ฒ๐—ฑ ๐Ÿฏ-๐˜†๐—ฒ๐—ฎ๐—ฟ ๐—ฃ๐—ต๐—— ๐—ถ๐—ป ๐—ฐ๐—ผ๐—บ๐—ฝ๐˜‚๐˜๐—ฒ๐—ฟ ๐˜€๐—ฐ๐—ถ๐—ฒ๐—ป๐—ฐ๐—ฒ - ๐—ฎ๐—ฟ๐˜๐—ถ๐—ณ๐—ถ๐—ฐ๐—ถ๐—ฎ๐—น ๐—ถ๐—ป๐˜๐—ฒ๐—น๐—น๐—ถ๐—ด๐—ฒ๐—ป๐—ฐ๐—ฒ - 
๐—ฒ๐˜…๐—ฝ๐—น๐—ฎ๐—ถ๐—ป๐—ฎ๐—ฏ๐—น๐—ฒ ๐—ฟ๐—ฒ๐—ฎ๐˜€๐—ผ๐—ป๐—ถ๐—ป๐—ด

๐—ž๐—ฒ๐˜†๐˜„๐—ผ๐—ฟ๐—ฑ๐˜€: Argumentative AI, Explainable Reasoning, Human-Centered AI, Argument 
Influence
๐—ฆ๐˜๐—ฎ๐—ฟ๐˜๐—ถ๐—ป๐—ด ๐—ฑ๐—ฎ๐˜๐—ฒ: October 2025
๐—Ÿ๐—ผ๐—ฐ๐—ฎ๐˜๐—ถ๐—ผ๐—ป: Centre de Recherche en Informatique de Lens (CRIL UMR 8188 - CNRS & 
Universitรฉ d'Artois), France
๐—ฆ๐˜‚๐—ฝ๐—ฒ๐—ฟ๐˜ƒ๐—ถ๐˜€๐—ผ๐—ฟ๐˜€: Dr Srdjan Vesic (CNRS CRIL Universitรฉ dโ€™Artois) and Dr Mathieu 
Hainselin (CRP-CPO Universitรฉ de Picardie Jules Verne)

๐——๐—ฒ๐˜€๐—ฐ๐—ฟ๐—ถ๐—ฝ๐˜๐—ถ๐—ผ๐—ป:
Computational argumentation theory provides essential tools for analyzing structured debates, with applications in AI-assisted decision-making systems, online discussion platforms, and human-AI interaction. In this context, explainability is critical: systems must not only determine which arguments are accepted based on abstract semantics, but also make this reasoning transparent and cognitively accessible to human users. Yet existing semantics, typically grounded in logic-based frameworks and Dung's abstract argumentation, might fail to align with human intuitions, limiting both their usability and trustworthiness in practice.
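To make the setting concrete, here is a small illustrative sketch (in Python, with names chosen freely for the example; it is not part of the project) of a Dung-style abstract argumentation framework and the computation of its grounded extension as the least fixed point of the defence function:

# Toy illustration of a Dung-style abstract argumentation framework
# (example code only, not project code): a set of arguments plus an attack
# relation, with the grounded extension computed as the least fixed point
# of the defence (characteristic) function.

def grounded_extension(arguments, attacks):
    """Return the grounded extension of the framework (arguments, attacks).

    `arguments` is an iterable of argument names; `attacks` is a set of
    (attacker, target) pairs.
    """
    arguments = set(arguments)
    attacks = set(attacks)

    def defended(candidate, extension):
        # `candidate` is defended by `extension` if each of its attackers
        # is itself attacked by some member of `extension`.
        return all(
            any((defender, attacker) in attacks for defender in extension)
            for (attacker, target) in attacks
            if target == candidate
        )

    extension = set()
    while True:
        # Defence function: all arguments defended by the current set.
        successor = {a for a in arguments if defended(a, extension)}
        if successor == extension:
            return extension
        extension = successor

# Example: a attacks b and b attacks c.  The grounded extension is {a, c}:
# a is unattacked, and a defends c against b.
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))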

This fully funded PhD thesis will focus on improving the alignment between 
formal acceptability semantics and human reasoning. Research objectives include:
• Evaluating whether current principled constraints are perceived as intuitive by users
• Assessing their explanatory power, particularly in helping users grasp why certain arguments are accepted or rejected
• Formalizing new principles or designing alternative semantics to better capture observed reasoning patterns
• Investigating quantitative impact measures, which capture how much individual arguments influence the acceptability of others, and evaluating how such influence is perceived and interpreted by users
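As a purely illustrative example of such a measure (one possibility among many, not necessarily the one the thesis will adopt), one can say that an argument influences another if removing it changes that argument's acceptability under grounded semantics, reusing the grounded_extension sketch above:

# Purely illustrative, removal-based notion of influence (one option among
# many; the thesis may study different measures): x influences y if deleting
# x from the framework changes whether y is accepted under grounded
# semantics (using grounded_extension from the sketch above).

def influences(x, y, arguments, attacks):
    accepted_before = y in grounded_extension(arguments, attacks)
    remaining_args = set(arguments) - {x}
    remaining_attacks = {(a, b) for (a, b) in attacks if x not in (a, b)}
    accepted_after = y in grounded_extension(remaining_args, remaining_attacks)
    return accepted_before != accepted_after

# In the chain a -> b -> c, removing a leaves c undefended against b,
# so a influences c.
print(influences("a", "c", {"a", "b", "c"}, {("a", "b"), ("b", "c")}))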

The project is highly interdisciplinary, involving close collaboration with 
psychologists and cognitive scientists, and combining formal modeling, 
empirical user studies, and potential software prototypes. The overarching goal 
is to contribute to the development of more explainable, intuitive, and 
responsible AI systems, grounded in both logical foundations and empirical 
validation.

A good level of English is required. Applicants should have a strong background in logic, AI, computer science, or related fields. An interest in cognitive science is welcome.

The PhD includes full funding, collaboration opportunities and publication 
support.

For more details about the thesis and to apply, send an email to ve...@cril.fr