PhD position in Formal Models of Collective Reasoning

Institut de Recherche en Informatique de Toulouse (IRIT)

Toulouse University

France



The PhD thesis will be carried out as part of the TIRIS project entitled “Caring about Others: AI and Psychology Meet to Model and Automate Collective Reasoning” (CaRe), which began in January 2025 and will end in December 2028. The PhD will start in October 2025 and will be funded by a three-year contract, with a gross monthly salary of approximately 2,200 €.



Description of the research project

The CaRe project focuses on the interdisciplinary study of collective reasoning and its role in decision-making, from the perspectives of artificial intelligence (AI) and psychology. Collective reasoning refers to the multi-dimensional reasoning of human or artificial agents, including the ability to consider others’ minds, engage in team-based reasoning, and take ethical concerns regarding others’ well-being into account.

The main objectives of the project are: (i) to formalize collective reasoning using logic, game theory, and social choice theory; (ii) to develop new algorithms that enable artificial conversational agents to reason collectively; and (iii) to evaluate these formal models and algorithms through experiments involving both human-human (H-H) and human-machine (H-M) interaction. The CaRe project aims to support the development of ethical and trustworthy AI systems that promote human well-being.

The PhD thesis will focus specifically on the formalization of collective 
reasoning and the development and implementation of related algorithms in a 
conversational agent—corresponding to objectives (i) and (ii) of the project.

The following aspects of collective reasoning will be addressed:

  *   Reasoning about others’ minds: agents attribute desires, goals, and 
preferences to others and take them into account when making decisions or 
choosing actions.
  *   Group (or team) reasoning: agents consider themselves and others as part 
of a group or community and act to achieve shared goals.
  *   Ethical reasoning: agents take ethical norms and values into 
consideration, focusing on the well-being of others (e.g., fairness, equity, 
reciprocity) and their expectations (e.g., norms of honesty and 
promise-keeping).

The PhD will place particular emphasis on formalizing and comparing two types of collective reasoning: one based on sympathy, goal adoption, and group identification ("warm" collective reasoning), and another based on explicit ethical values and the maximization of ethical utility ("cold" collective reasoning).



Candidate profile

The ideal candidate should have a strong background in mathematics and some 
programming experience. A Master’s degree in Logic, Computer Science, or 
Mathematics is required.

On the theoretical side, familiarity with logic and game theory is expected; on 
the practical side, experience with Python and functional programming (ideally 
Haskell) is desirable.



PhD supervisor

The thesis will be supervised by Emiliano Lorini, CNRS Senior Researcher at the 
Institut de Recherche en Informatique de Toulouse (IRIT). More information is 
available at: https://www.irit.fr/~Emiliano.Lorini/



How to apply

Please send the following documents to emiliano.lor...@irit.fr:

  *   A detailed CV
  *   A motivation letter
  *   Transcripts of your Bachelor's and Master's degrees

Applicants are also encouraged to include samples of research publications and reference letters.

APPLICATION DEADLINE FOR FULL CONSIDERATION: July 14, 2025


