(Apologies for potential cross-posting)
 
Dear all,
An 18-month post-doctoral (or research engineer) position, mainly in argument 
mining, is available in the WIMMICS team at the I3S laboratory in Sophia 
Antipolis, France.
A detailed description of the position and the AGGREY project is provided at 
the end of the e-mail.
 
Required Qualifications
● A PhD for the post-doctoral position (a Master's degree for the research 
engineer position), preferably in computer science but not necessarily.
● Research interest in one or more of the following: Argument Mining, Natural 
Language Processing (NLP), Argumentation Theory, Computational Argumentation, 
E-democracy, Graph Theory, Game Theory, Similarity Measures, Explainable AI.
● Interest in interdisciplinary research.
● Excellent critical thinking and excellent written and spoken English.
 
Application Materials – send by email to Victor DAVID: [email protected]
● Current CV
● Short statement of interest
 
Application deadline: February 05, 2024.
 
Questions about the position can also be sent to Victor DAVID: 
[email protected]
 
==========================================================================================================================================================
 
Description of the AGGREY project (An argumentation-based platform for 
e-democracy)

This project brings together four French laboratories:
- CRIL with VESIC Srdjan, KONIECZNY Sébastien, BENFERHAT Salem, VARZINCZAK 
Ivan, AL ANAISSY Caren,
- LIP6 with MAUDET Nicolas, BEYNIER Aurélie, LESOT Marie-Jeanne,
- LIPADE with DELOBELLE Jérôme, BONZON Elise, MAILLY Jean-Guy and 
- I3S with CABRIO Elena, VILLATA Serena and DAVID Victor. 
 
Summary of the project in general:
E-democracy is a form of government that allows everybody to participate in the 
development of laws. It has numerous benefits, since it strengthens the 
integration of citizens into the political debate. Several online platforms 
exist; most of them represent a debate in the form of a graph, which allows 
humans to better grasp the arguments and their relations. However, once the 
arguments are entered into the system, little or no automatic processing is 
performed by such platforms. Given the development of online consultations, we 
can expect thousands of arguments on some hot topics in the near future, which 
will make manual analysis difficult and time-consuming. The goal of this 
project is to use artificial intelligence, computational argumentation theory 
and natural language processing to detect the most important arguments, 
estimate the acceptability degrees of arguments, and predict the decision that 
will be taken.
 
Given the size of the project, the tasks were defined and distributed across 
five work packages.
The position we are advertising corresponds to work package 3; depending on 
progress and priorities, it will also be possible to contribute to work 
package 5.
 
Work package 3: Manipulation detection

Leader: Elena Cabrio (I3S)

Aim:
We will rely both on heuristics and on state-of-the-art argument mining methods 
to detect anomalous, fallacious or duplicate arguments [Vorakitphan et al., 
2021] (i.e., speech acts that violate the rules of a rational argumentative 
discussion for presumed persuasive gain), as well as manipulations (e.g., an 
organised group of users massively voting for the exact same arguments within 
a short time period, or submitting variants of the same argument).

Background: 
The use of NLP, and more precisely of argument mining methods [Cabrio and 
Villata, 2018], will be relevant to supporting the smooth functioning of the 
debate, automatically detecting its structure (supporting and attacking 
argumentative components) and analysing its content (premises and claims) 
[Haddadan et al., 2019b]. Moreover, we will rely on previous studies of the 
similarity between arguments [Amgoud et al., 2018]. This includes, among other 
things, assistance in detecting manipulation by identifying duplicate 
arguments through argument similarity calculation [Reimers et al., 2019], and 
checking the relationships (attack or support) between the arguments provided 
by users in the argument graph.

Challenges/Subtasks:
Subtask 3.1. Development of argument mining methods for finding missing 
elements and duplicates
We plan to use argument mining methods to automatically build the argument 
graph and to detect missing elements and duplicates. Identifying argument 
components and relations in the debates is a necessary step to improve the 
model's results in detecting and classifying fallacious and manipulative 
content in argumentation [Vorakitphan et al., 2021]. The use of the notion of 
similarity between arguments [Amgoud et al., 2018] will be further 
investigated in this context.
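
For illustration, duplicate detection could start from sentence embeddings, in 
the spirit of [Reimers et al., 2019]. The minimal Python sketch below (assuming 
the sentence-transformers library; the model name, example arguments and 
threshold are illustrative assumptions, not project decisions) flags candidate 
duplicates by cosine similarity:

# Hedged sketch: flag candidate duplicate arguments via sentence embeddings.
# Model and threshold are assumptions that would be tuned on debate data.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

arguments = [
    "Online voting increases turnout among young citizens.",
    "E-voting raises the participation of younger voters.",
    "Paper ballots are easier to audit than electronic ones.",
]

embeddings = model.encode(arguments, convert_to_tensor=True)
similarities = util.cos_sim(embeddings, embeddings)

THRESHOLD = 0.8   # assumed cut-off for flagging near-duplicates
for i in range(len(arguments)):
    for j in range(i + 1, len(arguments)):
        score = similarities[i][j].item()
        if score >= THRESHOLD:
            print(f"Possible duplicate: arguments {i} and {j} "
                  f"(cosine {score:.2f})")

In practice, flagged pairs would be shown to moderators or to the users 
concerned rather than merged automatically.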
 
Subtask 3.2. Development of methods for detecting manipulations
We will develop and test different heuristics for dealing with manipulations. 
These heuristics will be based on natural language processing, argument 
mining, graph theory, game theory, etc. Parameters that we might take into 
account include the ratio of added arguments to votes; the number of users 
that vote on similar arguments during the same time period; and the votes on 
arguments attacking / supporting the same argument.
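
For illustration, one such heuristic could be prototyped as in the Python 
sketch below, which flags arguments receiving votes from many distinct users 
within a short time window. The data layout, window length and threshold are 
hypothetical assumptions, not project choices.

# Hedged sketch of one manipulation heuristic: detect vote bursts, i.e.
# arguments voted on by many distinct users within a short time window.
# WINDOW and MIN_BURST are assumed values that would need calibration.
from collections import defaultdict

WINDOW = 600      # window length in seconds (assumption)
MIN_BURST = 20    # distinct voters in a window deemed suspicious (assumption)

def find_vote_bursts(votes):
    """votes: iterable of (user_id, argument_id, timestamp-in-seconds).
    Returns the ids of arguments showing a suspicious voting burst."""
    by_argument = defaultdict(list)
    for user, arg, ts in votes:
        by_argument[arg].append((ts, user))

    suspicious = []
    for arg, events in by_argument.items():
        events.sort()                       # order votes by timestamp
        start = 0
        for end in range(len(events)):
            # shrink the window from the left until it spans <= WINDOW seconds
            while events[end][0] - events[start][0] > WINDOW:
                start += 1
            voters = {user for _, user in events[start:end + 1]}
            if len(voters) >= MIN_BURST:
                suspicious.append(arg)
                break
    return suspicious

Such a signal would of course only be one input to moderation, to be combined 
for instance with the textual similarity of the voted arguments.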
 
Subtask 3.3. Development of graph-based methods for finding missing elements 
and duplicates
We will develop graph-based properties for dealing with missing elements and 
duplicates. Consider, for instance, two arguments x and y that have the same 
attackers, except that y is also attacked by z; suppose also that x and y 
attack exactly the same arguments. We might want to check whether z also 
attacks x. This might not be the case, so the system will not add such attacks 
automatically, but will instead ask the users who put forward the arguments 
attacking x and y to consider this question.
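
This check can be prototyped directly on the argument graph. The Python sketch 
below uses the networkx library on a hypothetical example graph and lists the 
pairs (z, x) for which users would be asked whether z also attacks x.

# Hedged sketch of the graph-based check described above: find pairs of
# arguments x, y with identical targets whose attacker sets differ by a
# single extra attacker z of y, and propose asking whether z attacks x.
import networkx as nx

def candidate_missing_attacks(g):
    """g: directed graph whose edge (a, b) means 'a attacks b'."""
    candidates = []
    for x in g.nodes:
        for y in g.nodes:
            if x == y:
                continue
            attackers_x = set(g.predecessors(x))
            attackers_y = set(g.predecessors(y))
            extra = attackers_y - attackers_x
            same_targets = set(g.successors(x)) == set(g.successors(y))
            if len(extra) == 1 and attackers_x <= attackers_y and same_targets:
                z = extra.pop()
                candidates.append((z, x))   # ask users: does z also attack x?
    return candidates

# Hypothetical example: a attacks both x and y, z attacks only y,
# and x and y both attack t.
g = nx.DiGraph([("a", "x"), ("a", "y"), ("z", "y"), ("x", "t"), ("y", "t")])
print(candidate_missing_attacks(g))   # prints [('z', 'x')]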
 
Bibliography:
[Vorakitphan et al., 2021]: Vorakit Vorakitphan, Elena Cabrio, and Serena 
Villata. "Don't discuss": Investigating semantic and argumentative features 
for supervised propagandist message detection and classification. In 
Proceedings of the International Conference on Recent Advances in Natural 
Language Processing (RANLP 2021), Held Online, 1-3 September 2021, pages 
1498–1507, 2021. URL https://aclanthology.org/2021.ranlp-1.168.
 
[Cabrio and Villata, 2018]: Elena Cabrio and Serena Villata. Five years of 
argument mining: a data-driven analysis. In IJCAI, pages 5427–5433, 2018. URL 
https://www.ijcai.org/proceedings/2018/766.
 
[Haddadan et al., 2019b]: Shohreh Haddadan, Elena Cabrio, and Serena Villata. 
Disputool - A tool for the argumentative analysis of political debates. In 
Proceedings of the Twenty-Eighth International Joint Conference on Artificial 
Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 6524–6526, 
2019b. doi: 10.24963/ijcai.2019/944. URL 
https://doi.org/10.24963/ijcai.2019/944.
 
[Amgoud et al., 2018]: Leila Amgoud, Elise Bonzon, Jérôme Delobelle, Dragan 
Doder, Sébastien Konieczny, and Nicolas Maudet. Gradual semantics accounting 
for similarity between arguments. In International Conference on Principles of 
Knowledge Representation and Reasoning (KR 2018), pages 88–97. AAAI Press, 
2018. URL https://aaai.org/ocs/index.php/KR/KR18/paper/view/18077.
 
[Reimers et al., 2019]: Nils Reimers, Benjamin Schiller, Tilman Beck, Johannes 
Daxenberger, Christian Stab, and Iryna Gurevych. Classification and clustering 
of arguments with contextualized word embeddings. In ACL, pages 567–578, 2019. 
URL https://aclanthology.org/P19-1054/.
 
 
==========================================================================================================================================================
 
 
Work package 5: Implementation and evaluation of the platform

Leaders: Jean-Guy Mailly (LIPADE) and Srdjan Vesic (CRIL)

Aim: 
The goal of this WP is to implement the platform, evaluate it through 
experiments with end users, and use the obtained data to improve the design of 
the framework. Our experiments will also help us to better understand how 
humans use online platforms, which is essential for the future success of 
online debates. After implementing the platform, we will measure the extent to 
which our platform leads to more informed decisions and attitudes. We plan to 
do this by measuring the extent of disagreement between the participants 
before and after the use of our system. We expect that the instructions to 
explicitly state one's arguments and to link them with other justified 
counter-arguments will make people more open to opposite views and more prone 
to changing their opinion.

Background: 
The field of computational argumentation has progressively moved from using 
toy examples and purely theoretical evaluation of the proposed approaches to 
constructing benchmarks [Cabrio and Villata, 2014] and evaluating the proposed 
approaches by comparing their output to that of human reasoners [Rosenfeld and 
Kraus, 2016, Polberg and Hunter, 2018, Cerutti et al., 2014, 2021]. Our recent 
results [Vesic et al., 2022], as well as our current work (unpublished 
experiments), found that when people see the graph representation of the 
corresponding debate, they comply significantly more often with rationality 
principles. Furthermore, our experiments show that people are able to draw the 
correct graph (i.e. the one that corresponds to the given discussion) in the 
absolute majority of cases, even with no prior training other than reading a 
three-minute tutorial. The fact that those participants respect rationality 
principles more frequently is crucial, since it means that they are, e.g., 
less prone to accepting weak or fallacious arguments.

Challenges/Subtasks:
Subtask 5.1. Implementation of the platform
This task aims at implementing the platform, which will be done using an agile 
methodology: the platform will be implemented progressively and tested 
continuously, to allow for adaptive planning, evolutionary development and 
constant improvement. We could use an existing platform and add our 
functionalities. However, we consider building a dedicated platform more 
appropriate for several reasons: many of the existing platforms are 
proprietary and would not allow us to use and publish their code; and most of 
the functionalities we need do not exist in any platform, so building on an 
existing one would not save us much time.

Subtask 5.2. Measuring the quality of the platform
We will conduct experiments with users to test whether the platform can be 
used to reduce opinion polarisation and to promote more rational and informed 
estimations of arguments' quality and strength. To this end, we will examine 
whether relevant parameters (such as the degree to which individuals agree 
with a given statement, the extent to which individuals diverge in their 
opinions and in their understanding of the issue under debate, etc.) differ 
significantly before and after the use of our debate platform. Our hypothesis 
is that seeing or producing the graph, making the arguments explicit, and 
engaging in a structured discussion will yield a better understanding of the 
questions and a better chance of reaching an agreement with other parties. 
Ethical approval will be sought before conducting the experiments.
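
As a minimal sketch of what this before/after comparison might look like, the 
Python snippet below quantifies disagreement on each debated statement as the 
mean pairwise distance between participants' agreement ratings, and compares 
the pre- and post-debate values with a paired test. The rating scale, the data 
and the choice of test are all illustrative assumptions.

# Hedged sketch: compare disagreement before and after using the platform.
# Ratings, the 1-7 scale and the Wilcoxon test are illustrative assumptions.
from itertools import combinations
from scipy.stats import wilcoxon

def mean_pairwise_distance(ratings):
    """Mean |r_i - r_j| over all pairs of participants (0 = full consensus)."""
    pairs = list(combinations(ratings, 2))
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

# One inner list per debated statement, one rating (1-7 agreement scale)
# per participant; hypothetical values before and after the debate.
before = [[1, 7, 2, 6, 7], [2, 6, 3, 5, 7], [1, 7, 1, 6, 6]]
after = [[3, 5, 3, 5, 6], [3, 5, 4, 5, 6], [2, 6, 3, 5, 5]]

d_before = [mean_pairwise_distance(s) for s in before]
d_after = [mean_pairwise_distance(s) for s in after]

stat, p = wilcoxon(d_before, d_after)   # paired test across statements
print("disagreement per statement, before:", d_before)
print("disagreement per statement, after: ", d_after)
print(f"Wilcoxon signed-rank: statistic={stat}, p={p:.3f}")

With only three statements the test is toy-sized; a real experiment would 
involve many more statements and participants.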

Subtask 5.3. Improving the platform
We will take into account the results of the experiments, user feedback, bug 
reports, etc. in order to develop the final version of the platform.
 
Bibliography:
[Cabrio and Villata, 2014]: Elena Cabrio and Serena Villata. Node: A benchmark 
of natural language arguments. In Simon Parsons, Nir Oren, Chris Reed, and 
Federico Cerutti, editors, Computational Models of Argument - Proceedings of 
COMMA 2014, Atholl Palace Hotel, Scottish Highlands, UK, September 9-12, 2014, 
volume 266 of Frontiers in Artificial Intelligence and Applications, pages 
449–450. IOS Press, 2014. doi: 10.3233/978-1-61499-436-7-449. URL 
https://doi.org/10.3233/978-1-61499-436-7-449.
 
[Rosenfeld and Kraus, 2016]: Ariel Rosenfeld and Sarit Kraus. Providing 
arguments in discussions on the basis of the prediction of human argumentative 
behavior. ACM Trans. Interact. Intell. Syst., 6(4):30:1–30:33, 2016. doi: 
10.1145/2983925. URL https://doi.org/10.1145/2983925.
 
[Polberg and Hunter, 2018]: Sylwia Polberg and Anthony Hunter. Empirical 
evaluation of abstract argumentation: Supporting the need for bipolar and 
probabilistic approaches. Int. J. Approx. Reason., 93:487–543, 2018. doi: 
10.1016/j.ijar.2017.11.009. URL https://doi.org/10.1016/j.ijar.2017.11.009.
 
[Cerutti et al., 2014]: Federico Cerutti, Nava Tintarev, and Nir Oren. Formal 
arguments, preferences, and natural language interfaces to humans: an 
empirical evaluation. In Torsten Schaub, Gerhard Friedrich, and Barry 
O'Sullivan, editors, ECAI 2014 - 21st European Conference on Artificial 
Intelligence, 18-22 August 2014, Prague, Czech Republic, volume 263, pages 
207–212. IOS Press, 2014. doi: 10.3233/978-1-61499-419-0-207. URL 
https://doi.org/10.3233/978-1-61499-419-0-207.
 
[Cerutti et al., 2021]: Federico Cerutti, Marcos Cramer, Mathieu Guillaume, 
Emmanuel Hadoux, Anthony Hunter, and Sylwia Polberg. Empirical cognitive 
studies about formal argumentation. In Dov Gabbay, Massimiliano Giacomin, 
Guillermo R. Simari, and Matthias Thimm, editors, Handbook of Formal 
Argumentation, volume 2. College Publications, 2021.
 
[Vesic et al., 2022]: Srdjan Vesic, Bruno Yun, and Predrag Teovanovic. 
Graphical representation enhances human compliance with principles for graded 
argumentation semantics. In AAMAS '22: 21st International Conference on 
Autonomous Agents and Multiagent Systems, Virtual Event, 2022. URL 
https://hal-univ-artois.archives-ouvertes.fr/hal-03615534.