Please consider contributing and/or forwarding to appropriate colleagues and 
groups.

****We apologize for the multiple copies of this e-mail****


                  
----------------------------------------------------------------------------------------------------

                                                Call for Participation
                  
----------------------------------------------------------------------------------------------------


First Call for Participation:

EXIST 2026: Multimodal sexism identification with sensor data

Website: http://nlp.uned.es/exist2026/

EXIST is a series of scientific events and shared tasks on sexism 
identification in social networks. EXIST aims to foster the automatic detection 
of sexism in a broad sense, from explicit misogyny to other subtle expressions 
that involve implicit sexist behaviours (EXIST 2021, EXIST 2022, EXIST 2023, 
EXIST 2024, EXIST 2025). The sixth edition of the EXIST shared task will be 
held as a Lab in CLEF 2026, on September 21-24, 2026, at 
Friedrich-Schiller-Universität Jena, Germany.

In EXIST 2026, we take a significant step forward by integrating the principles 
of Human-Centered AI (HCAI) into the development of automatic tools for 
detecting sexism online. Recognizing that no single interpretation can fully 
capture the diversity of human perception, we go beyond traditional annotation 
paradigms by combining Learning With Disagreement (LeWiDi) with sensor-based 
data (EEG, heart rate, and eye-tracking signals) collected from subjects 
exposed to potentially sexist content, with the aim of capturing unconscious 
responses to sexism. This dual approach represents a breakthrough in dataset 
creation for sensitive and value-laden tasks: for the first time, datasets will 
include not only divergent judgments from annotators, but also the embodied 
traces of how such content affects those exposed to it. This richer, 
multidimensional annotation 
process will enable the development of more inclusive, equitable, and socially 
aware AI systems for detecting sexism in complex multimedia formats like memes 
and short videos, where ambiguity and affect play a critical role.

As in the 2023, 2024 and 2025 editions, this edition will 
also embrace the Learning With Disagreement (LeWiDi) paradigm for both the 
development of the dataset and the evaluation of the systems. The LeWiDi 
paradigm doesn’t rely on a single “correct” label for each example. Instead, 
the model is trained to handle and learn from conflicting or diverse 
annotations. This enables the system to consider various annotators’ 
perspectives, biases, or interpretations, resulting in a fairer learning 
process.
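The LeWiDi idea described above can be sketched in a few lines: instead of collapsing annotator votes into one majority label, they are kept as a probability distribution, and the model is scored against that distribution. The helper names and the toy vote counts below are illustrative only, not part of the official task or dataset format.

```python
from collections import Counter
import math

def soft_label(annotations):
    """Turn a list of per-annotator labels into a probability
    distribution, preserving disagreement instead of majority voting."""
    counts = Counter(annotations)
    total = len(annotations)
    return {label: n / total for label, n in counts.items()}

def cross_entropy(soft, predicted):
    """Cross-entropy of a model's predicted distribution against the
    soft (human) label distribution; lower is better."""
    return -sum(p * math.log(predicted.get(label, 1e-12))
                for label, p in soft.items() if p > 0)

# Six annotators disagree on whether a meme is sexist.
votes = ["YES", "YES", "YES", "YES", "NO", "NO"]
target = soft_label(votes)  # YES: 4/6, NO: 2/6

# A prediction that matches the disagreement scores better than an
# overconfident one, even though both pick the same top label.
calibrated = {"YES": 0.7, "NO": 0.3}
overconfident = {"YES": 0.99, "NO": 0.01}
assert cross_entropy(target, calibrated) < cross_entropy(target, overconfident)
```

Under this kind of evaluation, a system is rewarded for reproducing the distribution of human judgments rather than for committing to a single label.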

Building upon the EXIST 2025 dataset, this edition focuses exclusively on 
multimedia formats, comprising six experimental subtasks applied to images 
(memes) and videos (TikToks). Participants are challenged to address three main 
objectives: sexism identification (x.1), source intention detection (x.2), and 
sexism categorization (x.3) (subtask numbering is consistent with EXIST 
2025). Participants will be asked to classify memes and videos (in English and 
Spanish) according to the following tasks:

TASK 2: Sexism detection in Memes:

      Subtask 2.1 - Sexism Identification in Memes: this is a binary 
classification subtask consisting of determining whether a meme describes a 
sexist situation or criticizes a sexist behaviour, and classifying it into one 
of two categories: YES or NO.

      Subtask 2.2 - Source Intention in Memes: this subtask aims to categorize 
the meme according to the intention of its author. Due to the characteristics 
of memes, systems should only classify memes into the DIRECT or JUDGEMENTAL 
categories.

      Subtask 2.3 - Sexism Categorization in Memes: once a meme has been 
classified as sexist, this subtask aims to categorize it into different types 
of sexism (according to a categorization proposed by experts that takes into 
account the different facets of women that are undermined). In particular, 
each sexist meme must be assigned one or more of the following categories: 
(i) IDEOLOGICAL AND INEQUALITY, (ii) STEREOTYPING AND DOMINANCE, (iii) 
OBJECTIFICATION, (iv) SEXUAL VIOLENCE and (v) MISOGYNY AND NON-SEXUAL 
VIOLENCE.
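Since a sexist meme may belong to several of these categories at once, system outputs for this subtask are naturally represented as a multi-hot vector rather than a single class. A minimal sketch (the helper name `encode` is hypothetical, not part of the official submission format):

```python
# The five sexism categories used in Subtask 2.3 (and 3.3), in order.
CATEGORIES = [
    "IDEOLOGICAL AND INEQUALITY",
    "STEREOTYPING AND DOMINANCE",
    "OBJECTIFICATION",
    "SEXUAL VIOLENCE",
    "MISOGYNY AND NON-SEXUAL VIOLENCE",
]

def encode(labels):
    """Multi-hot encoding: 1 for each category assigned to the meme."""
    return [1 if c in labels else 0 for c in CATEGORIES]

# A meme labeled with two categories simultaneously.
assert encode({"OBJECTIFICATION", "SEXUAL VIOLENCE"}) == [0, 0, 1, 1, 0]
```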
      
TASK 3: Sexism detection in Videos:

      Subtask 3.1 - Sexism Identification in Videos: this is a binary 
classification task, as in Subtask 2.1.

      Subtask 3.2 - Source Intention in Videos: this subtask replicates 
Subtask 2.2 for memes, but takes videos as its source.

      Subtask 3.3 - Sexism Categorization in Videos: this subtask aims to 
classify sexist videos according to the categorization used in Subtask 2.3: 
(i) IDEOLOGICAL AND INEQUALITY, (ii) STEREOTYPING AND DOMINANCE, (iii) 
OBJECTIFICATION, (iv) SEXUAL VIOLENCE and (v) MISOGYNY AND NON-SEXUAL 
VIOLENCE.

Although we recommend participating in all subtasks and in both languages, 
participants may take part in just one subtask (e.g., Subtask 2.1) and in one 
language (e.g., English).

During the training phase, the task organizers will provide the participants 
with the manually-annotated EXIST 2026 dataset. For the evaluation of the 
systems, the unlabeled test data will be released.

We encourage participation from both academic institutions and industrial 
organizations. We invite participants to register for the lab at the CLEF 2026 Labs 
Registration site (https://clef-labs-registration.dipintra.it/). You will 
receive information about how to join the Discord Group for the EXIST 2026 
shared task.


Important Dates:
* 17 November 2025: Registration opens.
* 26 February 2026: Training set available.
* 9 April 2026: Test set available.
* 23 April 2026: Registration closes.
* 7 May 2026: Runs submission due to organizers.
* 28 May 2026: Results notification to participants.
* 4 June 2026: Submission of Working Notes by participants.
* 30 June 2026: Notification of acceptance (peer reviews).
* 6 July 2026: Camera-ready participant papers due to organizers.
* 21-24 September 2026: EXIST 2026 at CLEF Conference.

** Note: All deadlines are 11:59PM UTC-12:00 ("anywhere on Earth") **


Organizers:
Laura Plaza, Universidad Nacional de Educación a Distancia (UNED)
Jorge Carrillo-de-Albornoz, Universidad Nacional de Educación a Distancia (UNED)
Iván Arcos, Universitat Politècnica de València (UPV)
Maria Aloy Mayo, Universitat Politècnica de València (UPV)
Paolo Rosso, Universitat Politècnica de València (UPV)
Damiano Spina, Royal Melbourne Institute of Technology (RMIT)


Contact:
Contact the organizers by writing to: [email protected]

Website: http://nlp.uned.es/exist2026/







