SemEval 2024 Task 4: Multilingual Detection of Persuasion Techniques in Memes

First Call for Participation

Memes that are part of a disinformation campaign achieve their goal of 
influencing social media users through a number of rhetorical and psychological 
techniques, such as causal oversimplification, name calling, appeal to fear, 
straw man, loaded language, and smears.

The goal of the shared task is to build models for identifying such techniques 
in the textual content of a meme, as well as in a multimodal setting (which in 
many cases requires drawing complex inferences from both the textual and the 
visual content). Additionally, the task organizes these techniques into a 
hierarchy, which allows more sophisticated approaches when building models. 
Finally, there will be three surprise test datasets in different languages (a 
fourth one, in English, will be released as well), which will be revealed only 
at the final stages of the shared task.

Specifically, we offer the following subtasks:

Subtask 1 (multilabel classification problem; text only; multilingual test 
set): Given only the "textual content" of a meme, identify which of the 20 
persuasion techniques, organized in a hierarchy, it uses.
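
A natural starting point for Subtask 1 is a plain multilabel text classifier. 
The sketch below, using scikit-learn, is only a hedged illustration of the 
problem framing: the example texts and the two labels shown are invented, and 
a competitive system would likely use a pretrained multilingual transformer 
rather than TF-IDF features.

    # Minimal multilabel baseline sketch for Subtask 1 (text only).
    # The example memes and labels below are hypothetical placeholders;
    # the official data format is described on the task website.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import MultiLabelBinarizer

    texts = [
        "They ALWAYS lie to you. Wake up, sheeple!",
        "Vote for freedom. Vote for your country.",
    ]
    labels = [
        ["Loaded Language", "Name Calling/Labeling"],
        ["Flag-Waving", "Slogans"],
    ]

    mlb = MultiLabelBinarizer()
    Y = mlb.fit_transform(labels)  # one binary column per persuasion technique

    # One independent classifier per technique over word n-gram features.
    clf = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        OneVsRestClassifier(LogisticRegression(max_iter=1000)),
    )
    clf.fit(texts, Y)

    pred = clf.predict(["Our glorious nation will never surrender!"])
    print(mlb.inverse_transform(pred))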


Subtask 2a (multilabel classification problem; multimodal; multilingual test 
set): Given a meme, identify which of the 22 persuasion techniques, organized 
in a hierarchy, are used both in the textual and in the visual content of the 
meme.
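
For the multimodal subtasks, one simple approach is late fusion: encode the 
meme text and the image separately with any off-the-shelf encoders, 
concatenate the two embedding vectors, and train a multilabel classifier on 
top. The sketch below is only a schematic under that assumption; encode_text 
and encode_image are hypothetical stand-ins (here they return random vectors 
so the snippet runs end to end), and the same fused representation with a 
single binary output could serve Subtask 2b.

    # Schematic late-fusion sketch for Subtask 2a (multimodal, multilabel).
    # encode_text() and encode_image() are placeholders for any pretrained
    # text/image encoders a participant might choose.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier

    rng = np.random.default_rng(0)

    def encode_text(text: str) -> np.ndarray:
        return rng.standard_normal(128)   # placeholder text embedding

    def encode_image(path: str) -> np.ndarray:
        return rng.standard_normal(256)   # placeholder image embedding

    def fuse(text: str, image_path: str) -> np.ndarray:
        # Late fusion: concatenate the two modality embeddings.
        return np.concatenate([encode_text(text), encode_image(image_path)])

    # Hypothetical training memes (text, image path) with binary labels.
    X = np.stack([fuse("Wake up, sheeple!", "meme_001.png"),
                  fuse("Vote for your country.", "meme_002.png")])
    Y = np.array([[1, 0],   # e.g. Loaded Language
                  [0, 1]])  # e.g. Flag-Waving

    clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
    print(clf.predict(fuse("Our glorious nation!", "meme_003.png").reshape(1, -1)))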


Subtask 2b (binary classification problem; multimodal): Given a meme (both the 
textual and the visual content), identify whether it contains a persuasion 
technique or no technique.


The data is annotated with the following persuasion techniques:

Loaded Language; Name Calling/Labeling; Exaggeration/Minimization; Appeal to 
fear/prejudice; Flag-Waving; Slogans; Repetition; Doubt; Reductio ad Hitlerum; 
Obfuscation/Intentional Vagueness/Confusion; Smears; Glittering Generalities; 
Causal Oversimplification; Black-and-White Fallacy; Appeal to Authority; 
Bandwagon; Red Herring; Whataboutism; Thought-terminating Clichés; Straw Man.
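
The techniques above are organized in a hierarchy (the full structure is 
documented on the task website). One common way to exploit such a hierarchy is 
to expand each predicted technique with its ancestors, so that hierarchical 
multilabel scoring can reward predictions that are correct at a coarser level. 
The parent links in the sketch below are a made-up toy fragment, not the 
official hierarchy.

    # Toy sketch of propagating predicted techniques up a hierarchy.
    # The parent links here are invented for illustration only; the official
    # hierarchy is published on the task website.
    TOY_PARENT = {
        "Name Calling/Labeling": "Ad Hominem (toy group)",
        "Smears": "Ad Hominem (toy group)",
        "Ad Hominem (toy group)": "Persuasion (root)",
        "Loaded Language": "Appeal to Emotion (toy group)",
        "Appeal to fear/prejudice": "Appeal to Emotion (toy group)",
        "Appeal to Emotion (toy group)": "Persuasion (root)",
    }

    def with_ancestors(predicted: set[str]) -> set[str]:
        """Add every ancestor of each predicted technique, excluding the root."""
        expanded = set(predicted)
        for label in predicted:
            node = label
            while node in TOY_PARENT and TOY_PARENT[node] != "Persuasion (root)":
                node = TOY_PARENT[node]
                expanded.add(node)
        return expanded

    print(with_ancestors({"Smears", "Loaded Language"}))
    # {'Smears', 'Ad Hominem (toy group)', 'Loaded Language',
    #  'Appeal to Emotion (toy group)'}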

We believe the tasks will appeal to a broad range of NLP communities, 
including researchers working on sentiment analysis, fact-checking, 
argumentation mining, tagging, and sequence modeling, as well as researchers 
working on image analysis in a multimodal setting.


A live leaderboard will allow participants to track their progress on all 
subtasks. All participants will be invited to submit a paper to the 
SemEval-2024 workshop. 


Shared task website: https://propaganda.math.unipd.it/semeval2024task4


Competition dates:  4 September 2023 - 31 January 2024

Schedule
October 2023    Release of the training labels

January 13, 2024 (tentative)    Release of the gold labels of the dev set

January 20, 2024 (tentative)    Release of the test set

January 31, 2024 at 23:59 (Anywhere on Earth)   Test submission site closes

February 29, 2024       Paper Submission Deadline

April 1, 2024   Notification to authors

April 22, 2024  Camera ready papers due

June 16–21, 2024 SemEval 2024 workshop (co-located with NAACL 2024 in Mexico 
City, Mexico)


Task Organisers: 

Dimitar Dimitrov, Sofia University "St. Kliment Ohridski"

Giovanni Da San Martino, University of Padova, Italy

Preslav Nakov, Mohamed bin Zayed University of Artificial Intelligence, UAE

Firoj Alam, Qatar Computing Research Institute, HBKU, Qatar

Maram Hasanain, Qatar Computing Research Institute, HBKU, Qatar

Abul Hasnat, Blackbird.ai

Fabrizio Silvestri, Sapienza University, Rome