*** Final Call for Papers (Deadline extended to March 17th) ***


We invite paper submissions to the 8th Workshop on Online Abuse and Harms 
(WOAH), which will take place on June 20/21 at NAACL 2024.



Website: https://www.workshopononlineabuse.com/cfp.html

Join our WOAH community Slack channel: 
https://hatespeechdet-47d7560.slack.com/join/shared_invite/zt-2a8d96j4z-gkNk_aLrliUK4NxA8woqIw#/shared-invite/email



Important Dates

Submission due: March 17, 2024

ARR reviewed submission due: April 7, 2024

Notification of acceptance: April 14, 2024

Camera-ready papers due: April 24, 2024

Workshop: June 20/21, 2024



Overview

Digital technologies have brought many benefits to society, transforming how 
people connect, communicate and interact with each other. However, they have 
also enabled abusive and harmful content, such as hate speech and harassment, 
to reach large audiences and have amplified its negative effects. The sheer 
volume of content shared online means that abuse and harms can only be tackled 
at scale with the help of computational tools. Yet detecting and moderating 
online abuse and harms is a difficult task, with many technical, social, legal 
and ethical challenges. The Workshop on Online Abuse and Harms invites paper 
submissions from a wide range of fields, including natural language processing, 
machine learning, computational social science, law, politics, psychology, 
sociology and cultural studies. We explicitly encourage interdisciplinary 
submissions, technical as well as non-technical submissions, and submissions 
that focus on under-resourced languages. We also invite non-archival 
submissions and civil society reports.



The topics covered by WOAH include, but are not limited to:

  *   New models or methods for detecting abusive and harmful online content, 
including misinformation;
  *   Biases and limitations of existing detection models or datasets for 
abusive and harmful online content, particularly those in commercial use;
  *   New datasets and taxonomies for online abuse and harms;
  *   New evaluation metrics and procedures for the detection of harmful 
content;
  *   Dynamics of online abuse and harms, as well as their impact on different 
communities;
  *   Social, legal, and ethical implications of detecting, monitoring and 
moderating online abuse.



In addition, we invite submissions related to the special theme of this eighth 
edition of WOAH: online harms in the age of large language models. Highly 
capable large language models (LLMs) are now widely deployed and easily 
accessible to millions of people across the globe. Without proper safeguards, 
LLMs will readily follow malicious instructions and generate toxic content, 
and even the safest LLMs can be exploited by bad actors for harmful purposes. 
With this theme, we invite submissions that explore the implications of LLMs 
for the creation, dissemination and detection of harmful online content. We 
are interested not only in how to stop LLMs from following malicious 
instructions and generating toxic content, but also in how they could be used 
to improve content moderation and enable countermeasures such as personalised 
counterspeech. To support our theme, we have invited an interdisciplinary 
line-up of high-profile speakers from academia, industry and public policy.



Submission

Submission is electronic, using the Softconf START conference management system.

Submission link: 
https://softconf.com/naacl2024/WOAH2024/manager/scmd.cgi?scmd=submitPaperCustom&pageid=0&isPreview=yes



The workshop will accept three types of papers:



  *   Academic Papers (long and short): Long papers of up to 8 pages and short 
papers of up to 4 pages, both excluding references, with unlimited pages for 
references and appendices. Accepted papers will be given one additional page 
of content to address reviewer comments. Previously published papers cannot 
be accepted.
  *   Non-Archival Submissions: Up to 2 pages, excluding references, to 
summarise and showcase in-progress work or work published elsewhere.
  *   Civil Society Reports: Non-archival submissions of at least 2 pages, 
with no upper limit. These may include work published elsewhere.



Format and styling

All submissions must use the official ACL two-column format, using the supplied 
official style files. The templates can be downloaded from the ACL style files 
repository: https://github.com/acl-org/acl-style-files



Please send any questions about the workshop to [email protected].





Organisers

Paul Röttger, Bocconi University

Yi-Ling Chung, The Alan Turing Institute

Debora Nozza, Bocconi University

Aida Mostafazadeh Davani, Google Research

Agostina Calabrese, University of Edinburgh

Flor Miriam Plaza-del-Arco, Bocconi University

Zeerak Talat, MBZUAI
