*Apologies for cross-posting*
Fifth Workshop on Gender Bias in Natural Language Processing
Bangkok, Thailand, on August 16, 2024
https://genderbiasnlp.talp.cat/
Second Call for Papers
Gender bias, among other demographic biases (e.g. race, nationality, religion),
in machine-learned models is of increasing interest to the scientific community
and industry. Models of natural language are highly affected by such biases,
which are present in widely used products and can lead to poor user
experiences. There is a growing body of research into improved representations
of gender in NLP models. Key approaches include building and using balanced
training and evaluation datasets (e.g. Webster et al., 2018) and changing the
learning algorithms themselves (e.g. Bolukbasi et al., 2016). While these
approaches show promising results, much remains to be done to address both
identified and future bias issues. To make progress as a field, we need to
create widespread awareness of bias and build consensus on how to work against
it, for instance by developing standard tasks and metrics. Our workshop provides a
forum to achieve this goal.
Topics of interest
We invite submissions of technical work exploring the detection, measurement,
and mitigation of gender bias in NLP models and applications. Other important
topics include the creation of datasets, the identification and assessment of
relevant biases, and fairness in NLP systems. Finally, the workshop is also open
to non-technical work addressing sociological perspectives, and we strongly
encourage critical reflections on the sources and implications of bias
throughout all types of work.
In addition, this year we are organising a Shared Task on evaluating gender
bias in Machine Translation.
Paper Submission Information
Submissions will be accepted as short papers (4-6 pages) and as long papers
(8-10 pages), plus additional pages for references, following the ACL 2024
guidelines. Supplementary material can be added, but should not be central to
the argument of the paper. Blind submission is required.
Each paper should include a statement that explicitly defines (a) what system
behaviors are considered bias in the work and (b) why those behaviors are
harmful, in what ways, and to whom (cf. Blodgett et al., 2020). More
information on this requirement, which was successfully introduced at GeBNLP
2020, can be found on the workshop website. We also encourage authors to engage
with definitions of bias and other relevant concepts, such as prejudice, harm,
and discrimination, from outside NLP, especially from the social sciences and
normative ethics, in this statement and in their work in general.
Non-archival option
Authors have the option of submitting their research as non-archival, meaning
that the paper will not be published in the conference proceedings. We expect
these submissions to meet the same standards of quality and formatting as
archival submissions.
Important dates
May 10, 2024: Workshop Paper Due Date
June 5, 2024: Notification of Acceptance
June 25, 2024: Camera-ready papers due
August 16, 2024: Workshop Date
Keynote Speakers
Isabelle Augenstein, University of Copenhagen
Hal Daumé III, University of Maryland and Microsoft Research NYC
Organizers
Christine Basta, Alexandria University
Marta R. Costa-jussà, FAIR, Meta
Agnieszka Faleńska, University of Stuttgart
Seraphina Goldfarb-Tarrant, Cohere
Debora Nozza, Bocconi University