*Apologies for cross-posting*
Sixth Workshop on Gender Bias in Natural Language Processing
<https://gebnlp-workshop.github.io/>
ACL 2025 - Vienna, Austria
July 27 – August 1, 2025
Second Call for Papers
Gender bias in machine-learned models, like other demographic biases (e.g., race, nationality, religion), is of increasing interest to the scientific community and industry. Natural language models are highly affected by such biases, which surface in widely used products and can lead to poor user experiences. There is a growing body of research into improved representations of gender in NLP models. Key approaches include building and using balanced training and evaluation datasets (e.g., Webster et al., 2018) and changing the learning algorithms themselves (e.g., Bolukbasi et al., 2016). While these approaches show promising results, much work remains to address both known and yet-unidentified bias issues. To make progress as a field, we need widespread awareness of bias and a consensus on how to work against it, for instance by developing standard tasks and metrics. Our workshop provides a forum to achieve this goal.
Topics of interest
We invite submissions of technical work exploring the detection, measurement, and mitigation of gender bias in NLP models and applications. Other important topics are the creation of datasets, the identification and assessment of relevant biases, and fairness in NLP systems. Finally, the workshop is also open to non-technical work addressing sociological perspectives, and we strongly encourage critical reflection on the sources and implications of bias throughout all types of work.
Paper Submission Information
Submissions will be accepted as short papers (4 pages) and as long papers (8
pages), plus additional pages for references, following the ACL 2025
guidelines. Supplementary material can be added, but should not be central to
the argument of the paper. Blind submission is required.
Each paper should include a statement that explicitly defines (a) what system behaviors are considered bias in the work and (b) why those behaviors are harmful, in what ways, and to whom (cf. Blodgett et al., 2020). More information on this requirement, which was successfully introduced at GeBNLP 2020, can be found on the workshop website. In this statement, and in their work in general, we also encourage authors to engage with definitions of bias and other relevant concepts, such as prejudice, harm, and discrimination, from outside NLP, especially from the social sciences and normative ethics.
Non-archival option
Authors have the option of submitting their research as non-archival, meaning that the paper will not be published in the proceedings. We expect these submissions to meet the same quality and formatting standards as archival submissions.
Important dates
Direct submission deadline: March 1, 2025
<https://openreview.net/group?id=aclweb.org/ACL/2025/Workshop/GeBNLP>
Pre-reviewed (ARR) submission deadline: March 25, 2025
<https://openreview.net/group?id=aclweb.org/ACL/2025/Workshop/GeBNLP_ARR_Commitment>
Notification of acceptance: April 17, 2025
Camera-ready paper deadline: May 16, 2025
Workshop dates: July 31 – August 1, 2025
Please note the following OpenReview policy:
New profiles created without an institutional email will go through a moderation process that can take up to two weeks.
New profiles created with an institutional email will be activated automatically.
Organizers
Christine Basta, Alexandria University
Marta R. Costa-jussà, FAIR, Meta
Agnieszka Faleńska, University of Stuttgart
Debora Nozza, Bocconi University
Karolina Stańczak, Mila and McGill University