Dear colleagues,

The Fifth Workshop on Insights from Negative Results in NLP, co-located
with NAACL 2024, June 16-21, 2024

First Call for Participation

Insights Website: <https://insights-workshop.github.io/>

Contact email: [email protected]


* Overview

Publication of negative results is difficult in most fields, but in NLP the
problem is exacerbated by the near-universal focus on benchmark
improvements. This situation implicitly discourages hypothesis-driven
research, and it turns the creation and fine-tuning of NLP models into an
art rather than a science. Furthermore, it increases the time, effort, and
carbon emissions spent on developing and tuning models, as researchers have
no opportunity to learn what has already been tried and failed.

This workshop invites unexpected or negative results, both practical and
theoretical, that have important implications for future research,
highlight methodological issues with existing approaches, and/or point out
pervasive misunderstandings or bad practices. In particular, the most
successful NLP models currently rely on Transformer-based large language
models (LLMs). To complement all the success stories, it would be
insightful to see where, and possibly why, they fail. All NLP tasks are
welcome: sequence labeling, question answering, inference, dialogue,
machine translation - you name it.

A successful negative results paper would contribute one of the following:

** broadly applicable recommendations for training/fine-tuning/prompting,
especially if X that didn’t work is something that many practitioners would
think reasonable to try, and if the demonstration of X’s failure is
accompanied by some explanation/hypothesis;
** ablation studies of components in previously proposed models, showing
that their contributions are different from what was initially reported;
** datasets or probing tasks showing that previous approaches do not
generalize to other domains or language phenomena;
** trivial baselines that work suspiciously well for a given task/dataset;
** cross-lingual studies showing that a technique X is only successful for
a certain language or language family;
** experiments on the (in)stability of previously published results due to
hardware, random initializations, preprocessing pipeline components, etc.;
** theoretical arguments and/or proofs for why X should not be expected to
work;
** demonstration of issues with data processing/collection/annotation
pipelines, especially if they are widely used;
** demonstration of issues with evaluation metrics (e.g. accuracy, F1 or
BLEU), which prevent their usage for fair comparison of methods;
** demonstration of issues with under-reporting of training details of
pre-trained models, including test data contamination and invalid
comparisons.

In 2024, we will invite the authors of accepted negative results papers to
nominate the specific work that reported the original positive results. The
goal is to organize joint discussion sessions, so that the community can
learn the most from these insightful failures.

* Important Dates

** Submission due: March 10, 2024
** Submission due for papers reviewed through ACL Rolling Review: April 7,
2024
** Notification of acceptance: April 14, 2024
** Camera-ready papers due: April 24, 2024
** Workshop: TBA, between June 21 and 22, 2024

* Submission

Submission is electronic, using the Softconf START conference management
system.
Submission link: <https://softconf.com/naacl2024/Insights2024>

The workshop will accept short papers (up to 4 pages, excluding
references), as well as 1-2 page non-archival abstract submissions for
papers published elsewhere (e.g. in one of the main conferences or in
non-NLP venues). The goal of this event is to stimulate a meaningful
community-wide discussion of the deep issues in NLP methodology, and the
authors of both types of submissions will be welcome to take part in our
get-togethers.
The workshop will run its own review process, and papers can be submitted
directly to the workshop by March 10, 2024. It is also possible to submit a
paper accompanied by reviews from the ACL Rolling Review (ARR) system by
April 7, 2024; the submission deadline for ARR papers follows the ARR
calendar. Both research papers and abstracts must follow the ACL two-column
format. Official style sheets:
https://github.com/acl-org/acl-style-files

Please do not modify these style files, nor should you use templates
designed for other conferences. Submissions that do not conform to the
required styles, including paper size, margin width, and font size
restrictions, will be rejected without review. Please follow the formatting
guidelines outlined here: https://acl-org.github.io/ACLPUB/formatting.html


* Multiple Submission Policy

The workshop cannot accept work for publication or presentation that will
be (or has been) published elsewhere, or that has been or will be submitted
to other meetings or publications whose review periods overlap with that of
Insights. Any questions regarding submissions can be sent to
[email protected].

If the paper has been rejected from another venue, the authors will have
the option to provide the original reviews and the author response. The new
reviewers will not have access to this information, but the organizers will
be able to take into account that the paper has already been revised and
improved.

* Anonymity Period

The workshop will follow the ACL anonymity policy:
https://www.aclweb.org/adminwiki/index.php/ACL_Anonymity_Policy

* Presentation

All accepted papers must be presented at the workshop to appear in the
proceedings. Authors of accepted papers must notify the program chairs by
the camera-ready deadline if they wish to withdraw the paper. At least one
author of each accepted paper must register for the workshop.
Previous presentations of the work (e.g. preprints on arXiv.org) should be
noted in a footnote in the camera-ready version (but not in the anonymized
version of the paper).
The workshop will take place during NAACL 2024 (June 16-21, 2024). It will
be hybrid, allowing for both in-person and virtual presentations.

* Organization Committee

** Shabnam Tafreshi, inQbator AI at eviCore Healthcare
** Arjun Reddy Akula, Google Research
** João Sedoc, New York University
** Anna Rogers, IT University of Copenhagen
** Aleksandr Drozd, RIKEN
** Anna Rumshisky, University of Massachusetts Lowell / Amazon Alexa

* Contact info
Any questions regarding the workshop can be sent to
[email protected].


Please see the call for papers page on our website for further details on
Authorship, Citation and Comparison, Ethics Policy, Reproducibility, and
Presentation: https://insights-workshop.github.io/2024/cfp/


Regards,
Insights 2024 Organizers

-- 
*Shabnam Tafreshi, PhD*

*Machine Learning Senior Advisor - NLP Researcher*

*Computational Linguistics, NLP*
*inQbator AI at eviCore Healthcare*


*"All the problems of the world could be settled easily, if people only
willing to think."*
*-Thomas J. Watson*