(Apologies for cross-posting)

Second Call for Papers for the

WORKSHOP ON THE SCALING BEHAVIOR OF LARGE LANGUAGE MODELS (SCALE-LLM 2024)

https://scale-llm-24.pages.dev/

Submission deadline: December 18, 2023.

The purpose of the SCALE-LLM workshop is to provide a venue to share and
discuss results of investigations into the scaling behavior of Large Language
Models (LLMs). We are particularly interested in results displaying
"interesting" scaling curves (e.g., inverse, u-shaped, or inverse u-shaped
scaling curves) for a variety of tasks. These results, in which the
performance of an LLM decreases with increasing model size or follows a
non-monotonic trend, deviating from the expected "the bigger, the better"
positive scaling laws, are of great scientific interest: they can reveal
intrinsic limitations of current LLM architectures and training paradigms,
and they suggest novel research directions towards a better understanding of
these models and of possible approaches to improve them.

Recently, there has been increasing interest in these phenomena from the
research community, culminating in the Inverse Scaling Prize (McKenzie et al.
2023), which solicited tasks to be evaluated according to a standardized
protocol so that their scaling behavior could be studied systematically. The
SCALE-LLM Workshop will expand these efforts.
In contrast to the Inverse Scaling Prize, which focused on zero-shot tasks
with a fixed format, we are also interested in, for example, few-shot and
alternative prompting strategies (e.g. Chain-of-Thought), multi-step
interactions (e.g. Tree-of-Thoughts, self-critique), hardening against prompt
injection attacks (e.g. user input escaping, canary tokens), etc.

MAIN TOPICS

The workshop will provide focused discussions on multiple topics in the
general field of the scaling behavior of Large Language Models, including,
but not limited to, the following:

1. Novel tasks that exhibit Inverse, U-shaped, Inverse U-shaped or other
types of scaling;
2. Scaling behavior of fine-tuned or purpose-built models, in particular
in-distribution (w.r.t. the fine-tuning dataset) vs. out-of-distribution;
3. Scaling with adaptive prompting strategies, e.g. allowing intermediate
reasoning steps, model self-critique or use of external tools;
4. Scaling w.r.t. additional dimensions, such as the number of
in-context/fine-tuning examples, the number of reasoning steps, or the
intrinsic task complexity;
5. Scaling on non-English language tasks, in particular low-resource
languages, where models might exhibit tradeoffs as high-resource language
training data overwhelms low-resource language capabilities;
6. Scaling w.r.t. qualitative characteristics: internal aspects (e.g.
modularity, mechanistic interpretability), calibration, uncertainty,
effectiveness of various techniques (pruning, defences against adversarial
attacks, etc.).

IMPORTANT DATES

- Workshop paper submission deadline: December 18, 2023
- EACL rejected paper submission deadline (ARR pre-reviewed): January 17,
2024
- Notification of acceptance: January 20, 2024
- Camera-ready papers due: January 30, 2024
- Workshop dates: March 21 or 22, 2024

SUBMISSION INSTRUCTIONS

We solicit short and long paper submissions of no more than 4 and 8 pages,
respectively, plus unlimited pages for references and appendices.

Papers must contain "Limitations" and "Ethics Statement" sections, which
will not count towards the page limit. Upon acceptance, one additional page
will be provided to address the reviewers' comments. Paper submissions must
use the official ACL style templates
(https://github.com/acl-org/acl-style-files) and must follow the ACL
formatting guidelines (https://acl-org.github.io/ACLPUB/formatting.html).

All submissions must be anonymous. De-anonymized versions of the submitted
papers may be released on pre-print servers such as arXiv; however, we kindly
ask the authors not to discuss these papers on social media during the review
period.

Please send your submissions via our OpenReview interface:
https://openreview.net/group?id=eacl.org/EACL/2024/Workshop/SCALE-LLM

We can also consider papers that were submitted to EACL via ACL Rolling
Review (ARR) and rejected. A paper may not be simultaneously under review
through ARR and SCALE-LLM, and a paper that has received or will receive ARR
reviews may not be submitted to SCALE-LLM for a new round of reviewing. Keep
in mind that ARR has stricter anonymity requirements regarding pre-print
servers and social media, so make sure you do not de-anonymize papers
submitted through ARR by posting them on arXiv or social media. Please refer
to the ARR instructions for authors (https://aclrollingreview.org/authors)
for more information.

STUDENT SCHOLARSHIP

Thanks to our Platinum sponsor Google, we can offer financial support to a
limited number of students from low-income countries or otherwise
disadvantaged financial situations who would like to participate in the
SCALE-LLM workshop. We may be able to cover the EACL virtual conference
registration fee. We will prioritize students who are authors of one of the
accepted papers. If you are interested in receiving financial support, please
contact us before January 30, 2024, explaining your situation.

INVITED SPEAKERS

Najoung Kim will give a keynote talk. Dr. Kim is an Assistant Professor at
Boston University and a researcher at Google. She is one of the authors of
the Inverse Scaling Prize paper as well as other foundational works in this
field.

Additional speakers will be announced at a later date.

SCHEDULE

To be decided.

ORGANIZING COMMITTEE

- Antonio Valerio Miceli-Barone, Research Associate, University of Edinburgh
- Fazl Barez, Research fellow, University of Oxford
- Shay Cohen, Reader, University of Edinburgh
- Elena Voita, Research Scientist, Meta
- Ulrich Germann, Senior Computing Officer (Research), University of
Edinburgh
- Michal Lukasik, Researcher, Google Research

CONTACTS

Workshop website: https://scale-llm-24.pages.dev/
Email: amiceli [at] ed.ac.uk

Best Regards,

The SCALE-LLM organizers
Antonio Valerio Miceli-Barone, Fazl Barez, Shay Cohen, Elena Voita, Ulrich
Germann, Michal Lukasik
