[Corpora-List] [Call for Abstracts] Deadline extended: Analysis of Linguistic VAriation for BEtter Tools (ALVABET) within the LLcD 2024 Conference

2024-04-23 Thread Mathilde Regnault via Corpora
Call for Abstracts: Analysis of Linguistic VAriation for BEtter Tools (ALVABET) 
within the LLcD 2024 Conference (https://llcd2024.sciencesconf.org/)

Workshop
Variation plays a particularly important role in linguistic change, since every 
change stems from a state of variation; but not every state of variation ends in 
change: the new variant can disappear, or variation can persist in different 
contexts. Access to sufficient amounts of data, and their quantification, in 
order to detect as precisely as possible the emergence of new variants and the 
recession or even disappearance of others, is a precious tool for the study of 
variation, whatever its dimensions (diachronic, diatopic, …) and whatever the 
field (syntax, morphology, …). The advent of large corpora has thus renewed the 
study of variation. NLP has contributed greatly to this renewal, providing tools 
for the enrichment and exploration of these corpora. In return, linguistic 
analysis can help explain some of the errors these tools make, deepening the 
picture where performance metrics tend to flatten everything into a single 
number, or even help improve performance.

NLP annotation tools, such as syntactic parsers and morphological taggers, now 
achieve strong performance when applied to data similar to that seen during 
their development. However, performance drops quickly as the target data 
diverges from the training scenario. This raises a number of issues when 
automatically annotated data are used for linguistic studies.

This workshop aims to explore the bilateral contributions between Natural 
Language Processing and variation analysis in the fields of morphosyntax and 
syntax, from diachronic and diatopic perspectives but also across genre, domain 
and form of writing, with no restriction on the languages of interest.

We warmly welcome submissions dealing with the issues and contributions of 
applying NLP to variation analysis:
• Quantification of variation along its different dimensions (both external 
and internal ones as well as in interaction with each other);
• Impact of annotation errors on the study of marginal structures (emergent 
or receding);
• Syntactic variation when it is induced by semantic changes.

We also welcome submissions dealing with the contributions of variation 
analysis to NLP:
• Variation mitigation (spelling standardisation...);
• Domain adaptation (domain referring here to any variation dimension);
• Error analysis (in and out of domain) in light of known variation 
phenomena, amongst which (de-)grammaticalisation;
• The evolution of grammatical categories and its impact on prediction 
models;
• The place of variation studies in NLP in the large language model era.

These themes are only suggestions, and the workshop will gladly host any 
submission that deals substantially with the reciprocal contributions between 
NLP and variation analysis in the mentioned fields.

Full workshop description: 
https://llcd2024.sciencesconf.org/data/pages/WS12Eng.pdf

Important Dates
• Apr 30, 2024: extended deadline for abstract submission
• May 15, 2024: Notification
• Sep 9-11: Conference

Submissions
Abstracts must clearly state the research questions, approach, method, data and 
(expected) results. They must be anonymous: not only must they not contain the 
presenters' names, affiliations or addresses, but they must avoid any other 
information that might reveal their author(s). They should not exceed 500 words 
(including examples, but excluding bibliographical references).
Abstracts will be assessed by two members of the Scientific Committee and (one 
of) the workshop organizers.
___
Corpora mailing list -- corpora@list.elra.info
https://list.elra.info/mailman3/postorius/lists/corpora.list.elra.info/
To unsubscribe send an email to corpora-le...@list.elra.info


[Corpora-List] Two postdoc positions at University of Turin (HARMONIA)

2024-04-23 Thread Valerio Basile via Corpora
The Content-Centered Computing group at the University of Turin, Italy, offers
*two 14-month postdoc positions* in the context of HARMONIA (Harmony in Hybrid
Decision-Making: Knowledge-enhanced Perspective-taking LLMs for Inclusive
Decision-Making), funded by the European Union under the NextGenerationEU
program within the larger project FAIR (Future Artificial Intelligence) Spoke 2
"Integrative AI". The project aims at developing methods for the adoption of
knowledge-enhanced Large Language Models (LLMs) in supporting informed and
inclusive political decisions within public decision-making processes.

The topics of the postdoc fellowships are:
- Computational linguistics methods for knowledge-enhanced
perspective-taking LLMs to support Inclusive Decision-Making
- Perspective-taking LLMs for supporting Inclusive Decision-Making
(full descriptions below)

The team includes members of the Computer Science Department and Economics
and Statistics Department of the University of Turin.
A PhD in Computer Science, Computational Linguistics, or related areas is
highly recommended. Knowledge of Italian is not mandatory.
The deadline for application is *May 13th 2024*.

The gross salary is €25,328 per year (about €1,860/month net). Turin is a
vibrant and liveable city in Northern Italy, close to the beautiful Italian
Alps and with a manageable cost of living.

Link to the call (in Italian). Link to the application platform.
Please write to  or  for
further information on how to apply.

Best regards,
Valerio Basile

--

*Computational linguistics methods for knowledge-enhanced
perspective-taking LLMs to support Inclusive Decision-Making*
The activity will focus on a) design of a semantic model to represent
interactions between urban services and citizens and integrate multi-source
hybrid data; b) data annotation by citizens with different socio-cultural
backgrounds to collect different perspectives on social issues. Data will
be collected and organized in a Knowledge Graph. The activity will be
supported by an interdisciplinary team of experts in KR, behavioral
economics and LLMs (link with the design of knowledge-enhanced LLMs).

*Perspective-taking LLMs for supporting Inclusive Decision-Making*
The activity will focus on a) exploring techniques for integrating
multi-source hybrid citizen data into LLMs (RAG and Knowledge Injection);
b) developing methods for training and evaluating perspective-taking LLMs,
which explicitly encode multiple perspectives, embodying the point of view
of different citizen communities on a topic. Planned activities include:
benchmark creation, error analysis, and evaluation of the efficiency and
reliability of the developed technologies.


[Corpora-List] CALAMITA - Challenge the Abilities of LAnguage Models in ITAlian - Call for Challenges

2024-04-23 Thread Malvina Nissim via Corpora
*CALAMITA - Challenge the Abilities of LAnguage Models in ITAlian*

*Special event co-located with the Tenth Italian Conference on
Computational Linguistics - CLiC-it 2024, Pisa, 4-6 December 2024 -
https://clic2024.ilc.cnr.it/*

*Upcoming deadline: 17th May 2024, challenge pre-proposal submission!*
Pre-proposal form: https://forms.gle/u4rSt9yXHHYquKrB6

*Project Description*

AILC, the Italian Association for Computational Linguistics, is launching a
*collaborative* effort to develop a dynamic and growing benchmark for
evaluating LLMs’ capabilities in Italian.

In the *long term*, we aim to establish a suite of tasks in the form of a
benchmark which can be accessed through a shared platform and a live
leaderboard. This would allow for ongoing evaluation of existing and newly
developed Italian or multilingual LLMs.

In the *short term*, we are looking to start building this benchmark
through a series of challenges collaboratively constructed by the research
community. Concretely, this happens through the present call for challenge
contributions. In a style similar to standard Natural Language Processing
shared tasks, *participants are asked to contribute a task and the
corresponding dataset with which a set of LLMs should be challenged*.
Participants are expected to provide an explanation and motivation for a
given task, a dataset that reflects that task together with any information
relevant to the dataset (provenance, annotation, distribution of labels or
phenomena, etc.), and a rationale for constructing it that way. Evaluation
metrics and example prompts should also be provided. Existing relevant
datasets are also very welcome, together with related publications if
available. All proposed challenges, whether based on existing or new
datasets, will have to follow the challenge template, which will be
distributed in due time, for the write-up of a challenge paper.

In this first phase, all prospective participants are asked to submit a
*pre-proposal* by filling in this form https://forms.gle/u4rSt9yXHHYquKrB6.
Please fill in all the fields so we can get an idea of what challenge you’d
like to propose, how the model should be prompted to perform the task,
where you’d get the data and how much, whether it’s already available, etc.

The organizers will examine the submitted pre-proposals and select those
challenges that comply with the template’s requirements, with an eye to
balancing different challenge types. The selected challenges will be
expanded with a full dataset, longer descriptions, etc. according to the
aforementioned template which will be distributed later. The final report
of each accepted challenge must provide the code for the evaluation with an
example that must smoothly run on a pre-selected base LLM (most likely
LLaMa-2) which will be communicated by the organisers in the second phase.
All reports will be published as CEUR Proceedings related to the CALAMITA
event. Subsequently, all challenge organisers who wish to be involved can
participate in a broader follow-up paper, targeting a top venue, which will
describe the whole benchmark, procedures, results, and analyses.

Once this first challenge set is put together, the *CALAMITA organizers*
will run *zero-* or *few-shot* experiments with a selection of LLMs and
write a final report. No tuning materials or experiments are expected at
this stage of the project.
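As a rough illustration of what zero- versus few-shot evaluation means here: the model is given only a task instruction, or an instruction plus a handful of in-context examples, and must produce the answer without any fine-tuning. The sketch below is purely illustrative; the task, examples, and prompt format are invented, and the actual template and prompts will come from the CALAMITA organisers.

```python
# Minimal sketch of zero- vs few-shot prompt construction (illustrative only;
# the real CALAMITA challenge template and prompts are defined by the organisers).

def build_prompt(instruction, test_item, examples=()):
    """Zero-shot when `examples` is empty; few-shot otherwise."""
    parts = [instruction]
    for x, y in examples:  # in-context demonstrations
        parts.append(f"Input: {x}\nOutput: {y}")
    parts.append(f"Input: {test_item}\nOutput:")  # model completes from here
    return "\n\n".join(parts)

# Hypothetical Italian sentiment task, for illustration.
instruction = "Classifica il sentimento della frase come positivo o negativo."

zero_shot = build_prompt(instruction, "Che bel film!")
few_shot = build_prompt(
    instruction,
    "Che bel film!",
    examples=[("Odio questo posto.", "negativo"),
              ("Giornata splendida!", "positivo")],
)
print(few_shot)
```

The same test item is evaluated under both conditions; only the number of in-context examples changes, which is what distinguishes the planned zero- and few-shot runs.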

*Deadlines (tentative)*

   - *17th May 2024: pre-proposal submission*
   - 27th May 2024: notification of pre-proposal acceptance
   - End of May 2024: distribution of challenge paper template and further
   instructions
   - 2nd September 2024: data and report submission
   - 30th September 2024: benchmark ready with reports for each challenge
   (after light review)
   - October-November 2024: running selected models on the benchmark with
   analyses
   - 4th-6th December 2024: CLiC-it 2024, Pisa (special event co-located with
   the conference)

*Website:* https://clic2024.ilc.cnr.it/calamita (under construction)

*Mail: *calamita.a...@gmail.com

*Organizers*

   - Pierpaolo Basile (University of Bari Aldo Moro)
   - Danilo Croce (University of Rome, Tor Vergata)
   - Malvina Nissim (University of Groningen)
   - Viviana Patti (University of Turin)


[Corpora-List] Postdoc or Research Assistant in London (QMUL)

2024-04-23 Thread Haim Dubossarsky via Corpora
On behalf of Prof. Mark Sandler.

Lyrics generation project using LLMs.

Notice the closing deadline.


From: Mark Sandler 

I am happy to announce that the Centre for Digital Music is now formally 
advertising the new research positions I posted last week. One area is lyrics 
generation and the other is music signal processing (instrument ID, loop ID, 
lyric transcription). Both are collaborative with London-based music industry 
companies, session and stage.



These are available immediately and can be offered as either post-doctoral or 
graduate research assistants, and can be either full- or part-time. Closing 
date is May 1 2024.



Details can be found here

https://www.qmul.ac.uk/jobs/vacancies/items/9619.html

https://www.qmul.ac.uk/jobs/vacancies/items/9617.html


[Corpora-List] Re: Corpora Digest, Vol 789, Issue 1

2024-04-23 Thread frcchang--- via Corpora
help
 Replied Message 
From corpora-requ...@list.elra.info Date 04/22/2024 20:00 To 
corpora@list.elra.info Cc Subject Corpora Digest, Vol 789, Issue 1 
Today's Topics:
1. WMT 2024: Low-Resource Indic Language Translation. (Santanu Pal)
2. Final CPF: SIGIR eCom'24: May 3rd (Tracy Holloway King)
3. [2nd CFP] Special issue on Abusive Language Detection of the journal 
Traitement Automatique des Langues (TAL)
(Farah Benamara)
4. [Call for Participation]: GermEval2024 Shared Task GerMS-Detect - Sexism 
Detection in German Online News Fora @Konvens 2024
(stephanie.gr...@ofai.at)
--
Message: 1
Date: Sun, 21 Apr 2024 13:02:42 +0100
From: Santanu Pal 
Subject: [Corpora-List] WMT 2024: Low-Resource Indic Language
Translation.
To: corpora@list.elra.info
Dear Colleagues,
We are pleased to inform you that we will be hosting the "Shared Task:
Low-Resource Indic Language Translation" again this year as part of WMT
2024. Following the outstanding success and enthusiastic participation
witnessed in the previous year's edition, we are excited to continue this
important initiative. Despite recent advancements in machine translation
(MT), such as multilingual translation and transfer learning techniques,
the scarcity of parallel data remains a significant challenge, particularly
for low-resource languages.
The WMT 2024 Indic Machine Translation Shared Task aims to address this
challenge by focusing on low-resource Indic languages from diverse language
families. Specifically, we are targeting languages such as Assamese, Mizo,
Khasi, Manipuri, Nyishi, Bodo, Mising, and Kokborok.
For inquiries and further information, please contact us at
lrilt.wm...@gmail.com. Additionally, you can find more details and updates
on the task through the following link: Task Link:
https://www2.statmt.org/wmt24/indic-mt-task.html.
We highly encourage participants to register in advance so that we can
provide periodic updates regarding data release dates and other relevant
information.
To register for the event, please fill out the registration form available
here:
https://docs.google.com/forms/d/e/1FAIpQLSd8LwriqdLLhVNAvUWEcGRJmKuBFQZ9BR_TKpb6VYZEnyGU0g/viewform?pli=1
We look forward to your participation and contributions to advancing
low-resource Indic language translation.
with best regards,
Santanu
--
Message: 2
Date: Sun, 21 Apr 2024 14:38:11 -0700
From: Tracy Holloway King 
Subject: [Corpora-List] Final CPF: SIGIR eCom'24: May 3rd
To: corpora@list.elra.info
Final Call For Papers - SIGIR eCom'24 - https://sigir-ecom.github.io/
The SIGIR Workshop on eCommerce will serve as a platform for the publication
and discussion of Information Retrieval, NLP and Vision research relevant to
applications in the domain of eCommerce. The workshop will bring together
practitioners and researchers from academia and industry to discuss the
challenges and approaches to product search and recommendation in eCommerce.
The deadline for paper submission is May 3rd, 2024 (11:59 P.M. AoE).
The special theme of this year's workshop is eCommerce Search in the Age of
Generative AI and LLMs.
The workshop will also include a data challenge. This year we will
collaborate with TREC on a product search data challenge (
https://trec-product-search.github.io/index.html). The overarching goal is
to study how end-to-end retrieval systems can be built and evaluated given
a large set of products. The data challenge provides a corpus of products
and a set of user intents (queries): the goal is to find the product that
suits the user’s needs.
SIGIR eCom is a full day workshop taking place on Thursday, July 18, 2024
in conjunction with SIGIR 2024. SIGIR eCom'24 will be an in-person workshop.

Important Dates:
Paper submission deadline - May 3rd, 2024 (11:59 P.M. AoE)
Notification of acceptance - May 23, 2024
Camera Ready Version of Papers Due - June 24, 2024
SIGIR eCom Full day Workshop - July 18, 2024
We invite quality research contributions, position and opinion papers
addressing relevant challenges in the domain of eCommerce. We invite
submission of both papers and posters. All submitted papers and posters
will be single-blind and will 

[Corpora-List] [CFP] Deadline extended - 2024 SIGIR First Workshop on Large Language Models, LLMs, for Evaluation in Information Retrieval

2024-04-23 Thread Guglielmo Faggioli via Corpora
The first workshop on evaluating IR systems with Large Language Models
(LLMs) is accepting submissions that describe original research findings,
preliminary research results, proposals for new work, and recent relevant
studies already published in high-quality venues.

Topics of interest

We welcome both full papers and extended abstract submissions on the
following topics, including but not limited to:

   - LLM-based evaluation metrics for traditional IR and generative IR.
   - Agreement between human and LLM labels.
   - Effectiveness and/or efficiency of LLMs to produce robust relevance
   labels.
   - Investigating LLM-based relevance estimators for potential systemic
   biases.
   - Automated evaluation of text generation systems.
   - End-to-end evaluation of Retrieval Augmented Generation systems.
   - Trustworthiness in LLM-based evaluation.
   - Prompt engineering for LLM-based evaluation.
   - Effectiveness and/or efficiency of LLMs as ranking models.
   - LLMs in specific IR tasks such as personalized search, conversational
   search, and multimodal retrieval.
   - Challenges and future directions in LLM-based IR evaluation.

Submission guidelines

We welcome the following submissions:

   - Previously unpublished manuscripts will be accepted as extended
   abstracts or full papers (any length between 1 and 9 pages) with unlimited
   references, formatted according to the latest ACM SIG proceedings template
   available at http://www.acm.org/publications/proceedings-template.
   - Published manuscripts can be submitted in their original format.

All submissions should be made through Easychair:
https://easychair.org/conferences/?conf=llm4eval

All papers will be peer-reviewed (single-blind) by the program committee
and judged by their relevance to the workshop, especially to the main
themes identified above, and their potential to generate discussion. For
already published studies, the paper can be submitted in the original
format. These submissions will be reviewed for their relevance to this
workshop. All submissions must be in English (PDF format).

All accepted papers will have a poster presentation with a few selected for
spotlight talks. Accepted papers may be uploaded to arXiv.org, allowing
submission elsewhere as they will be considered non-archival. The
workshop’s website will maintain a link to the arXiv versions of the papers.

Important Dates

   - Submission Deadline: May 2nd, 2024 (AoE time; extended from April 25th)
   - Acceptance Notifications: May 31st, 2024 (AoE time)
   - Workshop date: July 18, 2024

Website
For more information, visit the workshop website:
https://llm4eval.github.io/

Contact

For any questions about paper submission, you may contact the workshop
organizers at llm4e...@easychair.org


[Corpora-List] Edge Hill Corpus Research Group – Meeting #12

2024-04-23 Thread Costas Gabrielatos via Corpora
The next meeting of the Edge Hill Corpus Research Group will take place online 
(via MS Teams) on Thursday 25 April 2024, 2:00-3:30 pm (UK time).

Registration closes tomorrow (Wednesday 24 April), 11 am.

Attendance is free. You can register here:
https://store.edgehill.ac.uk/conferences-and-events/conferences/events/edge-hill-corpus-research-group-thursday-25th-april-2024

Topics: Corpus Methodology, Large Language Models

Speakers: Sylvia Jaworska (University of Reading, UK) & Mathew Gillings 
(Vienna University of Economics and Business, Austria)

Title: How humans vs. machines identify discourse topics: an exploratory 
triangulation

Abstract

Identifying discourses and discursive topics in a set of texts has been of 
interest not only to linguists, but also to researchers across the social 
sciences. Traditionally, these analyses have been conducted through small-scale 
interpretive analyses of discourse involving some form of close reading. 
Naturally, however, such close reading is only possible when the dataset is 
small, and it leaves the analyst open to accusations of bias and/or 
cherry-picking.

Designed to avoid these issues, other methods have emerged which involve larger 
datasets and have some form of quantitative component. Within linguistics, this 
has typically been through the use of corpus-assisted methods, whilst outside 
of linguistics, topic modelling is one of the most widely-used approaches. 
Increasingly, researchers are also exploring the utility of LLMs (such as 
ChatGPT) to assist analyses and identification of topics. This talk reports on 
a study assessing the effect that analytical method has on the interpretation 
of texts, specifically in relation to the identification of the main topics. 
Using a corpus of corporate sustainability reports, totalling 98,277 words, we 
asked six different researchers, along with ChatGPT, to interrogate the corpus 
and decide on its main ‘topics’ via four different methods. Each method 
gradually increases the amount of context available.

•   Method A: ChatGPT was used to categorise the topic model output and 
assign topic labels;
•   Method B: Two researchers were asked to view a topic model output and 
assign topic labels based purely on eyeballing the co-occurring words;
•   Method C: Two researchers were asked to assign topic labels based on a 
concordance analysis of 100 randomised lines of each co-occurring word;
•   Method D: Two researchers were asked to reverse-engineer a topic model 
output by creating topic labels based on a close reading.

The talk explores how the identified topics differed both between researchers 
in the same condition and between researchers in different conditions, shedding 
light on some of the mechanisms underlying topic identification by machines vs. 
humans, or machines assisted by humans. We conclude with a series of tentative 
observations regarding the benefits and limitations of each method, along with 
suggestions for researchers selecting an analytical approach for discourse 
topic identification. While this study is exploratory and limited in scope, it 
opens the way for further methodological and larger-scale triangulations of 
corpus-based analyses with other computational methods, including AI.

  


[Corpora-List] First Call For Papers: 17th International Natural Language Generation Conference INLG 2024

2024-04-23 Thread Saad Mahamood via Corpora
*First Call For papers: 17th International Natural Language Generation 
Conference INLG 2024*

We invite the submission of long and short papers, as well as system 
demonstrations, related to all aspects of Natural Language Generation (NLG), 
including data-to-text, concept-to-text, text-to-text and vision-to-text 
approaches. Accepted papers will be presented as oral talks or posters.

The event is organized under the auspices of the Special Interest Group on 
Natural Language Generation (SIGGEN) (https://aclweb.org/aclwiki/SIGGEN) of the 
Association for Computational Linguistics (ACL) (https://aclweb.org/). The 
event will be held from 23-27 September in Tokyo, Japan, shortly after SIGDial 
2024 (18-20 September), which takes place nearby in Kyoto.

**Important dates**

All deadlines are Anywhere on Earth (UTC-12)
• START system regular paper submission deadline: May 31, 2024
• ARR commitment to INLG deadline via START system: June 24, 2024
• START system demo paper submission deadline: June 24, 2024
• Notification: July 15, 2024
• Camera ready: August 16, 2024
• Conference: 23-27 September 2024

**Topics** 

INLG 2024 solicits papers on any topic related to NLG. General topics of 
interest include, but are not limited to:
• Large Language Models (LLMs) for NLG 
• Affect/emotion generation
• Analysis and detection of automatically generated text
• Bias and fairness in NLG systems
• Cognitive modelling of language production
• Computational efficiency of NLG models
• Content and text planning
• Corpora and resources for NLG
• Ethical considerations of NLG
• Evaluation and error analysis of NLG systems
• Explainability and Trustworthiness of NLG systems
• Generalizability of NLG systems
• Grounded language generation
• Lexicalisation
• Multimedia and multimodality in generation
• Natural language understanding techniques for NLG
• NLG and accessibility
• NLG in speech synthesis and spoken language models
• NLG in dialogue
• NLG for human-robot interaction
• NLG for low-resourced languages
• NLG for real-world applications
• Paraphrasing, summarization and translation
• Personalisation and variation in text
• Referring expression generation
• Storytelling and narrative generation
• Surface realization
• System architectures

**Submissions & Format**

Three kinds of papers can be submitted:
• Long papers are most appropriate for presenting substantial research results 
and must not exceed eight (8) pages of content, plus unlimited pages of ethical 
considerations, supplementary material statements, and references. The 
supplementary material statement provides detailed descriptions to support the 
reproduction of the results presented in the paper (see below for details). The 
final versions of long papers will be given one additional page of content (up 
to 9 pages) so that reviewers' comments can be taken into account.
• Short papers are more appropriate for presenting an ongoing research effort 
and must not exceed four (4) pages, plus unlimited pages of ethical 
considerations, supplementary material statements, and references. The final 
versions of short papers will be given one additional page of content (up to 5 
pages) so that reviewers' comments can be taken into account.
• Demo papers should be no more than two (2) pages, including references, and 
should describe implemented systems relevant to the NLG community. It also 
should include a link to a short screencast of the working software. In 
addition, authors of demo papers must be willing to present a demo of their 
system during INLG 2024.

Submissions should follow ACL Author Guidelines 
(https://www.aclweb.org/adminwiki/index.php?title=ACL_Author_Guidelines) and 
policies for submission, review and citation, and be anonymised for double 
blind reviewing. Please use ACL 2023 style files; LaTeX style files and 
Microsoft Word templates are available at: 
https://acl-org.github.io/ACLPUB/formatting.html

Authors must honor the ethical code set out in the ACL Code of Ethics 
(https://www.aclweb.org/portal/content/acl-code-ethics). If your work raises 
any ethical issues, you should include an explicit discussion of those issues. 
This will also be taken into account in the review process. You may find the 
following checklist of use: https://aclrollingreview.org/responsibleNLPresearch/

Authors are strongly encouraged to ensure that their work is reproducible; see, 
e.g., the following reproducibility checklist 
(https://2021.aclweb.org/calls/reproducibility-checklist/). Papers involving 
any kind of experimental results (human judgments, system outputs, etc) should 
incorporate a data availability statement into their paper. Authors are asked 
to indicate whether the data is made publicly available. If the data is not 
made available, authors should provide a brief explanation why. (E.g. because 
the data contains proprietary information.) A statement guide is available on 
the INLG 2024 website: https://inlg2024.github.io/

To submit a