Dear colleagues,

Due to popular demand, the deadline for regular paper submissions to the 4th 
Workshop on Evaluation and Comparison for NLP systems (Eval4NLP) at AACL 2023 
will be extended by one week.

The new submission deadline is **September 1**.

Please note that Eval4NLP also allows submission of pre-reviewed papers 
*together* with their reviews and an explanation of possible refinements. These 
can be submitted up to one month later. For details, please see below or visit 
the Eval4NLP webpage: https://eval4nlp.github.io/2023/cfp.html.

-----------------------------------------------------------------------------------------

The 4th Workshop on Evaluation and Comparison for NLP systems (Eval4NLP), 
co-located at the 2023 Conference of the Asia-Pacific Chapter of the 
Association for Computational Linguistics (AACL 2023), invites the submission 
of long and short papers of a theoretical or experimental nature, describing 
recent advances in system evaluation and comparison in NLP.

** Important Dates **

All deadlines are 11:59 pm UTC-12 (“Anywhere on Earth”).

- Direct submission to Eval4NLP: ** September 1 **
- Submission of pre-reviewed papers to Eval4NLP (see below for details): 
September 25
- Notification of acceptance: October 2
- Camera-ready papers due: October 10
- Workshop day: November 1

Please see the Call for Papers for more details: 
https://eval4nlp.github.io/2023/cfp.html.

** Special topic of this year’s workshop **

This year's edition of the Eval4NLP workshop puts a focus on the evaluation of 
and through large language models (LLMs). Notably, the workshop will feature a 
shared task on LLM evaluation and specifically encourages the submission of 
papers focused on LLM evaluation. Other submissions that fit the general scope 
of Eval4NLP are, of course, also welcome. See below for more details.

** Shared Task **

This year's edition features a shared task on the explainable evaluation of 
generated language (MT and summarization), with a focus on LLM prompts. Please 
find more information on the shared task page: 
https://eval4nlp.github.io/2023/shared-task.html.

** Topics **

Designing evaluation metrics:
- Proposing and/or analyzing metrics with desirable properties, e.g., high 
correlations with human judgments, strong ability to distinguish high-quality 
outputs from mediocre and low-quality outputs, robustness across lengths of 
input and output sequences, efficiency, etc.
- Reference-free evaluation metrics, which only require source text(s) and 
system predictions
- Cross-domain metrics, which can reliably and robustly measure the quality of 
system outputs from heterogeneous modalities (e.g., image and speech), 
different genres (e.g., newspapers, Wikipedia articles, and scientific papers), 
and different languages
- Cost-effective methods for eliciting high-quality manual annotations
- Methods and metrics for evaluating interpretability and explanations of NLP 
models

Creating adequate evaluation data:
- Proposing new datasets or analyzing existing ones by studying their coverage 
and diversity, e.g., size of the corpus, covered phenomena, representativeness 
of samples, distribution of sample types, and variability among data sources, 
eras, and genres
- Quality of annotations, e.g., consistency of annotations, inter-rater 
agreement, and bias checks

Reporting correct results:
- Ensuring and reporting statistics for the trustworthiness of results, e.g., 
via appropriate significance tests and the reporting of score distributions 
rather than single-point estimates, to avoid chance findings
- Reproducibility of experiments, e.g., quantifying the reproducibility of 
papers and issuing reproducibility guidelines
- Comprehensive and unbiased error analyses and case studies, avoiding 
cherry-picking and sampling bias

** Submission Guidelines **

The workshop welcomes two types of submission -- long and short papers. Long 
papers may consist of up to 8 pages of content, plus unlimited pages of 
references. Short papers may consist of up to 4 pages of content, plus 
unlimited pages of references. Please follow the ACL ARR formatting 
requirements, using the official templates: 
https://github.com/acl-org/acl-style-files. Final versions of both submission 
types will be given one additional page of content for addressing reviewers’ 
comments. The accepted papers will appear in the workshop proceedings. The 
review process is double-blind. Therefore, no author information should be 
included in the papers or the (optional) supplementary materials. 
Self-references that reveal the authors' identity must be avoided. Papers that 
do not conform to these requirements will be rejected without review.

** The submission sites on OpenReview **

Standard submissions: 
https://openreview.net/group?id=aclweb.org/AACL-IJCNLP/2023/Workshop/Eval4NLP&referrer=%5BHomepage%5D(%2F)
Pre-reviewed submissions: 
https://openreview.net/group?id=aclweb.org/AACL-IJCNLP/2023/Workshop/Eval4NLP_Previously_Reviewed&referrer=%5BHomepage%5D(%2F)

See below for more information on the two submission modes.

** Two submission modes: standard and pre-reviewed **

Eval4NLP features two submission modes. Standard submissions: We invite the 
submission of papers that will receive up to three double-blind reviews from 
the Eval4NLP committee and a final verdict from the workshop chairs. 
Pre-reviewed submissions: By a later deadline, we invite unpublished papers 
that have already been reviewed, either through ACL ARR or at recent 
AACL/EACL/ACL/EMNLP/COLING venues. These papers will not receive new reviews 
but will be judged together with their existing reviews via a meta-review; 
authors are invited to attach a note commenting on the reviews and describing 
possible revisions.

Final verdicts will be either accept, reject, or conditional accept, i.e., the 
paper is only accepted provided that specific (meta-)reviewer requirements have 
been met. Please also note the multiple submission policy.

** Optional Supplementary Materials **

Authors are allowed to submit (optional) supplementary materials (e.g., 
appendices, software, and data) to improve the reproducibility of results 
and/or to provide additional information that does not fit in the paper. All of 
the supplementary materials must be zipped into one single file (.tgz or .zip) 
and submitted via OpenReview together with the paper. However, because 
supplementary materials are completely optional, reviewers may or may not 
review or even download them, so the submitted paper should be fully 
self-contained.

** Preprints **

Papers uploaded to preprint servers (e.g., arXiv) can be submitted to the 
workshop. There is no deadline concerning when the papers were made publicly 
available. However, the version submitted to Eval4NLP must be anonymized, and 
we ask the authors not to update the preprints or advertise them on social 
media while they are under review at Eval4NLP.

** Multiple Submission Policy **

Eval4NLP allows authors to submit a paper that is under review at another venue 
(journal, conference, or workshop) or that will be submitted elsewhere during 
the Eval4NLP review period. However, the authors must withdraw the paper from 
all other venues if it is accepted and they want to publish it at Eval4NLP. 
Note that AACL and ARR do not allow double submissions. Hence, papers submitted 
both to the main conference and to AACL workshops (including Eval4NLP) will 
violate the multiple submission policy of the main conference. If authors would 
like to submit a paper under review by AACL to the Eval4NLP workshop, they need 
to withdraw their paper from AACL and submit it to our workshop before the 
workshop submission deadline.


** Best Paper Awards **

We will optionally award prizes to the best paper submissions (subject to 
availability; more details to come soon). Both long and short submissions will 
be eligible for prizes.

** Presenting Published Papers **

If you want to present a paper that has recently been published elsewhere 
(e.g., at another top-tier AI conference) at our workshop, you may send the 
details of your paper (title, authors, publication venue, abstract, and a 
link to download the paper) directly to [email protected]. We will select a 
few high-quality and relevant papers to present at Eval4NLP. This allows such 
papers to gain more visibility from the workshop audience and increases the 
variety of the workshop program. Note that the chosen papers are considered 
non-archival and will not be included in the workshop proceedings.

-----------------------------------------------------------------------------------------

Best wishes,

Eval4NLP organizers

Website: https://eval4nlp.github.io/2023/index.html
Email: [email protected]