Third Workshop on Human Evaluation of NLP Systems (HumEval’23)
###############################################################

https://humeval.github.io/

RANLP’23, Varna, Bulgaria, 7 September 2023

submission deadline:  20 July 2023, 23:59 UTC-12




Final Call for Papers
++++++++++++++++++++++

The Third Workshop on Human Evaluation of NLP Systems (HumEval’23) invites
the submission of long and short papers on substantial, original, and
unpublished research on all aspects of human evaluation of NLP systems with
a focus on NLP systems which produce language as output. We welcome work on
any quality criteria relevant to NLP, on both intrinsic evaluation (which
assesses systems and outputs directly) and extrinsic evaluation (which
assesses systems and outputs indirectly, in terms of their impact on an
external task or system), on quantitative as well as qualitative methods,
and on score-based (discrete or continuous scores) as well as
annotation-based (marking, highlighting) approaches.


Important dates
----------------

    Workshop paper submission deadline: 20 July 2023
    Workshop paper acceptance notification: 5 August 2023
    Workshop paper camera-ready versions: 25 August 2023
    Workshop camera-ready proceedings ready: 31 August 2023

All deadlines are 23:59 UTC-12.



Topics
-------

We invite papers on topics including, but not limited to, the following:

    Experimental design and methods for human evaluations
    Reproducibility of human evaluations
    Work on inter-evaluator and intra-evaluator agreement
    Ethical considerations in human evaluation of computational systems
    Quality assurance for human evaluation
    Crowdsourcing for human evaluation
    Issues in meta-evaluation of automatic metrics by correlation with
human evaluations
    Alternative forms of meta-evaluation and validation of human evaluations
    Comparability of different human evaluations
    Methods for assessing the quality and the reliability of human
evaluations
    Role of human evaluation in the context of Responsible and Accountable
AI

We welcome work from any subfield of NLP (and ML/AI more generally), with a
particular focus on evaluation of systems that produce language as output.


ReproNLP shared task
---------------------

The workshop will also host the Shared Task on Reproducibility of
Evaluations in NLP (ReproNLP).


Papers
------

Long papers
- - - - - -

Long papers must describe substantial, original, completed and unpublished
work. Wherever appropriate, concrete evaluation and analysis should be
included. Long papers may consist of up to eight (8) pages of content, plus
unlimited pages of references. Final versions of long papers will be given
one additional page of content (up to 9 pages) so that reviewers’ comments
can be taken into account. Long papers will be presented orally or as
posters as determined by the programme committee. Decisions as to which
papers will be presented orally and which as posters will be based on the
nature rather than the quality of the work. There will be no distinction in
the proceedings between long papers presented orally and as posters.

Short papers
- - - - - - -

Short paper submissions must describe original and unpublished work. Short
papers should have a point that can be made in a few pages. Examples of
short papers are a focused contribution, a negative result, an opinion
piece, an interesting application nugget, a small set of interesting
results. Short papers may consist of up to four (4) pages of content, plus
unlimited pages of references. Final versions of short papers will be given
one additional page of content (up to 5 pages) so that reviewers’ comments
can be taken into account. Short papers will be presented orally or as
posters as determined by the programme committee. While short papers will
be distinguished from long papers in the proceedings, there will be no
distinction in the proceedings between short papers presented orally and as
posters.

Multiple submission policy
---------------------------

HumEval’23 allows multiple submissions. However, if a submission has
already been, or is planned to be, submitted to another event, this must be
clearly stated in the submission form.


Submission procedure and templates
-----------------------------------

To submit, go directly to the workshop page at the Softconf START system
https://softconf.com/ranlp23/HumEval/

Papers should follow the format of the main conference, described on the
Submissions page of the RANLP website:
http://ranlp.org/ranlp2023/index.php/submissions/


Organisers
-----------

Anya Belz, ADAPT Centre, Dublin City University, Ireland
Maja Popović, ADAPT Centre, Dublin City University, Ireland
Ehud Reiter, University of Aberdeen, UK
João Sedoc, New York University, USA
Craig Thomson, University of Aberdeen, UK

For questions and comments regarding the workshop please contact the
organisers at [email protected].