*****
*1st Workshop on Automated Evaluation of Learning and Assessment Content*
AIED 2024 workshop |  Recife (Brazil) & Hybrid | 8-12 July 2024
https://sites.google.com/view/eval-lac-2024/
*****

We are happy to announce that the first edition of the Workshop on
Automated Evaluation of Learning and Assessment Content will be held in
Recife (Brazil) & online during the AIED 2024 conference.

*About the workshop*
The evaluation of learning and assessment content has always been a crucial
task in the educational domain, but traditional approaches based on human
feedback are not always usable in modern educational settings. Indeed, the
advent of machine learning models, in particular Large Language Models
(LLMs), has made it possible to quickly and automatically generate large
quantities of text, making human evaluation unfeasible. Still, these texts
are used in the educational domain -- e.g., as questions, hints, or even to
score and assess students -- and thus the need for accurate and automated
evaluation techniques becomes pressing. This hybrid workshop aims to
attract professionals from both academia and industry, and to offer an
opportunity to discuss the common challenges in evaluating learning and
assessment content in education.

Topics of interest include but are not limited to:

   - Question evaluation (e.g., in terms of alignment to learning
   objectives, factual accuracy, language level, cognitive validity, etc.).
   - Estimation of question statistics (e.g., difficulty, discrimination,
   response time, etc.).
   - Evaluation of distractors in Multiple Choice Questions.
   - Evaluation of reading passages in reading comprehension questions.
   - Evaluation of lectures and course material.
   - Evaluation of learning paths (e.g., in terms of prerequisites and
   topics taught before a specific exam).
   - Evaluation of educational recommendation systems (e.g., personalised
   curricula).
   - Evaluation of hints and scaffolding questions, as well as their
   adaptation to different students.
   - Evaluation of automatically generated feedback provided to students.
   - Evaluation of techniques for automated scoring.
   - Evaluation of bias in educational content and LLM outputs.

Human-in-the-loop approaches are welcome, provided that the evaluation also
includes an automated component and the proposed approach focuses on
scalability. Papers on generation are also very welcome, as long as they
place an extensive focus on the evaluation step.


*Important dates*
Submission deadline: May 17, 2024
Notification of acceptance: June 4, 2024
Camera ready: June 11, 2024
Workshop: 8 July or 12 July 2024

*Submission guidelines*
Authors are invited to submit short papers (5 pages, excluding references)
and long papers (10 pages, excluding references), formatted according to
the workshop style available on the website.

Submissions should contain mostly novel work, but some overlap with work
submitted elsewhere is allowed (e.g., summaries, or a focus on the
evaluation phase of a broader work). Each submission will be reviewed by
members of the Program Committee, and the proceedings volume will be
submitted for publication to CEUR Workshop Proceedings.

*Organisers*
Luca Benedetto (1), Andrew Caines (1), George Dueñas (2), Diana Galvan-Sosa
(1), Anastassia Loukina (3), Shiva Taslimipoor (1), Torsten Zesch (4)

(1) ALTA Institute, Dept. of Computer Science and Technology, University of
Cambridge
(2) National Pedagogical University, Colombia
(3) Grammarly, Inc.
(4) FernUniversität in Hagen