*1st Workshop on Automated Evaluation of Learning and Assessment Content*

AIED 2024 workshop | Recife (Brazil) & Hybrid | 8 July 2024

https://sites.google.com/view/eval-lac-2024/


Important dates

[Extended] Submission deadline: *May 22, 2024*

Notification of acceptance: June 4, 2024

Camera ready: June 11, 2024

Workshop: July 8, 2024


About the workshop

The evaluation of learning and assessment content has always been a crucial
task in the educational domain, but traditional approaches based on human
feedback are not always usable in modern educational settings. Indeed, the
advent of machine learning models, in particular Large Language Models
(LLMs), has made it possible to quickly and automatically generate large
quantities of text, making human evaluation unfeasible. Still, these texts
are used in the educational domain -- e.g., as questions, hints, or even to
score and assess students -- and thus the need for accurate and automated
evaluation techniques becomes pressing. This hybrid workshop aims to attract
professionals from both academia and industry, and to offer an opportunity
to discuss the common challenges in evaluating learning and assessment
content in education.


Topics of interest include but are not limited to:

- Question evaluation (e.g., in terms of alignment to learning objectives,
  factual accuracy, language level, cognitive validity, etc.).
- Estimation of question statistics (e.g., difficulty, discrimination,
  response time, etc.).
- Evaluation of distractors in Multiple Choice Questions.
- Evaluation of reading passages in reading comprehension questions.
- Evaluation of lectures and course material.
- Evaluation of learning paths (e.g., in terms of prerequisites and topics
  taught before a specific exam).
- Evaluation of educational recommendation systems (e.g., personalised
  curricula).
- Evaluation of hints and scaffolding questions, as well as their
  adaptation to different students.
- Evaluation of automatically generated feedback provided to students.
- Evaluation of techniques for automated scoring.
- Evaluation of bias in educational content and LLM outputs.

Human-in-the-loop approaches are welcome, provided that the evaluation also
includes an automated component and the proposed approach addresses
scalability. Papers on generation are also very welcome, as long as there
is an extensive focus on the evaluation step.


*Submission URL:*

https://easychair.org/conferences/?conf=evallac2024



For more information about the workshop please visit the website:
https://sites.google.com/view/eval-lac-2024/


