We are pleased to announce a brand new Model Compression track
<https://www2.statmt.org/wmt25/model-compression.html> at WMT 2025
<https://www2.statmt.org/wmt25/index.html>.

This shared task aims to evaluate the potential of model compression
techniques to reduce the size of large, general-purpose language models
(LLMs), with the goal of achieving an optimal balance between practical
deployability and high translation quality in specific machine translation
(MT) scenarios. The task's broader objectives include fostering research
into efficient, accessible, and sustainable deployment of LLMs for MT;
establishing a common evaluation framework to monitor progress in model
compression across a wide range of languages; and enabling meaningful
comparisons with state-of-the-art MT systems through standardized
evaluation protocols that assess not only translation quality but also
efficiency.

Although the focus is on model compression, the task is closely aligned
with the General MT shared task
<https://www2.statmt.org/wmt25/translation-task.html>, sharing language
directions, test data, and protocols for automatic MT quality evaluation.
Additionally, the task follows the same timeline as the flagship WMT task.

We warmly invite participation from academic teams and industry players
interested in applying existing compression methods to MT or exploring
innovative, cutting-edge approaches.

THE TASK IN A NUTSHELL


   - Goal: Reduce the size of a general-purpose LLM while maintaining a
     balance between model compactness and MT performance.

   - Languages: The first round will focus on the same language pairs as
     the General MT track.

   - Conditions:
      - Constrained: Participants work within a predefined model and
        language setting for directly comparable results.
      - Unconstrained: Participants are free to compress any model across
        language directions of their choice.

   - Evaluation Criteria:
      - Translation quality: Automatically measured using the LLM-as-a-judge
        framework from the General MT task.
      - Model size: Defined by the model's memory usage.
      - Inference speed: Measured by total processing time over the test set.
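
To make the two efficiency criteria concrete, here is a minimal sketch of
how they might be estimated. This is only an illustration: the byte-per-weight
figures, the dummy translate function, and the parameter counts are
placeholders and not the official WMT25 evaluation protocol, which is defined
on the task website.

```python
import time

# ASSUMPTION: weights dominate the memory footprint; bytes per parameter
# for common storage dtypes (illustrative values, not the official metric).
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1, "int4": 0.5}

def model_size_gib(n_params: float, dtype: str) -> float:
    """Approximate memory usage of the model weights, in GiB."""
    return n_params * BYTES_PER_PARAM[dtype] / 1024**3

def timed_translation(translate, test_set):
    """Total wall-clock processing time over the whole test set, in seconds."""
    start = time.perf_counter()
    outputs = [translate(src) for src in test_set]
    return outputs, time.perf_counter() - start

# Example: a hypothetical 8B-parameter model quantized from fp16 to int4.
full = model_size_gib(8e9, "fp16")
compressed = model_size_gib(8e9, "int4")
print(f"size: {full:.1f} GiB -> {compressed:.1f} GiB")

# Dummy "translator" stands in for a real compressed MT model.
outputs, seconds = timed_translation(str.upper, ["hello", "world"])
print(f"{len(outputs)} segments in {seconds:.4f}s")
```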


IMPORTANT DATES

   - Test data released: 26th June 2025

   - Translation submission deadline: 3rd July 2025

   - System description abstracts: 10th July 2025

   - System description papers: 14th August 2025


WEBSITE:  https://www2.statmt.org/wmt25/model-compression.html

ORGANIZERS:

   - Marco Gaido, Fondazione Bruno Kessler

   - Matteo Negri, Fondazione Bruno Kessler

   - Roman Grundkiewicz, Microsoft Translator

   - TG Gowda, Microsoft Translator


CONTACTS:

   - Marco Gaido - [email protected]

   - Matteo Negri - [email protected]
