(Apologies for cross-posting)

CALL FOR PARTICIPATION

in the

Fourth Automatic Post-Editing (APE) shared task

at the Third Conference on Machine Translation (WMT18)

--------------------------------------------------------------------

OVERVIEW

The fourth round of the APE shared task follows the success of the previous
three rounds organised in 2015, 2016 and 2017. The aim is to examine
automatic methods for correcting errors produced by an unknown machine
translation (MT) system. This has to be done by exploiting knowledge
acquired from human post-edits, which are provided as training material.


As in the last round, this year the task focuses on the Information
Technology domain for the English-German language direction. One novelty,
however, is the addition of a second MT system: this year the task will
cover MT output generated by both a phrase-based statistical system (PBSMT)
and a neural MT system (NMT). In both cases, the source sentences have been
translated into the target language by an MT system unknown to the
participants (in terms of system configuration) and then manually
post-edited by professional translators.


At the training stage, the collected human post-edits have to be used to
learn correction rules for the APE systems. At the test stage, they will be
used for system evaluation with automatic metrics (TER and BLEU).

--------------------------------------------------------------------

GOALS

The aim of the APE task is to improve MT output in black-box scenarios, in
which the MT system is used "as is" and cannot be modified. From the
application point of view, APE components would make it possible to:

   - Cope with systematic errors of an MT system whose decoding process is
     not accessible;
   - Provide professional translators with improved MT output quality to
     reduce (human) post-editing effort;
   - Adapt the output of a general-purpose system to the lexicon/style
     requested in a specific application domain.


--------------------------------------------------------------------

DATA & EVALUATION


Training, development and test data consist of English-German triplets
(source, target, and post-edit) belonging to the IT domain, and are already
tokenized. All data is provided by the EU project QT21 (http://www.qt21.eu/).
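
As a purely illustrative sketch, the triplets could be loaded along the
following lines in Python (the file names train.src, train.mt and train.pe
follow common APE-task conventions and are assumptions, not details
confirmed by this announcement):

  # Hypothetical loader for (source, MT output, post-edit) triplets.
  # File naming (train.src / train.mt / train.pe) is an assumption.
  def load_triplets(prefix):
      with open(prefix + ".src", encoding="utf-8") as src, \
           open(prefix + ".mt", encoding="utf-8") as mt, \
           open(prefix + ".pe", encoding="utf-8") as pe:
          return list(zip((line.rstrip("\n") for line in src),
                          (line.rstrip("\n") for line in mt),
                          (line.rstrip("\n") for line in pe)))

  triplets = load_triplets("train")
  en_source, de_mt, de_post_edit = triplets[0]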

Systems' performance will be evaluated with respect to their capability to
reduce the distance between an automatic translation and its human-revised
version. This distance will be measured in terms of TER, computed between
automatic and human post-edits in case-sensitive mode. BLEU will also be
taken into consideration as a secondary evaluation metric.
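
For illustration only (this is not the official scoring script), the two
metrics could be computed with the sacrebleu library roughly as follows;
the file names are hypothetical placeholders:

  # Sketch of the evaluation setup, assuming sacrebleu's TER/BLEU
  # implementations; the official WMT scorer may differ.
  from sacrebleu.metrics import BLEU, TER

  with open("system_output.txt", encoding="utf-8") as f:
      hyps = [line.strip() for line in f]    # APE system outputs
  with open("human_post_edits.txt", encoding="utf-8") as f:
      refs = [line.strip() for line in f]    # human post-edits

  ter = TER(case_sensitive=True)   # primary metric: case-sensitive TER
  bleu = BLEU()                    # secondary metric (case-sensitive by default)

  print(ter.corpus_score(hyps, [refs]))    # lower is better
  print(bleu.corpus_score(hyps, [refs]))   # higher is better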

To gain further insights into final output quality, a subset of the outputs
of the submitted systems will also be manually evaluated.

--------------------------------------------------------------------

DIFFERENCES FROM THE 2017 ROUND OF THE APE TASK

Compared to the third round, the main differences are:

   - A larger data set (the eSCAPE corpus);
   - An additional MT system (neural-based).

--------------------------------------------------------------------

IMPORTANT DATES

Release of training data: February 16, 2018
Release of test data: May 4, 2018
Submission deadline: June 4, 2018
Paper submission deadline: July 27, 2018 (TBC)
Manual evaluation: TBD
Notification of acceptance: August 18, 2018 (TBC)
Camera-ready deadline: August 31, 2018 (TBC)

-- 
On behalf of
The APE task organizers
