*LREC 2016 Workshop*
*Translation evaluation: From fragmented tools and data sets to an
integrated ecosystem*
*24 May 2016, Portorož, Slovenia*
*http://www.cracking-the-language-barrier.eu/mt-eval-workshop-2016/*
*Deadline for submissions: 15 February 2016*
This workshop takes an in-depth look at an area of ever-increasing
importance: approaches, tools and data support for the evaluation of
human translation (HT) and machine translation (MT), with a focus on MT.
Two clear trends have emerged over the past several years. The first
trend involves standardising evaluations in research through large
shared tasks in which actual translations are compared to reference
translations using automatic metrics and/or human ranking. The second
trend focuses on achieving high-quality translations with the help of
increasingly complex data sets that contain many levels of annotation
based on sophisticated quality metrics, often organised in the context
of smaller shared tasks. In industry, we also observe an increased
interest in workflows for high-quality outbound translation that
combine Translation Memory (TM), Machine Translation and post-editing.
In stark contrast to this trend towards quality translation (QT), with
its inherent complexity, the data and tooling landscapes remain
heterogeneous, uncoordinated and non-interoperable.
The event will bring together MT and HT researchers, users and
providers of tools, and users and providers of the manual and automatic
evaluation methodologies currently used to evaluate HT and MT systems.
The key objective of the workshop is to initiate a dialogue and to
discuss whether the current approach, involving a diverse and
heterogeneous set of data, tools and evaluation methodologies, is fit
for purpose, or whether the community should instead collaborate
towards building an integrated ecosystem that provides better and more
sustainable access to data sets, evaluation workflows, approaches and
metrics, as well as to supporting processes such as annotation and
ranking.
The workshop is meant to stimulate a dialogue about the commonalities
and differences of the existing solutions in three areas: (1) tools,
(2) methodologies and (3) data sets. A key question concerns the
trade-off between flexibility and interoperability: heterogeneous
approaches offer a high level of flexibility but little
interoperability, whereas a homogeneous approach would provide higher
interoperability at the cost of flexibility. How much flexibility and
interoperability does the MT/HT research community need? How much does
it want?
*TOPICS OF INTEREST INCLUDE BUT ARE NOT LIMITED TO*
• MT/HT evaluation methodologies (incl. scoring mechanisms, integrated
metrics)
• Benchmarks for MT evaluation
• Data and annotation formats for the evaluation of MT/HT
• Workbenches, tools, technologies for the evaluation of MT/HT (incl.
specialised workflows)
• Integration of MT, TM and terminology in industrial evaluation scenarios
• Evaluation ecosystems
• Annotation concepts such as MQM and DQF, and their implementation in
MT evaluation processes
We invite contributions on the topics mentioned above and any related
topics of interest. The workshop website
<http://www.cracking-the-language-barrier.eu/mt-eval-workshop-2016/> provides
some additional information.
*IMPORTANT DATES*
• Publication of the call for papers: 10 December 2015
• Submissions due: 15 February 2016
• Notification of acceptance: 1 March 2016
• Final version of accepted papers: 31 March 2016
• Final programme and online proceedings: 15 April 2016
• Workshop: 24 May 2016 (this event will be a full-day workshop)
*SUBMISSION*
Please submit your papers at
https://www.softconf.com/lrec2016/MTEVAL/ before the deadline of 15
February 2016. Accepted papers will be presented as oral presentations
or as posters. All accepted papers will be published in the workshop
proceedings.
Papers should be formatted according to the stylesheet, which will soon
be provided on the LREC 2016 website, and should not exceed 8 pages,
including references and appendices. Papers should be submitted in PDF
format through the URL mentioned above.
When submitting a paper, authors will be asked to provide essential
information about resources (in a broad sense, i.e., also technologies,
standards, evaluation kits, etc.) that have been used in the work
described in the paper or that are a new result of their research.
Moreover, ELRA encourages all LREC authors to share the described LRs
(data, tools, services, etc.) to enable their reuse and the
replicability of experiments (including evaluations).
*PROGRAMME COMMITTEE*
Nora Aranberri, University of the Basque Country, Spain
Ondrej Bojar, Charles University in Prague, Czech Republic
Aljoscha Burchardt, Deutsches Forschungszentrum für Künstliche
Intelligenz (DFKI), Germany
Christian Dugast, Deutsches Forschungszentrum für Künstliche Intelligenz
(DFKI), Germany
Marcello Federico, Fondazione Bruno Kessler (FBK), Italy
Christian Federmann, Microsoft, USA
Rosa Gaudio, Higher Functions, Portugal
Josef van Genabith, Deutsches Forschungszentrum für Künstliche
Intelligenz (DFKI), Germany
Barry Haddow, University of Edinburgh, UK
Jan Hajic, Charles University in Prague, Czech Republic
Kim Harris, text&form, Germany
Matthias Heyn, SDL, Belgium
Philipp Koehn, Johns Hopkins University, USA, and University of
Edinburgh, UK
Christian Lieske, SAP, Germany
Lena Marg, Welocalize, UK
Katrin Marheinecke, text&form, Germany
Matteo Negri, Fondazione Bruno Kessler (FBK), Italy
Martin Popel, Charles University in Prague, Czech Republic
Jörg Porsiel, Volkswagen AG, Germany
Georg Rehm, Deutsches Forschungszentrum für Künstliche Intelligenz
(DFKI), Germany
Rubén Rodriguez de la Fuente, PayPal, Spain
Lucia Specia, University of Sheffield, UK
Marco Turchi, Fondazione Bruno Kessler (FBK), Italy
Hans Uszkoreit, Deutsches Forschungszentrum für Künstliche Intelligenz
(DFKI), Germany
*http://www.cracking-the-language-barrier.eu/mt-eval-workshop-2016/*
This workshop is a joint activity of the EU projects QT21 and CRACKER.
/– apologies for cross-posting –/