Dear all EAMT members,
I'm Toshiaki Nakazawa from JST (Japan Science and Technology Agency),
Japan. This is an announcement of a new open machine translation
evaluation campaign. If you are working on machine translation,
please join us.
Best regards,
---------------------------------------------------------------------------
WAT 2014
(The 1st Workshop on Asian Translation)
http://orchid.kuee.kyoto-u.ac.jp/WAT/
October 4, 2014, Tokyo, Japan
WAT is a new open evaluation campaign focusing on Asian languages. We
would like to invite a broad range of participants and conduct various
forms of machine translation experiments and evaluation. Collecting
and sharing our knowledge will allow us to understand the essence of
machine translation and the problems to be solved. We are working
toward the practical use of machine translation among all Asian
countries.
For the 1st WAT, we chose "scientific papers" as the target domain,
and Japanese, Chinese, and English as the languages.
TASK
----
The task is to improve the text translation quality for scientific
papers. Participants choose any of the subtasks in which they would
like to participate and translate the test data using their machine
translation systems. The WAT organizers will evaluate the submitted
results using both automatic and human evaluation. We will also
provide a baseline machine translation system.
Subtasks:
Japanese --> English
English --> Japanese
Japanese --> Chinese
Chinese --> Japanese
Dataset:
WAT uses ASPEC (Asian Scientific Paper Excerpt Corpus) as its
dataset, which includes training, development, development-test, and
test data. Participants must obtain a copy of ASPEC themselves from
http://orchid.kuee.kyoto-u.ac.jp/ASPEC/
Automatic evaluation:
We will prepare an automatic evaluation server. You will be able to
evaluate the translation results at any time using this server.
Human evaluation:
Human evaluation will be carried out using crowdsourcing. Participants
may submit translation results at most twice. Each submission is
evaluated sentence by sentence in pairwise comparison against the
baseline system: crowdsourcing workers are asked to judge which of the
two translations is better in terms of adequacy and fluency. All
systems will be ranked by the percentage of translations judged better
than the baseline system's.
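The ranking procedure above can be sketched as follows. This is a
minimal illustration with made-up judgment data, not the organizers'
actual scoring code; the function and system names are hypothetical.

```python
# Sketch of the ranking described above: each system is compared
# pairwise against the baseline, sentence by sentence, and systems
# are ranked by the percentage of sentences judged better than the
# baseline. Judgment labels and system names below are hypothetical.

def win_percentage(judgments):
    """judgments: list of 'win'/'tie'/'loss' labels vs. the baseline."""
    wins = sum(1 for j in judgments if j == "win")
    return 100.0 * wins / len(judgments)

# Hypothetical crowdsourced judgments for two participating systems.
systems = {
    "system-A": ["win", "win", "loss", "tie", "win"],   # 60% wins
    "system-B": ["loss", "win", "tie", "loss", "loss"], # 20% wins
}

# Rank systems by win percentage, best first.
ranking = sorted(systems, key=lambda s: win_percentage(systems[s]),
                 reverse=True)
print(ranking)  # ['system-A', 'system-B']
```

Ties and losses both count against a system here; only outright wins
over the baseline contribute to the ranking score.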
REGISTRATION
------------
The registration procedure for WAT 2014 will be announced
later. However, you can start developing and evaluating your MT
engines today using ASPEC. Registration will be free for all
participants.
IMPORTANT DATES
---------------
Crowdsourcing evaluation due July 31, 2014
System description draft paper due August 31, 2014
Review feedback September 7, 2014
Camera-ready paper due September 14, 2014
WAT 2014 October 4, 2014
PAPER
-----
Participants who submit results for human evaluation should also
submit papers describing their translation systems and evaluation
results.
We strongly encourage each paper to include a section entitled "Issues
for Context-aware Machine Translation" that discusses the importance
and usefulness of context.
ORGANIZERS
----------
Toshiaki Nakazawa (Japan Science and Technology Agency (JST))
Hideya Mino (National Institute of Information and Communications Technology
(NICT))
Isao Goto (Japan Broadcasting Corporation (NHK))
Eiichiro Sumita (National Institute of Information and Communications
Technology (NICT))
Sadao Kurohashi (Kyoto University)
CONTACT
-------
[email protected]
---------------------------------------------------------------------------
--
Toshiaki Nakazawa (Researcher)
Japan Science and Technology Agency (JST)
(@ Graduate School of Informatics, Kyoto University)
Yoshida-honmachi, Sakyo-ku, Kyoto, 606-8501, Japan
tel: +81-75-753-5346, fax: +81-75-753-5962
[email protected] / [email protected]