[apologies for multiple cross-postings]

Dear NLP-researchers,

  We're glad to announce the release of the IQMT Framework for Machine
Translation Evaluation v1.0, an open-source software package released under
the GNU Lesser General Public License (LGPL) of the Free Software Foundation.
This tool is a joint effort by the UNED NLP & IR Research Group and the
TALP Research Center NLP group.

    The IQMT package (Giménez et al., IWSLT'2005) is based on the QARLA
Framework (Amigó et al., ACL'2005). Rather than defining yet another
supermetric, our tool follows a 'divide and conquer' strategy: you define a
set of metrics and then combine them into a single measure of MT quality, in
a robust and elegant manner that avoids scaling problems and manual metric
weighting.

    Using IQMT offers a number of advantages over previous MT evaluation
packages. First, individual metrics improve their correlation with human
judgements when applied inside QARLA. Second, it avoids the 'metric bias'
problem by letting you tune your system on a combination of metrics instead
of on a single metric. Third, it lets you define a set of fine-grained
metrics, each focusing on a partial aspect of MT quality, possibly at
different linguistic levels, and then combine them into a single measure.
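    For illustration, the kind of combination QARLA performs can be sketched
as a QUEEN-style probability: the fraction of reference triples for which,
under every metric in the set, the candidate translation is at least as
similar to one human reference as the other two references are to each other.
The snippet below is a toy sketch of this idea only, not the actual IQMT API;
the function names and the `unigram_overlap` stand-in metric are illustrative
assumptions:

```python
from itertools import permutations

def queen(candidate, references, metrics):
    """Toy QUEEN-style score: fraction of ordered reference triples
    (r, r1, r2) for which, under EVERY metric, the candidate is at
    least as similar to r as r1 is to r2.  (Sketch, not the IQMT API.)"""
    triples = list(permutations(references, 3))
    if not triples:
        return 0.0
    hits = sum(1 for r, r1, r2 in triples
               if all(m(candidate, r) >= m(r1, r2) for m in metrics))
    return hits / len(triples)

def unigram_overlap(a, b):
    """Illustrative lexical-overlap metric (a crude stand-in for
    BLEU, GTM, etc.): Jaccard similarity over word sets."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / max(len(sa | sb), 1)

refs = ["the cat sat on the mat",
        "a cat was sitting on the mat",
        "the cat is on the mat"]
score = queen("the cat sat on the mat", refs, [unigram_overlap])
print(round(score, 4))  # prints 0.6667
```

Because the combination is probabilistic (a count of wins over triples), the
individual metrics never need to be rescaled or weighted against each other.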

    Several well-known and freely available MT evaluation metrics, such as
BLEU, GTM, METEOR, NIST, ROUGE, WER and PER, have been incorporated so far.
We plan to incorporate new metrics working at linguistic levels beyond
lexical overlap in the near future.

    The main feature of the IQMT package, however, is that it allows you to
supply user-defined MT metrics. So do not hesitate to try your own metrics
inside QARLA!

  Please feel free to download IQMT v1.0 at http://www.lsi.upc.edu/~nlp/IQMT/ .

  Any feedback (comments, suggestions, bug reports) from the NLP community
will be highly appreciated.


   The IQMT development team


_______________________________________________
Mt-list mailing list
