Here is what I got. Does this make sense?


MT evaluation scorer began on 2015 Jul 20 at 23:27:39
command line: /home/moses/mosesdecoder/scripts/generic/mteval-v13a.pl -c -c -s /home/moses/working/data/dev/newstest2011-src.fr.sgm -r /home/moses/working/data/dev/newstest2011-ref.en.sgm -t /home/moses/working/evaluation/newstest2011.detokenized.sgm.3
   Evaluation of any-to-en translation using:
     src set "newstest2011" (110 docs, 3003 segs)
     ref set "newstest2011" (1 refs)
     tst set "newstest2011" (1 systems)

length ratio: 0.994844739625875 (74296/74681), penalty (log): -0.00518197480348868
NIST score = 6.8964  BLEU score = 0.2268 for system "Edinburgh"

# ------------------------------------------------------------------------

Individual N-gram scoring
         1-gram   2-gram   3-gram   4-gram   5-gram   6-gram   7-gram   8-gram   9-gram
         ------   ------   ------   ------   ------   ------   ------   ------   ------
  NIST:  5.2752   1.3399   0.2499   0.0273   0.0041   0.0005   0.0000   0.0000   0.0000  "Edinburgh"

  BLEU:  0.5883   0.2887   0.1636   0.0972   0.0589   0.0364   0.0230   0.0146   0.0093  "Edinburgh"

# ------------------------------------------------------------------------
Cumulative N-gram scoring
         1-gram   2-gram   3-gram   4-gram   5-gram   6-gram   7-gram   8-gram   9-gram
         ------   ------   ------   ------   ------   ------   ------   ------   ------
  NIST:  5.2752   6.6151   6.8650   6.8923   6.8964   6.8969   6.8970   6.8970   6.8970  "Edinburgh"

  BLEU:  0.5853   0.4100   0.3013   0.2268   0.1730   0.1333   0.1037   0.0811   0.0637  "Edinburgh"
MT evaluation scorer ended on 2015 Jul 20 at 23:28:01
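For what it's worth, the numbers above are internally consistent: cumulative BLEU-n is the brevity penalty times the geometric mean of the first n individual n-gram precisions, which is also why the cumulative 1-gram figure (0.5853) is slightly below the individual 1-gram precision (0.5883). A minimal sketch checking this against the log (the standard BLEU formula, not the mteval script itself):

```python
import math

# Individual n-gram precisions and log brevity penalty, copied from the log above
precisions = [0.5883, 0.2887, 0.1636, 0.0972]
log_bp = -0.00518197480348868

def cumulative_bleu(precisions, log_bp, n):
    """BLEU-n = exp(log BP) * exp(mean of log p_1..p_n)."""
    mean_log_p = sum(math.log(p) for p in precisions[:n]) / n
    return math.exp(log_bp) * math.exp(mean_log_p)

for n in range(1, 5):
    print(f"BLEU-{n}: {cumulative_bleu(precisions, log_bp, n):.4f}")
# Reproduces the cumulative row: 0.5853, 0.4100, 0.3013, 0.2268
```

So the headline BLEU of 0.2268 is just the cumulative 4-gram score, and the numbers look entirely plausible for fr-en on newstest2011.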
_______________________________________________
Moses-support mailing list
[email protected]
http://mailman.mit.edu/mailman/listinfo/moses-support
