My bad, the Europarl corpus was commented out in config.basic.
I need to re-run it.
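
For what it's worth, with the standard EMS setup the re-run is a single command once the [CORPUS:europarl] section is uncommented; a minimal sketch, where the paths are assumptions about the layout:

    # re-run the experiment after editing config.basic
    ~/mosesdecoder/scripts/ems/experiment.perl -config config.basic -exec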


On 22/07/2015 15:23, Vincent Nguyen wrote:

Shouldn't the BLEU score be more in the 50s for a test set close to the corpus? By "real text" I meant that I have a corpus of translations (French to English) made by translators, which is typically the kind of text I would like to test Moses on.

So my question is: should I use these texts to 1) train or 2) tune my model?
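
For illustration, option 2) would mean holding these texts out of training and using them as a tuning set; a minimal sketch, with hypothetical file names and an arbitrary split size:

    # hold out 2000 sentence pairs for tuning, train on the rest
    head -n 2000 translations.fr > tune.fr
    head -n 2000 translations.en > tune.en
    tail -n +2001 translations.fr > train-extra.fr
    tail -n +2001 translations.en > train-extra.en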

Also, in terms of the language model: can we make it evolve with new texts so that it improves over time?
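
As a sketch of one simple approach, the target-side LM can be rebuilt periodically over old plus new text with KenLM, which ships with Moses (file names here are placeholders):

    # retrain a 5-gram LM on the combined text and binarize it
    cat old.en new.en | ~/mosesdecoder/bin/lmplz -o 5 > lm.en.arpa
    ~/mosesdecoder/bin/build_binary lm.en.arpa lm.en.blm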





On 22/07/2015 14:28, Hieu Hoang wrote:
It looks OK; your BLEU score is 22.68 for this test set.

I don't know what you mean by real text.
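
As a quick sanity check, the same hypothesis can also be scored with multi-bleu.perl from the Moses scripts on tokenized plain text (file names are placeholders):

    # expects one tokenized sentence per line in both files
    ~/mosesdecoder/scripts/generic/multi-bleu.perl reference.tok.en < hypothesis.tok.en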

Hieu Hoang
Researcher
New York University, Abu Dhabi
http://www.hoang.co.uk/hieu

On 21 July 2015 at 23:45, Vincent Nguyen <[email protected]> wrote:

    Here is what I got.

    Does this make sense?


    MT evaluation scorer began on 2015 Jul 20 at 23:27:39
    command line:
    /home/moses/mosesdecoder/scripts/generic/mteval-v13a.pl -c -s /home/moses/working/data/dev/newstest2011-src.fr.sgm -r /home/moses/working/data/dev/newstest2011-ref.en.sgm -t /home/moses/working/evaluation/newstest2011.detokenized.sgm.3
       Evaluation of any-to-en translation using:
         src set "newstest2011" (110 docs, 3003 segs)
         ref set "newstest2011" (1 refs)
         tst set "newstest2011" (1 systems)

    length ratio: 0.994844739625875 (74296/74681), penalty (log):
    -0.00518197480348868
    NIST score = 6.8964  BLEU score = 0.2268 for system "Edinburgh"

    # ------------------------------------------------------------------------
    Individual N-gram scoring
             1-gram   2-gram   3-gram   4-gram   5-gram   6-gram   7-gram   8-gram   9-gram
             ------   ------   ------   ------   ------   ------   ------   ------   ------
      NIST:  5.2752   1.3399   0.2499   0.0273   0.0041   0.0005   0.0000   0.0000   0.0000   "Edinburgh"
      BLEU:  0.5883   0.2887   0.1636   0.0972   0.0589   0.0364   0.0230   0.0146   0.0093   "Edinburgh"

    # ------------------------------------------------------------------------
    Cumulative N-gram scoring
             1-gram   2-gram   3-gram   4-gram   5-gram   6-gram   7-gram   8-gram   9-gram
             ------   ------   ------   ------   ------   ------   ------   ------   ------
      NIST:  5.2752   6.6151   6.8650   6.8923   6.8964   6.8969   6.8970   6.8970   6.8970   "Edinburgh"
      BLEU:  0.5853   0.4100   0.3013   0.2268   0.1730   0.1333   0.1037   0.0811   0.0637   "Edinburgh"
    MT evaluation scorer ended on 2015 Jul 20 at 23:28:01
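
For reference, the reported 0.2268 is the cumulative 4-gram BLEU: the brevity penalty times the geometric mean of the 1- to 4-gram precisions above, i.e. exp(-0.00518) * exp((ln 0.5883 + ln 0.2887 + ln 0.1636 + ln 0.0972) / 4) ≈ 0.2268.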

_______________________________________________
Moses-support mailing list
[email protected]
http://mailman.mit.edu/mailman/listinfo/moses-support
