Hi,

you are on the right track.

> Suppose the following entry in an n-best list:
>
> 4 ||| así .  ||| d: 0 -0.619042 0 0 0 0 0 lm: -11.4288 tm: -4.84733
> -6.39323 -6.90676 -7.23185 0.999896 w: -2 ||| -2.40665
>
> * "4"
>     -> the number of the sentence
> * "así ."
>     -> the output sentence
> * "d: 0 -0.619042 0 0 0 0 0"

the first number is the distance-based distortion cost (total number of word
movements), the next six are the lexicalized reordering model scores:
- backward: monotone, swap, discontinuous
- forward: monotone, swap, discontinuous
So here the only non-zero value is the lexicalized reordering log-probability
of the first phrase being translated monotonically with respect to the
sentence start.
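
Just to make the format concrete, here is a minimal Python sketch that splits
such a line into its fields (the " ||| " separators are what Moses writes in
n-best output; the splitting logic itself is only an illustration):

    # Split a Moses n-best line into: id ||| hypothesis ||| scores ||| total
    line = ("4 ||| así .  ||| d: 0 -0.619042 0 0 0 0 0 lm: -11.4288 "
            "tm: -4.84733 -6.39323 -6.90676 -7.23185 0.999896 w: -2 ||| -2.40665")
    sent_id, hypothesis, scores, total = [f.strip() for f in line.split("|||")]

    # The d: block: distance-based cost, then the six lexicalized
    # reordering scores (backward/forward x monotone/swap/discontinuous)
    d_values = scores.split("lm:")[0].replace("d:", "").split()
    print(sent_id)       # '4'
    print(hypothesis)    # 'así .'
    print(d_values)      # ['0', '-0.619042', '0', '0', '0', '0', '0']
    print(float(total))  # -2.40665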

>     -> No idea ¿?
> * "lm: -11.4288"
>     -> I suppose "lm:" stands for language model (log probability).
>        Is this correct?

yes.

> * "tm: -4.84733 -6.39323 -6.90676 -7.23185 0.999896"
>      -> I suppose "tm:" stands for translation model. But to which
>         translation model does each value correspond?

Currently, five different phrase translation scores are computed:

    * phrase translation probability φ(f|e)
    * lexical weighting lex(f|e)
    * phrase translation probability φ(e|f)
    * lexical weighting lex(e|f)
    * phrase penalty

http://www.statmt.org/moses/?n=FactoredTraining.ScorePhrases
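
If it helps, the same in code: pairing the tm: values from the example entry
with the feature names listed above (assuming Moses writes them in exactly
that order):

    # Label the five tm: scores from the example n-best entry
    tm_names = ["phi(f|e)", "lex(f|e)", "phi(e|f)", "lex(e|f)", "phrase penalty"]
    tm_values = [-4.84733, -6.39323, -6.90676, -7.23185, 0.999896]
    for name, value in zip(tm_names, tm_values):
        print(f"{name:>14}: {value}")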

> * "w: -2"
>      -> I suppose "w:" stands for word penalty.

yes, the word penalty is the negative word count: there are two tokens
("así" and ".") in the output, hence -2.

> * "-2.40665"
>      -> weighted overall score.

yes.
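
To make it concrete: the overall score is the dot product of all the
individual feature values with the feature weights used by the decoder (the
ones from your moses.ini). The weights in this sketch are only hypothetical
placeholders, so the result will not match -2.40665 unless you plug in the
weights that were actually used for decoding:

    # Overall score = weighted sum of all feature values
    features = [0, -0.619042, 0, 0, 0, 0, 0,                      # d:
                -11.4288,                                          # lm:
                -4.84733, -6.39323, -6.90676, -7.23185, 0.999896,  # tm:
                -2]                                                # w:
    weights = [0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3,                  # hypothetical
               0.5,
               0.2, 0.2, 0.2, 0.2, 0.2,
               -1.0]
    total = sum(w * f for w, f in zip(weights, features))
    print(total)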

-phi
