Felipe Sánchez Martínez wrote:

> * Does SRILM introduce begin-of-sentence and end-of-sentence tokens
> during training?

Yes, by default I believe - see the -no-sos and -no-eos switches.

> * and, during scoring (or decoding)?

I don't think Moses adds them - it can't know how you trained the LM.
We add them ourselves, and tell SRILM not to add them.  (We get some
small gain in BLEU by doing this, by the way.)
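The "add them ourselves" step is just a preprocessing pass over the tokenized input before it reaches the toolkit. A minimal sketch (the function name is mine, not Moses or SRILM code):

```python
def add_boundaries(sentence):
    """Wrap a tokenized sentence in explicit boundary tokens.

    Intended for pipelines where the LM toolkit is told *not* to
    insert its own tokens (e.g. SRILM run with -no-sos -no-eos).
    """
    return "<s> " + sentence.strip() + " </s>"

print(add_boundaries("the cat sat"))  # <s> the cat sat </s>
```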

> * Does IRSTLM introduce begin-of-sentence and end-of-sentence tokens
> during scoring (or decoding)?

No, unless this has recently changed.

> if I introduce <s> and </s> when scoring with IRSTLM I get a log prob
> of -55.3099 (very similar to that of SRILM).

This makes sense, given the above.
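To see why the totals only line up when both toolkits follow the same convention, here is a toy bigram model (all probabilities invented) scored with and without explicit boundary tokens:

```python
# Hand-picked log10 bigram probabilities; "" stands for "no context"
# when boundary tokens are omitted.  All values are invented.
LOGPROB = {
    ("<s>", "the"): -0.5,
    ("", "the"): -1.5,
    ("the", "cat"): -1.0,
    ("cat", "</s>"): -0.7,
}

def score(tokens, use_boundaries):
    """Sum log10 probabilities of each token given its predecessor."""
    if use_boundaries:
        tokens = ["<s>"] + tokens + ["</s>"]
        prev, rest = tokens[0], tokens[1:]
    else:
        prev, rest = "", tokens
    total = 0.0
    for tok in rest:
        total += LOGPROB[(prev, tok)]
        prev = tok
    return total

print(score(["the", "cat"], use_boundaries=False))  # ~ -2.5
print(score(["the", "cat"], use_boundaries=True))   # ~ -2.2
```

With boundaries the model scores one extra event (`</s>` given the last word) and conditions the first word on `<s>` instead of nothing, so the two totals differ - which is exactly why scoring the same text under mismatched conventions gives incomparable log probs.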

Some of the remaining discrepancy might be explained by the fact that
you trained the SRILM model with Kneser-Ney discounting, while IRSTLM
uses Witten-Bell by default.  This doesn't seem sufficient to
completely explain the discrepancy, though.
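As a rough illustration of how much the smoothing choice alone can move individual probabilities, compare a Witten-Bell estimate with a plain absolute-discount estimate (the discounting step at the heart of Kneser-Ney) on made-up counts. This is a sketch of the textbook formulas, not either toolkit's implementation:

```python
from collections import Counter

# Invented bigram counts for a single context h (say, h = "the").
counts = Counter({"cat": 3, "dog": 1})
c_h = sum(counts.values())   # 4 total observations of the context
T_h = len(counts)            # 2 distinct continuation types

def witten_bell(w):
    # Witten-Bell: seen events get c(h,w) / (c(h) + T(h));
    # the held-out mass T(h) / (c(h) + T(h)) goes to the backoff.
    return counts[w] / (c_h + T_h)

D = 0.75       # a typical absolute-discount constant
p_lower = 0.1  # hypothetical lower-order probability of w

def absolute_discount(w):
    # Subtract D from each seen count, then redistribute the freed
    # mass (D * T(h) / c(h)) through the lower-order distribution.
    lam = D * T_h / c_h
    return max(counts[w] - D, 0.0) / c_h + lam * p_lower

print(witten_bell("cat"))        # 0.5
print(absolute_discount("cat"))  # ~ 0.6
```

Per-token differences on this scale accumulate over a whole sentence, so the two defaults alone can shift the total log prob noticeably, even if not by enough to account for the whole gap.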

- John D. Burger
   MITRE


_______________________________________________
Moses-support mailing list
[email protected]
http://mailman.mit.edu/mailman/listinfo/moses-support
