On Fri, Jun 19, 2015 at 11:28 AM, Read, James C <jcr...@essex.ac.uk> wrote:

> What I take issue with is the en-masse denial that there is a problem with
> the system if it behaves in such a way with no LM + no pruning and/or
> tuning.

There is no mass denial taking place.

Regardless of whether or not you tune, the decoder will do its best to find
translations with the highest model score. That is the expected behavior.
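To make "highest model score" concrete, here is a toy sketch (not Moses code; the feature names, values, and weights are made up) of what the decoder is doing: it ranks candidate hypotheses by a weighted sum of feature values and returns the top one, whether or not that hypothesis reads well.

    # Toy illustration, not the Moses decoder. Feature names and weights
    # are hypothetical.
    def model_score(features, weights):
        """Model score = weighted sum of feature values for one hypothesis."""
        return sum(weights[name] * value for name, value in features.items())

    weights = {"phrase_tm": 1.0, "word_penalty": -0.5}  # hypothetical weights

    hypotheses = [
        {"text": "the house is small",
         "features": {"phrase_tm": -2.1, "word_penalty": 4}},
        {"text": "house small the is",
         "features": {"phrase_tm": -1.0, "word_penalty": 4}},
    ]

    # The decoder simply returns the highest-scoring hypothesis, regardless
    # of whether that hypothesis is actually a good translation.
    best = max(hypotheses, key=lambda h: model_score(h["features"], weights))
    print(best["text"])  # here the garbled hypothesis wins on model score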

What I have tried to tell you, and what other people have tried to tell
you, is that translations with high model scores are not necessarily good
translations.

We all want our models to be such that high model scores correspond to good
translations, and that low model scores correspond to bad translations.
But unfortunately, our models do not innately have this characteristic. We
all know this. We also know a good way to deal with this shortcoming,
namely tuning. Tuning is the process by which we attempt to ensure that
high model scores correspond to high quality translations, and that low
model scores correspond to low quality translations.
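As a rough illustration of that idea (this is not MERT or MIRA as shipped with Moses; the random search, data layout, and metric interface below are assumptions made for the sketch): tuning searches over feature weights, and each candidate weight vector is judged by an external quality metric on the 1-best outputs it produces for a dev set, not by the model score itself.

    # Toy tuning sketch, assuming an n-best list per dev sentence and an
    # external metric (e.g. BLEU against references) supplied by the caller.
    import random

    def model_score(features, weights):
        # Model score is still just a weighted sum of feature values.
        return sum(w * f for w, f in zip(weights, features))

    def tune(dev_nbest_lists, n_features, metric, iterations=1000):
        """Keep the weight vector whose 1-best outputs score highest
        under the quality metric on the dev set."""
        best_weights, best_quality = None, float("-inf")
        for _ in range(iterations):
            weights = [random.uniform(-1.0, 1.0) for _ in range(n_features)]
            one_best = [max(nbest,
                            key=lambda h: model_score(h["features"], weights))
                        for nbest in dev_nbest_lists]
            # Judge the weights by translation quality, not by model score.
            quality = metric(one_best)
            if quality > best_quality:
                best_weights, best_quality = weights, quality
        return best_weights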

If you can design models that naturally correspond with translation quality
without tuning, that's great. If you can do that, you've got a great shot
at winning a Best Paper award at ACL.

In the meantime, you may want to consider an apology for your rude behavior
and unprofessional attitude.

Goodbye.
Lane