The decoder's job is NOT to find the highest-quality translation (as measured by BLEU). Its job is to find translations with high model score.
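For concreteness, here is a minimal sketch of that model score, i.e. the weighted sum of feature scores the decoder maximizes. The feature names, values and weights below are made up for illustration; they are not real moses.ini settings.

    # Hypothetical feature scores h_i(e, f) for one candidate translation e of
    # a source sentence f (log-probabilities; in Moses they come from the
    # phrase table, language model, penalties, ...).
    features = {
        "p(e|f)": -1.2,        # direct translation model
        "p(f|e)": -0.9,        # inverse translation model
        "lm":     -4.5,        # language model
        "word_penalty": -3.0,
    }

    # Weights lambda_i. Tuning (e.g. MERT) picks these on a dev set so that a
    # high weighted score tends to mean a high-BLEU translation; untuned or
    # default weights give no such guarantee.
    weights = {"p(e|f)": 0.2, "p(f|e)": 0.2, "lm": 0.5, "word_penalty": -1.0}

    # The decoder simply returns the hypothesis with the largest such sum.
    model_score = sum(weights[k] * features[k] for k in features)
    print(model_score)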
You need the tuning to make sure high-quality translation correlates with high model score. If you don't tune, it's pot luck what quality you get. You should tune with the features you use.

Hieu Hoang
Researcher
New York University, Abu Dhabi
http://www.hoang.co.uk/hieu

On 17 June 2015 at 21:52, Read, James C <jcr...@essex.ac.uk> wrote:
> The analogy doesn't seem to be helping me understand just how exactly it
> is a desirable quality of a TM to
>
> a) completely break down if no LM is used (thank you for showing that such
> is not always the case)
> b) be dependent on a tuning step to help it find the higher-scoring
> translations
>
> What you seem to be essentially saying is that the TM cannot find the
> higher-scoring translations because I didn't pre-tune the system to do so.
> And I am supposed to accept that such is a desirable quality of a system
> whose very job is to find the higher-scoring translations.
>
> Further, I am still unclear which features you require a system to be
> tuned on. At the very least it seems that I have discovered the selection
> process that tuning seems to be making up for in some unspecified and
> altogether opaque way.
>
> James
>
> ________________________________________
> From: Hieu Hoang <hieuho...@gmail.com>
> Sent: Wednesday, June 17, 2015 8:34 PM
> To: Read, James C; Kenneth Heafield; moses-support@mit.edu
> Cc: Arnold, Doug
> Subject: Re: [Moses-support] Major bug found in Moses
>
> 4 BLEU is nothing to sniff at :) I was answering Ken's tangential assertion
> that LMs are needed for tuning.
>
> I have some sympathy for you. You're looking at ways to improve
> translation by reducing the search space. I've bashed my head against
> this wall for a while as well without much success.
>
> However, as everyone is telling you, you haven't understood the role of
> tuning. Without tuning, you're pointing your lab rat to some random part
> of the search space, instead of away from the furry animal with whiskers
> and towards the yellow cheesy thing.
>
> On 17/06/2015 20:45, Read, James C wrote:
> > Doesn't look like the LM is contributing all that much then, does it?
> >
> > James
> >
> > ________________________________________
> > From: moses-support-boun...@mit.edu <moses-support-boun...@mit.edu> on
> > behalf of Hieu Hoang <hieuho...@gmail.com>
> > Sent: Wednesday, June 17, 2015 7:35 PM
> > To: Kenneth Heafield; moses-support@mit.edu
> > Subject: Re: [Moses-support] Major bug found in Moses
> >
> > On 17/06/2015 20:13, Kenneth Heafield wrote:
> >> I'll bite.
> >>
> >> The moses.ini files ship with bogus feature weights. One is required to
> >> tune the system to discover good weights for their system. You did not
> >> tune. The results of an untuned system are meaningless.
> >>
> >> So, for example, if the feature weights are all zeros, then the scores
> >> are all zero. The system will arbitrarily pick some awful translation
> >> from a large space of translations.
> >>
> >> The filter looks at one feature, p(target | source). So now you've
> >> constrained the awful untuned model to a slightly better region of the
> >> search space.
> >>
> >> In other words, all you've done is a poor approximation to manually
> >> setting the weight to 1.0 on p(target | source) and the rest to 0.
> >>
> >> The problem isn't that you are running without a language model (though
> >> we generally do not care what happens without one). The problem is that
> >> you did not tune the feature weights.
> >>
> >> Moreover, as Marcin is pointing out, I wouldn't necessarily expect
> >> tuning to work without an LM.
> > Tuning does work without an LM. The results aren't half bad. fr-en
> > europarl (pb):
> > with LM: 22.84
> > retuned without LM: 18.33
> >
> >> On 06/17/15 11:56, Read, James C wrote:
> >>> Actually, the approximation I expect is:
> >>>
> >>> p(e|f) = p(f|e)
> >>>
> >>> Why would you expect this to give poor results if the TM is well
> >>> trained? Surely the results of my filtering experiments prove otherwise.
> >>>
> >>> James
> >>>
> >>> ________________________________________
> >>> From: moses-support-boun...@mit.edu <moses-support-boun...@mit.edu>
> >>> on behalf of Rico Sennrich <rico.sennr...@gmx.ch>
> >>> Sent: Wednesday, June 17, 2015 5:32 PM
> >>> To: moses-support@mit.edu
> >>> Subject: Re: [Moses-support] Major bug found in Moses
> >>>
> >>> Read, James C <jcread@...> writes:
> >>>
> >>>> I have been unable to find a logical explanation for this behaviour
> >>>> other than to conclude that there must be some kind of bug in Moses
> >>>> which causes a TM-only run of Moses to perform poorly in finding the
> >>>> most likely translations according to the TM when there are less
> >>>> likely phrase pairs included in the race.
> >>>
> >>> I may have overlooked something, but you seem to have removed the
> >>> language model from your config and used default weights. Your default
> >>> model will thus (roughly) implement the following model:
> >>>
> >>> p(e|f) = p(e|f) * p(f|e)
> >>>
> >>> which is obviously wrong, and will give you poor results. This is not
> >>> a bug in the code, but a poor choice of models and weights. Standard
> >>> steps in SMT (like tuning the model weights on a development set, and
> >>> including a language model) will give you the desired results.
>
> --
> Hieu Hoang
> Researcher
> New York University, Abu Dhabi
> http://www.hoang.co.uk/hieu
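To illustrate Ken's and Rico's points above with the same kind of toy scoring: all numbers below are made up, and "e" is the target side, "f" the source side.

    import math

    # Two made-up candidate translations for the same source phrase f.
    # "good" is the translation a human would pick; its target side is common,
    # so its inverse probability p(f|e) happens to be low.
    candidates = {
        "good": {"p(e|f)": math.log(0.5), "p(f|e)": math.log(0.1)},
        "bad":  {"p(e|f)": math.log(0.2), "p(f|e)": math.log(0.9)},
    }

    def score(feats, weights):
        return sum(weights[k] * v for k, v in feats.items())

    # All-zero weights (an untuned system): every hypothesis scores 0.0,
    # so the decoder's choice is effectively arbitrary (Ken's point).
    zero = {"p(e|f)": 0.0, "p(f|e)": 0.0}

    # Equal default weights roughly implement p(e|f) * p(f|e), which prefers
    # "bad" here and is not a sensible translation model (Rico's point).
    equal = {"p(e|f)": 1.0, "p(f|e)": 1.0}

    # Filtering the phrase table on p(target|source) is roughly equivalent to
    # weight 1.0 on p(e|f) and 0 on everything else, which prefers "good".
    filtered = {"p(e|f)": 1.0, "p(f|e)": 0.0}

    for name, feats in candidates.items():
        print(name, score(feats, zero), score(feats, equal), score(feats, filtered))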
_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support