It would be really wonderful if Moses had an out-of-the-box example that ran 
without further tuning.  Would you be willing to create that for us?  We would 
greatly appreciate it.

The open-source community operates on a somewhat different model than the 
commercial software community.  In the open-source community, if a feature 
doesn't exist, and you believe it should, then the correct response is 
"May I contribute this feature to the codebase, please?"

The fact that no such feature currently exists in Moses means that none of its 
current users have ever had a need for it.  That probably means that all of its 
current users are machine translation experts, who have no need for an 
out-of-the-box example that runs without tuning.  You are quite correct that it 
would be nice to expand the user base, so that it includes people who are not 
machine translation experts, but just want a tool that runs reasonably well 
out-of-the-box.  Since nobody is paid to maintain Moses, however, nobody has 
yet had sufficient incentive to create such an example.  If you believe 
that you have sufficient incentive to create such an example, then please do; 
we would appreciate it.

Thanks.


-----Original Message-----
From: moses-support-boun...@mit.edu [mailto:moses-support-boun...@mit.edu] On 
Behalf Of Read, James C
Sent: Wednesday, June 24, 2015 10:29 AM
To: John D. Burger
Cc: moses-support@mit.edu
Subject: Re: [Moses-support] Major bug found in Moses

Please allow me to give a synthesis of my understanding of your response:

a) we understand that out of the box Moses performs notably less well than 
merely selecting the most likely translation for each phrase
b) we don't see this as a problem because for years we've been applying a 
different type of fix
c) we have no intention of rectifying the problem or even acknowledging that 
there is a problem
d) we would rather continue performing this gratuitous step and insisting that 
our users perform it also

Please explain to me: why even bother running the training process if you have 
already decided that the default setup should not be designed to maximise the 
probabilities learned during that step?

James
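
The "most likely translation of each phrase" baseline being compared against 
can be sketched in a few lines of Python.  This is only an illustration: it 
assumes a Moses-style phrase table (fields separated by " ||| ") and that the 
direct translation probability p(e|f) is the third value in the scores field; 
adjust the index if your table is laid out differently.

# Sketch of a "pick the most likely translation of each phrase" baseline.
# Assumes: src ||| tgt ||| s1 s2 s3 s4 ...  with p(e|f) as the third score.
P_E_GIVEN_F = 2  # index of the assumed direct translation probability

def load_best_translations(phrase_table_path):
    """Keep only the highest-p(e|f) target phrase for each source phrase."""
    best = {}  # source phrase -> (probability, target phrase)
    with open(phrase_table_path, encoding="utf-8") as f:
        for line in f:
            fields = [x.strip() for x in line.split("|||")]
            if len(fields) < 3:
                continue
            src, tgt = fields[0], fields[1]
            prob = float(fields[2].split()[P_E_GIVEN_F])
            if src not in best or prob > best[src][0]:
                best[src] = (prob, tgt)
    return {src: tgt for src, (_, tgt) in best.items()}

def greedy_translate(sentence, best, max_phrase_len=7):
    """Translate monotonically, always taking the longest matching source phrase."""
    words, out, i = sentence.split(), [], 0
    while i < len(words):
        for n in range(min(max_phrase_len, len(words) - i), 0, -1):
            src = " ".join(words[i:i + n])
            if src in best:
                out.append(best[src])
                i += n
                break
        else:
            out.append(words[i])  # pass unknown words through unchanged
            i += 1
    return " ".join(out)

Monotone, greedy, and with no language model or reordering, this is the kind of 
comparison point under discussion; it is not how the Moses decoder itself works.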

________________________________________
From: John D. Burger <j...@mitre.org>
Sent: Wednesday, June 24, 2015 6:03 PM
To: Read, James C
Cc: moses-support@mit.edu
Subject: Re: [Moses-support] Major bug found in Moses

> On Jun 24, 2015, at 10:47 , Read, James C <jcr...@essex.ac.uk> wrote:
>
> So you still think it's fine that the default performs 37 BLEU points worse 
> than just selecting the most likely translation of each phrase?

Yes, I'm pretty sure we all think that's fine, because one of the steps of 
building a system is tuning.

Is this really the essence of your complaint? That the behavior without tuning 
is not very good?

(Please try to reply without your usual snarkiness.)

- John Burger
  MITRE

> You know I think I would have to try really hard to design a system that 
> performed so poorly.
>
> James
>
> ________________________________________
> From: amittai axelrod <amit...@umiacs.umd.edu>
> Sent: Wednesday, June 24, 2015 5:36 PM
> To: Read, James C; Lane Schwartz
> Cc: moses-support@mit.edu; Philipp Koehn
> Subject: Re: [Moses-support] Major bug found in Moses
>
> what *i* would do is tune my systems.
>
> ~amittai
>
> On 6/24/15 09:15, Read, James C wrote:
>> Thank you for such an invitation. Let's see. Given the choice of
>>
>> a) reading through thousands of lines of code trying to figure out 
>> why the default behaviour performs considerably worse than merely 
>> selecting the most likely translation of each phrase, or
>> b) spending much less time implementing a simple system that does 
>> just that
>>
>> which one would you do?
>>
>> For all you know, maybe I've already implemented such a system, one that does 
>> just that and, on top of that, improves considerably on such a basic benchmark. 
>> But given that on this list we don't seem to be able to accept that there is a 
>> problem with the default behaviour of Moses, I can only conclude that nobody 
>> would be interested in access to the code of such a system.
>>
>> James
>>
>> ________________________________________
>> From: amittai axelrod <amit...@umiacs.umd.edu>
>> Sent: Friday, June 19, 2015 7:52 PM
>> To: Read, James C; Lane Schwartz
>> Cc: moses-support@mit.edu; Philipp Koehn
>> Subject: Re: [Moses-support] Major bug found in Moses
>>
>> if we don't understand the problem, how can we possibly fix it?
>> all the relevant code is open source. go for it!
>>
>> ~amittai
>>
>> On 6/19/15 12:49, Read, James C wrote:
>>> So, all I did was filter out the less likely phrase pairs and the 
>>> BLEU score shot up. Was that such a stroke of genius? Was that not 
>>> blindingly obvious?
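
A phrase-table filter of the kind described above amounts to keeping only the 
top-k target phrases per source phrase by translation probability.  As before, 
the column holding p(e|f) is an assumption; adjust it for your table.  (If 
memory serves, the Moses decoder's own translation-table limit setting caps the 
number of translation options per source phrase in a similar spirit.)

# Sketch: prune a Moses-style phrase table to the top-k translations per
# source phrase by p(e|f).  The score column index is an assumption.
import heapq

def prune_phrase_table(in_path, out_path, k=20, prob_index=2):
    table = {}  # source phrase -> list of (probability, original line)
    with open(in_path, encoding="utf-8") as fin:
        for line in fin:
            fields = [x.strip() for x in line.split("|||")]
            if len(fields) < 3:
                continue
            prob = float(fields[2].split()[prob_index])
            table.setdefault(fields[0], []).append((prob, line))
    with open(out_path, "w", encoding="utf-8") as fout:
        for entries in table.values():
            # Keep only the k most probable entries for this source phrase.
            for _, line in heapq.nlargest(k, entries, key=lambda e: e[0]):
                fout.write(line)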
>>>
>>>
>>> You're telling me that redesigning the search algorithm to prefer 
>>> higher-scoring phrase pairs is all we need to do to get a best paper at ACL?
>>>
>>>
>>> James
>>>
>>>
>>>
>>> ------------------------------------------------------------------------
>>> *From:* Lane Schwartz <dowob...@gmail.com>
>>> *Sent:* Friday, June 19, 2015 7:40 PM
>>> *To:* Read, James C
>>> *Cc:* Philipp Koehn; Burger, John D.; moses-support@mit.edu
>>> *Subject:* Re: [Moses-support] Major bug found in Moses
>>>
>>> On Fri, Jun 19, 2015 at 11:28 AM, Read, James C <jcr...@essex.ac.uk> wrote:
>>>
>>>     What I take issue with is the en masse denial that there is a
>>>     problem with the system if it behaves in such a way with no LM + no
>>>     pruning and/or tuning.
>>>
>>>
>>> There is no mass denial taking place.
>>>
>>> Regardless of whether or not you tune, the decoder will do its best 
>>> to find translations with the highest model score. That is the 
>>> expected behavior.
>>>
>>> What I have tried to tell you, and what other people have tried to 
>>> tell you, is that translations with high model scores are not 
>>> necessarily good translations.
>>>
>>> We all want our models to be such that high model scores correspond 
>>> to good translations, and that low model scores correspond with bad 
>>> translations. But unfortunately, our models do not innately have 
>>> this characteristic. We all know this. We also know a good way to 
>>> deal with this shortcoming, namely tuning. Tuning is the process by 
>>> which we attempt to ensure that high model scores correspond to high 
>>> quality translations, and that low model scores correspond to low 
>>> quality translations.
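
The mechanism being described is a log-linear model: the decoder ranks 
hypotheses by a weighted sum of feature scores, and tuning searches for weights 
under which the highest-scoring hypothesis on a development set is also the 
highest-quality one.  A deliberately toy sketch, with random search standing in 
for MERT/MIRA and a generic quality score standing in for BLEU:

# Toy illustration of tuning: choose feature weights so that the hypothesis
# with the best model score tends to also be the best-quality hypothesis.
# Random search stands in for MERT/MIRA; `quality` stands in for BLEU.
import random

def model_score(features, weights):
    return sum(w * f for w, f in zip(weights, features))

def tune(dev_set, num_features, iterations=1000, seed=0):
    """dev_set: list of sentences; each sentence is a list of candidate
    translations, and each candidate is a (feature_vector, quality) pair."""
    rng = random.Random(seed)
    best_weights, best_total = None, float("-inf")
    for _ in range(iterations):
        weights = [rng.uniform(-1.0, 1.0) for _ in range(num_features)]
        # Total quality of the 1-best (highest model score) candidate per sentence.
        total = sum(
            max(candidates, key=lambda c: model_score(c[0], weights))[1]
            for candidates in dev_set
        )
        if total > best_total:
            best_weights, best_total = weights, total
    return best_weights

With default or uniform weights there is no guarantee that the 1-best model 
score tracks translation quality; that is the gap tuning is meant to close.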
>>>
>>> If you can design models that naturally correspond with translation 
>>> quality without tuning, that's great. If you can do that, you've got 
>>> a great shot at winning a Best Paper award at ACL.
>>>
>>> In the meantime, you may want to consider an apology for your rude 
>>> behavior and unprofessional attitude.
>>>
>>> Goodbye.
>>> Lane
>>>
>>>
>>>
>>>
>>
>


_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support
