Hi,

On Wed, Nov 23, 2011 at 5:05 PM, somayeh bakhshaei
<[email protected]> wrote:
>
> - should we average all the weights in the various moses.ini generated
> during these tunings? Would the weights really still make sense if we did that?
>
> ** We do not do this in our lab. We repeat the training phase and then
> choose the moses.ini corresponding to the best tuning BLEU.
> Yes, averaging the weights is not the right thing to do; it does not make sense.
> Just consider two vectors in the space located on two peaks of a function: the
> average of these two might even lie in a valley.

Yes. I thought so too. I probably misunderstood what Tom meant by
"averaging the final BLEU scores" in this thread.

> - should we compare the BLEU values of the various tunings and take
> as-is (without modifying it) the moses.ini whose BLEU is the closest
> to the average of all the BLEUs?
>
> ** We choose the best BLEU, hoping we have caught a better optimum point, and
> use its moses.ini.
>

Oh OK. So you take the moses.ini with the best BLEU. Then I gather you have
a different method from Tom Hoar and Barry Haddow, who said in the original
topic (at least what I think it is:
http://thread.gmane.org/gmane.comp.nlp.moses.user/5418/focus=5419):
"The best plan is to do several runs and take the average bleu."

I guess there are several ways to look at the problem here. Or maybe I am
completely off track and have misunderstood what I read even more than I
thought. Thanks anyway, I take good note. :-)

Jehan

> Best Regards,
>
> On Wed, Nov 23, 2011 at 8:14 AM, Jehan Pages <[email protected]> wrote:
>>
>> Hi,
>>
>> On Tue, Nov 22, 2011 at 10:18 PM, somayeh bakhshaei
>> <[email protected]> wrote:
>> > Hello,
>> >
>> > Thanks for all answers.
>> >
>> > Also thanks Jehan.
>> > As you may have seen on the Moses mailing list, there is an inconsistency
>> > problem with tuning in MERT (reported by Neda).
>> > To reduce this problem, everyone suggested tuning the system repeatedly
>> > and then choosing the best answer.
>>
>> Thanks for this explanation. After reading Tom Hoar's email and yours, and
>> after searching for and finding the original discussion, I am not sure I
>> have understood what the proposed solution is:
>>
>> - should we average all the weights in the various moses.ini generated
>> during these tunings? Would the weights really still make sense if we did that?
>>
>> - should we compare the BLEU values of the various tunings and take
>> as-is (without modifying it) the moses.ini whose BLEU is the closest
>> to the average of all the BLEUs?
>>
>> > It is a way of getting away from local maxima, but rather than catching the
>> > global maximum you may just end up trapped in another local one :)
>> > So I think a better solution is needed!
>>
>> So if I understand correctly, the logic is that we may get a very good BLEU
>> (from what I read, the closer to 1, the better) on some tuning run, but it
>> may actually be a local maximum (and hence perform terribly on real-life
>> data). So, to counter this, we prefer to use a tuning run that produced an
>> average BLEU on our data, because it would be more robust in the long term?
>>
>> Also, my mathematics are a bit rusty, but from what I recall, when we want
>> to get away from local maxima/minima, one would prefer to use the median
>> rather than the average (all the more on small samples like here), since the
>> average is also strongly influenced by such local maxima. Shouldn't that be
>> the case here too?
>>
>> Regards,
>>
>> Jehan
>>
>> >
>> > On Tue, Nov 22, 2011 at 3:12 PM, Jehan Pages <[email protected]> wrote:
>> >>
>> >> Hi,
>> >>
>> >> On Tue, Nov 22, 2011 at 5:57 PM, somayeh bakhshaei
>> >> <[email protected]> wrote:
>> >> > Hello all,
>> >> >
>> >> > Salam,
>> >> >
>> >> > I am using moses in this way:
>> >> >
>> >> > train,
>> >> > for i=1 to 3
>> >> >     tune
>> >> > end for
>> >>
>> >> Sorry for not answering your problem (I don't have the solution, though
>> >> I saw others answered with a possible resolution). I just noticed that
>> >> you tune 3 times. Do you mean you re-tune using the exact same data
>> >> set all 3 times? Does tuning several times like this bring better
>> >> results?
>> >> Thanks!
>> >>
>> >> Jehan
>> >>
>> >> > decode
>> >> > evaluate
>> >> >
>> >> > In the above loop, something unexpected happens: in large executions,
>> >> > the weights produced in moses.ini are sometimes wrong. For example,
>> >> > in one case it produces 3 and in the other case 4. Take a look here:
>> >> >
>> >> > # translation model weights
>> >> > [weight-t]
>> >> > 0.0106455
>> >> > 0.036391
>> >> > 0.0453815
>> >> > 0.0716856
>> >> > 0.0271838
>> >> >
>> >> > # translation model weights
>> >> > [weight-t]
>> >> > 0.0705978
>> >> > 0.0652413
>> >> > 0.100475
>> >> > 0.00356951
>> >> >
>> >> > In the previous iteration, nothing was wrong.
>> >> > Can anyone tell me what is happening here, please?
>
>
>
> --
>
>
>
> ---------------------
> Best Regards,
> S.Bakhshaei
>
> After All you will come ....
> And will spread light on the dark desolate world!
> O' Kind Father! We will be waiting for your affectionate hands ...
>
>

_______________________________________________
Moses-support mailing list
[email protected]
http://mailman.mit.edu/mailman/listinfo/moses-support
