The train/test/dev data sets are pre-processed in the same way (tokenization
and cleaning only). However, I will recheck everything and let you know.
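
Roughly, the pre-processing for every set looks like this (file names and
language codes below are examples only; the length-based cleaning is applied
to the training data alone):

    # tokenize train/dev/test with the same settings
    $SCRIPTS_ROOTDIR/tokenizer/tokenizer.perl -l en < train.en > train.tok.en
    $SCRIPTS_ROOTDIR/tokenizer/tokenizer.perl -l fr < train.fr > train.tok.fr
    # (same command for the dev and test files)

    # drop empty and overly long sentence pairs from the training data only
    $SCRIPTS_ROOTDIR/training/clean-corpus-n.perl train.tok en fr train.clean 1 80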

On Mon, Apr 25, 2016 at 4:45 PM, Hieu Hoang <[email protected]> wrote:

> That looks ok. I'm not sure what's wrong; you should check everything
> yourself. Look at your tuning and test data and make sure they have been
> pre-processed and post-processed correctly. Look at your training data and
> make sure it is processed in the same way as your tuning and test sets.
>
> You should never report a BLEU score with untuned weights.
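>
> As a quick sanity check on the scores themselves, BLEU can be computed on
> the same tokenization with the multi-bleu script that ships with Moses
> (file names here are examples only):
>
>    $SCRIPTS_ROOTDIR/generic/multi-bleu.perl test.tok.fr < test.out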
>
> Hieu Hoang
> http://www.hoang.co.uk/hieu
>
> On 25 April 2016 at 15:10, Rajnath Patel <[email protected]> wrote:
>
>> Hi Hieu,
>>
>> We are using a simple tuning command with default settings, as given below.
>> Kindly suggest what is missing here.
>> Thank you!
>>
>> TUNING:
>> $SCRIPTS_ROOTDIR/training/mert-moses.pl \
>>         $src $ref \
>>         /home/speech/smt/decoder/mosesdecoder/bin/moses $model \
>>         --mertdir /home/speech/smt/decoder/mosesdecoder/bin/ \
>>         --decoder-flags '-threads 24' >& mert.out &
>>
>> TESTING:
>> $moses -f moses.ini < test > test.out 2> err.log &
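>>
>> For the tuned run, the decoder would be pointed at the moses.ini that
>> mert-moses.pl writes out (assuming its default mert-work working directory;
>> file names here are examples only):
>>
>> $moses -f mert-work/moses.ini < test > test.tuned.out 2> err.tuned.log &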
>>
>> On Mon, Apr 25, 2016 at 4:25 PM, Hieu Hoang <[email protected]> wrote:
>>
>>> You MUST use the tuned weights. You'll be in deep water if you don't:
>>>    https://www.mail-archive.com/moses-support%40mit.edu/msg12446.html
>>>
>>> If they produce bad results, it indicates there is something wrong
>>> somewhere in your pipeline.
>>>
>>> Hieu Hoang
>>> http://www.hoang.co.uk/hieu
>>>
>>> On 25 April 2016 at 14:32, Rajnath Patel <[email protected]> wrote:
>>>
>>>> Hi Jasneet,
>>>>
>>>> Thanks for the quick response. We are comparing the results with the
>>>> default weights (moses.ini) vs the tuned weights, and with the default
>>>> weights we are getting a higher BLEU score on the test set than with the
>>>> tuned weights.
>>>>
>>>>
>>>> On Mon, Apr 25, 2016 at 3:17 PM, Jasneet Sabharwal <
>>>> [email protected]> wrote:
>>>>
>>>>> Hi Rajnath,
>>>>>
>>>>> Against what test set are you comparing your BLEU scores? If you mean
>>>>> that your BLEU score on the test set is lower than the BLEU score on the
>>>>> dev/tuning set, then that is fine. The BLEU score on the tuning set is
>>>>> generally higher than the BLEU score on the test set, because the feature
>>>>> weights were tuned on the tuning set.
>>>>>
>>>>> Best,
>>>>> Jasneet
>>>>>
>>>>> > On Apr 25, 2016, at 2:38 AM, Rajnath Patel <[email protected]>
>>>>> wrote:
>>>>> >
>>>>> > Hi all,
>>>>> >
>>>>> > I am trying to tune a phrase-based model with the default tuning
>>>>> > parameters (MERT, BLEU), but instead of an improvement I am getting a
>>>>> > lower BLEU score on the test set. Kindly help me choose an appropriate
>>>>> > algorithm and metric for English-French SMT.
>>>>> >
>>>>> > Thank you!
>>>>> >
>>>>> > --
>>>>> > Regards,
>>>>> > Raj Nath Patel
>>>>> >
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Regards:
>>>> राज नाथ पटेल/Raj Nath Patel
>>>> KBCS dept.
>>>> CDAC Mumbai.
>>>> http://kbcs.in/
>>>>
>>>>
>>>>
>>>
>>
>>
>> --
>> Regards:
>> राज नाथ पटेल/Raj Nath Patel
>> KBCS dept.
>> CDAC Mumbai.
>> http://kbcs.in/
>>
>
>


-- 
Regards:
राज नाथ पटेल/Raj Nath Patel
KBCS dept.
CDAC Mumbai.
http://kbcs.in/
_______________________________________________
Moses-support mailing list
[email protected]
http://mailman.mit.edu/mailman/listinfo/moses-support
