Thanks,

Yes, the configuration for the neural LM is as you mentioned, but I was
asking about back-off language model decoding, which Rico mentioned in his
response.

-- 
Regards:
Raj Nath


On Mon, Sep 14, 2015 at 11:23 AM, Raj Dabre <[email protected]> wrote:

> Hi,
>
> 1. 65k is quite small. You might need many (read: MANY) iterations until
> the perplexity stops dropping by a significant amount.
>
> 2. In Moses, I think you can try this: add the two lines below to your
> moses.ini.
>
> Under *feature* add this: NeuralLM factor=0 name=LM1 order=5
> path=<path/to/neural/lm/file>
>
> Under *weight* add this: LM1=0.5
>
> I am not 100% sure, but it should work.
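>
> That is, after the change the relevant sections of moses.ini would contain
> something like this (a sketch only; the path stays a placeholder, and 0.5
> is just a starting weight for tuning):
>
> [feature]
> NeuralLM factor=0 name=LM1 order=5 path=<path/to/neural/lm/file>
>
> [weight]
> LM1=0.5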
>
>
>
>
> On Mon, Sep 14, 2015 at 1:54 PM, Rajnath Patel <[email protected]>
> wrote:
>
>> Thanks for the quick response.
>>
>> @Raj Dabre
>> Corpus statistics are as follows:
>> approx. 65k sentences, 1.2M words, 50k vocabulary.
>> Could you suggest what size of corpus is enough for neural LM training?
>>
>> @Rico
>> I will try a development set and more epochs, as you suggested.
>> By back-off LM, do you mean falling back to the neural LM when an n-gram
>> is not found in the n-gram model (please correct me if I got it wrong)?
>> If so, could you please suggest how to configure that in Moses?
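>>
>> My guess, just for concreteness (a sketch; the feature names, order,
>> paths, and weights are placeholders on my part):
>>
>> [feature]
>> KENLM name=LM0 factor=0 order=5 path=<path/to/ngram/lm>
>> NeuralLM name=LM1 factor=0 order=5 path=<path/to/neural/lm/file>
>>
>> [weight]
>> LM0=0.5
>> LM1=0.5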
>>
>> Thanks.
>>
>>
>>
>>> Message: 1
>>> Date: Mon, 14 Sep 2015 01:56:14 +0900
>>> From: Raj Dabre <[email protected]>
>>> Subject: Re: [Moses-support] Performance issue with Neural LM for
>>>         English-Hindi SMT
>>> To: Rajnath Patel <[email protected]>
>>> Cc: moses-support <[email protected]>
>>>
>>> Hi,
>>> I have had a similar experience with NPLM.
>>> Do you perhaps have a small corpus?
>>>
>>> On Sun, Sep 13, 2015 at 6:51 PM, Rajnath Patel <[email protected]>
>>> wrote:
>>>
>>> > Hi all,
>>> >
>>> > I have tried a neural LM (NPLM) with phrase-based English-Hindi SMT,
>>> > but the translation quality is poor compared to the n-gram LM (scores
>>> > are given below). I trained the LM for 3-gram and 5-gram orders with
>>> > default settings (as described on statmt.org/moses). Has anyone tried
>>> > the same English-Hindi SMT setup and got improved results? What may be
>>> > the probable cause of the degraded results?
>>> >
>>> > BLEU scores:
>>> > n-gram(5-gram)=24.40
>>> > neural-lm(5-gram)=11.30
>>> > neural-lm(3-gram)=12.10
>>> >
>>> > Thank you.
>>> >
>>> > --
>>> > Regards:
>>> > Raj Nath Patel
>>> >
>>>
>>>
>>> --
>>> Raj Dabre.
>>> Doctoral Student,
>>> Graduate School of Informatics,
>>> Kyoto University.
>>> CSE MTech, IITB., 2011-2014
>>> ------------------------------
>>>
>>> Message: 2
>>> Date: Sun, 13 Sep 2015 23:19:19 +0100
>>> From: Rico Sennrich <[email protected]>
>>> Subject: Re: [Moses-support] Performance issue with Neural LM for
>>>         English-Hindi SMT
>>> To: [email protected]
>>>
>>> Hello Raj,
>>>
>>> Usually, nplm is used in addition to a back-off LM for best results.
>>> That being said, your results indicate that nplm is performing poorly.
>>> If you have little training data, a smaller vocabulary size and more
>>> training epochs may be appropriate. I would advise providing a
>>> development set to the nplm training program so that you can track
>>> training progress and compare perplexity with back-off models.
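>>>
>>> For example, something along these lines (an untested sketch; the flag
>>> names are from the NPLM documentation as I remember it, and the corpus
>>> name and parameter values are placeholders, so check each program's
>>> --help output before running):
>>>
>>> # Extract n-grams and the vocabulary, holding out a validation set.
>>> prepareNeuralLM --train_text corpus.hi --ngram_size 5 \
>>>     --vocab_size 20000 --write_words_file words \
>>>     --train_file train.ngrams \
>>>     --validation_size 500 --validation_file validation.ngrams
>>>
>>> # Train; validation perplexity is reported after each epoch.
>>> trainNeuralNetwork --train_file train.ngrams \
>>>     --validation_file validation.ngrams \
>>>     --num_epochs 30 --words_file words --model_prefix model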
>>>
>>> best wishes,
>>> Rico
>>>
>>> On 13/09/15 10:51, Rajnath Patel wrote:
>>> > Hi all,
>>> >
>>> > I have tried a neural LM (NPLM) with phrase-based English-Hindi SMT,
>>> > but the translation quality is poor compared to the n-gram LM (scores
>>> > are given below). I trained the LM for 3-gram and 5-gram orders with
>>> > default settings (as described on statmt.org/moses
>>> > <http://statmt.org/moses>). Has anyone tried the same English-Hindi
>>> > SMT setup and got improved results? What may be the probable cause of
>>> > the degraded results?
>>> >
>>> > BLEU scores:
>>> > n-gram(5-gram)=24.40
>>> > neural-lm(5-gram)=11.30
>>> > neural-lm(3-gram)=12.10
>>> >
>>> > Thank you.
>>> >
>>> > --
>>> > Regards:
>>> > Raj Nath Patel
>>>
>>>
>>
>>
>> --
>> Regards:
>> Raj Nath Patel
>>
>
>
>
> --
> Raj Dabre.
> Doctoral Student,
> Graduate School of Informatics,
> Kyoto University.
> CSE MTech, IITB., 2011-2014
>
>
_______________________________________________
Moses-support mailing list
[email protected]
http://mailman.mit.edu/mailman/listinfo/moses-support
