>> end of the hypothesis.
> Your LM state is dependent on the entire target phrase? ie. these 
> target phrases have different states:
>    a b c d e f g h i j
>    z b c d e f g h i j
> This would probably negatively impact search as the stacks will have 
> to be pruned more often, leading to search errors.
>
> I think this is also the experience of people trying to add a 
> syntactic LM to SMT decoders.
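[For readers following the archive: the pruning concern above comes from hypothesis recombination. A stack decoder can merge two hypotheses only when their LM states are identical, and an n-gram LM of order n only needs the last n-1 target words as state. If the state instead encodes the whole target phrase, hypotheses that an n-gram LM would treat as equivalent look distinct. A toy sketch of this, not Moses code:]

```python
# Toy illustration: with a trigram LM, state is just the last 2 words,
# so the two example phrases above end in identical states and their
# hypotheses could be recombined. A full-phrase state keeps them apart.
# The order n=3 is an assumption for the example.

N = 3  # n-gram order

def ngram_state(target_words, n=N):
    # An n-gram LM only conditions on the last n-1 words.
    return tuple(target_words[-(n - 1):])

phrase_a = "a b c d e f g h i j".split()
phrase_b = "z b c d e f g h i j".split()

assert ngram_state(phrase_a) == ngram_state(phrase_b) == ("i", "j")
# Full-phrase states differ, blocking recombination:
assert tuple(phrase_a) != tuple(phrase_b)
```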

Hi Hieu,

How does Moses use the LM state? If I used the same state for both 
phrases in your example but different LM scores, would Moses keep both 
hypotheses in its search space immediately after appending your two 
phrases, or would it discard one of them? Is this behaviour dependent 
on the choice of search algorithm?

David
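[To make the question concrete: stack decoders typically key their stacks by recombination state and keep only the best-scoring hypothesis per key (often recording the loser as a recombination arc for n-best extraction). A rough, illustrative sketch; the names and structure below are hypothetical, not Moses internals:]

```python
# Illustrative recombination: hypotheses that share a state key are
# merged, keeping only the best score on the stack. All names here are
# hypothetical, not Moses internals.

def recombine(hypotheses):
    best = {}
    for state, score in hypotheses:
        if state not in best or score > best[state]:
            best[state] = score
    return best

hyps = [(("i", "j"), -4.2),   # extension of "a b c ... j"
        (("i", "j"), -5.0)]   # extension of "z b c ... j"
stack = recombine(hyps)
assert stack == {("i", "j"): -4.2}  # only the better hypothesis survives
```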

>> And when a hypothesis is being extended, its LM
>> state is extended by one target word at a time in a loop over the new
>> phrase from start to finish. An n-gram LM implementation does not
>> work this way, and it seems to harm n-gram performance. Can anyone
>> shed some light on the motivation behind the behaviour described
>> above in points 1-3?
>>
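[The word-at-a-time extension described in the quote can be sketched as follows; the state-passing shape is loosely modelled on n-gram LM APIs such as KenLM's, but the function names and the toy bigram scores are assumptions for illustration:]

```python
# Sketch of scoring a phrase by threading LM state one word at a time.
# The toy bigram log-probabilities below are made up for illustration.

import math

BIGRAM_LOGPROB = {("<s>", "a"): -1.0, ("a", "b"): -0.5, ("b", "c"): -0.7}

def score_word(state, word):
    # Returns (log10 prob, new state); unseen bigrams get a floor score.
    logp = BIGRAM_LOGPROB.get((state[-1], word), -3.0)
    return logp, (word,)

def score_phrase(state, phrase):
    # Loop over the new phrase from start to finish, carrying the state.
    total = 0.0
    for word in phrase:
        logp, state = score_word(state, word)
        total += logp
    return total, state

total, state = score_phrase(("<s>",), ["a", "b", "c"])
assert math.isclose(total, -2.2)
assert state == ("c",)
```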
>> I used Moses with its default, a.k.a. "normal", search algorithm (no
>> [search-algorithm] variable specified in my config). For completeness,
>> my config when using Moses with its KenLM class is pasted below.
>>
>> Best regards,
>> David
>>
>>
>> # input factors
>> [input-factors]
>> 0
>>
>> # mapping steps
>> [mapping]
>> 0 T 0
>>
>> [distortion-limit]
>> 6
>>
>> # feature functions
>> [feature]
>> UnknownWordPenalty
>> WordPenalty
>> PhrasePenalty
>> PhraseDictionaryMemory name=TranslationModel0 table-limit=20
>> num-features=4 path=model/phrase-table.1.gz input-factor=0 
>> output-factor=0
>> LexicalReordering name=LexicalReordering0 num-features=6
>> type=wbe-msd-bidirectional-fe-allff input-factor=0 output-factor=0
>> path=model/reordering-table.1.wbe-msd-bidirectional-fe.gz
>> Distortion
>> KENLM lazyken=1 name=LM0 factor=0 path=lm/europarl.binlm.1 order=5
>>
>> # dense weights for feature functions
>> [weight]
>> UnknownWordPenalty0= 1
>> WordPenalty0= -1
>> PhrasePenalty0= 0.2
>> TranslationModel0= 0.2 0.2 0.2 0.2
>> LexicalReordering0= 0.3 0.3 0.3 0.3 0.3 0.3
>> Distortion0= 0.3
>> LM0= 0.5
>>
>>
>>
>> _______________________________________________
>> Moses-support mailing list
>> [email protected]
>> http://mailman.mit.edu/mailman/listinfo/moses-support
>>
>
