Thanks, Hieu, for pointing me to this section of your thesis. This is really
useful.

- Sriram


On Thu, Apr 18, 2013 at 2:30 PM, Hieu Hoang <[email protected]> wrote:

> 'is good' --> 'is not good'
>
>
> On 18 April 2013 09:57, Hieu Hoang <[email protected]> wrote:
>
>> If you are using multiple phrase tables and generation tables, I don't
>> think there's much you can do about the speed of the decoding. Also, the
>> translation quality is good with this configuration.
>>
>> You can have a look at the analysis on page 40 here:
>>    http://statmt.org/~s0565741/download/ddd.pdf
>>
>> Instead of
>>   --translation-factors 0-0+1-1 --generation-factors 1-0 --decoding-steps t0,t1,g0
>> you're better off doing
>>   --translation-factors 0,1-0,1
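>>
>> As a rough sketch, a full training command with the joint mapping could
>> look like this (the corpus path, language suffixes and LM file are
>> placeholders, and options such as the GIZA++ binary directory are
>> omitted):
>>
>>   train-model.perl --root-dir work --corpus corpus/train --f src --e tgt \
>>     --alignment grow-diag-final-and --lm 0:3:/path/to/lm.arpa:8 \
>>     --translation-factors 0,1-0,1 --decoding-steps t0
>>
>> With a single joint translation step there is no generation step, so
>> the decoding steps reduce to just t0.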
>>
>>
>>
>>
>> On 17 April 2013 17:03, Sriram venkatapathy <[email protected]> wrote:
>>
>>>
>>> Thanks, Philipp.
>>>
>>> I had tried with very tight t-table limits, even with a limit of 1 for
>>> both words and POS tags, and it still didn't work for this example
>>> sequence. This was surprising.
>>>
>>> I hope I can avoid shorter phrase lengths, because the task I have in
>>> mind requires the default phrase lengths at least at the POS-tag level.
>>> I would also like to avoid using the factored model as a backoff,
>>> because I want to encourage translations that follow a particular
>>> pattern of POS tags.
>>>
>>> - Sriram
>>>
>>>
>>> On Tue, Apr 16, 2013 at 9:29 PM, Philipp Koehn <[email protected]> wrote:
>>>
>>>> Hi,
>>>>
>>>> the translation option expansion of factored models may explode in the
>>>> setup that you use above (there are many possible lemma and POS
>>>> mappings, and the product of them is explored during your first two
>>>> decoding steps).
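>>>>
>>>> As a rough illustration (the numbers here are invented): if a source
>>>> span has 20 candidate word translations and 20 candidate POS
>>>> translations, the first two decoding steps enumerate up to
>>>> 20 x 20 = 400 combined options for that span before the generation
>>>> step can prune them, and this repeats for every span the decoder
>>>> considers.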
>>>>
>>>> You could remedy this by:
>>>> - using shorter phrase lengths
>>>> - using tighter t-table limits
>>>> - using the factored model only as a backoff
>>>> (a sketch of the first two settings follows below)
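>>>>
>>>> For instance (the values are only illustrative, and "..." stands for
>>>> the rest of your existing options):
>>>>
>>>>   # shorter phrases, set at training time
>>>>   train-model.perl ... --max-phrase-length 4
>>>>
>>>>   # tighter t-table limit, set at decoding time
>>>>   moses -f moses.ini -ttable-limit 10 < input > output
>>>>
>>>> The backoff variant is configured as an additional decoding path in
>>>> moses.ini; see the factored translation section of the Moses manual
>>>> for the exact syntax.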
>>>>
>>>> -phi
>>>>
>>>> On Mon, Apr 15, 2013 at 4:15 PM, Sriram venkatapathy <[email protected]> wrote:
>>>> >
>>>> > The decoder (with a factored model) seems to get stuck on certain
>>>> > sentences. For example, it gets stuck on the sentence:
>>>> >
>>>> > saint|noun mary|noun immaculate|adj catholic|nadj church|noun
>>>> >
>>>> > while it works without any problem on the following sentences:
>>>> >
>>>> > saint|noun mary|noun immaculate|adj catholic|noun church|noun
>>>> > saint|noun mary|noun immaculate|adj large|nadj church|noun
>>>> > saint|noun mary|noun immaculate|adj large|adj church|noun
>>>> >
>>>> >
>>>> > Here are the training parameters:
>>>> >   --translation-factors 0-0+1-1 --generation-factors 1-0 --decoding-steps t0,t1,g0
>>>> >
>>>> > Factor 0 in both source and target is the word.
>>>> > Factor 1 in both source and target is the part-of-speech tag.
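>>>> >
>>>> > (For reference, the factored training and input files contain one
>>>> > word|pos token per word, as in the example sentences above; a line
>>>> > such as "the|det old|adj church|noun" shows the general shape, with
>>>> > invented tokens.)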
>>>> >
>>>> > Any suggestions about what I should be looking at to identify the
>>>> > problem? In the verbose mode, I see that for the problem sentences,
>>>> > the stage of 'collection of translation options' doesn't finish.
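>>>> >
>>>> > (By verbose mode I mean running the decoder with a raised verbosity
>>>> > level, along the lines of "moses -f moses.ini -v 2 < input".)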
>>>> >
>>>> > Thanks !
>>>> > - Sriram
>>>> >
>>>> >
>>>> >
>>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>
>
> --
> Hieu Hoang
> Research Associate
> University of Edinburgh
> http://www.hoang.co.uk/hieu
>
>
_______________________________________________
Moses-support mailing list
[email protected]
http://mailman.mit.edu/mailman/listinfo/moses-support
