Another batch MIRA question, perhaps for Colin this time: does kbmira
support optimizing only a subset of the feature weights (i.e., holding
the other weights constant)?

Cheers, Alex


On Mon, Feb 4, 2013 at 3:06 PM, Alexander Fraser
<[email protected]> wrote:
> That's great - thanks!
>
> On Mon, Feb 4, 2013 at 2:29 PM, Barry Haddow <[email protected]> 
> wrote:
>> Hi Alex
>>
>> Yes, you can use batch mira for training sparse features, it works the same
>> way as PRO does in Moses.
>>
>> Unfortunately documentation on sparse features is, well, sparse... But the
>> n-best format is much the same as for dense features, i.e.
>>
>>  name_1: value_1 name_2: value_2 ...
>>
>> Sparse features only get reported in the n-best list if they are named in
>> the -report-sparse-features argument; otherwise their weighted sum will be
>> reported.
>>
>> cheers - Barry
>>
>>
>> On 04/02/13 13:13, Alexander Fraser wrote:
>>>
>>> Hi Folks,
>>>
>>> Can sparse features be used together with batch mira?
>>>
>>> Is there documentation for the n-best format of sparse features somewhere?
>>>
>>> Thanks!
>>>
>>> Cheers, Alex
>>>
>>
>>
>> --
>> The University of Edinburgh is a charitable body, registered in
>> Scotland, with registration number SC005336.
>>
_______________________________________________
Moses-support mailing list
[email protected]
http://mailman.mit.edu/mailman/listinfo/moses-support