Hi,

I have a training corpus with multiple factors on the target side. I
experimented with various factor configurations - one for the generation of
each target factor from the surface form, with an LM over that factor used
as a feature. The surface-form-to-factor mappings are fairly deterministic,
so the LM over factors is where I hope to see benefits. Indeed, with each
individual factor I obtain significant improvements in output over a PBSMT
system. However, when I put together multiple factors along with multiple
language models, the results don't show as much improvement as using some
of the individual factors. The performance is still generally better than
the baseline PBSMT in most cases, though.

Do you have any suggestions as to why this might be so, and whether it
could be rectified?
I have attached the moses.ini for the factored system with three generation
factors.
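
For reference, a setup along the lines described above would, in Moses's
newer feature-function syntax, look roughly like the sketch below. All
names, paths, factor indices, orders, and weights here are hypothetical
placeholders for illustration - they are not taken from the attached file:

  [feature]
  # surface-form phrase table (factor 0 -> factor 0)
  PhraseDictionaryMemory name=TranslationModel0 num-features=4 input-factor=0 output-factor=0 path=/path/phrase-table.gz
  # generation step producing target factor 1 from the surface form
  Generation name=GenerationModel0 num-features=2 input-factor=0 output-factor=1 path=/path/generation.0-1.gz
  # one LM per target factor
  KENLM name=LM0 factor=0 order=5 path=/path/lm.surface.bin
  KENLM name=LM1 factor=1 order=7 path=/path/lm.factor1.bin

  [weight]
  TranslationModel0= 0.2 0.2 0.2 0.2
  GenerationModel0= 0.3 0.0
  LM0= 0.5
  LM1= 0.5

With three generation steps and their LMs, the feature set grows quickly,
so how the weights for the extra LMs come out of tuning may be relevant to
the question above.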

Regards
Anoop.

-- 
I claim to be a simple individual liable to err like any other fellow
mortal. I own, however, that I have humility enough to confess my errors
and to retrace my steps.

http://flightsofthought.blogspot.com

Attachment: moses.ini
Description: Binary data

_______________________________________________
Moses-support mailing list
[email protected]
http://mailman.mit.edu/mailman/listinfo/moses-support
