Hi Anoop,

One possible reason could be that your model (with 3 generation factors) is too specific (the more factors you add, the more specific your model becomes), and your data may be too limited and sparse to cover all possible combinations of factors.

You can try multiple decoding paths in Moses (for each decoding path you can specify one combination of factors); see http://www.statmt.org/moses/?n=FactoredTraining.FactoredTraining. You can also find examples in the Moses manual. A sketch of what this might look like follows.
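For illustration, here is a minimal sketch of a two-path setup; the corpus paths, language pair, and factor indices are hypothetical and depend on how your models were trained. At training time, decoding paths are separated by a colon in --decoding-steps:

    # Hypothetical train-model.perl invocation (assumed paths/languages):
    # path 0 translates the surface form (factor 0) and generates
    # factor 1 from it; path 1 translates factor 0 directly to
    # factors 0 and 1 in a single step.
    train-model.perl \
        --root-dir work --corpus corpus/train --f en --e hi \
        --translation-factors 0-0+0-0,1 \
        --generation-factors 0-1 \
        --decoding-steps t0,g0:t1

The resulting moses.ini would then contain a [mapping] section along these lines:

    [mapping]
    0 T 0
    0 G 0
    1 T 1

where the first column is the decoding path, "T n" applies translation table n, and "G n" applies generation table n. When a path's factored models are too sparse to cover an input, the decoder can fall back on the other path.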
On Thu, Oct 29, 2015 at 2:12 PM, Anoop (അനൂപ്) <[email protected]> wrote:

> Hi,
>
> I have a training corpus with multiple factors on the target side. I
> experimented with various factor configurations: one for the generation of
> each target factor from the surface form, and then an LM over the factor
> as a feature. The surface-form-to-factor mappings are fairly deterministic,
> so the LM over factors is where I hope to see benefits. Indeed, I do obtain
> significant improvements in output with each individual factor over a PBSMT
> system. However, when I put together multiple factors along with multiple
> language models, the results don't show as much improvement as some of the
> individual factors do. The performance is generally better than the
> baseline PBSMT, though, in most cases.
>
> Do you have any suggestions as to why this would be so, and whether it
> could be rectified?
> I have attached the moses.ini for the factored system with three
> generation factors.
>
> Regards,
> Anoop.
>
> --
> I claim to be a simple individual liable to err like any other fellow
> mortal. I own, however, that I have humility enough to confess my errors
> and to retrace my steps.
>
> http://flightsofthought.blogspot.com

--
-Regards,
Rajen Chatterjee.
_______________________________________________
Moses-support mailing list
[email protected]
http://mailman.mit.edu/mailman/listinfo/moses-support
