Hi,

It is true that this is currently not possible in general, and there are
many conceivable ways to train multiple translation models. At the
moment you have to build the different models in separate runs, then
write a configuration file by hand that points to all of the models
and go from there (tuning, testing).
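For example, a hand-written moses.ini pointing at two separately trained
phrase tables might look roughly like this (section names as in the Moses
manual; paths, factor setup, and weights are only illustrative):

```ini
# Two phrase tables, one per training run (paths are hypothetical).
# Each line: implementation in-factors out-factors num-scores file
[ttable-file]
0 0 0 5 /work/domain-a/model/phrase-table.gz
0 0 0 5 /work/domain-b/model/phrase-table.gz

# One decoding path per table.
[mapping]
0 T 0
1 T 1

# Five translation weights per table; the values here are just
# starting points to be re-estimated by tuning (MERT).
[weight-t]
0.2
0.2
0.2
0.2
0.2
0.2
0.2
0.2
0.2
0.2
```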

Note that having multiple translation models in a factored-model
backoff approach is already supported.
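In that setup the backoff behaviour is controlled in moses.ini; a minimal
sketch, assuming two decoding paths are already defined under [mapping]:

```ini
# Use the second decoding path only as a backoff, i.e. for input the
# first path cannot translate (0 = the first graph is always used).
[decoding-graph-backoff]
0
1
```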

In the case of multiple domains with a separate translation model per
domain, you would typically run word alignment once on the entire corpus
and split by domain only afterwards, during phrase extraction and scoring.
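Concretely, with train-model.perl this amounts to stopping after the
alignment steps on the concatenated corpus and restarting per domain
(step numbers as in the training script; paths are hypothetical, and you
have to split the prepared corpus and alignment files by domain yourself
before the second call):

```shell
# Align the full concatenated corpus (steps 1-3: prepare, GIZA++, symmetrize).
train-model.perl --root-dir /work/full --corpus corpus/all \
    --f fr --e en --last-step 3

# Per domain: re-use the domain's slice of the alignment, then run
# extraction, scoring, and the remaining steps (4 onwards).
train-model.perl --root-dir /work/domain-a --corpus corpus/domain-a \
    --f fr --e en --first-step 4
```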

-phi

On Thu, Aug 11, 2011 at 9:45 AM, John Morgan <[email protected]> wrote:
> I have a question about strategies for using multiple translation
> tables and backoff models.
> The following line appears in the experiment.meta file for the ems:
> [TRAINING] single
> I assume this means only one translation table can be trained.
> Is there a reason why this couldn't be changed to "multiple" and have
> multiple [TRAINING:...] stanzas in the experiment.perl configuration
> file, one for each translation table?
>  Thanks,
> John
>
>
> --
> Regards,
> John J Morgan
> _______________________________________________
> Moses-support mailing list
> [email protected]
> http://mailman.mit.edu/mailman/listinfo/moses-support
>
