Hi,

the number of phrase tables should not matter much, but the number of
language models has a significant impact on speed. There are no general
hard numbers on this, since it depends on a lot of other settings, but
adding a second language model will typically slow the decoder down by
around 30-50%.

The size of the phrase tables and language models matters, too, but not
as much, and it seems that in your scenario you are just wondering
about splitting up a fixed pool of data.
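To make the third option from your mail concrete: interpolating two LMs
into one just means taking a weighted mixture of their probabilities,
with the weight tuned on held-out data. Here is a toy Python sketch of
that combination (this is illustrative only, not Moses code; real
interpolation works on full n-gram models, and the hypothetical unigram
tables and weight below are made up):

```python
# Linear LM interpolation: p(w) = lam * p1(w) + (1 - lam) * p2(w).
# In practice lam is tuned on held-out data (e.g. by EM); here it is
# simply fixed at 0.7 to show the mechanics.

def interpolate(p1, p2, lam):
    """Mix two (word -> prob) tables, putting weight lam on p1."""
    vocab = set(p1) | set(p2)
    return {w: lam * p1.get(w, 0.0) + (1 - lam) * p2.get(w, 0.0)
            for w in vocab}

lm_a = {"the": 0.5, "cat": 0.3, "dog": 0.2}   # hypothetical corpus-A unigrams
lm_b = {"the": 0.4, "dog": 0.4, "fish": 0.2}  # hypothetical corpus-B unigrams

mix = interpolate(lm_a, lm_b, lam=0.7)
print(round(sum(mix.values()), 6))  # 1.0 -- the mixture is still a valid LM
print(round(mix["dog"], 2))         # 0.26 = 0.7*0.2 + 0.3*0.4
```

The point speed-wise: the decoder then queries one model instead of two,
whereas keeping 2 LMs as separate features (your first option) means two
lookups per n-gram, which is where the slowdown above comes from.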

-phi

On Wed, Apr 6, 2016 at 6:50 AM, Vincent Nguyen <[email protected]> wrote:

> Hi,
>
> What are (in terms of performance) the difference between the 3
> following solutions :
>
> 2 corpus, 2 LM, 2 weights calculated at tuning time
>
> 2 corpus merged into one, 1 LM
>
> 2 corpus, 2 LM interpolated into 1 LM with tuning
>
> Will the results be different in the end ?
>
> thanks.
> _______________________________________________
> Moses-support mailing list
> [email protected]
> http://mailman.mit.edu/mailman/listinfo/moses-support
>
