RandLM now supports language models that are served on multiple
machines.  This means that language models can be very large, can
start up with zero load time when used in Moses, and can be shared
across multiple decoders.  As they say in the trade, not bad.

http://sourceforge.net/projects/randlm/
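To give a feel for the client/server setup, here is a minimal sketch of a
language model served over a socket and queried remotely by a decoder
process.  This is purely illustrative: the line-based protocol, the
`serve`/`query` helpers, and the toy bigram scores are invented for this
example and are not RandLM's actual wire format or API.

```python
import socket
import threading

# Toy score table standing in for a large on-disk LM (values are made up).
NGRAM_LOGPROBS = {"the cat": -1.2, "cat sat": -0.9}

def serve(sock):
    """Server side: answer clients, one n-gram per line, one score per line."""
    while True:
        conn, _ = sock.accept()
        with conn, conn.makefile("rw") as f:
            for line in f:
                ngram = line.strip()
                if not ngram:
                    break
                # Unseen n-grams get a floor score, standing in for backoff.
                f.write("%f\n" % NGRAM_LOGPROBS.get(ngram, -10.0))
                f.flush()

def query(port, ngram):
    """Client side: a decoder asks the shared server for one score."""
    with socket.create_connection(("127.0.0.1", port)) as c, \
         c.makefile("rw") as f:
        f.write(ngram + "\n")
        f.flush()
        return float(f.readline())

# Start the "LM server"; a decoder connecting to it has nothing to load,
# hence the zero start-up time, and many decoders can share one server.
sock = socket.socket()
sock.bind(("127.0.0.1", 0))      # any free port
sock.listen(1)
threading.Thread(target=serve, args=(sock,), daemon=True).start()
port = sock.getsockname()[1]

score = query(port, "the cat")   # the score arrives over the network
```

A real deployment would batch requests and handle backoff properly; the
point here is only that the model's memory lives in one process while any
number of decoders query it.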

Note that batching in the decoder (i.e. changing the search strategy)
has not been implemented yet.  Significant effort has gone into making
the LM itself time- and space-efficient.

Miles and Oliver
-- 
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.
_______________________________________________
Moses-support mailing list
[email protected]
http://mailman.mit.edu/mailman/listinfo/moses-support