For a long time now I've wanted to see Moses on a small device. Apart from
stripping out all of the extra functionality that isn't needed, one would
also need to work on shrinking the phrase table and perhaps also the search
graph.  KenLM / RandLM already deal with making the language model smaller.

An interesting research question would be as follows: can we frame
decoding on a small device in terms of a budget, and optimise within that
budget?  We normally don't bother thinking this way and instead focus
entirely on quality.  But it might be possible to establish a tighter
relationship between the amount of space / search used and output quality
than we currently have.  I'm not sure whether this is just a matter of
fiddling with the beam size etc.  The evidence seems to suggest that this
doesn't always give the expected behaviour (i.e. the relationship between
BLEU and beam size isn't linear).
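To make the "budget" idea concrete, here is a toy sketch of a beam search
that is capped by a total node-expansion budget as well as the usual beam
width. This is purely illustrative and has nothing to do with Moses'
actual decoder; all of the function names and parameters
(budgeted_beam_search, expand, score, is_goal, node_budget) are my own
invention for the example.

```python
import heapq

def budgeted_beam_search(start, expand, score, is_goal, beam_size, node_budget):
    """Toy beam search over a generic search space.

    expand(state)  -> iterable of successor states
    score(state)   -> higher is better
    is_goal(state) -> True when the hypothesis is complete

    Stops either when the beam is exhausted or when node_budget
    expansions have been spent, whichever comes first.
    """
    beam = [start]
    best_goal = None
    expanded = 0
    while beam and expanded < node_budget:
        candidates = []
        for state in beam:
            if expanded >= node_budget:
                break
            expanded += 1
            for succ in expand(state):
                if is_goal(succ):
                    if best_goal is None or score(succ) > score(best_goal):
                        best_goal = succ
                else:
                    candidates.append(succ)
        # keep only the beam_size highest-scoring partial hypotheses
        beam = heapq.nlargest(beam_size, candidates, key=score)
    return best_goal

# Tiny usage example: hypotheses are digit tuples, goals have length 3,
# and the score is simply the digit sum.
best = budgeted_beam_search(
    start=(),
    expand=lambda s: [s + (d,) for d in range(3)],
    score=sum,
    is_goal=lambda s: len(s) == 3,
    beam_size=2,
    node_budget=100,
)
```

The point of the sketch is just that the budget becomes an explicit
parameter one could optimise against, rather than something that falls
out indirectly from the beam size.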

Miles

-- 
The University of Edinburgh is a charitable body, registered in Scotland,
with registration number SC005336.
_______________________________________________
Moses-support mailing list
[email protected]
http://mailman.mit.edu/mailman/listinfo/moses-support