Hi,

From the GitHub pages it appears that eflomal supersedes efmaral; is there 
any reason, then, to still use efmaral? Also, the linked PBML paper does not 
mention eflomal: how does it perform in downstream BLEU evaluations? Is it 
comparable to what you reported in Table 4?

matt


> On Dec 7, 2016, at 2:50 AM, Jörg Tiedemann <[email protected]> wrote:
> 
> 
> efmaral and eflomal are efficient Markov chain word aligners using Gibbs 
> sampling that can be used to replace GIZA++/fast_align in the typical Moses 
> training pipelines:
> 
> https://github.com/robertostling/efmaral
> https://github.com/robertostling/eflomal
> 
> Would anyone be interested in adding support for them in the Moses 
> pipelines and in experiment.perl?
> The input and output formats are compatible with fast_align and with Moses.
> 
> The tools could also be mentioned at statmt.org/moses
> 
> All the best,
> Jörg
> 
> —————————————————————————————————
> Jörg Tiedemann
> Department of Modern Languages
> University of Helsinki
> http://blogs.helsinki.fi/language-technology/
> —————————————————————————————————
> 
> _______________________________________________
> Moses-support mailing list
> [email protected]
> http://mailman.mit.edu/mailman/listinfo/moses-support
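
For anyone trying the tools out: as the announcement notes, the input format 
matches fast_align's, i.e. one sentence pair per line with source and target 
joined by " ||| ", and the aligners emit Pharaoh-style "i-j" links that Moses 
consumes. A minimal sketch of preparing that input from two tokenized files 
(corpus.en / corpus.fr are placeholder names):

```shell
# Build fast_align / efmaral / eflomal input from parallel tokenized files:
# each output line is "source tokens ||| target tokens".
# paste joins the files tab-separated; awk rewrites the tab as " ||| ".
paste corpus.en corpus.fr | awk -F'\t' '{print $1, "|||", $2}' > corpus.en-fr
```

The resulting alignment file (one "0-0 1-2 ..." line per sentence pair) can 
then be fed to Moses phrase extraction in the usual way.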

