On the other hand ... that sounds like a small MT-Marathon project.

On 01.08.2014 at 19:52, Marcin Junczys-Dowmunt wrote:
Well, I agree :)
Anyone want to tackle this with me? At least something basic that emulates multiple input factors at the input-sentence level should not be that hard. Non-determinism may be an issue, but that would then look like an input confusion network?
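For what it's worth, the deterministic part of that input-sentence-level emulation could be as simple as pre-tagging the source and emitting Moses' pipe-delimited factored input format (word|factor1|...). A minimal sketch, where the `toy_pos_tag` tagger is a hypothetical stand-in for a real external POS tagger:

```python
def toy_pos_tag(token):
    """Hypothetical toy tagger; a real setup would call an external
    POS tagger here instead of these heuristics."""
    if token.endswith("ing"):
        return "VBG"
    if token[0].isupper():
        return "NNP"
    return "NN"

def add_source_factors(sentence):
    """Turn 'src_word' tokens into 'src_word|src_pos' factored tokens,
    i.e. precompute the g0 step on the input side."""
    return " ".join(f"{tok}|{toy_pos_tag(tok)}"
                    for tok in sentence.split())

print(add_source_factors("Moses supports factored decoding"))
# Moses|NNP supports|NN factored|NN decoding|VBG
```

The non-deterministic case (several candidate tags per word) is exactly where this breaks down and a confusion-network input would be needed instead.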

On 01.08.2014 at 19:45, Philipp Koehn wrote:
Hi,

there are not, but there should be.

-phi


On Fri, Aug 1, 2014 at 1:31 PM, Marcin Junczys-Dowmunt <[email protected]> wrote:
Hi,
does Moses support source generation steps before translation? I would like to accomplish something like the incredible ASCII art below, where t0 is a surface-form phrase table, g0 is the source POS generation model, g1 the target POS generation model, lm0 a surface language model, lm1 a POS language model, and osm0 an OSM defined over the generated POS tags. Is that possible?


0  src_word --t0--> trg_word --> lm0

     |                |
     g0               g1
     |                |
     v                v
1  src_pos          trg_pos --> lm1
         \          /
          \        /
           - osm0 -



_______________________________________________
Moses-support mailing list
[email protected]
http://mailman.mit.edu/mailman/listinfo/moses-support




