Dear list members,

I have a general theoretical question: if a word alignment model is only used to generate the Viterbi alignment of the data for further use (as is the case with Moses's phrase-based translation), is it necessary, or useful at all, to model reordering/distortion in the word alignment phase?

Naturally, if a word alignment model is later used during decoding to generate new output, then reordering is crucial; but what about phrase-based translation as done by Moses, where even lexicalized reordering is learned from the symmetrized alignment matrices? Does modelling the reordering make the learning more robust/stable? Are there any experiments or articles dealing with this question?
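To make the question concrete, here is a toy sketch (not Moses or GIZA++ code; the probabilities and the geometric jump penalty are made-up assumptions) showing that a distortion term can change the Viterbi alignment itself, even when only that alignment is kept for phrase extraction:

```python
import math

def viterbi_model1(src, tgt, t):
    """Lexical-only Viterbi (IBM-Model-1 style): each target word
    independently picks the source position maximizing t(f|e)."""
    return [max(range(len(src)), key=lambda j: t.get((f, src[j]), 1e-9))
            for f in tgt]

def viterbi_hmm(src, tgt, t, p=0.6):
    """Distortion-aware Viterbi (HMM-alignment style): adds a toy
    geometric jump penalty p**|j - j_prev - 1| favoring monotone
    alignments, computed by dynamic programming over positions."""
    J, I = len(src), len(tgt)
    # score[j] = best log-prob so far with the last link at source pos j
    score = [math.log(t.get((tgt[0], src[j]), 1e-9)) for j in range(J)]
    back = []
    for i in range(1, I):
        new, ptr = [], []
        for j in range(J):
            k = max(range(J),
                    key=lambda k: score[k] + abs(j - k - 1) * math.log(p))
            new.append(score[k] + abs(j - k - 1) * math.log(p)
                       + math.log(t.get((tgt[i], src[j]), 1e-9)))
            ptr.append(k)
        score, back = new, back + [ptr]
    # backtrace the best path
    j = max(range(J), key=lambda k: score[k])
    align = [j]
    for ptr in reversed(back):
        j = ptr[j]
        align.append(j)
    return list(reversed(align))

# Toy data: "x" and "z" are lexically ambiguous between the two "a"s.
src = ["a", "b", "a"]
tgt = ["x", "y", "z"]
t = {("x", "a"): 0.5, ("y", "b"): 0.5, ("z", "a"): 0.5}

print(viterbi_model1(src, tgt, t))  # [0, 1, 0] — ties broken lexically
print(viterbi_hmm(src, tgt, t))     # [0, 1, 2] — distortion breaks ties
```

The lexical-only model links "z" back to the first "a", while the jump penalty pushes the HMM-style model to the monotone alignment; that difference would then propagate into the extracted phrase pairs and the lexicalized reordering counts.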
Hope this isn't a troll question :)

Mark Fishel
Dept. of Computer Science
University of Tartu

_______________________________________________
Moses-support mailing list
[email protected]
http://mailman.mit.edu/mailman/listinfo/moses-support
