Hi, I have implemented the EM algorithm for IBM Model 1 as outlined in Koehn's lecture notes. I was surprised to find that the raw output gives a translation table in which, for any particular source word, the probabilities of the possible target words sum to far more than 1.
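For comparison, here is a minimal Model 1 EM sketch in Python (a toy illustration I put together, not the Moses code; the corpus and names are made up). Its M-step divides each expected count by the source word's total count, so for every source word f the values t(e|f) sum to exactly 1 by construction. If your table's rows sum to more than 1, the M-step normalization (or the axis you normalize over) is the first thing I would check.

```python
from collections import defaultdict

def train_model1(corpus, iterations=10):
    """EM training for IBM Model 1.
    corpus: list of (source_tokens, target_tokens) pairs.
    Returns t with t[f][e] = P(e | f)."""
    # Initialize t(e|f) uniformly over the target words that
    # co-occur with each source word f.
    t = defaultdict(dict)
    for src, tgt in corpus:
        for f in src:
            for e in tgt:
                t[f][e] = 1.0  # placeholder, normalized next
    for f in t:
        n = len(t[f])
        for e in t[f]:
            t[f][e] = 1.0 / n

    for _ in range(iterations):
        count = defaultdict(lambda: defaultdict(float))
        total = defaultdict(float)
        # E-step: collect expected alignment counts.
        for src, tgt in corpus:
            for e in tgt:
                # Normalize over all source words that could align to e.
                z = sum(t[f][e] for f in src)
                for f in src:
                    c = t[f][e] / z
                    count[f][e] += c
                    total[f] += c
        # M-step: renormalize so that sum_e t(e|f) = 1 for each f.
        for f in count:
            for e in count[f]:
                t[f][e] = count[f][e] / total[f]
    return t

# Toy parallel corpus (hypothetical example data).
corpus = [
    ("das haus".split(), "the house".split()),
    ("das buch".split(), "the book".split()),
    ("ein buch".split(), "a book".split()),
]
t = train_model1(corpus)
for f in t:
    print(f, sum(t[f].values()))  # each row sums to 1.0
```

Running this, every printed row sum is 1.0 (up to floating point), no matter how many EM iterations are run.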
Is this normal?

Thanks,
James

--
The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336.
_______________________________________________
Moses-support mailing list
[email protected]
http://mailman.mit.edu/mailman/listinfo/moses-support
