Taylor Rose wrote:

> I am looking at pruning phrase tables for the experiment I'm working on.
> I'm not sure if it would be a good idea to include the 'penalty' metric
> when calculating probability. It is my understanding that multiplying 4
> or 5 of the metrics from the phrase table would result in a probability
> of the phrase being correct. Is this a good understanding or am I
> missing something?

I don't think this is correct. At runtime, all the features from the phrase
table, along with a number of other features (some only available during
decoding, such as the language model and distortion scores), are combined in an
inner product with a tuned weight vector to score partial translations. I
believe it's fair to say that at no point is there an explicit modeling of "a
probability of the phrase being correct", at least not in isolation from the
partially translated sentence. This is not to say you couldn't model this
yourself, of course.
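To make the scoring concrete, here is a minimal sketch of that inner product. The feature names and all numeric values are made up for illustration (they loosely follow common Moses conventions); the point is only that phrase-table scores, the penalty, and decoding-time features are weighted and summed in log space, not multiplied into a standalone probability:

```python
import math

# Hypothetical feature values for one partial translation, in log space.
# Feature names and values are illustrative assumptions, not real Moses output.
features = {
    "phi(e|f)": math.log(0.4),    # phrase translation probability
    "phi(f|e)": math.log(0.3),    # inverse phrase translation probability
    "lex(e|f)": math.log(0.25),   # lexical weighting
    "lex(f|e)": math.log(0.2),    # inverse lexical weighting
    "phrase_penalty": 1.0,        # counts phrases used; not a probability
    "lm": math.log(0.05),         # language model score, only known at decode time
    "distortion": -2.0,           # reordering cost, also a decoding-time feature
}

# Weight vector, e.g. as produced by tuning (MERT); these values are invented.
weights = {
    "phi(e|f)": 0.2, "phi(f|e)": 0.2, "lex(e|f)": 0.1, "lex(f|e)": 0.1,
    "phrase_penalty": -0.1, "lm": 0.5, "distortion": 0.3,
}

def score(features, weights):
    """Inner product of feature values and weights: the partial-hypothesis score."""
    return sum(weights[name] * value for name, value in features.items())

print(score(features, weights))
```

Note that the phrase penalty enters this sum like any other feature; it is not a probability, so folding it into a product of the probability-like columns would not give you a meaningful "probability of correctness" either.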

- John Burger
  MITRE


_______________________________________________
Moses-support mailing list
[email protected]
http://mailman.mit.edu/mailman/listinfo/moses-support
