Hi,

since hypotheses in the same stack may cover different input words,
we need to account for the translation cost of the rest of the
sentence. This estimate is not an admissible heuristic in the A*
sense, but it serves the same purpose.

The cost estimates are also used to filter the phrase translation
table.
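
Roughly, the estimate is precomputed per input span with a simple
dynamic program, along the lines of Section 3.5 of the Pharaoh
manual. A minimal sketch (made-up names, not the actual Moses code):

#include <cstddef>
#include <vector>

// optionScore[i][j] is assumed to already hold the best weighted
// translation model + phrase-internal LM score of any translation
// option covering input words i..j (a very low value if none exists).
void ComputeFutureScores(const std::vector<std::vector<float> > &optionScore,
                         std::vector<std::vector<float> > &futureScore)
{
  const std::size_t n = optionScore.size();
  futureScore = optionScore;                      // spans covered by one option
  for (std::size_t len = 2; len <= n; ++len) {    // span length
    for (std::size_t i = 0; i + len <= n; ++i) {
      const std::size_t j = i + len - 1;
      for (std::size_t k = i; k < j; ++k) {       // try every split point
        const float split = futureScore[i][k] + futureScore[k + 1][j];
        if (split > futureScore[i][j])
          futureScore[i][j] = split;              // keep the best estimate
      }
    }
  }
}

During search, a hypothesis is then ranked by its actual score plus
the future scores of the input spans it has not covered yet, so that
hypotheses covering different words become comparable within a stack.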

-phi

On Tue, Feb 17, 2009 at 1:09 PM, Ergun Bicici <[email protected]> wrote:
>
> Hi Philipp,
>
> Thanks for the response. I was not asking why these scores are cached.
>
> My question is more about why it is calculated this way. Is this
> because of an admissible heuristic?
>
> Ergun Bicici
> Koc University
>
>
> On Wed, Feb 11, 2009 at 11:51 PM, Philipp Koehn <[email protected]> wrote:
>>
>> Hi,
>>
>> what is going on here is a caching of phrase-internal
>> n-gram model scores, so they do not have to be
>> re-computed. Think about the output phrase
>> "the very big and funny man" - if you use a trigram
>> language model, then the language model scores for the
>> words "big", "and", "funny", and "man" are the same, no
>> matter what the context is. So, these are cached.
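>>
>> Very roughly, the cached part looks like this (just a sketch with
>> made-up names - the Score() call is not the real Moses interface):
>>
>> #include <cstddef>
>>
>> // split a phrase's LM score into the part that can be cached
>> template <class LM, class Phrase>
>> float CachedPhraseScore(const LM &lm, const Phrase &phrase,
>>                         std::size_t order)
>> {
>>   float cached = 0.0f;
>>   // only words from position order-1 onward have their full
>>   // (order-1)-word history inside the phrase, so their n-gram
>>   // scores do not depend on what precedes the phrase
>>   for (std::size_t i = order - 1; i < phrase.size(); ++i)
>>     cached += lm.Score(phrase, i);   // hypothetical per-word LM score
>>   return cached;   // the first order-1 words are re-scored during
>>                    // decoding, once the real left context is known
>> }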
>>
>> -phi
>>
>> > LanguageModel::CalcScore adds the n-gram score to the returned full score:
>> > fullScore += ngramScore;
>> >
>> > But then in TranslationOption::CalcScore, this is subtracted back:
>> > m_futureScore = retFullScore - ngramScore
>> >     + m_scoreBreakdown.InnerProduct(StaticData::Instance().GetAllWeights())
>> >     - phraseSize * StaticData::Instance().GetWeightWordPenalty();
>> >
>> >
>> > - Is the n-gram order (3) fixed for the LM cost calculations
>> > used in the future cost estimate? It does not look like it is.
>> >
>> >
>> > It would be helpful if someone could clarify the
>> > future cost calculation further.
>> >
>> > Thanks,
>> > Ergun
>> >
>> >
>> > Ergun Bicici
>> > Koc University
>> >
>> >
>> > On Wed, Sep 24, 2008 at 5:46 PM, Philipp Koehn <[email protected]>
>> > wrote:
>> >>
>> >> Hi,
>> >>
>> >> the future cost estimate includes an estimate of the phrase
>> >> translation cost and the language model cost, but not
>> >> reordering costs. And yes, this is implemented as described
>> >> in the Pharaoh manual.
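>> >>
>> >> Schematically, the per-option estimate that seeds the future cost
>> >> table is just (a sketch with made-up names, not the actual code):
>> >>
>> >>   // translation model + language model only, no distortion term
>> >>   float EstimateOptionScore(float weightedTranslationScores,
>> >>                             float phraseInternalLmEstimate)
>> >>   {
>> >>     return weightedTranslationScores + phraseInternalLmEstimate;
>> >>   }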
>> >>
>> >> -phi
>> >>
>> >> On Wed, Sep 24, 2008 at 8:58 AM, Yee Seng Chan <[email protected]>
>> >> wrote:
>> >> > Hi list members,
>> >> >
>> >> >
>> >> >
>> >> > Inside TranslationOption.cpp::CalcScore(), m_futureScore is
>> >> > effectively:
>> >> > retFullScore - (PhraseSize*WordPenalty)
>> >> >
>> >> > (Kindly correct me if I'm wrong).
>> >> >
>> >> >
>> >> >
>> >> > What's the reasoning for using the above as futureScore? I know
>> >> > retFullScore is the n-gram score. Btw, does the approach here
>> >> > follow "Section 3.5 Future Cost Estimation" in the Pharaoh manual?
>> >> >
>> >> >
>> >> >
>> >> > Best regards,
>> >> >
>> >> > Yee Seng Chan
>> >> >
>> >> >
>> >> >
>>
>
>
_______________________________________________
Moses-support mailing list
[email protected]
http://mailman.mit.edu/mailman/listinfo/moses-support
