I think there are actually some problems to fix with the Perceptron that I
will get to on Thursday or so after I get a few major things off my plate.
Will explain then.

On Tue, May 24, 2011 at 3:44 PM, Jörn Kottmann <[email protected]> wrote:

> Hi all,
>
> after experimenting with the perceptron sequence training for
> the name finder I found an issue with the normalization of
> the perceptron model.
>
> The perceptron model's eval method outputs scores which
> indicate how likely an event is; when they are normalized,
> the scores should be between zero and one.
>
> I observed that the scores can also be Infinity, which does
> not work well for beam search; depending on the scores it outputs,
> it may not be able to find a sequence at all.
>
> Why is a score Infinity? The scores are normalized with the exponential
> function, which returns Infinity if the value is, for example, 850.
>
> Any suggestions on how we should fix the normalization?
>
> Thanks,
> Jörn
>
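Regarding the Infinity scores: here is a minimal sketch (not the actual
OpenNLP code, just an illustration under the assumption that the scores are
normalized with a plain softmax) of why the exponential overflows and how
subtracting the maximum score before exponentiating, the usual log-sum-exp
trick, keeps every normalized score finite and between zero and one:

public class SoftmaxOverflowDemo {

    // Naive normalization: exp() overflows to Infinity for large scores.
    static double[] naiveNormalize(double[] scores) {
        double[] probs = new double[scores.length];
        double sum = 0.0;
        for (int i = 0; i < scores.length; i++) {
            probs[i] = Math.exp(scores[i]);   // Math.exp(850) == Infinity
            sum += probs[i];
        }
        for (int i = 0; i < probs.length; i++) {
            probs[i] /= sum;                  // Infinity / Infinity == NaN
        }
        return probs;
    }

    // Stable normalization: subtract the maximum score first, so the
    // largest exponent is exp(0) == 1 and nothing overflows.
    static double[] stableNormalize(double[] scores) {
        double max = Double.NEGATIVE_INFINITY;
        for (double s : scores) {
            max = Math.max(max, s);
        }
        double[] probs = new double[scores.length];
        double sum = 0.0;
        for (int i = 0; i < scores.length; i++) {
            probs[i] = Math.exp(scores[i] - max);
            sum += probs[i];
        }
        for (int i = 0; i < probs.length; i++) {
            probs[i] /= sum;
        }
        return probs;
    }

    public static void main(String[] args) {
        double[] scores = {850.0, 840.0, 1.0};
        // Naive version: NaN/zero outputs, useless for beam search.
        System.out.println(java.util.Arrays.toString(naiveNormalize(scores)));
        // Stable version: finite values that sum to one.
        System.out.println(java.util.Arrays.toString(stableNormalize(scores)));
    }
}

Subtracting the max does not change the normalized ratios, since the common
factor exp(max) cancels in the numerator and denominator.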



-- 
Jason Baldridge
Assistant Professor, Department of Linguistics
The University of Texas at Austin
http://www.jasonbaldridge.com
http://twitter.com/jasonbaldridge
