[ https://issues.apache.org/jira/browse/OPENNLP-154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13044736#comment-13044736 ]

Jörn Kottmann commented on OPENNLP-154:
---------------------------------------

Can we do the following to fix the issue:
- Determine the maximum absolute prior value
- Divide all priors by the maximum value

This will result in priors ranging from -1 to 1, on which
we can safely perform the current normalization (see the sketch below).
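A minimal sketch of that scaling step in Java (the names PriorScaling,
scaleToUnitRange and prior are placeholders for illustration, not the
actual PerceptronModel code):

    public class PriorScaling {

        // Scale raw perceptron scores into [-1, 1] by dividing every
        // value by the maximum absolute value, as proposed above.
        static void scaleToUnitRange(double[] prior) {
            double maxAbs = 0;
            for (double p : prior) {
                maxAbs = Math.max(maxAbs, Math.abs(p));
            }
            if (maxAbs > 0) {
                for (int i = 0; i < prior.length; i++) {
                    prior[i] /= maxAbs;
                }
            }
        }

        public static void main(String[] args) {
            double[] prior = { -250.0, 40.0, 125.0 };
            scaleToUnitRange(prior);
            // prints [-1.0, 0.16, 0.5]
            System.out.println(java.util.Arrays.toString(prior));
        }
    }

The maxAbs > 0 check just avoids a division by zero when all priors are
zero; the current normalization can then be applied to the scaled values.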

> normalization in perceptron
> ---------------------------
>
>                 Key: OPENNLP-154
>                 URL: https://issues.apache.org/jira/browse/OPENNLP-154
>             Project: OpenNLP
>          Issue Type: Bug
>          Components: Maxent
>    Affects Versions: maxent-3.0.1-incubating
>            Reporter: Jason Baldridge
>            Assignee: Jason Baldridge
>            Priority: Minor
>             Fix For: tools-1.5.2-incubating, maxent-3.0.2-incubating
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> I found some issues with the way perceptron output was normalized. The 
> way it handled negative numbers was rather odd and didn't really work.
