[ https://issues.apache.org/jira/browse/OPENNLP-154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13045114#comment-13045114 ]

Jason Baldridge commented on OPENNLP-154:
-----------------------------------------

I guess that was my Scala coming through, e.g. where I would have written the loops:

      for (int oid = 0; oid < numOutcomes; oid++)
        prior[oid] = Math.exp(prior[oid]/maxPrior);

      double normal = 0.0;
      for (int oid = 0; oid < numOutcomes; oid++)
        normal += prior[oid];

      for (int oid = 0; oid < numOutcomes; oid++)
        prior[oid] /= normal;

As something along the lines of:

      val priorExp = prior map (outcomePrior => math.exp(outcomePrior / maxPrior))
      val normal = priorExp.sum
      val priorNorm = priorExp map (_ / normal)

Anyway, I've merged the first two loops into one (the final division by normal still needs its own pass, since normal isn't complete until the first loop finishes):

      double normal = 0.0;
      for (int oid = 0; oid < numOutcomes; oid++) {
        prior[oid] = Math.exp(prior[oid]/maxPrior);
        normal += prior[oid];
      }
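
For reference, here is the whole normalization as a self-contained sketch with the merged loop, using the same prior, maxPrior, and numOutcomes as in the snippets above (the normalize method wrapper itself is just hypothetical scaffolding for illustration):

      // Hypothetical wrapper around the loops above; prior[] holds raw
      // scores on entry and a normalized distribution on exit.
      static void normalize(double[] prior, double maxPrior) {
        int numOutcomes = prior.length;
        double normal = 0.0;
        // Exponentiate and accumulate the normalizer in a single pass.
        for (int oid = 0; oid < numOutcomes; oid++) {
          prior[oid] = Math.exp(prior[oid]/maxPrior);
          normal += prior[oid];
        }
        // The division needs its own pass: normal is only complete once
        // the first loop has finished.
        for (int oid = 0; oid < numOutcomes; oid++)
          prior[oid] /= normal;
      }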

Jason

> normalization in perceptron
> ---------------------------
>
>                 Key: OPENNLP-154
>                 URL: https://issues.apache.org/jira/browse/OPENNLP-154
>             Project: OpenNLP
>          Issue Type: Bug
>          Components: Maxent
>    Affects Versions: maxent-3.0.1-incubating
>            Reporter: Jason Baldridge
>            Assignee: Jason Baldridge
>            Priority: Minor
>             Fix For: tools-1.5.2-incubating, maxent-3.0.2-incubating
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> I found some issues with the way perceptron output was normalized. It 
> used a rather strange approach to handling negative numbers that didn't 
> really work.
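
To see why the old scheme broke down (my illustration, not from the report): raw perceptron scores can be negative, so normalizing by their plain sum can produce values outside [0, 1], whereas exponentiating first guarantees a positive normalizer:

      // Hypothetical example scores; not taken from the actual issue.
      double[] scores = {2.0, -1.0, -3.0};
      double rawSum = scores[0] + scores[1] + scores[2];   // -2.0, so
      // scores[0]/rawSum = -1.0: not a probability.
      double expSum = Math.exp(scores[0]) + Math.exp(scores[1])
                    + Math.exp(scores[2]);                 // always > 0
      // Math.exp(scores[i])/expSum lies in (0, 1) for every i.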
