[ https://issues.apache.org/jira/browse/OPENNLP-155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jason Baldridge resolved OPENNLP-155.
-------------------------------------
Resolution: Fixed
I changed this so that after each iteration, the training accuracy is scored
without changing the parameters. This gives a coherent value reported on every
iteration, and it also allows early stopping by checking whether the same
accuracy has been obtained some number of times (e.g. 4) in a row. (This could
also be done by checking that the parameter values haven't changed, which
would be better, but which I'd only want to do after refactoring.)
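A minimal sketch of the idea, for illustration only: the Example type, the weights array, and the STOP_AFTER threshold below are hypothetical stand-ins, not the actual perceptron trainer classes. The point is that accuracy is scored over the whole training set with the parameters frozen after each pass, and training stops once that number has repeated several times in a row.
{code:java}
import java.util.List;

public class PerceptronSketch {

    /** A labeled training example with sparse binary features (illustrative type). */
    static final class Example {
        final int[] features;
        final int label; // index into the label set
        Example(int[] features, int label) { this.features = features; this.label = label; }
    }

    double[][] weights; // weights[label][feature]

    /** Score training accuracy with the parameters held fixed. */
    double trainingAccuracy(List<Example> data) {
        int correct = 0;
        for (Example e : data) {
            if (predict(e.features) == e.label) correct++;
        }
        return (double) correct / data.size();
    }

    int predict(int[] features) {
        int best = 0;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (int label = 0; label < weights.length; label++) {
            double score = 0.0;
            for (int f : features) score += weights[label][f];
            if (score > bestScore) { bestScore = score; best = label; }
        }
        return best;
    }

    void train(List<Example> data, int numLabels, int numFeatures, int maxIterations) {
        weights = new double[numLabels][numFeatures];
        double prevAccuracy = -1.0;
        int repeats = 0;
        final int STOP_AFTER = 4; // e.g. 4 identical accuracies in a row, as in the comment above

        for (int iter = 0; iter < maxIterations; iter++) {
            // One perceptron pass: update weights on each mistake.
            for (Example e : data) {
                int guess = predict(e.features);
                if (guess != e.label) {
                    for (int f : e.features) {
                        weights[e.label][f] += 1.0;
                        weights[guess][f] -= 1.0;
                    }
                }
            }
            // Score accuracy over the whole training set with parameters frozen,
            // so the reported number does not depend on example order.
            double accuracy = trainingAccuracy(data);
            if (accuracy == prevAccuracy) {
                if (++repeats >= STOP_AFTER) break; // early stopping
            } else {
                repeats = 0;
                prevAccuracy = accuracy;
            }
        }
    }
}
{code}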
> unreliable training set accuracy in perceptron
> ----------------------------------------------
>
> Key: OPENNLP-155
> URL: https://issues.apache.org/jira/browse/OPENNLP-155
> Project: OpenNLP
> Issue Type: Improvement
> Components: Maxent
> Affects Versions: maxent-3.0.1-incubating
> Reporter: Jason Baldridge
> Assignee: Jason Baldridge
> Priority: Minor
> Original Estimate: 0h
> Remaining Estimate: 0h
>
> The training accuracies reported during perceptron training were much higher
> than final training accuracy, which turned out to be an artifact of the way
> training examples were ordered.
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira