On 9/28/2011 12:20 PM, Jörn Kottmann wrote:
> On 9/28/11 5:24 PM, [email protected] wrote:
>> Hi,
>>
>> I am testing the Chunker, but I'm failing to get the same results as in
>> 1.5.1.
>>
>> 1.5.1:
>>
>> Precision: 0.9255923572240226
>> Recall: 0.9220610430991112
>> F-Measure: 0.9238233255623465
>>
>> 1.5.2:
>>
>> Precision: 0.9257575757575758
>> Recall: 0.9221868187154117
>> F-Measure: 0.9239687473746113
>>
>>
>> Maybe it is related to this
>> https://issues.apache.org/jira/browse/OPENNLP-242
>>
>> Or it could be related to this:
>>
>> The results of the tagging performance may differ compared to the 1.5.1
>> release, since a bug was corrected in the event filtering.
>>
>> What should we do?
>>
>>
>
> I guess it is related to OPENNLP-242, I couldn't find the jira for the
> second one,
> but as far as I know it only affects the perceptron. Does anyone
> remember what this
> is about?
>
> Could you undo OPENNLP-242 and see if the result is identical again?
> You could also
> test the model from 1.5.2 with 1.5.1 to see if it was trained differently.
>
> Anyway, it doesn't look like we have a regression here.
>
> Jörn
Jörn,

I was going based on memory... I don't know, it may have been 1.5.1 that
fixed that bug.  The training works the same as in 1.5.1, but I also get
different results with the 1.5.2 series for the outcomes.  I couldn't
find any reason, other than my vague memory of a fix that someone made
that changed the counts for the events.  I couldn't find anything else
that would be affecting the outcomes of the evaluations.
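
For anyone who wants to reproduce the numbers, a minimal sketch of the
evaluation through the 1.5.x Java API looks roughly like this (the model
and data file names are placeholders, and the test data is assumed to be
in the CoNLL-2000 word/POS/chunk format):

import java.io.FileInputStream;
import java.io.InputStreamReader;

import opennlp.tools.chunker.ChunkSample;
import opennlp.tools.chunker.ChunkSampleStream;
import opennlp.tools.chunker.ChunkerEvaluator;
import opennlp.tools.chunker.ChunkerME;
import opennlp.tools.chunker.ChunkerModel;
import opennlp.tools.util.ObjectStream;
import opennlp.tools.util.PlainTextByLineStream;

public class ChunkerEvalSketch {

  public static void main(String[] args) throws Exception {
    // Load a trained chunker model (file name is a placeholder).
    ChunkerModel model = new ChunkerModel(new FileInputStream("en-chunker.bin"));

    // Read test data, one "word POS chunk" triple per line.
    ObjectStream<String> lines = new PlainTextByLineStream(
        new InputStreamReader(new FileInputStream("chunker-test.txt"), "UTF-8"));
    ObjectStream<ChunkSample> samples = new ChunkSampleStream(lines);

    // Evaluate and print precision, recall and F-measure.
    ChunkerEvaluator evaluator = new ChunkerEvaluator(new ChunkerME(model));
    evaluator.evaluate(samples);
    System.out.println(evaluator.getFMeasure());
  }
}

Running the same model and data against the 1.5.1 and 1.5.2 jars should
show whether the difference comes from training or from decoding.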

James
