The latest patch will be fine.
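
For anyone who wants to try the patch once it is in, here is a minimal
sketch of how the trainer might be selected through TrainingParameters.
The MAXENT_QN_EXPERIMENTAL algorithm name just follows the suggestion
below; the actual registration depends on what gets committed.

    import opennlp.tools.util.TrainingParameters;

    public class QnTrainerParamsSketch {
        public static void main(String[] args) {
            TrainingParameters params = new TrainingParameters();
            // Hypothetical: select the experimental quasi-Newton (L-BFGS)
            // trainer by the name proposed in this thread.
            params.put(TrainingParameters.ALGORITHM_PARAM, "MAXENT_QN_EXPERIMENTAL");
            // The usual maxent settings still apply.
            params.put(TrainingParameters.ITERATIONS_PARAM, "100");
            params.put(TrainingParameters.CUTOFF_PARAM, "5");
            // Pass params to the component's train(...) method as usual.
        }
    }
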
Thanks.
On Sep 13, 2012, 12:00 AM, "Jörn Kottmann" <kottm...@gmail.com> wrote:

> +1 to do that.
>
> We will name it MAXENT_QN_EXPERIMENTAL until
> the current problems are solved.
>
> Do you want to update the patch file to your latest version
> or should we commit the latest patch file attached to the issue?
>
> Jörn
>
>
> On 09/12/2012 04:05 PM, Hyosup Shim wrote:
>
>> Hi,
>>
>> Pulling it in as an experimental feature is fine.
>> I think pulling it in will be helpful, because the follow-up work
>> will be more manageable once the code is tracked by Subversion.
>>
>> Thanks.
>>
>>
>> 2012/9/12 Jörn Kottmann <kottm...@gmail.com>
>>
>>  Hello,
>>>
>>> should we pull in the patch and mark it as experimental?
>>> Any opinions about that?
>>>
>>> Thanks,
>>> Jörn
>>>
>>>
>>> On 08/26/2012 06:43 AM, Hyosup Shim wrote:
>>>
>>>  Hi,
>>>>
>>>> I've been working on implementing QNTrainer (an L-BFGS maxent parameter
>>>> estimator) over the last few weeks.
>>>>
>>>> My first implementation for the issue gave me about 0.80 precision on
>>>> the train/test set of the PerceptronPrepAttach unit test.
>>>> Since the other existing estimators in OpenNLP showed nearly the same
>>>> precision on that test set, I submitted the patch.
>>>>
>>>> But on the CoNLL-02 test set Jörn gave me, QNTrainer got disappointing
>>>> results (less than 0.05 precision and 0.30 recall).
>>>>
>>>> I tried to fix it, and failed. Could anyone give me a clue?
>>>>
>>>> OPENNLP-338 <https://issues.apache.org/jira/browse/OPENNLP-338>
>>>>
>>>>
>
