Hello, should we pull in the patch and mark it as experimental? Any opinions about that?
Thanks,
Jörn

On 08/26/2012 06:43 AM, Hyosup Shim wrote:
Hi, I've been working on implementing QNTrainer (an L-BFGS maxent parameter estimator) over the past few weeks. My first implementation gave about 0.80 precision on the train/test set of the PerceptronPrepAttach unit test. Since the other existing estimators in OpenNLP showed nearly the same precision on that test set, I submitted the patch. But on the CONLL02 test set Jorn gave me, QNTrainer produced disappointing results (less than 0.05 precision, 0.30 recall). I tried to fix it and failed. Could anyone give me a clue? OPENNLP-338 <https://issues.apache.org/jira/browse/OPENNLP-338>
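[Editor's note: for readers unfamiliar with the quasi-Newton method behind a trainer like QNTrainer, below is a minimal, hypothetical sketch of the standard L-BFGS two-loop recursion that turns a gradient into a search direction. It is plain illustrative Java, not the actual patch attached to OPENNLP-338, and none of the names are OpenNLP API.]

    // Hypothetical sketch of the L-BFGS two-loop recursion.
    // histS[i] = x_{i+1} - x_i, histY[i] = g_{i+1} - g_i (oldest first).
    public final class LbfgsDirection {

        /** Computes the search direction d = -H*g from the stored correction pairs. */
        static double[] direction(double[] gradient, double[][] histS, double[][] histY) {
            int m = histS.length;
            int n = gradient.length;
            double[] q = gradient.clone();
            double[] alpha = new double[m];

            // First loop: newest to oldest.
            for (int i = m - 1; i >= 0; i--) {
                double rho = 1.0 / dot(histY[i], histS[i]);
                alpha[i] = rho * dot(histS[i], q);
                axpy(-alpha[i], histY[i], q);            // q -= alpha_i * y_i
            }

            // Scale by gamma = (s.y)/(y.y) as the initial inverse-Hessian guess.
            if (m > 0) {
                double gamma = dot(histS[m - 1], histY[m - 1]) / dot(histY[m - 1], histY[m - 1]);
                for (int j = 0; j < n; j++) q[j] *= gamma;
            }

            // Second loop: oldest to newest.
            for (int i = 0; i < m; i++) {
                double rho = 1.0 / dot(histY[i], histS[i]);
                double beta = rho * dot(histY[i], q);
                axpy(alpha[i] - beta, histS[i], q);      // q += (alpha_i - beta) * s_i
            }

            // Negate to obtain a descent direction.
            for (int j = 0; j < n; j++) q[j] = -q[j];
            return q;
        }

        static double dot(double[] a, double[] b) {
            double s = 0;
            for (int i = 0; i < a.length; i++) s += a[i] * b[i];
            return s;
        }

        static void axpy(double a, double[] x, double[] y) {
            for (int i = 0; i < x.length; i++) y[i] += a * x[i];
        }
    }

A sign error or a bad initial scaling in this recursion is a common way to get a model that trains without crashing yet scores near zero, which may be worth checking in the patch.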