How were these models created in the first place? I mean, is there a script to
create all the models if we have the necessary corpora?
It would be nice to have more details about each model, such as its accuracy
and F1-score, plus info about how it was trained: number of iterations,
cutoff. Maybe the script could collect these data while training and prepare
an information page.
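As a rough sketch of what such a script might look like, here is how training and evaluation could be driven from the OpenNLP 1.5 command-line tools (the tool names are from the 1.5 CLI; the corpus file names and the model name below are placeholders, not actual project files):

```shell
# Hypothetical training script sketch -- train.txt, eval.txt and
# en-ner-person.bin are placeholder names, not real project artifacts.

# Train a name-finder model with explicit iterations and cutoff, so both
# values are known and can be recorded on the model's information page.
opennlp TokenNameFinderTrainer -lang en -encoding UTF-8 \
    -iterations 100 -cutoff 5 \
    -data train.txt -model en-ner-person.bin

# Evaluate the freshly trained model on held-out data; the evaluator
# prints precision, recall and F-measure, which the script could capture
# and write to the information page alongside the training parameters.
opennlp TokenNameFinderEvaluator -encoding UTF-8 \
    -model en-ner-person.bin -data eval.txt
```

The same pattern would repeat per model type (tokenizer, POS tagger, etc.), with the script collecting each tool's reported figures into one page.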

On Wed, Jan 19, 2011 at 7:34 PM, Jörn Kottmann <[email protected]> wrote:

> Hi all,
>
> as everyone knows OpenNLP needs statistical models. Over at sourceforge
> we simply had a model download page and offered the models there for
> download (we actually still do that).
>
> We might come up with a project-internal process to test the quality of new
> models before we release them. Besides that, are there any rules
> we have to follow? E.g. a vote on the incubator mailing list, like we
> would do
> for a release of OpenNLP itself.
>
> Any insights into this issue are very welcome; maybe I just need to ask on a
> different
> mailing list. I think we should start releasing the old models we created
> over at sourceforge
> at Apache, since the models do not need to be changed for the 1.5.1
> release.
>
> Thanks,
> Jörn
>
