Hi,

I've been playing around with the opennlp wrappers and will probably
make use of the entity detection, but I was wondering about the sentence
and token detection.

It seems that a statistical model-based approach may be overkill for
those tasks, and more of a pain to correct when it makes errors.

I was wondering if there's any reason not to use a rule-based
sentence/token detector that then feeds the OpenNLP POS and entity
model-based annotators?
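For what it's worth, here's roughly the kind of rule-based splitter I have in mind: a minimal, dependency-free sketch (the class name, the regex, and the boundary heuristic are all just my illustration, not anything from OpenNLP). It breaks after `.`, `!`, or `?` when followed by whitespace and a capital letter; real text would of course need abbreviation handling ("Dr.", "U.S.", etc.):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Hypothetical minimal rule-based sentence splitter.
// Splits on whitespace that follows ., !, or ? and precedes
// an uppercase letter. Illustrative only; no abbreviation handling.
public class RuleSentenceSplitter {
    private static final Pattern BOUNDARY =
        Pattern.compile("(?<=[.!?])\\s+(?=[A-Z])");

    public static List<String> split(String text) {
        List<String> sentences = new ArrayList<>();
        for (String s : BOUNDARY.split(text)) {
            String trimmed = s.trim();
            if (!trimmed.isEmpty()) {
                sentences.add(trimmed);
            }
        }
        return sentences;
    }
}
```

The output sentences could then be handed straight to the model-based POS/entity annotators, and a mis-split is a one-line rule fix rather than a retraining exercise.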

Any thoughts are welcome.

- Jonathan
