Hello,

you tokenized an example of my already-tokenized training data for the
MaxEnt tokenizer of OpenNLP. I had asked about transforming those texts
into input for the train method of the OpenNLP tokenizer.

Thanks for your reply.

Andreas

On 14.03.2013 12:40, Jim foo.bar wrote:
> ps: I don't speak German, but the output seems reasonable to
> me...depending on your use case, this could be enough (or not!)...

-- 
Andreas Niekler, Dipl. Ing. (FH)
NLP Group | Department of Computer Science
University of Leipzig
Johannisgasse 26 | 04103 Leipzig

mail: [email protected]
