+1 to this in general, though I'm not into over-architecting things initially. Would be great to get things humming and then start supporting more pluggability.
On Sat, Jan 7, 2012 at 7:33 AM, Jörn Kottmann <[email protected]> wrote:

> On 1/7/12 2:22 PM, Grant Ingersoll wrote:
>
>> Being able to take advantage of other classifiers seems like it would be
>> a really nice thing to be able to do. I'd love to put OpenNLP over Mahout
>> or others.
>>
>> Besides, for testing purposes, if you could plug in the existing
>> capability versus your new rewrite (in Scala), then you could easily
>> compare the two. I can't imagine the abstraction layer is more than a few
>> interfaces or abstract classes plus a bit of configuration/injection/fill
>> in the blank that allows one to specify the implementation.
>>
>
> Yes, we need pluggable classifiers and support for extensive
> modification/extension of our existing components. You are welcome to
> help us with that.
>
> One way of implementing this is to specify an (optional) factory class
> during training which is used to create a model (classifier). A second
> type of factory class could be specified to modify a component.
>
> These factory class names will be stored in our zip model package and can
> then be used to instantiate the extensions which are necessary to run the
> component.
>
> The disadvantage of this approach is that it might not work well with
> OSGi. A big advantage is that OpenNLP itself takes care of configuring
> everything, and the code needed to run an OpenNLP component is identical
> even if the model uses "custom" extensions. These must only be on the
> class path.
>
> Jörn

--
Jason Baldridge
Associate Professor, Department of Linguistics
The University of Texas at Austin
http://www.jasonbaldridge.com
http://twitter.com/jasonbaldridge
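
To make the factory-class idea above concrete, here is a rough sketch in Java.
All class and method names below are just illustrative, not the actual OpenNLP
API: the zip model package records a factory class name, and the component
instantiates that factory by reflection when the model is loaded.

import java.util.Map;

/** Hypothetical plug-in point: builds a classifier (maxent, Mahout, ...). */
interface ClassifierFactory {

  /** Trains a classifier from pre-extracted training events. */
  Classifier train(Iterable<Event> events, Map<String, String> trainParams);
}

/** Minimal classifier abstraction the components would code against. */
interface Classifier {

  /** Returns the outcome distribution for the given contextual features. */
  double[] eval(String[] context);

  /** Maps a distribution back to the most likely outcome label. */
  String getBestOutcome(double[] probs);
}

/** A single training event: an outcome plus its contextual features. */
final class Event {
  final String outcome;
  final String[] context;

  Event(String outcome, String[] context) {
    this.outcome = outcome;
    this.context = context;
  }
}

final class ClassifierFactoryLoader {

  /**
   * Instantiates the factory whose class name was stored in the zip model
   * package (e.g. in its manifest). The code that runs the component stays
   * identical whether the model uses the built-in classifier or a custom
   * one; the custom implementation only has to be on the class path.
   */
  static ClassifierFactory load(String factoryClassName)
      throws ReflectiveOperationException {
    Class<?> clazz = Class.forName(factoryClassName);
    return (ClassifierFactory) clazz.getDeclaredConstructor().newInstance();
  }
}

The manifest entry would then be something like
classifier.factory=com.example.MahoutClassifierFactory, and a custom jar only
needs to be on the class path. That is also where the OSGi concern Jörn
mentions shows up, since Class.forName assumes a single flat class loader
rather than per-bundle class loaders.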
