Thanks, but if we still have to use Hadoop, does anybody know whom I can ask
questions about this document?
https://cwiki.apache.org/confluence/display/UIMA/Running+UIMA+Apps+on+Hadoop

Thanks,
Yi-Wen

On Sat, Oct 10, 2015 at 4:05 AM, Lou DeGenaro <[email protected]>
wrote:

> Perhaps you have no choice, but if you do then consider DUCC
> https://uima.apache.org/doc-uimaducc-whatitam.html for UIMA scale-out.
>
> Lou.
>
> On Fri, Oct 9, 2015 at 5:54 PM, Yi-Wen Liu <[email protected]> wrote:
>
> > Hi all,
> >
> > I am a USC student working on Professor Mattmann's project, "Integration
> > of cTAKES/UIMA and Apache Hadoop for the Shangridocs system"; the
> > proposal is attached.
> > I have searched many relevant resources about running UIMA on Hadoop;
> > this is one of them:
> >
> >
> > https://cwiki.apache.org/confluence/display/UIMA/Running+UIMA+Apps+on+Hadoop
> >
> > but it only provides very general explanations.
> >
> > Has anybody gone through all the steps and succeeded? I hope you could
> > provide me with some examples. Thanks!
> >
> > Also, in the *Important Considerations* section:
> > "1. The jar file created should *shave* all the classes, descriptors of
> > the UIMA app along with the map/reduce and job main class"
> >
> > I think you mean "The jar file created should *have* all the classes...".
> > Is that correct?
> >
> >
> > Thanks,
> > Yi-Wen
> >
>
