Petr,
> > > (I'm somewhat tempted to cut my losses short (much too late),
> > > abandon UIMA flow control altogether, and use only simple pipelines
> > > with custom glue code to connect them together, as it seems like
> > > getting the flow to work in interesting cases is a huge time sink
> > > and, in retrospect, the effort could never pay off in any abstract
> > > advantage of easier distributed processing (where you probably end
> > > up having to chop up the pipeline manually anyway). I would probably
> > > never recommend that new UIMA users strive for a single pipeline
> > > with CAS multipliers/mergers, and I am beginning to consider these
> > > features an evolutionary dead end rather than an advantage. I'm not
> > > sure there even *are* any other real users of advanced flows besides
> > > me and DeepQA. I'll be glad to hear any opinions on this!)
> >
> > Definitely, the advantage of encapsulating analytics in standard UIMA
> > components is easy scalability via the vertical and horizontal
> > scale-out options offered by UIMA-AS and DUCC. Flexibility in chopping
> > up a pipeline into services as needed is another advantage.
>
> But as far as I understand, you still need to explicitly define and
> deploy the AEs that are to run on different machines anyway. So I'm not
> sure the extra value is really that large in the end?

Well, yes. But with DUCC only the definition needs to be done explicitly;
the deployment and replicated scale-out of all components are done
automatically.

Eddie
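
P.S. For concreteness: the "definition" here is just a small job
specification file handed to ducc_submit, and DUCC takes care of the
deployment and process replication across the cluster from there. A rough
sketch of what such a spec looks like (the reader/AE class names are
placeholders, and exact property names can differ between DUCC versions,
so check the DUCCBook for yours):

  # Submit with: ducc_submit -f example.job
  description = Example analysis job
  classpath = /home/me/myapp/lib/*
  # Collection reader that feeds work items to the job
  driver_descriptor_CR = com.example.MyCollectionReader
  # The analysis pipeline that DUCC replicates across the cluster
  process_descriptor_AE = com.example.MyAggregateAE
  # Memory (GB) per analysis process
  process_memory_size = 4
  # Pipeline threads per process
  process_thread_count = 2
  # Upper bound on replication; DUCC scales up to this automatically
  process_deployments_max = 20
  scheduling_class = normal

Nothing about placement or process count is wired into the pipeline
itself; DUCC starts, replicates, and reaps the analysis processes to
match the work remaining.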
