We tackled this same issue. Ultimately, since a UIMA process is usually
concerned with a single document, it made more structural sense to wrap the
UIMA task within a Mapper. That keeps the entire process within the functional
programming paradigm. We were also concerned with how fragile the UIMA
configuration can be; it was easier to control when embedded within a
Mapper. We took a similar approach with Mahout, though we separated the two jobs.
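To make the pattern above concrete, here is a minimal sketch of wrapping a UIMA analysis engine inside a Hadoop Mapper. This is not the poster's actual code: the descriptor path (analysisEngine.xml), the key/value types, and the output shape are all assumptions for illustration.

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.uima.UIMAFramework;
import org.apache.uima.analysis_engine.AnalysisEngine;
import org.apache.uima.analysis_engine.AnalysisEngineProcessException;
import org.apache.uima.jcas.JCas;
import org.apache.uima.jcas.tcas.Annotation;
import org.apache.uima.util.XMLInputSource;

// Hypothetical sketch: one Hadoop input record = one document, processed by
// an embedded UIMA analysis engine. Descriptor path and output format are
// assumptions, not a reference implementation.
public class UimaMapper extends Mapper<LongWritable, Text, Text, Text> {

    private AnalysisEngine engine;
    private JCas jcas;

    @Override
    protected void setup(Context context) throws IOException {
        try {
            // Instantiate the engine once per task, not once per record:
            // UIMA initialization is expensive, and doing it here confines
            // any fragile XML-configuration failures to task startup.
            XMLInputSource descriptor =
                new XMLInputSource("analysisEngine.xml"); // assumed path
            engine = UIMAFramework.produceAnalysisEngine(
                UIMAFramework.getXMLParser().parseResourceSpecifier(descriptor));
            jcas = engine.newJCas();
        } catch (Exception e) {
            throw new IOException("UIMA initialization failed", e);
        }
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        try {
            jcas.reset(); // reuse one CAS across documents
            jcas.setDocumentText(value.toString());
            engine.process(jcas);
            // Emit one record per annotation; the output shape is arbitrary.
            for (Annotation a : jcas.getAnnotationIndex()) {
                context.write(new Text(a.getType().getName()),
                              new Text(a.getCoveredText()));
            }
        } catch (AnalysisEngineProcessException e) {
            throw new IOException(e);
        }
    }

    @Override
    protected void cleanup(Context context) {
        if (engine != null) {
            engine.destroy();
        }
    }
}
```

The key design point from the paragraph above is that the Mapper stays a pure document-in, annotations-out function, while all of the brittle UIMA wiring lives in setup().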


On Feb 15, 2013, at 2:37 AM, Julien Nioche <[email protected]> 
wrote:

Hi

I suppose you could expose MapReduce jobs as UIMA components, but it would
certainly be easier to do it the other way round and use e.g. Behemoth [1] to
run the UIMA PEARs on MapReduce.

HTH

Julien

[1]  https://github.com/DigitalPebble/behemoth

On 13 February 2013 22:47, Som Satpathy <[email protected]> wrote:

> Hi all,
> 
> I have been toying around with UIMA pipelines for some time now. I was
> wondering if UIMA can support analysis components written as mahout
> map-reduce jobs as part of a UIMA pipeline ?
> 
> I would appreciate any help/hints/pointers.
> 
> Thanks,
> Som
> 



-- 
Open Source Solutions for Text Engineering

http://digitalpebble.blogspot.com/
http://www.digitalpebble.com
http://twitter.com/digitalpebble
