Hi,

> On 9. Dec 2020, at 15:03, John Doe <[email protected]> wrote:
> 
> I'm looking to scale out my NLP pipeline across a Spark cluster and was
> thinking UIMA-AS may work as a solution. 

you can find some resources from people who have been using UIMA with Spark
on the web, e.g. here:

- https://databricks.com/session/leveraging-uima-in-spark

- https://github.com/EDS-APHP/UimaOnSpark

- https://www.slideshare.net/DavidTalby/semantic-natural-language-understanding-with-spark-uima-machine-learned-ontologies

Maybe some of these help you. The common denominator seems to be that people
leverage uimaFIT to facilitate the creation and management of the analysis
engines in Java (as compared to juggling XML descriptors) and then let Spark
handle the scale-out.

Let us know if you end up using any of these approaches, or any others you
might find or develop.

Cheers,

-- Richard
