All,

We have a use case in which two Spark Streaming jobs need to run on the same EMR cluster.

I am thinking of allowing multiple streaming contexts and running them as two
separate spark-submit invocations with wait-for-app-completion set to false.
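Concretely, what I have in mind is something like the sketch below (class names and jar paths are placeholders; I'm assuming the YARN setting involved is `spark.yarn.submit.waitAppCompletion`):

```shell
# Launch both streaming jobs on the same EMR cluster in YARN cluster mode.
# With spark.yarn.submit.waitAppCompletion=false, spark-submit returns as soon
# as YARN accepts the application, so the second job can be submitted right away.
spark-submit \
  --master yarn --deploy-mode cluster \
  --conf spark.yarn.submit.waitAppCompletion=false \
  --class com.example.StreamJobOne /path/to/job-one.jar

spark-submit \
  --master yarn --deploy-mode cluster \
  --conf spark.yarn.submit.waitAppCompletion=false \
  --class com.example.StreamJobTwo /path/to/job-two.jar
```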

With this approach, however, failure detection and monitoring become opaque,
so it doesn't seem like a sound option for production.

Is there a recommended strategy for running this in production on EMR with an
appropriate failure-detection and monitoring setup?

-- 
Thanks,
Pandeeswaran
