Hi Spark Devs,

We are using Aurora (http://aurora.apache.org/) as our Mesos framework for
running stateless services. We would like to use Aurora to deploy big-data
and batch workloads as well. To do this, we have forked Spark and
implemented the ExternalClusterManager trait.
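For context, a minimal sketch of what such an integration looks like is below. The ExternalClusterManager trait and its four methods are Spark's pluggable cluster-manager SPI; the AuroraClusterManager class name, the "aurora://" master-URL scheme, and the AuroraSchedulerBackend are assumptions of this sketch, not our actual implementation.

```scala
import org.apache.spark.SparkContext
import org.apache.spark.scheduler.{ExternalClusterManager, SchedulerBackend,
  TaskScheduler, TaskSchedulerImpl}

// Hypothetical sketch of an Aurora-backed cluster manager plugin.
private[spark] class AuroraClusterManager extends ExternalClusterManager {

  // Claim master URLs of the form "aurora://..." (assumed scheme).
  override def canCreate(masterURL: String): Boolean =
    masterURL.startsWith("aurora://")

  override def createTaskScheduler(sc: SparkContext,
                                   masterURL: String): TaskScheduler =
    new TaskSchedulerImpl(sc)

  override def createSchedulerBackend(sc: SparkContext,
                                      masterURL: String,
                                      scheduler: TaskScheduler): SchedulerBackend =
    // Aurora-specific SchedulerBackend (hypothetical, not shown here).
    new AuroraSchedulerBackend(scheduler.asInstanceOf[TaskSchedulerImpl],
      sc, masterURL)

  override def initialize(scheduler: TaskScheduler,
                          backend: SchedulerBackend): Unit =
    scheduler.asInstanceOf[TaskSchedulerImpl].initialize(backend)
}
```

Spark discovers ExternalClusterManager implementations through java.util.ServiceLoader, so the class is registered by listing its fully qualified name in META-INF/services/org.apache.spark.scheduler.ExternalClusterManager on the classpath.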

The reason for doing this, rather than running Spark on Mesos directly, is
to leverage the existing roles and quotas provided by Aurora for admission
control, as well as Aurora features such as priority and preemption.
Additionally, we would like Aurora to be the only deployment/orchestration
system that our users interact with.

We have a working POC where Spark launches jobs through Aurora as the
cluster manager. Is this something that could be merged upstream? If so, I
can create a design document and an associated JIRA ticket.

Thanks
Karthik
