Hello *,

We are trying to build some batch jobs using Spark on Mesos. Mesos offers
two main modes for deploying a Spark job:

1. Fine-grained
2. Coarse-grained
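For context, the mode is selected with the standard spark.mesos.coarse
property; the master URL below is just a placeholder:

    # spark-defaults.conf
    spark.master        mesos://zk://host:2181/mesos  # illustrative master URL
    spark.mesos.coarse  false   # false = fine-grained, true = coarse-grained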


When we run Spark jobs in fine-grained mode, Spark accepts as many of the
offers from Mesos as it can and runs the job on them. Running batch jobs in
this mode can easily starve the high-priority jobs in the cluster, and one
job can end up using a large part of the cluster. There is no way to specify
a maximum limit on the resources that one particular framework may use.

The problem with the coarse-grained mode is that Spark reserves the given
amount of resources at start-up and then runs the job on those resources.
This becomes a problem because we have to reserve more resources than the
job might need so that it never fails. That leads to wasted resources and
effectively gives us static partitioning of the Mesos cluster.
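To illustrate the static reservation, a coarse-grained submission looks
roughly like this (the master URL, numbers, and jar name are made up; the
properties themselves are standard Spark settings):

    spark-submit \
      --master mesos://zk://host:2181/mesos \   # illustrative master URL
      --conf spark.mesos.coarse=true \
      --conf spark.cores.max=64 \
      --conf spark.executor.memory=8g \
      our-batch-job.jar

The 64 cores and 8g per executor stay claimed for the whole run, whether
the job actually uses them or not.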

Can anyone share their experience managing multiple batch Spark jobs on a
Mesos cluster?

-- 

Regards,
Akash Mishra.


"Its not our abilities that make us, but our decisions."--Albus Dumbledore
