Hi All,

Is there a way to dynamically fair-share resources between the frameworks
running on top of Mesos? What I would like is, when running for example
MapReduce and Spark together, to allocate 50% of the resources to each of
them (if they need them), and when running MapReduce, Spark, and Storm, 33%
of the resources to each.
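For context, here is roughly the kind of setup I have in mind, sketched with the master's `--roles` and `--weights` flags (I'm assuming each framework can be registered under its own role; the role names below are made up):

```shell
# Hypothetical setup: one role per framework, all with equal weight,
# so the allocator shares resources evenly between the roles.
mesos-master \
  --roles="mapreduce,spark,storm" \
  --weights="mapreduce=1,spark=1,storm=1"
```

Each framework would then register under its own role, so that the allocator treats them as equal-weight consumers.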

What I observe is that when I run Spark in fine-grained mode together with
MapReduce, MR gradually takes over all the resources: as Spark's tasks
finish, and before Spark manages to stage new ones, Hadoop gets more and
more resources until it actually takes over the whole cluster. The opposite
happens when I run Spark in coarse-grained mode: in that case Spark is
faster at staging its tasks and manages to get 100% of the cluster before
Hadoop gets any.

I checked this:
http://mesos.apache.org/documentation/latest/framework-rate-limiting/
which might help, but what I really want is to share the resources equally
between the registered frameworks. Any ideas?
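(For reference, the rate limiting from that page is configured on the master with a JSON document, roughly like the sketch below; the principals are made up. As far as I can tell it throttles how fast a framework can send messages to the master, not how many resources it is offered, which is why it doesn't quite fit.)

```shell
# Hypothetical rate-limits config passed to the master; this caps
# message QPS per framework principal, not resource shares.
cat > /tmp/rate_limits.json <<'EOF'
{
  "limits": [
    {"principal": "spark",  "qps": 5.0},
    {"principal": "hadoop", "qps": 5.0}
  ]
}
EOF
mesos-master --rate_limits="file:///tmp/rate_limits.json"
```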

thanks!

Stratos
