Guaranteeing fairness in such situations requires preemption of running
tasks/executors, which Mesos does not yet provide.

For now, you can try reserving a minimum amount of resources for each
framework. Note, however, that this may reduce your cluster's efficiency
if you over-estimate the minimum reservation needed, because reserved
resources cannot be used by other roles. In the future, to avoid this
efficiency issue, other roles will be able to use reserved resources in a
preemptible way.
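As a rough sketch of the reservation approach, you can statically reserve
resources on each agent using the `name(role):value` syntax of the
`--resources` flag, and have each framework register under the matching
role. The role names, host, and amounts below are placeholders, not a
recommendation:

```shell
# Hypothetical example: reserve half the advertised CPUs/memory for a
# "spark" role and half for a "hadoop" role on this agent.
# Static reservation syntax: name(role):value, entries separated by ';'
mesos-slave \
  --master=zk://zk1:2181/mesos \
  --resources="cpus(spark):4;mem(spark):8192;cpus(hadoop):4;mem(hadoop):8192"

# Each framework then needs to register with the corresponding role,
# e.g. for Spark: --conf spark.mesos.role=spark
```

Offers for reserved resources will only go to frameworks registered in
that role, which is also why over-reserving hurts utilization.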

On Thu, Mar 19, 2015 at 3:39 PM, Stratos Dimopoulos <
[email protected]> wrote:

> Hi All,
>
> Is there a way to dynamically fair-share resources between the frameworks
> that are running on top of Mesos? For example, when running MapReduce and
> Spark together, I would like to allocate 50% of the resources to each of
> them (if they need them), and when running MapReduce, Spark, and Storm,
> 33% of the resources to each.
>
> What I observe is that when I run Spark in fine-grained mode together with
> MapReduce, MR will gradually take over all the resources: as Spark's task
> trackers finish, and before Spark manages to stage new ones, Hadoop gets
> more and more resources until it actually takes over the whole cluster.
> The opposite happens when I run Spark in coarse-grained mode. In that case
> Spark stages its task trackers faster and manages to get 100% of the
> cluster before Hadoop gets any.
>
> I checked this:
> http://mesos.apache.org/documentation/latest/framework-rate-limiting/
> that might help, but what I really want is to share the resources
> equally between registered frameworks. Any ideas?
>
> thanks!
>
> Stratos
>
