I see. This is good to know for current development work. Thanks for
clarifying, Guangya and Kevin.
Elizabeth Lingg
> On Jun 22, 2016, at 3:02 AM, Guangya Liu wrote:
Hi Elizabeth,
Just FYI, there is a JIRA tracking resource revocation here:
https://issues.apache.org/jira/browse/MESOS-4967
And I'm also working on a short-term solution that excludes scarce
resources from the allocator (https://reviews.apache.org/r/48906/); with this
feature and Kevin's
As an FYI, preliminary support to work around this issue for GPUs will
appear in the 1.0 release
https://reviews.apache.org/r/48914/
This doesn't solve the problem of scarce resources in general, but it
will at least keep non-GPU workloads from starving out GPU-based
workloads on GPU-capable
Thanks; looking forward to the discussion and review of your document. The main
use case I see here is that some of our frameworks will want to request GPU
resources, and we want to make sure that those frameworks are able to
successfully launch tasks on agents with those resources. We want to
Had some discussion with Ben M about the following two solutions:
1) Ben M: Create sub-pools of resources based on machine profile and
perform fair sharing / quota within each pool, plus a framework
capability GPU_AWARE
to enable the allocator to filter out scarce resources for some frameworks.
2)
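As a rough sketch of how the capability check in option 1 might behave (Python used purely for illustration; `GPU_AWARE` is the capability name from this thread, while the function and data shapes are my assumptions, not Mesos code):

```python
SCARCE = {"gpus"}  # assumption: operator-designated scarce resource names

def filter_offer(resources, framework_capabilities):
    """Hypothetical allocator filter: strip scarce resources from offers
    to frameworks that do not advertise the GPU_AWARE capability."""
    if "GPU_AWARE" in framework_capabilities:
        return dict(resources)
    return {name: qty for name, qty in resources.items() if name not in SCARCE}

offer = {"cpus": 4, "mem": 8192, "gpus": 1}
# A framework without the capability never sees the GPUs.
assert filter_offer(offer, set()) == {"cpus": 4, "mem": 8192}
# A GPU_AWARE framework receives the full bundle.
assert filter_offer(offer, {"GPU_AWARE"}) == offer
```

The design choice here is that opt-in is per framework, so legacy frameworks keep receiving offers unchanged.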
Thanks Du Fan. So you mean that we should have some clear rules, in the
document or somewhere else, to guide cluster admins on which resources should
be classified as scarce, right?
On Sat, Jun 18, 2016 at 2:38 AM, Du, Fan wrote:
On 2016/6/17 7:57, Guangya Liu wrote:
@Fan Du,
Currently, I think that scarce resources should be defined by the cluster
admin, who can specify them via a flag when the master starts up.
This is not what I mean.
IMO, it's not the cluster admin's call to decide what resources
Thanks all for the input here!
@Hans van den Bogert,
Yes, I agree with Alex R: Mesos currently uses coarse-grained allocation, and
the minimum unit is a single host, so you will always get cpu
and memory.
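A minimal sketch of the coarse-grained behavior described above (illustrative Python, not Mesos source): the agent's whole available pool is offered as one bundle, so cpu and memory always accompany the GPUs.

```python
# Assumed example inventory for a single GPU agent.
agent_resources = {"cpus": 8, "mem": 16384, "gpus": 2}

def make_offer(available):
    """Coarse-grained allocation: the minimum unit is the agent's
    entire available pool, offered as one bundle."""
    return dict(available)

offer = make_offer(agent_resources)
# A GPU framework still receives cpu and memory alongside the GPUs.
assert "cpus" in offer and "mem" in offer and "gpus" in offer
```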
@Alex,
Yes, I was only listing the sorters here; ideally, I think that an ideal
@Fan,
In the community meeting a question was raised around which frameworks
might be ready to use this.
Can you provide some more context for immediate use cases on the framework
side?
—
*Joris Van Remoortere*
Mesosphere
On Fri, Jun 17, 2016 at 12:51 AM, Du, Fan wrote:
A couple of rough thoughts in the early morning:
a. Is there any quantitative way to decide whether a resource is scarce?
I mean, how do we help the operator decide whether to use this
functionality when deploying Mesos?
b. Scarce resources extend from GPUs to, to name a few, Xeon Phi, FPGAs,
+1 for leveraging `requestResources`. I've also toyed with this idea with
allocator groups offline. IMO, giving schedulers a way to specify resource
envelope size and/or constraints is an easier way to manage the resources.
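A sketch of what a resource envelope might look like from the scheduler's side (illustrative Python only; the dict shapes, role name, and `fits` helper are assumptions for discussion, not the Mesos `requestResources` protobuf API):

```python
# Hypothetical resource envelope a scheduler could declare up front,
# instead of waiting passively for offers.
request = {
    "role": "ml-training",  # hypothetical role name
    "resources": {"cpus": 8, "mem": 32768, "gpus": 2},
}

def fits(request, agent_available):
    """Check whether an agent can satisfy the requested envelope."""
    return all(agent_available.get(name, 0) >= qty
               for name, qty in request["resources"].items())

# An agent with GPUs can satisfy the envelope; one without cannot.
assert fits(request, {"cpus": 16, "mem": 65536, "gpus": 4})
assert not fits(request, {"cpus": 16, "mem": 65536})
```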
On Thu, Jun 16, 2016 at 9:39 AM, Alex Rukletsov wrote:
We definitely don't want a 2-step scenario. In this case, a framework may
not be able to launch its tasks on GPU resources, while still holding them.
However, having a dedicated sorter for scarce resources does not mean we
should allocate them separately. Also, I'm not sure Guangya intended to
Hi all,
Maybe I'm missing context on how something like a GPU as a resource should
work, but I assume the general scenario is that the GPU host application
would still need memory and cpu(s) co-located on the node.
In the case of,
> 4) scarceSorter include 1 agent with (gpus:1)
Thanks Joris; sorry, I forgot the case where the scarce resources were also
requested via quota.
But on second thought, it's not only quota: reserved resources and revocable
resources can also be scarce resources, so we may need to handle all of those
cases.
I think that in the future, the
With this 4th sorter approach, how does quota work for scarce resources?
—
*Joris Van Remoortere*
Mesosphere
On Thu, Jun 16, 2016 at 11:26 AM, Guangya Liu wrote:
Hi Ben,
The pre-condition for four stage allocation is that we need to put
different resources to different sorters:
1) roleSorter includes only non-scarce resources.
2) quotaRoleSorter includes only non-revocable & non-scarce resources.
3) revocableSorter includes only revocable & non-scarce
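To make the split concrete, here is a sketch of which sorters a given resource would land in (Python for illustration; the sorter names come from this thread, but the routing function itself is an assumption, not allocator code):

```python
SCARCE = {"gpus"}  # assumption: the set of scarce resource names

def route(resource_name, revocable):
    """Route a resource to the sorters described in the thread:
    scarce -> scarceSorter; non-scarce -> roleSorter, plus either
    quotaRoleSorter (non-revocable) or revocableSorter (revocable)."""
    if resource_name in SCARCE:
        return ["scarceSorter"]
    sorters = ["roleSorter"]
    sorters.append("revocableSorter" if revocable else "quotaRoleSorter")
    return sorters

assert route("gpus", revocable=False) == ["scarceSorter"]
assert route("cpus", revocable=False) == ["roleSorter", "quotaRoleSorter"]
assert route("cpus", revocable=True) == ["roleSorter", "revocableSorter"]
```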
Thanks Ben M for the initiative.
You already defined the scarce resources and described the problem clearly.
Just to recap, the requirements we want to resolve are as follows:
1) Scarce resources should not be treated as the dominant resource during
allocation, so that users can continue using non-scarce resources
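Requirement 1 can be sketched as excluding scarce resources when computing the DRF dominant share (illustrative Python; the totals, allocation, and resource names below are made-up examples, not Mesos internals):

```python
TOTALS = {"cpus": 100.0, "mem": 1000.0, "gpus": 2.0}  # assumed cluster totals
SCARCE = {"gpus"}

def dominant_share(allocation, exclude_scarce=True):
    """DRF dominant share: the max fractional usage across resources,
    optionally ignoring scarce resources so holding all the GPUs does
    not dominate a framework's share."""
    shares = {r: allocation.get(r, 0.0) / TOTALS[r]
              for r in TOTALS
              if not (exclude_scarce and r in SCARCE)}
    return max(shares.values(), default=0.0)

alloc = {"cpus": 5.0, "mem": 100.0, "gpus": 2.0}
# Counting GPUs, this framework holds 100% of them and would starve;
# excluding them, its share is driven by memory (10%).
assert dominant_share(alloc, exclude_scarce=False) == 1.0
assert dominant_share(alloc) == 0.1
```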
Hi Ben,
As a long-term goal, instead of creating sub-pools, what about adding a new
sorter to handle **scarce** resources? The current logic in the allocator is
divided into two stages: allocation for quota, and allocation for non-quota
resources.
I think that the future logic in the allocator would be divided
I wanted to start a discussion about the allocation of "scarce" resources.
"Scarce" in this context means resources that are not present on every
machine. GPUs are the first example of a scarce resource that we support as
a known resource type.
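Under that definition, scarcity is a property of the cluster inventory rather than of the resource type itself; a sketch of detecting it (illustrative Python, with made-up agent names):

```python
# Assumed inventory: which resource names each agent advertises.
agents = {
    "agent1": {"cpus", "mem"},
    "agent2": {"cpus", "mem", "gpus"},
    "agent3": {"cpus", "mem"},
}

def scarce_resources(agents):
    """Per the thread's definition, a resource is scarce if it exists
    somewhere in the cluster but is not present on every machine."""
    everywhere = set.intersection(*agents.values())
    anywhere = set.union(*agents.values())
    return anywhere - everywhere

assert scarce_resources(agents) == {"gpus"}
```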
Consider the behavior when there are the following