[
https://issues.apache.org/jira/browse/YARN-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330719#comment-15330719
]
Karthik Kambatla commented on YARN-5215:
----------------------------------------
We happen to have a similar low-latency framework running alongside (and
occasionally on) YARN, so I am quite sympathetic to the problem.
In the past, I have wondered whether it makes sense to have a separate
node-level agent that these other (white-listed) services could register with
to get updates on each other's usage. That way, each framework is aware of
the others running on the cluster and resources can be handed off more
gracefully.
If we are indeed looking to steal resources from these other services, I would
think those resources should be allocated only to OPPORTUNISTIC containers,
and this is likely better handled through YARN-1011. For instance, in your
earlier example, we would set yarn.nodemanager.resource.memory-mb to 14 GB,
which is allocated to GUARANTEED containers, and YARN would additionally
allocate up to 2 GB of OPPORTUNISTIC containers based on how much of that
headroom the other frameworks are actually using.
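For concreteness, a minimal yarn-site.xml sketch of that split (values are
illustrative, assuming a 16 GB node with roughly 2 GB consumed by the
co-located service):

```xml
<!-- yarn-site.xml: cap what YARN hands out as GUARANTEED containers at
     14 GB, leaving ~2 GB of headroom for the co-located service.
     14336 = 14 GB expressed in MB; adjust for the actual node. -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>14336</value>
</property>
```

Any OPPORTUNISTIC allocation on top of that would then be sized from observed
utilization, per YARN-1011.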
And, as Jason mentioned earlier (IIUC), YARN-5202 provides this without the
support for special OPPORTUNISTIC containers. Am I missing something?
> Scheduling containers based on external load in the servers
> -----------------------------------------------------------
>
> Key: YARN-5215
> URL: https://issues.apache.org/jira/browse/YARN-5215
> Project: Hadoop YARN
> Issue Type: Improvement
> Reporter: Inigo Goiri
> Attachments: YARN-5215.000.patch, YARN-5215.001.patch
>
>
> Currently YARN runs containers in the servers assuming that they own all the
> resources. The proposal is to use the utilization information in the node and
> the containers to estimate how much is consumed by external processes and
> schedule based on this estimation.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)