[
https://issues.apache.org/jira/browse/YARN-10169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17241917#comment-17241917
]
Wangda Tan commented on YARN-10169:
-----------------------------------
Thanks [~zhuqi] for working on this. We're currently making a number of changes
to the scheduler so that FairScheduler users can migrate to CapacityScheduler
more easily. FairScheduler supports mixing weights with absolute-valued max
capacities (such as X memory, Y vcores) for each queue.
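For reference, a FairScheduler allocation-file entry of that kind might look like the sketch below (the queue name and values are made up, not taken from a real cluster):
{code:xml}
<?xml version="1.0"?>
<allocations>
  <!-- Hypothetical queue: a relative weight combined with an absolute max -->
  <queue name="a">
    <weight>2.0</weight>
    <maxResources>100000 mb, 100 vcores</maxResources>
  </queue>
</allocations>
{code}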
I'm actually confused about the behavior in CapacityScheduler after seeing this
JIRA. For a queue structure like the one below:
{code:java}
root
  \
   a
  / \
 a1  a2
    /  \
 a2_1  a2_2
{code}
Do we allow mixed max capacities like:
a.max (absolute), a1.max (percentage), a2.max (absolute), a2_1.max (percentage)?
How do we calculate a2_1.max (a percentage below an absolute value) today?
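For concreteness, a hypothetical mixed configuration of that shape could look like this (property shorthand and values are made up, not taken from a real setup):
{code:java}
a.maximum-capacity         = [memory=102400, vcores=100]   (absolute)
a.a1.maximum-capacity      = 50                             (percentage)
a.a2.maximum-capacity      = [memory=61440, vcores=60]      (absolute)
a.a2.a2_1.maximum-capacity = 40                             (percentage below absolute)
{code}
The question is what effective maximum a2_1 ends up with in that last case.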
cc: [~pbacsko], [~snemeth], [~sunilg], [~bteke]
> Mixed absolute resource value and percentage-based resource value in
> CapacityScheduler should fail
> --------------------------------------------------------------------------------------------------
>
> Key: YARN-10169
> URL: https://issues.apache.org/jira/browse/YARN-10169
> Project: Hadoop YARN
> Issue Type: Bug
> Reporter: Wangda Tan
> Assignee: zhuqi
> Priority: Blocker
> Attachments: YARN-10169.001.patch, YARN-10169.002.patch,
> YARN-10169.003.patch
>
>
> To me this is a bug: if a queue has capacity set to a float (percentage) and
> maximum-capacity set to an absolute value, the existing logic allows it.
> For example:
> {code:java}
> queue.capacity = 0.8
> queue.maximum-capacity = [mem=x, vcore=y] {code}
> We should throw an exception when a queue is configured like this.
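A minimal sketch of the kind of fail-fast validation the description asks for (this is not the actual YARN-10169 patch; the class and method names below are hypothetical):
{code:java}
// Hypothetical helper, not the actual YARN-10169 change: reject a queue whose
// capacity and maximum-capacity are expressed in different modes.
public final class CapacityModeValidator {
  private CapacityModeValidator() {
  }

  public static void validate(String queuePath,
      boolean capacityIsAbsolute, boolean maxCapacityIsAbsolute) {
    if (capacityIsAbsolute != maxCapacityIsAbsolute) {
      throw new IllegalArgumentException("Queue " + queuePath
          + " mixes percentage-based and absolute resource values between"
          + " capacity and maximum-capacity, which should not be allowed.");
    }
  }
}
{code}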