[ https://issues.apache.org/jira/browse/YARN-6384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16135808#comment-16135808 ]

Miklos Szegedi commented on YARN-6384:
--------------------------------------

Thank you for the patch, [~lolee_k]. I have a few comments.
{code}
if (strictResourceUsageMode) {
  limits = getOverallLimits(containerCPU);
} else {
  limits = getOverallLimits(containerCPU *
      maxResourceUsagePercentInNonstrictMode);
}
{code}
We need to maintain backward compatibility. Strict resource usage classically 
means that the limit is enforced by CPU quota instead of CPU shares, and your 
change should follow this rule as well. I would suggest setting your 
percentage to 100% by default and applying it in all cases when strict mode is 
set, but only when strict mode is set.
Also, CgroupsLCEResourcesHandler.java is deprecated. Please add your change to 
CGroupsCpuResourceHandlerImpl as well.
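
For illustration, a minimal sketch of that suggestion (the class, method, and 
default below are assumptions made for the example, not the actual patch):
{code}
// Sketch: apply the percentage only when strict mode is enabled, with a
// default of 100% so existing strict-mode deployments are unaffected.
public class CpuLimitSketch {
  // Hypothetical default, expressed as a percentage of the container's vcores.
  static final float DEFAULT_MAX_CPU_USAGE_PERCENT = 100f;

  static float effectiveCpu(float containerCPU, boolean strictResourceUsageMode,
      float maxCpuUsagePercent) {
    if (strictResourceUsageMode) {
      // Strict mode: enforce via CPU quota, scaled by the percentage.
      // At the 100% default this matches the classic strict behavior exactly.
      return containerCPU * maxCpuUsagePercent / 100f;
    }
    // Non-strict mode: no quota is set; CPU shares alone govern usage.
    return containerCPU;
  }

  public static void main(String[] args) {
    System.out.println(effectiveCpu(4f, true, 100f));  // 4.0: unchanged default
    System.out.println(effectiveCpu(4f, true, 150f));  // 6.0: allows 50% overage
  }
}
{code}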


> Add configuration to set max cpu usage when strict-resource-usage is false 
> with cgroups
> --------------------------------------------------------------------------------------
>
>                 Key: YARN-6384
>                 URL: https://issues.apache.org/jira/browse/YARN-6384
>             Project: Hadoop YARN
>          Issue Type: Improvement
>            Reporter: dengkai
>         Attachments: YARN-6384-0.patch
>
>
> When using cgroups on YARN, if 
> yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage is 
> false, a user may get much more CPU time than expected based on the vcores. 
> There should be an upper limit even when resource usage is not strict, such 
> as a percentage by which a user can exceed what the vcores promise. I think 
> it's important in a shared cluster.
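
As context for the percentage idea in the description, a rough sketch of how 
vcores plus an overage percentage could map onto CFS bandwidth settings (the 
names and helper below are illustrative assumptions, not YARN's actual 
implementation; only the cpu.cfs_quota_us/cpu.cfs_period_us semantics are 
standard cgroup behavior):
{code}
// Sketch: translate promised vcores plus an overage percentage into CFS
// bandwidth values, the mechanism cgroups uses to cap CPU time.
public class CfsQuotaSketch {
  // Common kernel default for cpu.cfs_period_us.
  static final int CFS_PERIOD_US = 100000;

  // cpu.cfs_quota_us = period * allowed cores; -1 would mean "no limit".
  static long quotaFor(int vcores, float maxUsagePercent) {
    float allowedCores = vcores * maxUsagePercent / 100f;
    return (long) (CFS_PERIOD_US * allowedCores);
  }

  public static void main(String[] args) {
    // 2 vcores capped at 150%: 300000us of CPU per 100000us period.
    System.out.println(quotaFor(2, 150f));
  }
}
{code}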


