Thanks, Frederic, for providing more than I asked for. This explanation is enough 
to understand how CPU shares work at run time, and how requests and limits differ.

So basically, limits don't help much unless we want to throttle. Since CPU is a 
compressible resource, is it better to use only requests, and not depend on or 
control via limits, for cluster planning and efficient utilization of CPU 
(requests) configuration?

Will CPU scheduling honor QoS, or does QoS play any role in the explanation 
below, i.e. the Guaranteed, Burstable, and BestEffort classes? Since pod B has 
limits, will it get more preference than pod A, which doesn't have limits?


--
Srinivas Kotaru
From: Frederic Giloux <[email protected]>
Date: Friday, March 23, 2018 at 12:21 AM
To: Srinivas Naga Kotaru <[email protected]>
Cc: users <[email protected]>
Subject: Re: Limits for CPU worth? Vs benefits

Srinivas,
Let me write the scenarios in a different way if you don't mind:
- pod A requests 7 cores and no limit
- pod B requests 1 core and 3 cores as limit
Node 1 has more than 8 cores available (additional cores may have been reserved 
for system and kubelet processes but we will ignore that) and no other pod 
running on it. Pod A and B can both be scheduled on node 1 (the requests fit). 
When there is contention pod A will get 7 cores and pod B 1 core as requests 
are guaranteed (and the scheduler takes care of not having more requests than 
cores available).
When there is no contention extra cycles will get allocated proportionally to 
the request ratio. Let say there is 1 additional core free. pod A will get 7/8 
out of 9 cores. pod B will get 1/8*9. Pod A uses 7.875 and pod B 1.125.
Now let say that the node has plenty of cores: 32.
According to the CPU shares configured pod A should get 7/8*32=28 cores and pod 
B should get 1/8*32=4 cores. But wait we set limit to 3 cores for pod B and it 
gets throttled to not consume more than the 3 cores. What happens to the cycles 
of the remaining 1 core? Idle? No, pod A can freely use them as CPU share is 
what is guaranteed to the process not a limit.
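As a reference, here is a minimal sketch of what the two pod specs could look 
like (names and image are made up for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: pod-a                   # hypothetical name
spec:
  containers:
  - name: app
    image: example/app:latest   # placeholder image
    resources:
      requests:
        cpu: "7"                # 7 cores requested, no limit set
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-b                   # hypothetical name
spec:
  containers:
  - name: app
    image: example/app:latest   # placeholder image
    resources:
      requests:
        cpu: "1"                # 1 core requested
      limits:
        cpu: "3"                # hard cap: throttled above 3 cores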
I hope this helps.
Regards,
Frédéric


On Thu, Mar 22, 2018 at 9:09 PM, Srinivas Naga Kotaru (skotaru) 
<[email protected]> wrote:
Frederic, thanks for the quick reply. You are touching on the QoS tiers.

Let us take a scenario to understand me better. Pod A has 7000 shares as requests 
(--cpu-shares) but no limits. Pod B has 1000 shares as requests and 3000 as 
limits. In a CPU contention situation, how do scheduling and QoS work in the 
Kubernetes world?

Will pod A get more CPU time than pod B? Or does pod B get its guaranteed CPU 
slices first, before the CPU scheduler serves pod A, since pod A doesn't have 
limits?


--
Srinivas Kotaru
From: Frederic Giloux <[email protected]>
Date: Thursday, March 22, 2018 at 9:22 AM
To: Srinivas Naga Kotaru <[email protected]>
Cc: users <[email protected]>
Subject: Re: Limits for CPU worth? Vs benefits

Hi Srinivas,
here are a couple of scenarios where I find setting limits useful:
- When I do performance tests and want to compare results between runs, setting 
CPU limits = CPU requests gives me confidence that the CPU cycles available 
between the runs were more or less the same (a minimal spec sketch follows this 
list). If you don't set a limit, or set a higher limit, anything between the two 
values is best effort and depends on what is happening on the node, including 
resources consumed by other pods.
- You may also set CPU limits when you want to differentiate between applications 
that are allowed to consume the "extra" CPU cycles, the ones that haven't been 
"requested", or to limit how much "extra" these applications can get. An example 
is batch processing, which can use lots of CPU cycles but where you may not mind 
whether it finishes a bit earlier or later.
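For that first case, a minimal, hypothetical spec sketch with CPU limits equal to 
CPU requests (name, image, and values are only illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: perf-test               # hypothetical name
spec:
  containers:
  - name: app
    image: example/app:latest   # placeholder image
    resources:
      requests:
        cpu: "2"                # example value
      limits:
        cpu: "2"                # equal to the request, so every run sees the same CPU budget

If memory requests and limits are set equal as well, the pod also lands in the 
Guaranteed QoS class.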
I hope this helps.
Regards,
Frédéric

On Thu, Mar 22, 2018 at 4:59 PM, Srinivas Naga Kotaru (skotaru) 
<[email protected]> wrote:
CPU requests are enforced using shares. Even in a contention situation, the 
kernel still schedules based on shares, so each pod gets its own share; this 
never leads to a CPU bottleneck or high load on the nodes. Basically, it never 
causes a noisy neighbour problem.

I understand that CPU limits are enforced using CPU quota and help with throttling.
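To make that concrete, here is a rough sketch of how the kubelet translates the 
resource fields into cgroup (v1) settings; the exact values can vary by version, 
and pod B's numbers are reused only as an illustration:

resources:
  requests:
    cpu: "1"     # -> cpu.shares ~= 1024 (relative weight used under contention)
  limits:
    cpu: "3"     # -> cpu.cfs_quota_us = 300000 with cpu.cfs_period_us = 100000 (hard throttle)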

The question, or argument, is: do we still need limits when CPU shares are 
already doing their job well in both non-contention and contention situations? 
What extra benefit do limits bring?

I need some clarity in the context of the noisy neighbour problem: do limits help 
prevent a node from going down, or prevent one or a few bad pods from disturbing 
every pod on the node?

Basically, what is the benefit of having or not having CPU limits for pods?

Sent from my iPhone




--
Frédéric Giloux
Principal App Dev Consultant
Red Hat Germany

[email protected]     M: +49-174-172-4661

redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
________________________________________________________________________
Red Hat GmbH, http://www.de.redhat.com/ Sitz: Grasbrunn,
Handelsregister: Amtsgericht München, HRB 153243
Geschäftsführer: Paul Argiry, Charles Cachera, Michael Cunningham, Michael 
O'Neill



_______________________________________________
users mailing list
[email protected]
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
