Yep, we are doing pretty much the same.
We create a LimitRange with defaults for every project.
We create a quota for every project and use it as a cost model for CPU, memory,
and storage.
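For illustration, a rough sketch of what such per-project defaults and a quota
could look like; the names and numbers here are made up, not our actual values:

apiVersion: v1
kind: LimitRange
metadata:
  name: project-defaults          # hypothetical name
spec:
  limits:
  - type: Container
    defaultRequest:               # used when a container declares no request
      cpu: 100m
      memory: 256Mi
    default:                      # used when a container declares no limit
      cpu: '1'
      memory: 512Mi
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: project-quota             # hypothetical name
spec:
  hard:
    requests.cpu: '4'
    requests.memory: 8Gi
    requests.storage: 100Gi       # total storage requested by PVCs
    limits.cpu: '8'
    limits.memory: 16Gi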
We configured cluster overcommit at 50% for memory and 10% for CPU:
ClusterResourceOverride:
  configuration:
    apiVersion: v1
    kind: ClusterResourceOverrideConfig
    cpuRequestToLimitPercent: '10'
    memoryRequestToLimitPercent: '50'
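With those percentages the override derives requests from limits: for example,
a container with a 1-core CPU limit gets a 100m request, and one with a 1Gi
memory limit gets a 512Mi request (illustrative numbers).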
We monitor resource usage and add more nodes when needed.
We are trying to tune CPU a little, as CPU is a bit confusing and is a
compressible resource.
Thanks for the information. It was really useful for clearing up some doubts
around CPU requests vs. limits.
--
Srinivas Kotaru
From: Frederic Giloux <[email protected]>
Date: Friday, March 23, 2018 at 11:55 AM
To: Srinivas Naga Kotaru <[email protected]>
Cc: users <[email protected]>
Subject: Re: Limits for CPU worth? Vs benefits
In the previous example we looked at setting a limit on the pod with the
lower request, but you may rather want one on the pod with the higher request.
In that extreme scenario (a node with 32 cores), pod A gained 21 cores on top
of its request whereas pod B gained only 3 when no limit was set. You may find
that out of proportion and want to cap what a pod with a high request may get.
Another aspect is resource fragmentation, CPU in this case. Basically you get
better density (you can place more pods/containers, not just in number but also
as a sum of requested CPU) with smaller CPU request/limit chunks: the
remainders are smaller. A cluster administrator may want to address this aspect
by creating a limit range with a max CPU and potentially a maxLimitRequestRatio,
as sketched below. If a max CPU limit range is set, then you have to set a CPU
limit.
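For illustration only, a rough sketch of such a limit range; the name and
values are made up, not a recommendation:

apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-caps                  # hypothetical name
spec:
  limits:
  - type: Container
    max:
      cpu: '2'                    # CPU limit may not exceed 2 cores
    maxLimitRequestRatio:
      cpu: '10'                   # limit may be at most 10x the request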
That said, my advice would be not to over-engineer it. Start with simple and
tolerant settings: make requests mandatory or provide defaults for all your
pods or containers (otherwise the scheduler has a hard time), and have quotas
so that a single project does not starve your complete cluster. Monitor your
cluster's resource consumption and its patterns, and react where needed.
Regards,
Frédéric