The scheduler is configurable. Most likely, your scheduler is configured to
use "MostRequestedPriority" (see #1), which tries to pack all the pods
onto the smallest number of nodes (so you can shut down the excess nodes).
This priority function is typically used when running on a cloud provider,
where you can lower your infrastructure costs by deleting unnecessary
nodes/VMs.

#1 https://docs.openshift.com/container-platform/3.6/admin_guide/scheduling/scheduler.html#other-priorities
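For reference, the scheduler policy is a JSON file read by the master (in 3.6 it is commonly /etc/origin/master/scheduler.json, though the path depends on your install). A minimal sketch of a policy that spreads pods out (LeastRequestedPriority) instead of packing them might look like the following -- the predicate and priority lists here are illustrative, so compare against your cluster's current policy rather than copying this verbatim:

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {"name": "MatchNodeSelector"},
    {"name": "PodFitsResources"},
    {"name": "PodFitsHostPorts"}
  ],
  "priorities": [
    {"name": "LeastRequestedPriority", "weight": 1},
    {"name": "BalancedResourceAllocation", "weight": 1},
    {"name": "SelectorSpreadPriority", "weight": 1}
  ]
}
```

Note that LeastRequestedPriority scores nodes by *requested* resources, not actual usage, so pods without resource requests look "free" to the scheduler regardless of which priority function is active. You also need to restart the master(s) for a policy change to take effect.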


On Thu, May 24, 2018 at 1:00 AM, Lionel Orellana <[email protected]> wrote:

> Hi All,
>
> We have 20 worker nodes all with the same labels (they all have the same
> specs). Our pods don't have any node selectors so all nodes are available
> to all pods.
>
> What we are seeing is the scheduler constantly placing pods on nodes that
> are already heavily utilised (in terms of memory and/or CPU) while other
> nodes have plenty of capacity.
>
> We have placed resource requests on some pods and they continue to be
> placed on the busy nodes.
>
> How can we help the scheduler make better decisions?
>
> -bash-4.2$ oc version
> oc v3.6.0+c4dd4cf
> kubernetes v1.6.1+5115d708d7
> features: Basic-Auth GSSAPI Kerberos SPNEGO
>
> openshift v3.6.173.0.21
> kubernetes v1.6.1+5115d708d7
>
>
>
> Thanks
>
> _______________________________________________
> users mailing list
> [email protected]
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
