Re: scheduler policy to spread pods

2018-07-09 Thread Tim Dudgeon
Hi, thanks for that suggestion. I took a look, but it seems it isn't 
quite what's needed.
It looks like pod (anti)affinity is a binary thing. It works for the 
first pod on a node with/without the specified label, but it doesn't 
ensure an even spread when you schedule multiple pods.


In my case I scheduled pods using an anti-affinity 
preferredDuringSchedulingIgnoredDuringExecution rule applied across 3 
nodes, and that made sure that the first 3 pods went to separate nodes as 
expected. But after that the rule no longer had any effect (no node 
satisfied the rule, and because the rule was 'preferred' rather than 
'required' each pod was scheduled without any further preference). So 
by the time I had 6 pods running, 3 of them were on one node, 2 
on another and only 1 on the third.


So I suppose the anti-affinity rule is working as designed, but it's 
not designed to ensure an even spread once there are more pods than 
nodes.



On 04/07/18 12:16, Joel Pearson wrote:

Here’s an OpenShift reference for the same thing.

https://docs.openshift.com/container-platform/3.6/admin_guide/scheduling/pod_affinity.html
On Wed, 4 Jul 2018 at 9:14 pm, Joel Pearson 
<japear...@agiledigital.com.au> wrote:


You’re probably after pod anti-affinity?

https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity

That lets you tell the scheduler that the pods aren’t allowed to
be on the same node for example.
On Wed, 4 Jul 2018 at 8:51 pm, Tim Dudgeon <tdudgeon...@gmail.com> wrote:

I've got a process that fires up a number of pods (bare pods, not
backed by a replication controller) to execute a computationally
demanding job in parallel.
What I find is that the pods do not spread effectively across the
available nodes. In my case I have a node selector that restricts
execution to 3 nodes, and the pods run mostly on the first node, a few
run on the second node, and none run on the third node.

I know that I could specify cpu resource requests and limits to help
with this, but for other reasons I'm currently unable to do this.

It looks like this is controllable through the scheduler, but the
options for controlling this look pretty complex.
Could someone advise on how best to allow pods to spread evenly across
nodes rather than execute preferentially on one node?



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: scheduler policy to spread pods

2018-07-04 Thread Joel Pearson
Here’s an OpenShift reference for the same thing.

https://docs.openshift.com/container-platform/3.6/admin_guide/scheduling/pod_affinity.html
On Wed, 4 Jul 2018 at 9:14 pm, Joel Pearson 
<japear...@agiledigital.com.au> wrote:

> You’re probably after pod anti-affinity?
> https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
>
> That lets you tell the scheduler that the pods aren’t allowed to be on the
> same node for example.
On Wed, 4 Jul 2018 at 8:51 pm, Tim Dudgeon <tdudgeon...@gmail.com> wrote:
>
>> I've got a process that fires up a number of pods (bare pods, not backed
>> by a replication controller) to execute a computationally demanding job in
>> parallel.
>> What I find is that the pods do not spread effectively across the
>> available nodes. In my case I have a node selector that restricts
>> execution to 3 nodes, and the pods run mostly on the first node, a few
>> run on the second node, and none run on the third node.
>>
>> I know that I could specify cpu resource requests and limits to help
>> with this, but for other reasons I'm currently unable to do this.
>>
>> It looks like this is controllable through the scheduler, but the
>> options for controlling this look pretty complex.
>> Could someone advise on how best to allow pods to spread evenly across
>> nodes rather than execute preferentially on one node?
>>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: scheduler policy to spread pods

2018-07-04 Thread Joel Pearson
You’re probably after pod anti-affinity?
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity

That lets you tell the scheduler that the pods aren’t allowed to be on the
same node for example.
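
For example, something roughly like this under the pod spec makes it a 
hard requirement (the app label is purely illustrative; use whatever 
label your pods share):

    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: my-job            # illustrative; match a label the pods carry
          topologyKey: kubernetes.io/hostname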
On Wed, 4 Jul 2018 at 8:51 pm, Tim Dudgeon <tdudgeon...@gmail.com> wrote:

> I've got a process that fires up a number of pods (bare pods, not backed
> by a replication controller) to execute a computationally demanding job in
> parallel.
> What I find is that the pods do not spread effectively across the
> available nodes. In my case I have a node selector that restricts
> execution to 3 nodes, and the pods run mostly on the first node, a few
> run on the second node, and none run on the third node.
>
> I know that I could specify cpu resource requests and limits to help
> with this, but for other reasons I'm currently unable to do this.
>
> It looks like this is controllable through the scheduler, but the
> options for controlling this look pretty complex.
> Could someone advise on how best to allow pods to spread evenly across
> nodes rather than execute preferentially on one node?
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


scheduler policy to spread pods

2018-07-04 Thread Tim Dudgeon
I've got a process that fires up a number of pods (bare pods, not backed 
by a replication controller) to execute a computationally demanding job in 
parallel.
What I find is that the pods do not spread effectively across the 
available nodes. In my case I have a node selector that restricts 
execution to 3 nodes, and the pods run mostly on the first node, a few 
run on the second node, and none run on the third node.


I know that I could specify cpu resource requests and limits to help 
with this, but for other reasons I'm currently unable to do this.
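(By that I mean something along these lines in each container spec, with 
purely illustrative values, which I can't currently add:

    resources:
      requests:
        cpu: "1"        # illustrative value
      limits:
        cpu: "1"
)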


It looks like this is controllable through the scheduler, but the 
options for controlling this look pretty complex.
Could someone advise on how best to allow pods to spread evenly across 
nodes rather than execute preferentially on one node?


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users