The scheduler does try to spread the pods across nodes, as you say. But
that is just one "signal"; other things are taken into account too, such
as the pods' availability zone (on AWS, for example, it also tries to
spread across AZs), the nodes' available resources, and so on.

So the default will try to do that, weighing those other variables as
well. But there is no hard requirement that two pods never share a node,
so it can (and will) happen.

You can enforce that requirement in several ways. Using the hostPort
option is, I think, the simplest: two pods requesting the same host port
cannot land on the same node. You can also use other functionality the
default scheduler provides (like pod anti-affinity, as has been
mentioned), or even write your own scheduler for that deployment.
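
For example, a minimal sketch of pod anti-affinity on a deployment's pod
template (the names and the app label are illustrative, not from your
setup; requiredDuringSchedulingIgnoredDuringExecution makes it a hard
rule, keyed on the node's hostname):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          # Hard rule: never schedule two pods carrying this label
          # onto the same node (topology = the node's hostname).
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - my-app
            topologyKey: kubernetes.io/hostname
      containers:
      - name: my-app
        image: my_repo:v2.1.2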

But scheduling is hard, and unless you have a hard requirement that
co-location can never happen, I think you probably want just the default.

The default is quite reasonable. With more resources per node you are
more likely to get the spread you want, and you can also tune the
deployment's update options so an old pod is killed before the new one is
created. But in my experience, the default works just fine.
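
Concretely, that kill-then-create behaviour comes from the rollout
strategy; a sketch, reusing the same illustrative names as above
(maxSurge: 0 means the rollout never runs more pods than replicas, so an
old pod is terminated before its replacement is created and its node's
capacity frees up first):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      # Kill one old pod first, and never create extra pods beyond
      # `replicas` during the rollout. `type: Recreate` is the blunt
      # alternative: kill all old pods first, at the cost of downtime.
      maxUnavailable: 1
      maxSurge: 0
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my_repo:v2.1.2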

Also, take into account that adding the hard requirement can have some
not-so-nice side effects. For example, if two pods can never run on the
same node, then when a node crashes you had better still have enough
nodes left to run every pod on a different node, or some pods won't be
scheduled at all. This, of course, is not a problem if you really never
want them on the same node.
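
If you want the spreading without that failure mode, a middle ground is
to make the anti-affinity a soft preference instead of a hard rule. A
sketch, replacing the affinity stanza from the earlier example (the
scheduler spreads when it can, but still schedules everything when it
can't):

      affinity:
        podAntiAffinity:
          # Soft rule: prefer putting pods with this label on
          # different nodes, but allow co-location if there is
          # no other way to schedule the pod.
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - my-app
              topologyKey: kubernetes.io/hostname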

On Monday, December 4, 2017, <mderos...@gmail.com> wrote:

> Hi all!
>
> I would like to know if there is a way to force Kubernetes, during a
> deploy, to use every node in the cluster.
> The question is due to some attempts I have made where I noticed a
> situation like this:
>
> - a cluster of 3 nodes
> - I update a deployment with a command like: kubectl set image
> deployment/deployment_name container_name=my_repo:v2.1.2
> - Kubernetes updates the cluster
>
> At the end I execute kubectl get pods and I notice that 2 pods have
> been deployed on the same node.
> So after the update, the cluster has this configuration:
>
> - one node with 2 pods
> - one node with 1 pod
> - one node without any pod (totally without any workload)
>
>
> Thanks for any suggestion
