I'm running a Kubernetes 1.6.1 cluster on GKE and recently tried to add 
anti-affinity to some of our deployments. I'm running a 3-node cluster of a 
piece of software, currently as 3 separate Deployments (though it would be a 
good fit for a StatefulSet).

As this software needs a 2-node quorum, I want to schedule each of these pods 
on a separate node, so that the cluster doesn't go down if a node dies 
(including during a rolling upgrade of the node pool).
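The rule I added looks roughly like the following (the `app: my-app` label is 
a placeholder for our actual labels):

```yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-app-1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          # Hard requirement: never co-schedule two pods carrying
          # the app=my-app label onto the same node.
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - my-app
            topologyKey: kubernetes.io/hostname
      containers:
      - name: my-app
        image: my-app:latest
```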

Problem is, when trying to schedule these pods after adding the anti-affinity 
rule, no node was big enough to fit them, presumably because the node a pod 
was previously running on was now excluded by the rule.

This would normally cause the cluster autoscaler to kick in, but in this 
scenario it did not. It claims that adding more nodes would not make the new 
pod fit (even though I know it would, as a fresh node would not be excluded by 
the anti-affinity rule).

Is this a known issue? Is there a way of working around it, or a better way of 
expressing my condition (keeping the instances on separate nodes)?

Cheers,
Kristian

-- 
You received this message because you are subscribed to the Google Groups 
"Kubernetes user discussion and Q&A" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to kubernetes-users+unsubscr...@googlegroups.com.
To post to this group, send email to kubernetes-users@googlegroups.com.
Visit this group at https://groups.google.com/group/kubernetes-users.
For more options, visit https://groups.google.com/d/optout.