Re: [kubernetes-users] Finding a way to get stable public IP for outbound connections

2017-05-01 Thread 'EJ Campbell' via Kubernetes user discussion and Q
Does it require a single stable IP address or a range? You could possibly have 
a set of dedicated nodes for your outbound proxy; that way you can still use 
the Kubernetes machinery for deployment, pod lifecycles, etc., while presenting 
a stable CIDR to the outside world.
-EJ
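One way to sketch the dedicated-node idea is with a node label plus a nodeSelector (and a toleration, if the nodes are tainted for exclusive use). All names here (the `pool=egress-proxy` label, the `dedicated` taint, the image) are illustrative assumptions, not from the thread:

```yaml
# Hypothetical sketch: pin an outbound-proxy Deployment to dedicated nodes
# so all egress traffic leaves from a known, stable set of node IPs.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: egress-proxy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: egress-proxy
  template:
    metadata:
      labels:
        app: egress-proxy
    spec:
      nodeSelector:
        pool: egress-proxy   # label applied to the dedicated proxy nodes
      tolerations:           # only needed if the nodes are also tainted
      - key: dedicated
        value: egress-proxy
        effect: NoSchedule
      containers:
      - name: proxy
        image: example/forward-proxy:1.0   # any forward proxy; image is illustrative
```

The dedicated nodes would be labeled with `kubectl label nodes <node> pool=egress-proxy`, and their public IPs (or a CIDR covering them) are what you whitelist externally.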
On Monday, May 1, 2017, 7:12:40 PM PDT, Evan Jones 
wrote:
It turns out I've just run into a requirement to have a stable outbound 
IP address as well. In looking into this, I think we will likely need some kind 
of proxy server running outside of Kubernetes. This will allow services to "opt 
in" to this special handling, rather than doing it for everything in the cluster. 
It seems like the simplest way to make this work.
Honestly, this seems like a rare enough case that I'm not sure Kubernetes 
should really support anything "natively" to solve this problem (at least not 
at the moment, when there are more common things that still need work).



-- 
You received this message because you are subscribed to the Google Groups 
"Kubernetes user discussion and Q" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to kubernetes-users+unsubscr...@googlegroups.com.
To post to this group, send email to kubernetes-users@googlegroups.com.
Visit this group at https://groups.google.com/group/kubernetes-users.
For more options, visit https://groups.google.com/d/optout.



Re: [kubernetes-users] Finding a way to get stable public IP for outbound connections

2017-05-01 Thread Evan Jones
It turns out I've just run into a requirement to have a stable outbound IP 
address as well. In looking into this, I think we will likely need some kind of 
proxy server running outside of Kubernetes. This will allow services to "opt 
in" to this special handling, rather than doing it for everything in the 
cluster. It seems like the simplest way to make this work.

Honestly, this seems like a rare enough case that I'm not sure 
Kubernetes should really support anything "natively" to solve this problem 
(at least not at the moment, when there are more common things that still 
need work).
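A minimal sketch of the "opt in" approach: only pods that need the stable IP set the standard proxy environment variables pointing at the external proxy, while everything else in the cluster is untouched. The proxy hostname, port, image, and NO_PROXY values below are hypothetical:

```yaml
# Hypothetical sketch of per-service opt-in: only pods that set these
# variables route outbound HTTP(S) traffic through the external proxy,
# which provides the stable source IP.
apiVersion: v1
kind: Pod
metadata:
  name: needs-stable-ip
spec:
  containers:
  - name: app
    image: example/app:1.0   # illustrative image
    env:
    - name: HTTP_PROXY       # honored by most HTTP client libraries
      value: "http://proxy.example.internal:3128"
    - name: HTTPS_PROXY
      value: "http://proxy.example.internal:3128"
    - name: NO_PROXY         # keep cluster-internal traffic direct
      value: ".cluster.local,10.0.0.0/8"
```

This only helps clients that respect the proxy environment variables; anything speaking raw TCP would need a different mechanism.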




[kubernetes-users] Cluster auto scaling does not kick in when using anti affinity

2017-05-01 Thread kristian
I'm running a Kubernetes 1.6.1 cluster on GKE and recently tried to add anti-affinity 
to some of our deployments. I'm running a 3-node cluster of a piece of 
software, currently using 3 separate deployments (though it would be a good fit 
for a StatefulSet).

As this software needs a 2-node quorum, I want to schedule each of these pods 
on a separate node, so that I can avoid having that cluster go down in the event 
of nodes dying (including during a rolling upgrade of the cluster).

The problem is that when trying to schedule these pods after adding anti-affinity, 
the existing Kubernetes nodes were not big enough to fit them, presumably since 
the node a pod was previously running on was now excluded by the rule.

This would normally cause the Kubernetes cluster autoscaler to kick in, but in this 
scenario it did not. The autoscaler claims that adding more nodes would not make 
the new pod fit (even though I know it would, as a freshly added node would not 
be excluded by the anti-affinity rule).

Is this a known issue? Is there a way of working around it, or a better way of 
expressing my condition (keeping the instances on separate nodes)?

Cheers,
Kristian
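For reference, the "one replica per node" condition Kristian describes is typically expressed with required pod anti-affinity, along these lines (the `app: quorum-member` label is an illustrative assumption):

```yaml
# Hypothetical sketch: require each replica carrying the app=quorum-member
# label to be scheduled on a different node.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: quorum-member
      topologyKey: kubernetes.io/hostname   # spread domain: one pod per node
```

This snippet goes under the pod template's `spec`; `topologyKey: kubernetes.io/hostname` is what makes the rule operate at node granularity.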
