An update: I was able to do this with the standard add-access-config mechanism 
here:
https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address

No guarantees around when GKE will rebuild those nodes and lose the node IPs, 
but it works for now.
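
For reference, the sequence I used looks roughly like the following. This is 
a sketch, not exact commands: the address name, region, zone, and node name 
are placeholders you'd substitute with your own. GKE node VMs name their 
default access config "external-nat".

    # Reserve a regional static external IP in the cluster's region
    gcloud compute addresses create my-node-ip --region us-central1

    # Look up the literal IP that was reserved
    gcloud compute addresses describe my-node-ip --region us-central1 \
        --format='value(address)'

    # Drop the node's ephemeral external IP, then re-add an access config
    # bound to the reserved address (node name here is hypothetical)
    gcloud compute instances delete-access-config gke-mycluster-pool-abc1 \
        --zone us-central1-a --access-config-name "external-nat"
    gcloud compute instances add-access-config gke-mycluster-pool-abc1 \
        --zone us-central1-a --access-config-name "external-nat" \
        --address 203.0.113.7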

On Sunday, May 20, 2018 at 12:13:30 PM UTC-7, mi...@percy.io wrote:
> Evan,
> 
> Did you figure out a way to assign reserved static IP addresses to a few 
> specific nodes in a GKE pool?
> 
> We are also fine with doing this manually for a couple of specific nodes for 
> the time being (rather than building a NAT gateway), but I cannot find 
> reliable information about how to assign a reserved static IP to a GKE node.
> 
> Cheers,
> Mike
> 
> On Wednesday, May 3, 2017 at 12:13:42 PM UTC-7, Evan Jones wrote:
> > Correct, but at the moment we aren't using auto-resizing, and I've never 
> > seen nodes get removed without us taking some manual action (e.g. 
> > upgrading Kubernetes releases or similar). Are there automated events that 
> > can delete a VM and remove it from the pool without us having done 
> > something? I've certainly observed machines rebooting, but that preserves 
> > dedicated IPs. I can live with taking some manual configuration action 
> > periodically when we change something about our cluster, but I would like 
> > to know if there is something I've overlooked. Thanks!
> > 
> > On Wed, May 3, 2017 at 12:20 PM, Paul Tiplady <pa...@qwil.co> wrote:
> > 
> > The public IP is not stable in GKE. You can manually assign a static IP 
> > to a GKE node, but if the node goes away (e.g. your cluster was resized) 
> > the IP will be detached, and you'll have to reassign it manually. I'd 
> > guess the same is true for AWS managed equivalents like CoreOS's 
> > CloudFormation scripts.
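> > 
> > One way to notice when that has happened (a sketch, not GKE-specific 
> > tooling): a reserved address that has been detached from its VM shows 
> > status RESERVED rather than IN_USE, so orphaned addresses can be spotted 
> > with:
> > 
> >     gcloud compute addresses list --filter="status=RESERVED"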
> > 
> > 
> > On Wed, May 3, 2017 at 8:52 AM, Evan Jones <evan....@triggermail.io> wrote:
> > 
> > As Rodrigo described, we are using Container Engine. I haven't fully 
> > tested this yet, but my plan is to assign "dedicated IPs" to a set of 
> > nodes, probably in their own Node Pool within the cluster. Those are the 
> > IPs used for outbound connections from pods running on those nodes, if I 
> > recall correctly from a previous experiment. Then I will use Rodrigo's 
> > taint suggestion to schedule Pods on those nodes, as sketched below.
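> > 
> > The taint part would look something like this (the taint key/value and 
> > node name below are made up for illustration):
> > 
> >     # Keep ordinary pods off the dedicated-IP nodes
> >     kubectl taint nodes gke-mycluster-static-pool-abc1 \
> >         dedicated-ip=true:NoSchedule
> > 
> >     # Pods that need the stable egress IP carry a matching toleration in
> >     # their spec (plus a nodeSelector so they land on those nodes):
> >     #
> >     #   tolerations:
> >     #   - key: "dedicated-ip"
> >     #     operator: "Equal"
> >     #     value: "true"
> >     #     effect: "NoSchedule"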
> > 
> > If for whatever reason we need to remove those nodes from that pool, or 
> > delete and recreate them, we can move the dedicated IP and taints to new 
> > nodes, and the jobs should end up in the right place again.
> > 
> > 
> > In short: I'm pretty sure this is going to solve our problem.
> > 
> > 
> > Thanks!
