Re: [kubernetes-users] Finding a way to get stable public IP for outbound connections

2018-05-20 Thread 'Tim Hockin' via Kubernetes user discussion and Q&A
You can build that as a controller that runs in-cluster, picks one of the
nodes, and assigns the static IP.  It will still be racy, though, in that
it will never be instantaneous.
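
A rough sketch of what such a controller could do, written here as a plain
shell reconciliation loop rather than a real controller (the address name,
region, zone, and node-selection policy below are all placeholder
assumptions):

    # Hypothetical loop: keep the reserved address "egress-ip" attached to
    # one node. The race mentioned above still exists between iterations.
    RESERVED_IP=$(gcloud compute addresses describe egress-ip \
        --region us-central1 --format='value(address)')
    while true; do
      # Placeholder policy: just take the first node in the list.
      NODE=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')
      CURRENT_IP=$(gcloud compute instances describe "$NODE" \
          --zone us-central1-a \
          --format='value(networkInterfaces[0].accessConfigs[0].natIP)')
      if [ "$CURRENT_IP" != "$RESERVED_IP" ]; then
        # The access-config name may be "External NAT" or "external-nat",
        # depending on how the instance was created; check with
        # `gcloud compute instances describe` first.
        gcloud compute instances delete-access-config "$NODE" \
            --zone us-central1-a --access-config-name "External NAT"
        gcloud compute instances add-access-config "$NODE" \
            --zone us-central1-a --access-config-name "External NAT" \
            --address "$RESERVED_IP"
      fi
      sleep 30
    done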



Re: [kubernetes-users] Finding a way to get stable public IP for outbound connections

2018-05-20 Thread mike
An update: I was able to do this with the standard add-access-config mechanism 
here:
https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address

No guarantees around when GKE will rebuild those nodes and lose the node IPs, 
but it works for now.
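
Concretely, the steps look roughly like this (instance name, zone, and region
are placeholders, and the access-config name on your nodes may differ; check
`gcloud compute instances describe` first):

    # Reserve a regional static external IP.
    gcloud compute addresses create my-egress-ip --region us-central1

    # Swap the node's ephemeral external IP for the reserved one.
    gcloud compute instances delete-access-config gke-pool-node-1 \
        --zone us-central1-a --access-config-name "External NAT"
    gcloud compute instances add-access-config gke-pool-node-1 \
        --zone us-central1-a --access-config-name "External NAT" \
        --address $(gcloud compute addresses describe my-egress-ip \
            --region us-central1 --format='value(address)')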



Re: [kubernetes-users] Finding a way to get stable public IP for outbound connections

2018-05-20 Thread mike
Evan,

Did you figure out a way to assign reserved static IP addresses to a few 
specific nodes in a GKE pool?

We are also fine with doing this manually for a couple of specific nodes for 
the time being (rather than building a NAT gateway), but I cannot find reliable 
information about how to assign a reserved static IP to a GKE node.

Cheers,
Mike



Re: [kubernetes-users] Finding a way to get stable public IP for outbound connections

2017-12-22 Thread 'Tim Hockin' via Kubernetes user discussion and Q&A
AFAIK we need Cloud NAT to become available, at which point we can use
it pretty much transparently.
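
For reference, once Cloud NAT is available, a minimal setup should look
something like the sketch below (router, NAT, and address names are
placeholders; --nat-external-ip-pool pins egress to a reserved address):

    # Reserve the stable egress address.
    gcloud compute addresses create egress-ip --region us-central1

    # Create a Cloud Router plus a NAT config that uses the reserved
    # address for all subnet ranges in the region.
    gcloud compute routers create nat-router \
        --network default --region us-central1
    gcloud compute routers nats create nat-config \
        --router nat-router --region us-central1 \
        --nat-all-subnet-ip-ranges \
        --nat-external-ip-pool egress-ip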

Re: [kubernetes-users] Finding a way to get stable public IP for outbound connections

2017-10-19 Thread alex . zamai
Hi everyone, great thread, and special thanks to Paul for the gcloud example!
I'm struggling with the same issue, but for AWS. Would anyone have an idea how
that might work?

Since this thread is on the first page of Google results for "egress stable IP
traffic kubernetes", I think this might be useful for others.

Thanks! 



Re: [kubernetes-users] Finding a way to get stable public IP for outbound connections

2017-08-10 Thread 'Tim Hockin' via Kubernetes user discussion and Q&A
The GKE team has heard the desire for this and is looking at possible
ways to provide it.


Re: [kubernetes-users] Finding a way to get stable public IP for outbound connections

2017-08-09 Thread csalazar
The approach of configuring a NAT works, but it has two major drawbacks:

1. It creates a single point of failure (if the VM that runs the NAT fails).
2. It's too complex!

In my use case I don't need autoscaling enabled right now, so I think it's
better to just change the IPs of the VMs to be static. I know I will need this
feature in the future, though.

Does somebody know if there are any plans to provide this feature in GKE?



Re: [kubernetes-users] Finding a way to get stable public IP for outbound connections

2017-06-19 Thread Giorgio Cerruti
Thank you Paul!


Kind Regards,
Giorgio Cerruti



Re: [kubernetes-users] Finding a way to get stable public IP for outbound connections

2017-06-16 Thread paul
Yes, this is the right approach -- here's a detailed walk-through:

https://github.com/johnlabarge/gke-nat-example
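
Condensed, the route-based approach from the linked guide looks roughly like
this (zone, names, and the no-ip tag are placeholders from the example;
instances carrying the tag lose their direct default route and egress via the
NAT instance's IP instead):

    # NAT instance that forwards and masquerades traffic (firewall rules
    # and the full startup script are covered in the walk-through).
    gcloud compute instances create nat-gateway \
        --zone us-central1-a --can-ip-forward \
        --metadata startup-script='sysctl -w net.ipv4.ip_forward=1
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE'

    # Send all egress from tagged instances through the NAT instance.
    gcloud compute routes create no-ip-internet-route \
        --network default --destination-range 0.0.0.0/0 \
        --next-hop-instance nat-gateway \
        --next-hop-instance-zone us-central1-a \
        --tags no-ip --priority 800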



Re: [kubernetes-users] Finding a way to get stable public IP for outbound connections

2017-06-16 Thread giorgio . cerruti
Hello, I have the same problem described here. I have a GKE cluster and I need
to connect to an external service. The NAT solution is right for my needs,
since my cluster resizes automatically. @Paul Tiplady, have you configured the
external NAT? Can you share your experiences? I tried following this guide
https://cloud.google.com/compute/docs/vpc/special-configurations#natgateway but
it doesn't seem to work.

Thanks,
Giorgio


Re: [kubernetes-users] Finding a way to get stable public IP for outbound connections

2017-05-03 Thread Paul Tiplady
Yes, my reply was more directed to Rodrigo. In my use-case I do resize
clusters often (as part of the node upgrade process), so I want a solution
that's going to handle that case automatically. The NAT Gateway approach
appears to be the best (only?) option that handles all cases seamlessly at
this point.

I don't know in which cases a VM could be destroyed; I'd also be interested
in seeing an enumeration of those cases. I'm taking a conservative stance, as
the consequences of dropping traffic through a changing source IP are quite
severe in my case, and because I want to keep the process for upgrading the
cluster as simple as possible. From
https://cloudplatform.googleblog.com/2015/03/Google-Compute-Engine-uses-Live-Migration-technology-to-service-infrastructure-without-application-downtime.html
it sounds like VM termination should not be caused by planned maintenance,
but I assume it could be caused by unexpected failures in the datacenter.
It doesn't seem reckless to manually set the IPs as part of the upgrade
process as you're suggesting.

On Wed, May 3, 2017 at 12:13 PM, Evan Jones  wrote:

> Correct, but at least at the moment we aren't using auto-resizing, and
> I've never seen nodes get removed without us manually taking some action
> (e.g. upgrading Kubernetes releases or similar). Are there automated events
> that can delete a VM and remove it, without us having done something?
> Certainly I've observed machines rebooting, but that also preserves
> dedicated IPs. I can live with having to take some manual configuration
> action periodically, if we are changing something with our cluster, but I
> would like to know if there is something I've overlooked. Thanks!
>
>
> On Wed, May 3, 2017 at 12:20 PM, Paul Tiplady  wrote:
>
>> The public IP is not stable in GKE. You can manually assign a static IP
>> to a GKE node, but then if the node goes away (e.g. your cluster was
>> resized) the IP will be detached, and you'll have to manually reassign. I'd
>> guess this is also true on an AWS managed equivalent like CoreOS's
>> CloudFormation scripts.
>>
>> On Wed, May 3, 2017 at 8:52 AM, Evan Jones 
>> wrote:
>>
>>> As Rodrigo described, we are using Container Engine. I haven't fully
>>> tested this yet, but my plan is to assign "dedicated IPs" to a set of
>>> nodes, probably in their own Node Pool as part of the cluster. Those are
>>> the IPs used by outbound connections from pods running those nodes, if I
>>> recalling correctly from a previous experiment. Then I will use Rodrigo's
>>> taint suggestion to schedule Pods on those nodes.
>>>
>>> If for whatever reason we need to remove those nodes from that pool, or
>>> delete and recreate them, we can move the dedicated IP and taints to new
>>> nodes, and the jobs should end up in the right place again.
>>>
>>> In short: I'm pretty sure this is going to solve our problem.
>>>
>>> Thanks!
>>>
>>>
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Kubernetes user discussion and Q" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to kubernetes-users+unsubscr...@googlegroups.com.
To post to this group, send email to kubernetes-users@googlegroups.com.
Visit this group at https://groups.google.com/group/kubernetes-users.
For more options, visit https://groups.google.com/d/optout.


Re: [kubernetes-users] Finding a way to get stable public IP for outbound connections

2017-05-03 Thread Evan Jones
Correct, but at least at the moment we aren't using auto-resizing, and I've
never seen nodes get removed without us manually taking some action (e.g.
upgrading Kubernetes releases or similar). Are there automated events that
can delete a VM and remove it without us having done something? Certainly
I've observed machines rebooting, but that also preserves dedicated IPs. I
can live with having to take some manual configuration action periodically
if we are changing something with our cluster, but I would like to know if
there is something I've overlooked. Thanks!




Re: [kubernetes-users] Finding a way to get stable public IP for outbound connections

2017-05-03 Thread Paul Tiplady
The public IP is not stable in GKE. You can manually assign a static IP to
a GKE node, but then if the node goes away (e.g. your cluster was resized)
the IP will be detached, and you'll have to manually reassign. I'd guess
this is also true on an AWS managed equivalent like CoreOS's CloudFormation
scripts.



Re: [kubernetes-users] Finding a way to get stable public IP for outbound connections

2017-05-03 Thread Evan Jones
As Rodrigo described, we are using Container Engine. I haven't fully tested
this yet, but my plan is to assign "dedicated IPs" to a set of nodes,
probably in their own Node Pool as part of the cluster. Those are the IPs
used by outbound connections from pods running on those nodes, if I recall
correctly from a previous experiment. Then I will use Rodrigo's taint
suggestion to schedule Pods on those nodes.
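
As a sketch, assuming a pool whose nodes carry an illustrative
dedicated-egress label and taint, the scheduling side would look like:

    # Taint and label a dedicated-IP node (names are illustrative).
    kubectl taint nodes gke-egress-node-1 dedicated-egress=true:NoSchedule
    kubectl label nodes gke-egress-node-1 dedicated-egress=true

    # A pod that tolerates the taint and is pinned to those nodes.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: outbound-worker
    spec:
      nodeSelector:
        dedicated-egress: "true"
      tolerations:
      - key: dedicated-egress
        operator: Equal
        value: "true"
        effect: NoSchedule
      containers:
      - name: worker
        image: alpine
        command: ["sh", "-c", "sleep 3600"]
    EOF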

If for whatever reason we need to remove those nodes from that pool, or 
delete and recreate them, we can move the dedicated IP and taints to new 
nodes, and the jobs should end up in the right place again.

In short: I'm pretty sure this is going to solve our problem.

Thanks!



Re: [kubernetes-users] Finding a way to get stable public IP for outbound connections

2017-05-02 Thread Rodrigo Campos
Oh, sorry. Then use the public IP of the node. With dedicated nodes, as
explained, you can get them. Am I missing something?

In AWS, you can use an Elastic IP. What is the problem? Manually assigning on
node crash? I'm not sure if you will need to set the labels again in those
cases too; it depends on how your provisioning works.
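
For the record, the Elastic IP side is just the following (IDs are
placeholders); the manual step after a node crash is re-running the
associate call against the replacement instance:

    # Allocate an Elastic IP and pin it to the node's instance.
    aws ec2 allocate-address --domain vpc
    aws ec2 associate-address \
        --instance-id i-0123456789abcdef0 \
        --allocation-id eipalloc-0123456789abcdef0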

On Tuesday, May 2, 2017, Paul Tiplady  wrote:

> 10.0.0.0/8 is a private subnet; it's not routable from the internet.
>
> On Tue, May 2, 2017 at 5:23 PM, Rodrigo Campos  wrote:
>
>> Your nodes' IPs are in some subnet, usually. Something like 10.0.0.0/16 or
>> something. If you can accept a subnet, also, that should work.
>>
>> On Tuesday, May 2, 2017, Paul Tiplady  wrote:
>>
>>> Looks like the taint-based approach will only work on bare metal, where
>>> your node IPs are fixed -- or am I missing a detail here?
>>>
>>> On Tue, May 2, 2017 at 9:29 AM, Tony Li  wrote:
>>>
>>>> If you have any example repo to show how this works (what kind of
>>>> proxy/how to opt in), that would be greatly appreciated!
>>>>
>>>> Tony Li
>>>>
>>>> On Tue, May 2, 2017 at 8:30 AM, Evan Jones  wrote:
>>>>
>>>>> Thank you! I had forgotten about that feature, since we previously
>>>>> have not needed it. That will absolutely solve our problem, and be much
>>>>> better than needing an "exceptional" thing outside of Kubernetes.
>>>>>
>>>>> You are correct about what we need: We have a small number of services
>>>>> where their outbound requests need to come from known IPs. This will
>>>>> solve the issue for us.
>>>>>
>>>>> Thanks again.



Re: [kubernetes-users] Finding a way to get stable public IP for outbound connections

2017-05-02 Thread Paul Tiplady
10.0.0.0/8 is a private range, so it's not routable from the internet.

On Tue, May 2, 2017 at 5:23 PM, Rodrigo Campos  wrote:

> Your nodes IP are in some subnet, usually. Something like 10.0.0.0/16 or
> something. If you can accept a subnet, also, that should work.


Re: [kubernetes-users] Finding a way to get stable public IP for outbound connections

2017-05-02 Thread Rodrigo Campos
Your nodes' IPs are usually in some subnet, something like 10.0.0.0/16. If
you can accept a whole subnet rather than a single IP, that should work too.

On Tuesday, May 2, 2017, Paul Tiplady  wrote:

> Looks like the taint-based approach will only work on bare metal, where
> your node IPs are fixed -- or am I missing a detail here?


Re: [kubernetes-users] Finding a way to get stable public IP for outbound connections

2017-05-02 Thread Evan Jones
Thank you! I had forgotten about that feature, since we haven't needed it
before. That will absolutely solve our problem, and it will be much better
than needing an "exceptional" thing outside of Kubernetes.

You are correct about what we need: we have a small number of services
whose outbound requests need to come from known IPs. This will solve
the issue for us.
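
For anyone following along, the pinning can be sketched roughly like this
(untested; the node name, label key, and image are illustrative, and
tolerations as first-class pod fields assume Kubernetes 1.6+):

# Taint and label the nodes that carry the dedicated IPs, so only
# opted-in pods are scheduled there.
kubectl taint nodes gke-egress-node-1 dedicated=egress:NoSchedule
kubectl label nodes gke-egress-node-1 dedicated=egress

# A pod opts in with a matching toleration plus a nodeSelector.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: known-ip-worker
spec:
  nodeSelector:
    dedicated: egress
  tolerations:
  - key: dedicated
    operator: Equal
    value: egress
    effect: NoSchedule
  containers:
  - name: worker
    image: example/worker:latest   # illustrative image
EOF

The taint keeps everything else off those nodes; the nodeSelector keeps
these pods on them.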

Thanks again.



Re: [kubernetes-users] Finding a way to get stable public IP for outbound connections

2017-05-01 Thread 'EJ Campbell' via Kubernetes user discussion and Q
Does it require a single stable IP address, or would a range do? You could
possibly have a set of dedicated nodes for your outbound proxy; that way you
can still use Kubernetes machinery for deployment, pod lifecycles, etc.,
while presenting a stable CIDR to the outside world.
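
As a rough sketch of that layout on GKE (the cluster, pool, and zone names
are illustrative; the taint is applied with kubectl afterwards):

# Create a small dedicated pool to host the egress proxy.
gcloud container node-pools create egress-proxy-pool \
    --cluster my-cluster --zone us-central1-a \
    --machine-type n1-standard-1 --num-nodes 2

# Keep general workloads off the pool; the proxy pods then carry a
# matching toleration, as in the taint/toleration sketch above.
kubectl taint nodes \
    -l cloud.google.com/gke-nodepool=egress-proxy-pool \
    dedicated=egress-proxy:NoSchedule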
-EJ


Re: [kubernetes-users] Finding a way to get stable public IP for outbound connections

2017-05-01 Thread Evan Jones
It turns out I've just run into a requirement to have a stable outbound IP
address as well. In looking into this, I think we will likely need some kind
of proxy server running outside of Kubernetes. This will allow services to
"opt in" to this special handling, rather than applying it to everything in
the cluster. It seems like the simplest way to make this work.
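
As a minimal sketch of the opt-in, assuming a plain HTTP(S) proxy such as
Squid on a VM that holds the stable address (the address, port, and names
below are illustrative):

# Only workloads that need the fixed egress IP point at the proxy;
# everything else in the cluster is untouched.
kubectl apply -f - <<'EOF'
apiVersion: extensions/v1beta1   # Deployment group/version of this era
kind: Deployment
metadata:
  name: partner-api-client
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: partner-api-client
    spec:
      containers:
      - name: client
        image: example/client:latest         # illustrative image
        env:
        - name: HTTPS_PROXY
          value: "http://10.240.0.99:3128"   # proxy VM outside the cluster
        - name: NO_PROXY
          value: "localhost,127.0.0.1,.cluster.local"
EOF

This only helps clients that honor the proxy environment variables, of
course; anything else needs explicit proxy configuration.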

Honestly, this seems like a rare enough case that I'm not sure Kubernetes
should really support anything "natively" to solve it (at least not while
there are more common things that still need work).




Re: [kubernetes-users] Finding a way to get stable public IP for outbound connections

2017-04-14 Thread paul
Please correct me if I'm wrong, but it looks like using a simple NAT gateway
breaks inbound traffic to the cluster: once the default outbound route goes
through the gateway, it's impossible to make an inbound connection directly
to the cluster, because responses from the cluster are sent back via the
gateway (and therefore get dropped at the sender, since the response's source
IP doesn't match the address it connected to). This is not good, as it breaks
GLB Services.

The best workaround I can come up with is to set the NAT gateway route to
apply only to the specific remote IP addresses that require a fixed source;
that is more brittle than I'd like, though, as any change in the remote IPs
will leave my route no longer matching.
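
For reference, the narrower route looks roughly like this (203.0.113.7 is a
documentation address standing in for the remote service, and the instance,
tag, and network names are illustrative):

# Send only traffic for the third party's published address through the
# NAT instance; all other egress keeps the default route, so replies to
# inbound load-balancer traffic are unaffected.
gcloud compute routes create third-party-via-nat \
    --network default \
    --destination-range 203.0.113.7/32 \
    --next-hop-instance nat-gateway \
    --next-hop-instance-zone us-central1-a \
    --tags gke-my-cluster-node \
    --priority 700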

Am I missing something?

What I really want is for GKE to be able to assign an existing (or allocate
a new) static IP to each node in the cluster, and to make an effort to carry
those IPs across node upgrades and cluster resizing. That is, as long as I
have at least one node in my cluster, there should be a static IP
"gke-cluster-node-1".
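
As far as I know nothing managed does that today, but the manual equivalent
is small enough to script (the instance, region, and zone are illustrative,
and the access-config name can differ between images):

# Reserve a static external IP, then swap it onto a node by replacing
# the node's ephemeral access config.
gcloud compute addresses create egress-ip-1 --region us-central1

gcloud compute instances delete-access-config gke-cluster-node-1 \
    --zone us-central1-a --access-config-name "external-nat"

gcloud compute instances add-access-config gke-cluster-node-1 \
    --zone us-central1-a --access-config-name "external-nat" \
    --address "$(gcloud compute addresses describe egress-ip-1 \
        --region us-central1 --format='value(address)')"

Anything that recreates the node still needs to re-run the swap, which is
exactly the gap described above.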

Cheers,
Paul

On Thursday, January 19, 2017 at 8:40:51 PM UTC-8, Tim Hockin wrote:
> For now the only way to get a static IP is to use a custom NAT
> gateway.  https://cloud.google.com/compute/docs/networking#natgateway
> 
> On Fri, Jan 6, 2017 at 5:29 AM, Romain Vrignaud  wrote:
> > Hello,
> >
> > I'm running, in a GKE cluster (1.4.x), some applications that need to connect
> > to a third-party API, which has mandatory IP filtering: to get authorized, I
> > need to declare the public IPs that I'll use to connect to the API.
> > My problem is that the public IPs of GKE nodes are not stable across upgrades,
> > which would prevent the use of node autoscaling.
> >
> > Is there any way to have a stable outbound public IP on GKE?
> >
> > Thanks
> >
