Using a DNS provider that offers DNS health checks/failover controls (like Route 53), that is...

Joe Auty <mailto:joea...@gmail.com>
May 17, 2017 at 12:38 PM
Hmmm, I suppose it would be possible to have two HAProxy VMs and alternate between them via DNS round-robin. Sounds cleaner than the setup you describe, which seems far more complex and iffy.

Thanks for following me through on this exercise! I don't know if I'm unique in thinking through hypotheticals this way to aid my overall understanding, but I really appreciate your indulging me this way :)


'Tim Hockin' via Kubernetes user discussion and Q&A <mailto:kubernetes-users@googlegroups.com>
May 17, 2017 at 11:53 AM
On Wed, May 17, 2017 at 8:33 AM, Joe Auty<joea...@gmail.com>  wrote:
If the LB is in the same pod, it's not really an LB, is it?  It's
providing some other form of proxy service, right?


True, perhaps I used the wrong term.

I want redundancy/HA with these socket.io servers, and each requires session
affinity.

To come at this a different way, here is one way I can think of that would
support this hypothetical architecture. Does this seem like the best way?

Internet ->  HAProxy on a VM (not behind GCLB) ->  backends as cluster IPs and
NodePort provided ports for the service

One problem with this setup is that there is a single point of failure with
the HAProxy VM. So, I'm trying to figure out how a setup like this would
work where perhaps HAProxy goes inside the cluster so that this service can
have redundancy.

If you really need the haproxy in there, something like this might be better:

Internet ->  Service LB (L4 with clientIP affinity) ->  N* HAProxy (max
one per node, set with OnlyLocal) ->  podIPs

You'll need something to watch kube endpoints and expand that into pod
IPs in haproxy config, but it avoids the "dumbest" layer of LB.  I am
sure that code already exists.

Now that I think on it, it might be possible to use affinity and
OnlyLocal together without the max-one trick.  I'd have to try it and
take a look to be sure.  In fact, I am pretty sure it would work.  76%
sure.

'Tim Hockin' via Kubernetes user discussion and Q&A
May 17, 2017 at 1:06 AM

On Tue, May 16, 2017 at 11:14 AM, Joe Auty<joea...@gmail.com>  wrote:

Thanks again Tim!

What would a recommended architecture look like for a socket.io sort of
setup?

I don't know socket.io per se, but I can speak abstractly...

Needs:

- session affinity (I think either L4 or 7!?) FWIW the provided socket.io
examples are for HAProxy and NGinx

If you're already running an L7 proxy, you can skip the GCLB.  The
rule of thumb is that once you go through a proxy, you're crossing
from L4 to L7.  Once you introduce L7 constructs, you can't go back to
L3/L4.  If you introduce an XFF header (L7), you can no longer rely on
the IP (L3).
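
(To make that concrete, a minimal HAProxy fragment as an illustration; the
names are placeholders:)

    frontend fe_socketio
        bind *:80
        mode http
        option forwardfor    # adds X-Forwarded-For with the client IP (an L7 construct)
        default_backend socketio_pods
        # anything downstream now sees HAProxy's address at L3, not the client's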

- if we use HAProxy/NGinx, redundancy of these services would be great
- LB is in the same pod as the socket.io server

If the LB is in the same pod, it's not really an LB, is it?  It's
providing some other form of proxy service, right?

'Tim Hockin' via Kubernetes user discussion and Q&A
May 16, 2017 at 11:53 AM

On Tue, May 16, 2017 at 7:06 AM, Joe Auty<joea...@gmail.com>  wrote:

Hi Tim,

I have a couple of different use cases actually, but at this point I'm just
trying to understand the architecture to know where my LB fits. Options:

- haproxy/nginx outside of the cluster pointing to NodePort/LoadBalancer
ports

-  haproxy/nginx outside of the cluster pointing to pod IPs (the point
being that the LB doesn't have to be literally inside the cluster,
just able to reach the master and the pods)

- haproxy/nginx inside the cluster
- Using just the Google LB and Kubernetes without haproxy/nginx

One use case involves a need for IP whitelisting and the other session
affinity, so I'm mostly just trying to straighten out my understanding so
that I can put all of these pieces together.

Google's L7 LB has L7 affinity, but only as far as a VM.  If you have
more than one backend pod on a single VM, that breaks down.  Google's
L7 LB doesn't have IP firewalling built in, though.

If you want L7 affinity and IP whitelisting, you probably need to DIY for
now.

Something like:
* Run a deployment of nginx/haproxy
   - use a hostPort to force it to be max 1 per node (for best balancing)
* Expose via a Service LB (L4) with ClientIP affinity and source
ranges configured
   - use the OnlyLocal annotation to retain client IP
* Configure nginx to target pod IPs directly (I know this logic exists
as part of the Ingress controller, not sure if it is standalone).
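
(A rough sketch of what that recipe might look like as manifests, written
against roughly 1.6-era APIs; names, labels, ports and the source range are
placeholders, and the exact annotation and API versions depend on your
cluster version; in later releases the annotation became
spec.externalTrafficPolicy: Local.)

    apiVersion: v1
    kind: Service
    metadata:
      name: haproxy-edge                 # placeholder name
      annotations:
        # pre-1.7 annotation to keep the client IP (the "OnlyLocal" behavior)
        service.beta.kubernetes.io/external-traffic: "OnlyLocal"
    spec:
      type: LoadBalancer
      sessionAffinity: ClientIP          # L4 affinity at the service LB
      loadBalancerSourceRanges:          # IP whitelisting (example range)
      - 203.0.113.0/24
      selector:
        app: haproxy-edge
      ports:
      - port: 80
        targetPort: 80
    ---
    apiVersion: extensions/v1beta1       # Deployment group/version circa 1.6
    kind: Deployment
    metadata:
      name: haproxy-edge
    spec:
      replicas: 3
      template:
        metadata:
          labels:
            app: haproxy-edge
        spec:
          containers:
          - name: haproxy
            image: haproxy:1.7           # placeholder image
            ports:
            - containerPort: 80
              hostPort: 80               # hostPort caps it at one pod per node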

You are not alone asking for this sort of setup - I'd be surprised if
there are not better docs out there.  I don't have them at hand,
though.


'Tim Hockin' via Kubernetes user discussion and Q&A
May 15, 2017 at 11:59 PM
You could maybe start with what you want to achieve, and what your
requirements are?

Joe Auty
May 14, 2017 at 1:28 PM
Sorry for such a vague subject, but I think I need some help breaking things
down here.

I think I understand how the Google layer 7 LBs work (this diagram helped me:
https://storage.googleapis.com/static.ianlewis.org/prod/img/750/gcp-lb-objects2.png),
I understand NGinx and HAProxy LBs independently, and I believe I also
understand the concepts of NodePort, Ingress controllers, services, etc.

What I don't understand is why, when I research things like socket.io
architectures in Kubernetes (for example) or features like IP whitelisting,
session affinity, etc., I see people putting NGinx or HAProxy into their
clusters. It is hard for me to keep straight all of the different levels of
load balancing and their controls:

Google backend services (i.e. Google LB)
Kubernetes service LB
HAProxy/NGinx


The rationale for HAProxy and NGinx seems to involve compensating for
missing features and/or bugs (kube-proxy, etc.), and it is hard to keep
straight what is actually available today and what the best path forward is.

Google's LBs support session affinity, and there are session affinity
Kubernetes service settings, so for starters, when and why is NGinx or
HAProxy necessary, and are there outstanding issues with tracking source IPs
and setting/respecting proper headers?

I'm happy to get into what sort of features I need if this will help steer
the discussion, but at this point I'm thinking maybe it is best to start at
a more basic level where you treat me like I'm 6 years old :)

Thanks in advance!

Joe Auty
May 16, 2017 at 10:06 AM
Hi Tim,

I have a couple of different use cases actually, but at this point I'm just
trying to understand the architecture to know where my LB fits. Options:

- haproxy/nginx outside of the cluster pointing to NodePort/LoadBalancer
ports
- haproxy/nginx inside the cluster
- Using just the Google LB and Kubernetes without haproxy/nginx

One use case involves a need for IP whitelisting and the other session
affinity, so I'm mostly just trying to straighten out my understanding so
that I can put all of these pieces together.



Joe Auty
May 16, 2017 at 2:14 PM
Thanks again Tim!

What would a recommended architecture look like for a socket.io sort of
setup?

Needs:

- session affinity (I think either L4 or 7!?) FWIW the provided socket.io
examples are for HAProxy and NGinx
- if we use HAProxy/NGinx, redundancy of these services would be great
- LB is in the same pod as the socket.io server



Joe Auty <mailto:joea...@gmail.com>
May 17, 2017 at 11:33 AM
If the LB is in the same pod, it's not really an LB, is it?  It's
providing some other form of proxy service, right?

True, perhaps I used the wrong term.

I want redundancy/HA with these socket.io servers, and each requires session affinity.

To come at this a different way, here is one way I can think of that would support this hypothetical architecture. Does this seem like the best way?

Internet -> HAProxy on a VM (not behind GCLB) -> backends as cluster IPs and NodePort provided ports for the service

One problem with this setup is that there is a single point of failure with the HAProxy VM. So, I'm trying to figure out how a setup like this would work where perhaps HAProxy goes inside the cluster so that this service can have redundancy.
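
(For concreteness, the haproxy.cfg on that standalone VM might look roughly
like this; the node IPs and the NodePort value are made up. The Service
itself would still want ClientIP affinity so that kube-proxy does not
re-spread connections across pods behind the NodePort.)

    # every node forwards the Service's NodePort (say 30080), so each node
    # is listed as a backend server
    frontend fe_socketio
        bind *:80
        default_backend k8s_nodeport

    backend k8s_nodeport
        balance source                   # crude source-IP stickiness at this layer
        server node-1 10.128.0.2:30080 check
        server node-2 10.128.0.3:30080 check
        server node-3 10.128.0.4:30080 check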
