Hello.

I would have made this shorter if I could. Sorry. My context is
Kubernetes, but my immediate questions are around clusters I configure
on Google Compute Engine (GCE). Someone out there is bound to be in my
situation, so I feel comfortable coming here, having been here a few
times over the years.

I am in pre-production research mode for running Kubernetes clusters
on regular GCE VMs. I know about Google Container Engine, but I'm not
ready to take that step.

My work history: I have a couple of years of working with Kubernetes
in various ways, but I'm still far from an expert.

The very recent past:

I've successfully set up a k8s cluster on GCE where the cluster VMs
(apiserver, scheduler, controller-manager, kubelet, kube-proxy) reside
on a custom-mode GCE VPC network, 10.0.0.0/24. (I'm avoiding the
regional default networks because I'm in learning mode, and I want
that control and to learn from it.) In this k8s cluster, I created a
second VPC, "podVPC" (172.16.0.0/16), from which pod IPs are
allocated. On each node's kubelet, I configure a /24 from the podVPC
for pods. I know the controller-manager *can* be involved in pod CIDR
management, but I have chosen that it not be: I tell each kubelet
which pod CIDR it may use, via the kubelet's --pod-cidr flag, rather
than letting the controller allocate one. In crafting the cluster
config I followed what I call the "cbr0" model, found here:
https://kubernetes.io/docs/getting-started-guides/scratch/. That
guide is dated, but I pieced it together.
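
For concreteness, here's roughly the shape of that setup. All names
and the region are illustrative, not my real ones, and --subnet-mode
is the current gcloud spelling (older releases spelled it --mode):

    # Custom-mode network for the cluster members:
    gcloud compute networks create net-10 --subnet-mode=custom
    gcloud compute networks subnets create net-10-subnet \
        --network=net-10 --region=us-central1 --range=10.0.0.0/24

    # On each node, the kubelet is handed its pod /24 directly;
    # 172.16.0.0/16 is the "podVPC" pool, carved into per-node /24s.
    # No CIDR allocation by the controller-manager:
    kubelet --pod-cidr=172.16.1.0/24 ...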

In this model, to make pod IPs routable within the cluster, you have
to create GCE VPC Routes that route each node's pod /24 through that
node. Did that, and it works fine. You also need GCE firewall rules
so the cluster members on net-10 can talk to each other; did that,
works fine.
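
Again with illustrative names, the per-node plumbing looks something
like:

    # One static route per node, steering that node's pod /24 at it:
    gcloud compute routes create pods-via-node-1 \
        --network=net-10 \
        --destination-range=172.16.1.0/24 \
        --next-hop-instance=node-1 \
        --next-hop-instance-zone=us-central1-a

    # Allow the net-10 members and the pod range to reach each other:
    gcloud compute firewall-rules create net-10-internal \
        --network=net-10 \
        --allow=tcp,udp,icmp \
        --source-ranges=10.0.0.0/24,172.16.0.0/16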

This cluster works as intended.

Now, the problem with this network approach: if you want to route pod
IPs across a VPN to your corp network via, say, BGP + Cloud Router,
this design won't work yet, because Cloud Router advertises subnet
ranges but not custom static routes, so the pod CIDRs never make it
across the VPN.

So, enter GCE IP Aliases: https://cloud.google.com/compute/docs/alias-ip/

The present:

I need those pod IPs routed to my corp network, so I need to evolve my 
design.

The cluster configuration stays the same as the cluster above,
meaning no changes to the kubelet or controller-manager.

However, the GCE VM configs *do* change. Now you create subnets with
GCE secondary ranges, aka IP Aliases, and give each VM an alias range
out of them. Out of these per-VM secondary ranges, you allocate pod
IPs. This means you do not create a second podVPC as above, nor
manually route pod CIDRs to their respective nodes. When you define a
secondary range on a subnet, GCE will set up those routes for you,
and announce them over the VPN to your corp network.
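
A sketch of that VM-side change, again with illustrative names (the
aliases= syntax is the one from the IP Alias docs):

    # The subnet grows a named secondary range for pods:
    gcloud compute networks subnets create net-10-subnet \
        --network=net-10 --region=us-central1 \
        --range=10.0.0.0/24 \
        --secondary-range=pods=172.16.0.0/16

    # Each node takes its /24 as an alias out of that range;
    # no manual routes needed, GCE routes the alias to the node:
    gcloud compute instances create node-1 \
        --zone=us-central1-a \
        --network-interface=subnet=net-10-subnet,aliases=pods:172.16.1.0/24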

My first problem: if I bring up a couple of nodes with IP Alias
ranges defined on them, without any pods running at all, I can
already ping addresses where the pods will eventually be allocated.
This makes me think one of two things: 1) I've read the IP Alias docs
carefully but I've still screwed up my VM config, or 2) my node VM
config is correct and nodes are *supposed* to masquerade as their
secondary-range addresses. And if 2 obtains, when a real pod does
come up, how do I tell the GCE fabric (via some k8s control plane
flag??) to stop masquerading as the pod?
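
For what it's worth, here's what I've been poking at to tell 1) from
2) (node name illustrative):

    # What GCE thinks the node's alias ranges are:
    gcloud compute instances describe node-1 \
        --zone=us-central1-a \
        --format="value(networkInterfaces[0].aliasIpRanges)"

    # And what the node's kernel is answering for; alias ranges
    # installed by the guest environment show up as local routes:
    ip route show table local | grep 172.16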

Thanks for reading this far.
