On Mon, Sep 18, 2017 at 1:21 PM, Mark Petrovic <mspetro...@gmail.com> wrote:

>
> On Mon, Sep 18, 2017 at 10:28 AM, 'Tim Hockin' via Kubernetes user
> discussion and Q&A <kubernetes-users@googlegroups.com> wrote:
>
>> On Fri, Sep 15, 2017 at 4:13 PM, Mark Petrovic <mspetro...@gmail.com>
>> wrote:
>> > Hello.
>> >
>> > I would have made this shorter if I could.  Sorry.  My context is
>> > Kubernetes, but my immediate questions are around clusters I configure
>> > on Google Compute Engine (GCE).  Someone out there is bound to be in
>> > my situation, so I feel comfortable coming here, having been here a
>> > few times over the years.
>> >
>> > I am in pre-production research mode for running Kubernetes clusters
>> > on regular GCE  VMs.  I know about Google Container Engine, but I'm not
>> > ready to take that step.
>> >
>> > My work history: I have a couple of years of experience working with
>> > Kubernetes in various ways, but I'm still far from an expert.
>> >
>> > The very recent past:
>> >
>> > I've successfully set up a k8s cluster on GCE where the control plane
>> > VMs (master, scheduler, controller, kubelet, kube-proxy) resided on a
>> > custom GCE VPC network, 10.0.0.0/24.  (I'm avoiding the regional
>> > default networks because I'm in learning mode, and I want the control
>> > and the learning that comes with it.)  In this k8s cluster, I created
>> > a second VPC, "podVPC" 172.16.0.0/16, from which pod IPs are
>> > allocated.  On each node's kubelet, I configure a /24 from the podVPC
>> > for pods.  I know the controller-manager *can* be involved in pod
>> > CIDR management, but I have chosen that it not be.  I tell the
>> > kubelet which pod CIDR it can use via the kubelet parameter
>> > --pod-cidr, rather than letting the controller assign one.  I
>> > followed what I call the "cbr0" model in crafting the cluster config,
>> > found here:
>> > https://kubernetes.io/docs/getting-started-guides/scratch/.  That
>> > guide is dated, but I pieced it together.
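>> >
>> > (For the record, a minimal sketch of the per-node kubelet invocation
>> > this implies; the node's /24 here is a hypothetical example and all
>> > other flags are elided:
>> >
>> >     kubelet --pod-cidr=172.16.1.0/24 ...
>> >
>> > with each node getting its own /24 carved out of the 172.16.0.0/16
>> > podVPC.)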
>> >
>> > In this model, to make pod IPs routable within the cluster you have
>> > to create GCE VPC routes that route pod IPs through their respective
>> > nodes.  Did that, and it works fine.  You also need GCE firewall
>> > rules so the control plane members on net-10 can talk to each other;
>> > did that, works fine.
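>> >
>> > (Concretely, a sketch with hypothetical names; the route lives on the
>> > network the node NICs sit on:
>> >
>> >     gcloud compute routes create pods-via-node1 \
>> >         --network net-10 \
>> >         --destination-range 172.16.1.0/24 \
>> >         --next-hop-instance node1 \
>> >         --next-hop-instance-zone us-west1-a
>> >
>> >     gcloud compute firewall-rules create net-10-internal \
>> >         --network net-10 \
>> >         --allow tcp,udp,icmp \
>> >         --source-ranges 10.0.0.0/24
>> >
>> > One such route per node, pointing that node's /24 at it; the firewall
>> > rule lets the net-10 members reach each other.)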
>> >
>> > This cluster works as intended.
>> >
>> > Now, the problem with this network approach is that if you want to route
>> > pod IPs across a VPN to your corp network via, say, BGP + Cloud
>> > Router, this design won't work because GCE just won't do that routing
>> > yet.
>>
>> I am not clear what doesn't work for you.  As far as I know GCP routes
>> work with *almost* everything else GCP offers (Peering being an
>> exception, for now).  I am pretty convinced that Pods + VPN works.
>>
>
> I could not find a way to express this in the GCP web UI.  To route both
> the control plane VPC and the pod VPC across the VPN, I felt like I was
> being forced into creating *two* VPNs: one for the control plane and one
> for the pods, since a VPN gateway on the GCP side can only source one VPC.
>
>
>
>>
>> > So, enter GCE IP Aliases:
>> > https://cloud.google.com/compute/docs/alias-ip/
>> >
>> > The present:
>> >
>> > I need those pod IPs routed to my corp network, so I need to evolve my
>> > design.
>> >
>> > The Kubernetes configuration stays the same as in the cluster above:
>> > no changes to the kubelet or controller-manager.
>> >
>> > However, the GCE VM configs *do* change.  Now you create VMs with
>> > GCE secondary subnet ranges, aka IP Aliases.  Out of these per-VM
>> > secondary ranges, you allocate pod IPs.  This means you do not create
>> > a second podVPC as above and manually route pod CIDRs to their
>> > respective nodes.  When you define a secondary range on a subnet,
>> > GCE sets up those routes for you and announces them over the VPN to
>> > your corp network.
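>> >
>> > (Roughly, with hypothetical names, and possibly via "gcloud beta"
>> > while IP Aliases are new:
>> >
>> >     gcloud compute networks subnets create k8s-nodes \
>> >         --network net-10 \
>> >         --range 10.0.0.0/24 \
>> >         --secondary-range pods=172.16.0.0/16
>> >
>> >     gcloud compute instances create node1 \
>> >         --zone us-west1-a \
>> >         --network-interface subnet=k8s-nodes,aliases=pods:172.16.1.0/24
>> >
>> > Each node claims its own /24 slice of the "pods" secondary range, and
>> > GCE owns the routing and the VPN announcements for it.)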
>> >
>> > My first problem: if I bring up a couple of nodes with IP Alias
>> > ranges defined on them, without any pods running at all, I can
>> > already ping addresses where the pods will eventually be allocated.
>> > This makes me think one of two things: 1) I've read the IP Alias docs
>> > carefully but I've still screwed up my VM config, or 2) my node VM
>> > config is correct and nodes are supposed to masquerade as their
>> > secondary workloads.  And if 2 holds, when a real pod does come up,
>> > how do I tell the GCE fabric (via some k8s control plane flag?) to
>> > stop masquerading as the pod?
>>
>> The default GCP VM images assign IP alias ranges to the local vNIC.
>> You need to turn that off in one of the Google guest daemons, or run
>> our COS image, which handles that for you.
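>>
>> (You can see this from inside the guest: the daemon installs the alias
>> range in the kernel's local routing table, so something like
>>
>>     ip route show table local
>>
>> will list the alias CIDR against eth0 while the daemon is running,
>> which is why the node itself answers your pings.)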
>>
>
> This is new magic to me, but based on your comment I was able to suppress
> what I call the masquerading by setting ip_forwarding_daemon = false in
> /etc/default/instance_configs.cfg on the guest (the GCP CentOS7 image).
> Such a host no longer responds to pings to its IP Aliases.  Just curious:
> if this forwarding were left enabled, and a real workload were listening
> on an alias IP, would the workload respond to a prospective TCP
> connection, or would the host?  And if the workload responds, how does
> the host know not to masquerade, given that it seems not to know when I
> ping a 'workload' that isn't there?
>
>
Success!

By setting ip_forwarding_daemon = false on my GCP CentOS7 VMs that host
the control plane, I have proved out the entire cluster config, with
connectivity where I need it inside the cluster, and with VM and pod
routes announced across the VPN to a corp-like environment so that
dev-workstation-like hosts can consume pods.
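
For anyone who lands on this thread later, the relevant stanza in
/etc/default/instance_configs.cfg on this image ends up looking like
the following (section and key names per the GCE guest environment
shipped with the image; restart the Google daemons, or reboot, after
editing):

    [Daemons]
    ip_forwarding_daemon = false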





-- 
Mark
