Hello,

I'm using a Load Balancer created by an Ingress kubernetes resource, and 
I'm having trouble routing traffic to my services.

What I'm trying to do in brief: I have multiple services running privately 
in my cluster, and I want to expose some in whole or in part (a subset of 
routes) to the public internet. My working idea is to have a Load Balancer 
set up with a backend for each service, and to control which services / 
paths to expose using a Url Map. Any paths not matched in the Url Map 
whitelist would return 404s.

I was successful in setting this up manually in the console, and now I'm 
trying to replicate it using an Ingress resource instead of directly adding 
GCP resources, out of a maybe-too-optimistic ideal of keeping our tooling 
tied to Kubernetes and not GCP.
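For concreteness, the end state I'm picturing is roughly an Ingress like the sketch below. The service names and paths are hypothetical placeholders, and I haven't deployed this; it's just meant to show the "expose a subset of routes per service" idea expressed as Ingress rules:

```yaml
# Sketch of the goal, not a working config. [SERVICE-A], [SERVICE-B],
# and the paths are placeholders.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: public-ingress
  namespace: default
spec:
  rules:
  - host: [SERVICE-A].stg-1.[DOMAIN]
    http:
      paths:
      - path: /api/*          # expose only this subset of routes
        backend:
          serviceName: [SERVICE-A]
          servicePort: 80
  - host: [SERVICE-B].stg-1.[DOMAIN]
    http:
      paths:
      - path: /*              # expose the whole service
        backend:
          serviceName: [SERVICE-B]
          servicePort: 80
# Anything not matched above would fall through to the controller's
# default backend, which (as I understand it) returns 404s.
```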

I have a minimal load balancer set up in my cluster:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: public-ingress
  namespace: default
spec:
  backend:
    serviceName: [SERVICE]
    servicePort: 80
  rules:
  - host: [SERVICE].stg-1.[DOMAIN]
    http:
      paths:
      - backend:
          serviceName: [SERVICE]
          servicePort: 80
        path: /*

After about a minute, the Ingress finds the services I asked for, but I 
also get the following Event:

Name: public-ingress
Namespace: default
Address:
Default backend: [SERVICE]:80 (10.188.4.15:3000)
Rules:
  Host                     Path Backends
  ----                     ---- --------
  [SERVICE].stg-1.[DOMAIN]
                           /*   [SERVICE]:80 (10.188.4.15:3000)
Annotations:
Events:
  FirstSeen LastSeen Count From                      SubobjectPath Type    Reason Message
  --------- -------- ----- ----                      ------------- ----    ------ -------
  20m       20m      1     {loadbalancer-controller}               Normal  ADD    default/public-ingress
  19m       1s       1924  {loadbalancer-controller}               Warning GCE    googleapi: Error 400: Validation failed for instance 'projects/[PROJECT]/zones/us-central1-a/instances/gke-stg-1-pool-1-6ec85343-16s5': instance may belong to at most one load-balanced instance group., instanceInMultipleLoadBalancedIgs

My best guess is that this happens because all backend services that get created are associated with a new, automatically created instance group, "k8s-ig" (or in my case "k8s-ig--stg-1", because I have --cluster-uid set on my GLBC rc). So I wind up with one managed instance group that my cluster was created in, and another for the Ingress services, and I'm guessing GCE tried to add the node running my service to this new group and failed.

gke-stg-1-pool-1-6ec85343-grp        us-central1-a  default  Yes      2

k8s-ig--stg-1                        us-central1-a           No       0

When I set up my proof of concept in the console, I pointed the backend 
services at the "gke-stg-1..." instance group that was created with my 
cluster, and everything just worked. I've tried setting the IG back to 
"gke-stg-1" in the console, but Kubernetes reconciles and changes the IG 
back to its "k8s-ig" group.

Any ideas on whether I'm on the right track with my general approach? Is 
there a reason for creating a new IG instead of detecting the IG the pod 
is running in?

Thanks!
Michael

-- 
You received this message because you are subscribed to the Google Groups 
"Containers at Google" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/google-containers.
For more options, visit https://groups.google.com/d/optout.
