On 2017-08-14 12:13 pm, 'Tim Hockin' via Kubernetes user discussion and Q&A wrote:
> On Mon, Aug 14, 2017 at 9:03 AM, David Rosenstrauch <dar...@darose.net> wrote:
>> So, for example, I have a k8s setup with 4 machines: a master, 2 worker
>> nodes, and a "driver" machine. All 4 machines are on the flannel network.
>>
>> I have an nginx service defined like so:
>>
>> $ kubectl get svc nginx; kubectl get ep nginx
>> NAME    CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
>> nginx   10.128.105.78   <nodes>       80:30207/TCP   2d
>> NAME    ENDPOINTS                       AGE
>> nginx   10.240.14.5:80,10.240.27.2:80   2d
>>
>> Now "curl 10.128.105.78" only succeeds on the 2 worker node machines,
>> while "curl 10.240.14.5" succeeds on all 4.
>>
>> I'm guessing this is expected / makes sense, since 10.240.0.0/12
>> addresses are accessible to any machine on the flannel network, whereas
>> 10.128.0.0/16 addresses can only be reached via iptables rules - i.e.,
>> they're only accessible on machines running kube-proxy, aka the worker
>> nodes.
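(Editor's note: the `kubectl get svc` output above also shows a NodePort, 30207, and a NodePort should be reachable from any machine that can reach a node's own IP, with no kube-proxy needed on the client. A quick check, where `<worker-node-ip>` is a placeholder for one of the worker nodes' addresses:)

```shell
# ClusterIP: works only where kube-proxy has programmed iptables rules
curl http://10.128.105.78/

# NodePort: should work from the master and driver machine too,
# as long as they can reach the worker node's address
curl http://<worker-node-ip>:30207/
```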
> Right. To get to Services you need to either route the Service range
> to your VMs (and use them as gateways) or expose them via some other
> form of traffic director (e.g. a load-balancer).
Can you clarify what you mean by "route the Service range to your VMs"?
I'm familiar with the load balancer approach you mentioned - i.e., to get
outside machines to access your service, you set up a load balancer that
points to the NodePort on each machine running the service. But how would
routing the service range work?
Thanks,
DR
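
(Editor's note: a sketch of what Tim's "route the Service range to your VMs" suggestion could look like in practice, using the Service CIDR from the question; the worker-node IP `192.168.1.11` is hypothetical, and this assumes the node is reachable from the driver machine and will forward the traffic:)

```shell
# On the driver machine (or an upstream router), send all traffic for the
# Service CIDR to a worker node running kube-proxy. The node's iptables
# DNAT rules then translate the ClusterIP to a pod endpoint.
# 10.128.0.0/16 = Service range; 192.168.1.11 = hypothetical worker-node IP.
sudo ip route add 10.128.0.0/16 via 192.168.1.11

# After this, ClusterIPs should be reachable from this machine as well:
curl http://10.128.105.78/
```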
--
You received this message because you are subscribed to the Google Groups "Kubernetes
user discussion and Q&A" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to kubernetes-users+unsubscr...@googlegroups.com.
To post to this group, send email to kubernetes-users@googlegroups.com.
Visit this group at https://groups.google.com/group/kubernetes-users.
For more options, visit https://groups.google.com/d/optout.