I recently rebuilt my dev cluster from HEAD, and pods started having DNS
problems. I'm set up with dnsmasq on port 53 on the master, forwarding
cluster requests to SkyDNS on port 8053, per
https://developerblog.redhat.com/2015/11/19/dns-your-openshift-v3-cluster/
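
For reference, the forwarding rule in that setup looks roughly like this
(a sketch, not my exact config; it assumes the default cluster.local
domain and uses my master IP from the service output below):

```
# /etc/dnsmasq.conf (sketch)
# cluster names -> SkyDNS on the master at port 8053
server=/cluster.local/172.16.4.29#8053
# reverse lookups for the 172.30.0.0/16 service range
server=/30.172.in-addr.arpa/172.16.4.29#8053
# everything else falls through to the upstream resolvers
```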

I discovered that pods now get the kubernetes service IP (172.30.0.1 by
default) as their nameserver instead of the master IP, as they used to.
If I inspect that service, I see this:

$ oc describe service/kubernetes --namespace default
Name:                   kubernetes
Namespace:              default
Labels:                 component=apiserver,provider=kubernetes
Selector:               <none>
Type:                   ClusterIP
IP:                     172.30.0.1
Port:                   https   443/TCP
Endpoints:              172.16.4.29:8443
Port:                   dns     53/UDP
Endpoints:              172.16.4.29:8053
Port:                   dns-tcp 53/TCP
Endpoints:              172.16.4.29:8053
Session Affinity:       None
No events.

So there's my problem - DNS requests are presumably being forwarded to the
master IP, but at port 8053. That port isn't open, and even if I add a
firewall rule to open it, it doesn't seem to connect (the dig request times
out). Also, I don't really want to make requests directly against SkyDNS,
because I want my dnsmasq server to answer queries (from nodes or pods)
about my rogue domain names as well as cluster addresses.
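
Since dig times out, a quick way to rule out dig or firewall quirks is a
raw UDP probe. Here's a minimal stdlib sketch (function names are mine;
the IP/port are from my setup above - one-shot, no retries or TCP
fallback):

```python
import socket
import struct

def build_query(name, qtype=1, qid=0x1234):
    """Build a minimal DNS query packet: 12-byte header + one question."""
    # Header: id, flags (RD=1), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label length-prefixed, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    # QTYPE=A (1) by default, QCLASS=IN (1)
    question = qname + struct.pack(">HH", qtype, 1)
    return header + question

def query(server, port, name, timeout=3.0):
    """Send the query over UDP; return raw response bytes, or None if it
    times out or the port is unreachable."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(build_query(name), (server, port))
        data, _ = sock.recvfrom(512)
        return data
    except OSError:  # covers socket.timeout and ICMP port-unreachable
        return None
    finally:
        sock.close()
```

E.g. `query("172.16.4.29", 8053, "kubernetes.default.svc.cluster.local")`
returning None would mean nothing is answering on 8053 at all, which
would separate a firewall problem from a SkyDNS problem.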

I think I could solve this by just running dnsmasq on a different server
and pointing /etc/resolv.conf everywhere at it. I'll try that, but it
seems like that shouldn't be necessary. Any thoughts on this change? Why
was it necessary?
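
For concreteness, the workaround I have in mind is just this on every
node and master (192.168.1.53 is a hypothetical host running dnsmasq that
forwards cluster.local queries on to SkyDNS):

```
# /etc/resolv.conf (sketch)
nameserver 192.168.1.53
```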
_______________________________________________
dev mailing list
[email protected]
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
