Each master still needs an IP registered that then backs the Kubernetes
service that clients use to talk to the API.  So verify that each master
is reporting the correct IP, one that is reachable from all nodes, in
"oc get endpoints kubernetes -n default".
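
For example, something like this (the addresses in the sample output are
just placeholders for your masters' VLAN IPs):

    # each master's advertised IP should show up here and must be
    # reachable from every node
    oc get endpoints kubernetes -n default
    # NAME         ENDPOINTS                          AGE
    # kubernetes   10.0.1.10:8443,10.0.1.11:8443      5d

If a deleted public IP is still listed there, the advertised address
comes (if I remember right) from masterIP under kubernetesMasterConfig
in /etc/origin/master/master-config.yaml on each master; fix that and
restart the master.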

On Dec 7, 2016, at 9:39 AM, Den Cowboy <[email protected]> wrote:

We've installed OpenShift Origin with the advanced playbook. There we used
public IPs, but after the installation we deleted the public IPs. The
master and nodes are in a VLAN. I'm able to create a user, authenticate,
visit the web console, and restart the node and master configs. I'm able
to pull images from our local registry, but I'm not able to do a
deployment.


couldn't get deployment default/router-5: Get
https://172.30.0.1:443/api/v1/namespaces/default/replicationcontrollers/router-5:
dial tcp 172.30.0.1:443: getsockopt: network is unreachable

I'm not even able to curl the kubernetes service. What did we forget or do
wrong?
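
For reference, the checks from a node look roughly like this (172.30.0.0/16
being the default services network here):

    # is there any route that can carry traffic towards the services
    # network?  "network is unreachable" usually means nothing matches
    # 172.30.0.0/16 and there is no default route either
    ip route

    # the same request the deployment fails on
    curl -k https://172.30.0.1:443/version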

In our configs the dnsIP: option is commented out, so we did not specify
it. The docker, origin-node, origin-master and openvswitch services are
all running.
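
For reference, checked roughly like this (paths as laid down by the
advanced playbook under /etc/origin):

    # dnsIP is still commented out in the node config
    grep -n dnsIP /etc/origin/node/node-config.yaml

    # all four services report active
    systemctl is-active docker origin-node origin-master openvswitch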

Logs of our origin-node show:
pkg/proxy/config/api.go:60: Failed to watch *api.Endpoints: Get
https://master.xxx...ction refused
pkg/kubelet/kubelet.go:259: Failed to watch *api.Node: Get
https://master.xxx:8443/..
pkg/kubelet/config/apiserver.go:43: Failed to watch *api.Pod
pkg/proxy/config/api.go:47: Failed to watch *api.Service: Get
https://master.xxx refused
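
The node-side checks for this look roughly like the following
(master.example.com stands in for whatever master hostname the node's
kubeconfig points at):

    # on the master: is the API actually listening on 8443?
    ss -tlnp | grep 8443

    # from a node: does the master hostname still resolve to the VLAN
    # address rather than the deleted public IP?
    getent hosts master.example.com

    # and can the node reach the API there?
    curl -k https://master.example.com:8443/healthz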

