I managed to install OKD 3.11 today. I'll share my experience:


My setup:


NAME    STATUS    ROLES          AGE       VERSION
node1   Ready     infra,master   12h       v1.11.0+d4cacc0
node2   Ready     infra          12h       v1.11.0+d4cacc0
node3   Ready     compute        12h       v1.11.0+d4cacc0



I needed a lot of workarounds, including restarting all nodes.


I bumped into this error:


FAILED - RETRYING: Wait for sync DS to set annotations on master nodes (180 retries left).
FAILED - RETRYING: Wait for sync DS to set annotations on master nodes (179 retries left).


and into this error:


TASK [openshift_cluster_monitoring_operator : Wait for the ServiceMonitor CRD to be created] ********************************
Monday 26 November 2018  19:31:50 -0500 (0:00:01.251)       0:11:17.509 *******
FAILED - RETRYING: Wait for the ServiceMonitor CRD to be created (30 retries left).
FAILED - RETRYING: Wait for the ServiceMonitor CRD to be created (29 retries left).


and I noticed this message in the origin-node logs:

No networks found in /etc/cni/net.d


There was no file in that directory. In my other successful installations of OpenShift Enterprise v3.11 clusters, that directory contains a file called 80-openshift-network.conf.
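
You can confirm this on every node at once with a quick check (assuming an Ansible inventory group named "nodes", as used further below):

ansible nodes -a 'ls -l /etc/cni/net.d'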


To work around this, I created the file /etc/cni/net.d/80-openshift-network.conf with the following contents:


{
  "cniVersion": "0.2.0",
  "name": "openshift-sdn",
  "type": "openshift-sdn"
}




I then copied this file to all nodes in the cluster.
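
For example, something like this would distribute it (assuming the same "nodes" inventory group; adjust the source path to wherever you keep the file on the control host):

ansible nodes -m copy -a 'src=/etc/cni/net.d/80-openshift-network.conf dest=/etc/cni/net.d/80-openshift-network.conf'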


Then I restarted all the nodes and ran the deploy_cluster playbook again. This time the install proceeded.
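
For reference, the full command is the same one Erekle quotes below:

ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml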


However, I had to check the following:


1. The file /etc/dnsmasq.d/origin-upstream-dns.conf is present and contains my upstream DNS server.


For example,


cat /etc/dnsmasq.d/origin-upstream-dns.conf
server=8.8.8.8




If it is not present, create it and restart dnsmasq.
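
A minimal sketch, assuming 8.8.8.8 is the upstream DNS as in the example above:

echo 'server=8.8.8.8' > /etc/dnsmasq.d/origin-upstream-dns.conf
systemctl restart dnsmasq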


2. That I am able to resolve names through dnsmasq on all nodes.

For example,


ansible nodes -a 'dig yahoo.com'


3. The file /etc/resolv.conf is correct


cat /etc/resolv.conf
# nameserver updated by /etc/NetworkManager/dispatcher.d/99-origin-dns.sh
search cluster.local
nameserver xxx.xxx.xxx.xxx


where xxx.xxx.xxx.xxx is the IP of the current node.
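
This can also be checked across the whole cluster in one shot (same "nodes" group assumption):

ansible nodes -a 'cat /etc/resolv.conf'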


4. The file /etc/cni/net.d/80-openshift-network.conf is present and has the following contents:


{
  "cniVersion": "0.2.0",
  "name": "openshift-sdn",
  "type": "openshift-sdn"
}
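
To verify it on all nodes at once (same "nodes" group assumption):

ansible nodes -a 'cat /etc/cni/net.d/80-openshift-network.conf'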


5. My grafana pod did not run. The error was "No API token found for service account "grafana", retry after the token is automatically created and added to the service account"


oc describe sa grafana -n openshift-monitoring
Name:                grafana
Namespace:           openshift-monitoring
Labels:              <none>
Annotations:         serviceaccounts.openshift.io/oauth-redirectreference.grafana={"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"grafana"}}
Image pull secrets:  grafana-dockercfg-dnvvm
Mountable secrets:   grafana-dockercfg-dnvvm
                     grafana-token-6sw2j
Tokens:              grafana-token-6sw2j
Events:              <none>


There was only one token, so I deleted it:


oc delete secret grafana-token-6sw2j -n openshift-monitoring


Two new tokens were generated and the grafana pod then started successfully.
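
You can confirm the regenerated tokens by running the describe command from above again (the token names will differ):

oc describe sa grafana -n openshift-monitoring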


Best regards,


Bobby


 
On Nov 26, 2018, at 09:46 PM, Erekle Magradze <erekle.magra...@recogizer.de> wrote:

Hello Guys,

Did anyone face a similar problem? It says that the K8s network component has a problem during installation.

So, I am failing at the final steps of the command

ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml

The problem looks like this:

TASK [openshift_node_group : Wait for sync DS to set annotations on master nodes] ***********************************************************************************************************************************************************
FAILED - RETRYING: Wait for sync DS to set annotations on master nodes (180 retries left).
FAILED - RETRYING: Wait for sync DS to set annotations on master nodes (179 retries left).
...
...
...

The final message looks like this:

fatal: [os-master.apps.mydomain.net]: FAILED! => {"attempts": 180, "changed": false, "results": {"cmd": "/usr/bin/oc get node --selector= -o json -n default", "results": [{"apiVersion": "v1", "items": [{"apiVersion": "v1", "kind": "Node", "metadata": {"annotations": {"volumes.kubernetes.io/controller-managed-attach-detach": "true"}, "creationTimestamp": "2018-11-25T21:49:08Z", "labels": {"beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "os-master.apps.mydomain.net"}, "name": "os-master.apps.mydomain.net", "namespace": "", "resourceVersion": "33274", "selfLink": "/api/v1/nodes/os-master.apps.mydomain.net", "uid": "efd782f8-f0fb-11e8-a72f-001a4a160102"}, "spec": {}, "status": {"addresses": [{"address": "172.31.1.71", "type": "InternalIP"}, {"address": "os-master.apps.mydomain.net", "type": "Hostname"}], "allocatable": {"cpu": "16", "hugepages-2Mi": "0", "memory": "32676788Ki", "pods": "250"}, "capacity": {"cpu": "16", "hugepages-2Mi": "0", "memory": "32779188Ki", "pods": "250"}, "conditions": [{"lastHeartbeatTime": "2018-11-26T05:44:12Z", "lastTransitionTime": "2018-11-25T21:49:08Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk"}, {"lastHeartbeatTime": "2018-11-26T05:44:12Z", "lastTransitionTime": "2018-11-25T21:49:08Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure"}, {"lastHeartbeatTime": "2018-11-26T05:44:12Z", "lastTransitionTime": "2018-11-25T21:49:08Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure"}, {"lastHeartbeatTime": "2018-11-26T05:44:12Z", "lastTransitionTime": "2018-11-25T21:49:08Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure"}, {"lastHeartbeatTime": "2018-11-26T05:44:12Z", "lastTransitionTime": "2018-11-25T21:49:08Z", "message": "runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized", "reason": "KubeletNotReady", "status": "False", "type": "Ready"}], "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "images": [{"names": ["docker.io/openshift/origin-node@sha256:8a8e6341cc3af32953ee2a313a5f0973bed538f0326591c48188f4de9617c992", "docker.io/openshift/origin-node:v3.11.0"], "sizeBytes": 1157253404}, {"names": ["docker.io/openshift/origin-control-plane@sha256:181069f5d67cc2ba8d9b3c80efab8eda107eb24140f2d9f4a394cdb164c28a86", "docker.io/openshift/origin-control-plane:v3.11.0"], "sizeBytes": 818390387}, {"names": ["docker.io/openshift/origin-pod@sha256:1641b78e32c100938b2db51088e284568a056a3716492db78335a3e35be03853", "docker.io/openshift/origin-pod:v3.11.0"], "sizeBytes": 253795602}, {"names": ["quay.io/coreos/etcd@sha256:43fbc8a457aa0cb887da63d74a48659e13947cb74b96a53ba8f47abb6172a948", "quay.io/coreos/etcd:v3.2.22"], "sizeBytes": 37269372}], "nodeInfo": {"architecture": "amd64", "bootID": "2c873547-4546-45ae-af3b-bd685a7d556f", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-862.14.4.el7.x86_64", "kubeProxyVersion": "v1.11.0+d4cacc0", "kubeletVersion": "v1.11.0+d4cacc0", "machineID": "159ec545080a4b849d70b5b10694bd1a", "operatingSystem": "linux", "osImage": "CentOS Linux 7 (Core)", "systemUUID": "159EC545-080A-4B84-9D70-B5B10694BD1A"}}}], "kind": "List", "metadata": 
{"resourceVersion": "", "selfLink": ""}}], "returncode": 0}, "state": "list"}

In /var/log/messages of the master node I see the following:

Nov 26 06:55:55 os-master origin-node: E1126 06:55:53.495294    5353 kubelet.go:2101] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 26 06:55:58 os-master origin-node: W1126 06:55:58.496250    5353 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 26 06:55:58 os-master origin-node: E1126 06:55:58.496391    5353 kubelet.go:2101] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 26 06:56:03 os-master origin-node: W1126 06:56:03.497529    5353 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 26 06:56:03 os-master origin-node: E1126 06:56:03.497667    5353 kubelet.go:2101] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 26 06:56:09 os-master origin-node: I1126 06:56:07.931551    5353 container_manager_linux.go:428] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Nov 26 06:56:09 os-master origin-node: W1126 06:56:08.498812    5353 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 26 06:56:09 os-master origin-node: E1126 06:56:08.498929    5353 kubelet.go:2101] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Nov 26 06:56:13 os-master origin-node: W1126 06:56:13.499781    5353 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 26 06:56:13 os-master origin-node: E1126 06:56:13.500567    5353 kubelet.go:2101] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Can you please advise what to do in this case and how to solve the problem?

Many Thanks in advance

Best Regards

Erekle

_______________________________________________
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users