Hello,
I am running a small OKD 3.10 cluster on CentOS 7.6 and just noticed that the
service-catalog controller-manager pod on my master node does not start
properly, as you can see from the "oc get pods" output below:
NAMESPACE              NAME                       READY   STATUS             RESTARTS   AGE
kube-service-catalog   controller-manager-cw69k   0/1     CrashLoopBackOff   10905      117d
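In case it helps, I can also post the output of the following commands (the
pod name and namespace are taken from the listing above):

  # describe the failing pod, then fetch the log of its last crashed instance
  oc describe pod controller-manager-cw69k -n kube-service-catalog
  oc logs controller-manager-cw69k -n kube-service-catalog --previous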
The relevant log entries from /var/log/messages are the following:
Dec 12 09:17:49 master origin-node: I1212 09:17:49.977148 3626
kuberuntime_manager.go:513] Container {Name:controller-manager
Image:docker.io/openshift/origin-service-catalog:v3.10.0
Command:[/usr/bin/service-catalog] Args:[controller-manager --secure-port 6443
-v 3 --leader-election-namespace kube-service-catalog
--leader-elect-resource-lock configmaps
--cluster-id-configmap-namespace=kube-service-catalog --broker-relist-interval
5m --feature-gates OriginatingIdentity=true --feature-gates
AsyncBindingOperations=true] WorkingDir: Ports:[{Name: HostPort:0
ContainerPort:6443 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:K8S_NAMESPACE
Value:
ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}]
Resources:{Limits:map[] Requests:map[]}
VolumeMounts:[{Name:service-catalog-ssl ReadOnly:true
MountPath:/var/run/kubernetes-service-catalog SubPath: MountPropagation:<nil>}
{Name:service-catalog-controller-token-8mvfj ReadOnly:true
MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath:
MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil
Lifecycle:nil TerminationMessagePath:/dev/termination-log
TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent
SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[KILL
MKNOD SETGID
SETUID],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000100000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,}
Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we
should restart it.
Dec 12 09:17:49 master origin-node: I1212 09:17:49.977314 3626
kuberuntime_manager.go:757] checking backoff for container "controller-manager"
in pod
"controller-manager-cw69k_kube-service-catalog(6634c5e5-a132-11e8-8c17-00163e0302cc)"
Dec 12 09:17:49 master origin-node: I1212 09:17:49.977490 3626
kuberuntime_manager.go:767] Back-off 5m0s restarting failed
container=controller-manager
pod=controller-manager-cw69k_kube-service-catalog(6634c5e5-a132-11e8-8c17-00163e0302cc)
Dec 12 09:17:49 master origin-node: E1212 09:17:49.977551 3626
pod_workers.go:186] Error syncing pod 6634c5e5-a132-11e8-8c17-00163e0302cc
("controller-manager-cw69k_kube-service-catalog(6634c5e5-a132-11e8-8c17-00163e0302cc)"),
skipping: failed to "StartContainer" for "controller-manager" with
CrashLoopBackOff: "Back-off 5m0s restarting failed container=controller-manager
pod=controller-manager-cw69k_kube-service-catalog(6634c5e5-a132-11e8-8c17-00163e0302cc)"
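These messages can be reproduced on the master with a simple grep, e.g.:

  # pull the most recent controller-manager lines from the node log
  grep controller-manager /var/log/messages | tail -n 40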
Digging into more log detail by running "master-logs controllers controllers"
shows the following entries:
I1212 08:17:35.011521 1 healthz.go:72] /healthz/log check
W1212 08:17:37.585956 1 reflector.go:341]
github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
watch of *v1beta1.PodDisruptionBudget ended with: The resourceVersion for the
provided watch is too old.
W1212 08:17:38.587939 1 reflector.go:341]
github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
watch of *v1beta1.StatefulSet ended with: The resourceVersion for the provided
watch is too old.
W1212 08:17:38.688482 1 reflector.go:341]
github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
watch of *v1beta1.CronJob ended with: The resourceVersion for the provided
watch is too old.
I1212 08:17:45.010840 1 healthz.go:72] /healthz/log check
W1212 08:17:50.609257 1 reflector.go:341]
github.com/openshift/origin/vendor/github.com/openshift/client-go/security/informers/externalversions/factory.go:58:
watch of *v1.RangeAllocation ended with: The resourceVersion for the provided
watch is too old.
Does anyone have an idea what could be going wrong here?
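I am happy to collect further diagnostics if they would help, for example the
origin-node service log (origin-node runs as a systemd unit on this RPM-based
install) or the container state as seen by Docker on the master:

  journalctl -u origin-node --since "1 hour ago" --no-pager
  docker ps -a | grep service-catalog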
Here is the output of "oc version":
oc v3.10.0+0c4577e-1
kubernetes v1.10.0+b81c8f8
features: Basic-Auth GSSAPI Kerberos SPNEGO
Regards,
Mab