whichdew commented on issue #2503:
URL: https://github.com/apache/apisix-dashboard/issues/2503#issuecomment-1179905326

   
   
![image](https://user-images.githubusercontent.com/20438962/178179014-a15b5ee5-baa5-4c1c-a2f8-a24376d06063.png)
   
```shell
-> kubectl describe pod -n apisix apisix-dashboard-77555778bd-2z8l2
Name:         apisix-dashboard-77555778bd-2z8l2
Namespace:    apisix
Priority:     0
Node:         pve/192.168.1.4
Start Time:   Sat, 09 Jul 2022 13:31:21 +0800
Labels:       app.kubernetes.io/instance=apisix-dashboard
              app.kubernetes.io/name=apisix-dashboard
              pod-template-hash=77555778bd
Annotations:  checksum/config: 7be84307e80ce79558684303e5db2de21b7b3582afc5878e6635a966e54e3301
Status:       Running
IP:           10.244.0.51
IPs:
  IP:           10.244.0.51
Controlled By:  ReplicaSet/apisix-dashboard-77555778bd
Containers:
  apisix-dashboard:
    Container ID:   containerd://774a026eedf7639d18d3408b4dc0dc7f581dc77c4fff7444a2352674dab8afef
    Image:          apache/apisix-dashboard:2.13-alpine
    Image ID:       docker.io/apache/apisix-dashboard@sha256:7ce2f9517a7472a17c32244b75effdbebb0f9296815b5a675f591fd220a868ec
    Port:           9000/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 11 Jul 2022 10:17:51 +0800
      Finished:     Mon, 11 Jul 2022 10:17:56 +0800
    Ready:          False
    Restart Count:  608
    Liveness:       http-get http://:http/ping delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:http/ping delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /usr/local/apisix-dashboard/conf/conf.yaml from apisix-dashboard-config (rw,path="conf.yaml")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p6llk (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  apisix-dashboard-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      apisix-dashboard
    Optional:  false
  kube-api-access-p6llk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason   Age                     From     Message
  ----     ------   ----                    ----     -------
  Normal   Pulled   32m (x601 over 44h)     kubelet  Container image "apache/apisix-dashboard:2.13-alpine" already present on machine
  Warning  BackOff  2m1s (x12467 over 44h)  kubelet  Back-off restarting failed container
```
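
The output above shows the dashboard container exiting within a few seconds (exit code 0) before the `/ping` probes ever succeed. For reference, a minimal way to pull its logs, assuming the pod name from the describe output above:

```shell
# Logs from the current attempt of the crashing container
kubectl logs -n apisix apisix-dashboard-77555778bd-2z8l2

# Logs from the previously terminated attempt, which usually
# holds the reason the process exited
kubectl logs -n apisix apisix-dashboard-77555778bd-2z8l2 --previous
```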
   
```shell
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  22m (x538 over 45h)  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
```
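
The FailedScheduling message points at an unbound immediate PersistentVolumeClaim, which typically means no PersistentVolume or default StorageClass can satisfy the etcd claim. A sketch of the checks (the claim name below is only a guess; substitute whatever the listing reports as Pending):

```shell
# List claims in the namespace; a Pending claim confirms nothing is bound
kubectl get pvc -n apisix

# See whether any StorageClass exists and which one is marked default
kubectl get storageclass

# Describe the pending claim to read the provisioner's message
# (claim name is illustrative; use the Pending one from the listing above)
kubectl describe pvc -n apisix data-apisix-etcd-0
```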
   
It seems that etcd is not ready. How do I resolve this? I am a Kubernetes novice, asking for advice.

