techcheckri opened a new issue, #10774:
URL: https://github.com/apache/apisix/issues/10774

   ### Description
   
   My APISIX installation seems to work normally: when I apply manifests, the pods are created as expected.
   
   But for the past two days the apisix-etcd-2 pod has been reporting:
   
   ```
   NAME                                         READY   STATUS      RESTARTS
   apisix-etcd-2                                0/1     Completed   0
   ```
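
   The container exited with code 0, so it looks like etcd shut down cleanly rather than crashing. As far as I know, the terminated container's logs are still retrievable while the pod object exists, so this might show why it stopped:
   
   ```
   kubectl -n ingress-apisix logs apisix-etcd-2
   ```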
   
   ```
   kubectl -n ingress-apisix describe pod apisix-etcd-2
   Name:             apisix-etcd-2
   Namespace:        ingress-apisix
   Priority:         0
   Service Account:  default
   Node:             aks-userpool-xxxxxxx-vmss000000/xx.xxx.x.xxxx
   Start Time:       Sat, 06 Jan 2024 15:40:14 +0100
   Labels:           app.kubernetes.io/instance=apisix
                     app.kubernetes.io/managed-by=Helm
                     app.kubernetes.io/name=etcd
                     controller-revision-hash=apisix-etcd-xxxx
                     helm.sh/chart=etcd-8.7.7
                     statefulset.kubernetes.io/pod-name=apisix-etcd-2
   Annotations:      checksum/token-secret: xxxxxxxx
   Status:           Succeeded
   IP:               xx.xxx.x.xxx
   IPs:
     IP:           xx.xx.x.xx
   Controlled By:  StatefulSet/apisix-etcd
   Containers:
     etcd:
       Container ID:   containerd://xxxxxx
       Image:          docker.io/bitnami/etcd:3.5.7-debian-11-r14
       Image ID:       docker.io/bitnami/etcd@sha256:xxxxx
       Ports:          2379/TCP, 2380/TCP
       Host Ports:     0/TCP, 0/TCP
       State:          Terminated
         Reason:       Completed
         Exit Code:    0
         Started:      Sat, 06 Jan 2024 15:40:19 +0100
         Finished:     Sun, 07 Jan 2024 07:18:05 +0100
       Ready:          False
       Restart Count:  0
       Liveness:       exec [/opt/bitnami/scripts/etcd/healthcheck.sh] delay=60s timeout=5s period=30s #success=1 #failure=5
       Readiness:      exec [/opt/bitnami/scripts/etcd/healthcheck.sh] delay=60s timeout=5s period=10s #success=1 #failure=5
       Environment:
         BITNAMI_DEBUG:                     false
         MY_POD_IP:                          (v1:status.podIP)
         MY_POD_NAME:                       apisix-etcd-2 (v1:metadata.name)
         MY_STS_NAME:                       apisix-etcd
         ETCDCTL_API:                       3
         ETCD_ON_K8S:                       yes
         ETCD_START_FROM_SNAPSHOT:          no
         ETCD_DISASTER_RECOVERY:            no
         ETCD_NAME:                         $(MY_POD_NAME)
         ETCD_DATA_DIR:                     /bitnami/etcd/data
         ETCD_LOG_LEVEL:                    info
         ALLOW_NONE_AUTHENTICATION:         yes
         ETCD_AUTH_TOKEN:                   jwt,priv-key=/opt/bitnami/etcd/certs/token/jwt-token.pem,sign-method=RS256,ttl=10m
         ETCD_ADVERTISE_CLIENT_URLS:        http://$(MY_POD_NAME).apisix-etcd-headless.ingress-apisix.svc.cluster.local:2379,http://apisix-etcd.ingress-apisix.svc.cluster.local:2379
         ETCD_LISTEN_CLIENT_URLS:           http://0.0.0.0:2379
         ETCD_INITIAL_ADVERTISE_PEER_URLS:  http://$(MY_POD_NAME).apisix-etcd-headless.ingress-apisix.svc.cluster.local:2380
         ETCD_LISTEN_PEER_URLS:             http://0.0.0.0:2380
         ETCD_INITIAL_CLUSTER_TOKEN:        etcd-cluster-k8s
         ETCD_INITIAL_CLUSTER_STATE:        existing
         ETCD_INITIAL_CLUSTER:              apisix-etcd-0=http://apisix-etcd-0.apisix-etcd-headless.ingress-apisix.svc.cluster.local:2380,apisix-etcd-1=http://apisix-etcd-1.apisix-etcd-headless.ingress-apisix.svc.cluster.local:2380,apisix-etcd-2=http://apisix-etcd-2.apisix-etcd-headless.ingress-apisix.svc.cluster.local:2380
         ETCD_CLUSTER_DOMAIN:               apisix-etcd-headless.ingress-apisix.svc.cluster.local
       Mounts:
         /bitnami/etcd from data (rw)
         /opt/bitnami/etcd/certs/token/ from etcd-jwt-token (ro)
         /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xxx (ro)
   Conditions:
     Type               Status
     DisruptionTarget   True 
     Initialized        True 
     Ready              False 
     ContainersReady    False 
     PodScheduled       True 
   Volumes:
     data:
       Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
       ClaimName:  data-apisix-etcd-2
       ReadOnly:   false
     etcd-jwt-token:
       Type:        Secret (a volume populated by a Secret)
       SecretName:  apisix-etcd-jwt-token
       Optional:    false
     kube-api-access-5mkqf:
       Type:                    Projected (a volume that contains injected data from multiple sources)
       TokenExpirationSeconds:  3607
       ConfigMapName:           kube-root-ca.crt
       ConfigMapOptional:       <nil>
       DownwardAPI:             true
   QoS Class:                   BestEffort
   Node-Selectors:              <none>
   Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                                node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
   Events:                      <none>
   ```
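
   Beyond describing the pod I have not dug much further yet. Since `ALLOW_NONE_AUTHENTICATION=yes` is set, I assume I could check whether the remaining members still form a healthy cluster by exec'ing into one of them, something like:
   
   ```
   kubectl -n ingress-apisix exec -it apisix-etcd-0 -- \
     etcdctl endpoint health --cluster
   ```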
   
   What happened?
   
   What can I do to get apisix-etcd-2 running again?
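
   The only recovery idea I have so far is deleting the pod so that the StatefulSet controller recreates it against the existing PVC (`data-apisix-etcd-2`), but I am not sure whether that is safe for the etcd cluster:
   
   ```
   kubectl -n ingress-apisix delete pod apisix-etcd-2
   ```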
   
   How can I prevent this in the future?
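
   Since the `DisruptionTarget` condition is `True`, I wonder whether the pod was evicted during node maintenance. `Events` shows `<none>`, which I assume means the events have already expired (this started two days ago), but a query I might still try:
   
   ```
   kubectl get events -A --field-selector involvedObject.name=apisix-etcd-2
   ```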
   
   Please help!
   
   
   
   ### Environment
   
   Running on Azure (AKS), so most of the items below do not apply.
   
   - APISIX version (run `apisix version`): -bash: apisix: command not found
   - Operating system (run `uname -a`): Linux kubctl 4.19.0 #1 SMP Wed Jul 12 12:00:44 MSK 2023 x86_64 GNU/Linux
   - OpenResty / Nginx version (run `openresty -V` or `nginx -V`):
   - etcd version, if relevant (run `curl http://127.0.0.1:9090/v1/server_info`):
   - APISIX Dashboard version, if relevant:
   - Plugin runner version, for issues related to plugin runners:
   - LuaRocks version, for installation issues (run `luarocks --version`):
   

