zhangyihong opened a new issue, #282:
URL: https://github.com/apache/apisix-helm-chart/issues/282

   k8s environment:
   1. AWS Ningxia region EKS
   ```
   kubectl version --short
   Client Version: v1.19.6-eks-49a6c0
   Server Version: v1.20.15-eks-0d102a7
   ```
   
   2. Chart version:
   ```
   helm list -n ingress-apisix
   NAME         NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
   apisix       ingress-apisix  1               2022-05-12 14:37:31.160379 +0800 CST    deployed        apisix-0.9.2    2.13.1
   ```
   Installation method:
   ```
   helm install apisix apisix/apisix \
     --set gateway.type=LoadBalancer \
     --set ingress-controller.enabled=true \
     --namespace ingress-apisix \
     --set ingress-controller.config.apisix.serviceNamespace=ingress-apisix
   ```
   Pod and svc details:
   ```
   kubectl get po -n ingress-apisix -o wide
   NAME                                         READY   STATUS             RESTARTS   AGE   IP               NODE                                                NOMINATED NODE   READINESS GATES
   apisix-8fd7755b8-gssjm                       1/1     Running            0          10m   172.19.110.47    ip-172-19-103-234.cn-northwest-1.compute.internal   <none>           <none>
   apisix-etcd-0                                1/1     Running            0          10m   172.19.97.230    ip-172-19-108-124.cn-northwest-1.compute.internal   <none>           <none>
   apisix-etcd-1                                0/1     CrashLoopBackOff   6          10m   172.19.125.194   ip-172-19-103-234.cn-northwest-1.compute.internal   <none>           <none>
   apisix-etcd-2                                1/1     Running            0          10m   172.19.161.188   ip-172-19-164-161.cn-northwest-1.compute.internal   <none>           <none>
   apisix-ingress-controller-699bf76956-4x78m   1/1     Running            0          10m   172.19.109.163   ip-172-19-103-234.cn-northwest-1.compute.internal   <none>           <none>
   
   kubectl get svc -n ingress-apisix
   NAME                        TYPE           CLUSTER-IP       EXTERNAL-IP                                      PORT(S)             AGE
   apisix-admin                ClusterIP      10.100.192.112   <none>                                           9180/TCP            10m
   apisix-etcd                 ClusterIP      10.100.73.13     <none>                                           2379/TCP,2380/TCP   10m
   apisix-etcd-headless        ClusterIP      None             <none>                                           2379/TCP,2380/TCP   10m
   apisix-gateway              LoadBalancer   10.100.73.47     xxxx-xxxxx.cn-northwest-1.elb.amazonaws.com.cn   80:32049/TCP        10m
   apisix-ingress-controller   ClusterIP      10.100.158.3     <none>                                           80/TCP              10m
   
   ```
   Problem:
   As shown above, apisix-etcd-1 keeps restarting. The pod's logs are as follows:
   ```
   kubectl logs apisix-etcd-1 -n ingress-apisix
   etcd 06:48:47.33 
   etcd 06:48:47.33 Welcome to the Bitnami etcd container
   etcd 06:48:47.34 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-etcd
   etcd 06:48:47.34 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-etcd/issues
   etcd 06:48:47.34 
   etcd 06:48:47.34 INFO  ==> ** Starting etcd setup **
   etcd 06:48:47.36 INFO  ==> Validating settings in ETCD_* env vars..
   etcd 06:48:47.37 WARN  ==> You set the environment variable ALLOW_NONE_AUTHENTICATION=yes. For safety reasons, do not use this flag in a production environment.
   etcd 06:48:47.38 INFO  ==> Initializing etcd
   etcd 06:48:47.38 INFO  ==> Generating etcd config file using env variables
   etcd 06:48:47.40 INFO  ==> Detected data from previous deployments
   etcd 06:48:47.50 INFO  ==> Updating member in existing cluster
   
   {"level":"warn","ts":"2022-05-12T06:48:47.594Z","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"endpoint://client-cfa2c90d-8acd-4633-86ea-aaf91758985e/apisix-etcd-0.apisix-etcd-headless.ingress-apisix.svc.cluster.local:2379","attempt":0,"error":"rpc error: code = NotFound desc = etcdserver: member not found"}
   Error: etcdserver: member not found
   ```
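   The `member not found` error, taken together with the earlier log lines `Detected data from previous deployments` and `Updating member in existing cluster`, suggests that apisix-etcd-1's persistent volume still holds membership state from an earlier deployment: on startup the Bitnami entrypoint tries to update its (now stale) member ID in the running cluster, and the cluster rejects it. As a diagnostic sketch (an assumption on my part, not a confirmed root cause: it presumes `etcdctl` is available inside the running apisix-etcd-0 container and that client auth is disabled, which matches `ALLOW_NONE_AUTHENTICATION=yes` above):

   ```shell
   # Hypothetical diagnostic: list the cluster's current members from a healthy pod.
   kubectl exec -n ingress-apisix apisix-etcd-0 -- \
     etcdctl --endpoints=http://127.0.0.1:2379 member list

   # If apisix-etcd-1 is absent from the output, or listed under a different
   # member ID than the one stored on its volume, that would explain the
   # "etcdserver: member not found" error above.
   ```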
   
   ```
   kubectl describe pod apisix-etcd-1 -n ingress-apisix
   Name:         apisix-etcd-1
   Namespace:    ingress-apisix
   Priority:     0
    Node:         ip-172-19-103-234.cn-northwest-1.compute.internal/172.19.103.234
   Start Time:   Thu, 12 May 2022 14:50:49 +0800
   Labels:       app.kubernetes.io/instance=apisix
                 app.kubernetes.io/managed-by=Helm
                 app.kubernetes.io/name=etcd
                 controller-revision-hash=apisix-etcd-6b644fb7bf
                 helm.sh/chart=etcd-7.0.4
                 statefulset.kubernetes.io/pod-name=apisix-etcd-1
    Annotations:  checksum/token-secret: c1e6eac2a55b08a1b379e89dd5133d17f5e112ef9ebab1429a62ef704f42f72b
                 kubernetes.io/psp: eks.privileged
   Status:       Running
   IP:           172.19.125.146
   IPs:
     IP:           172.19.125.146
   Controlled By:  StatefulSet/apisix-etcd
   Containers:
     etcd:
        Container ID:   docker://b69e9d4673842170cc8e3306f61314e9801c8a3a3de03a2e1957ea0c32f9f20c
        Image:          docker.io/bitnami/etcd:3.4.18-debian-10-r14
        Image ID:       docker-pullable://bitnami/etcd@sha256:ce0f7334f0b31f341c0bcea4b9cec6fb1e37837400fd16c257f9f6e84e35192b
       Ports:          2379/TCP, 2380/TCP
       Host Ports:     0/TCP, 0/TCP
       State:          Waiting
         Reason:       CrashLoopBackOff
       Last State:     Terminated
         Reason:       Error
         Exit Code:    1
         Started:      Thu, 12 May 2022 14:52:41 +0800
         Finished:     Thu, 12 May 2022 14:52:41 +0800
       Ready:          False
       Restart Count:  4
        Liveness:       exec [/opt/bitnami/scripts/etcd/healthcheck.sh] delay=60s timeout=5s period=30s #success=1 #failure=5
        Readiness:      exec [/opt/bitnami/scripts/etcd/healthcheck.sh] delay=60s timeout=5s period=10s #success=1 #failure=5
       Environment:
         BITNAMI_DEBUG:                     false
         MY_POD_IP:                          (v1:status.podIP)
         MY_POD_NAME:                       apisix-etcd-1 (v1:metadata.name)
         MY_STS_NAME:                       apisix-etcd
         ETCDCTL_API:                       3
         ETCD_ON_K8S:                       yes
         ETCD_START_FROM_SNAPSHOT:          no
         ETCD_DISASTER_RECOVERY:            no
         ETCD_NAME:                         $(MY_POD_NAME)
         ETCD_DATA_DIR:                     /bitnami/etcd/data
         ETCD_LOG_LEVEL:                    info
         ALLOW_NONE_AUTHENTICATION:         yes
          ETCD_AUTH_TOKEN:                   jwt,priv-key=/opt/bitnami/etcd/certs/token/jwt-token.pem,sign-method=RS256,ttl=10m
          ETCD_ADVERTISE_CLIENT_URLS:        http://$(MY_POD_NAME).apisix-etcd-headless.ingress-apisix.svc.cluster.local:2379,http://apisix-etcd.ingress-apisix.svc.cluster.local:2379
          ETCD_LISTEN_CLIENT_URLS:           http://0.0.0.0:2379
          ETCD_INITIAL_ADVERTISE_PEER_URLS:  http://$(MY_POD_NAME).apisix-etcd-headless.ingress-apisix.svc.cluster.local:2380
          ETCD_LISTEN_PEER_URLS:             http://0.0.0.0:2380
          ETCD_INITIAL_CLUSTER_TOKEN:        etcd-cluster-k8s
          ETCD_INITIAL_CLUSTER_STATE:        new
          ETCD_INITIAL_CLUSTER:              apisix-etcd-0=http://apisix-etcd-0.apisix-etcd-headless.ingress-apisix.svc.cluster.local:2380,apisix-etcd-1=http://apisix-etcd-1.apisix-etcd-headless.ingress-apisix.svc.cluster.local:2380,apisix-etcd-2=http://apisix-etcd-2.apisix-etcd-headless.ingress-apisix.svc.cluster.local:2380
          ETCD_CLUSTER_DOMAIN:               apisix-etcd-headless.ingress-apisix.svc.cluster.local
       Mounts:
         /bitnami/etcd from data (rw)
         /opt/bitnami/etcd/certs/token/ from etcd-jwt-token (ro)
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-rdhrk (ro)
   Conditions:
     Type              Status
     Initialized       True 
     Ready             False 
     ContainersReady   False 
     PodScheduled      True 
   Volumes:
     data:
        Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
       ClaimName:  data-apisix-etcd-1
       ReadOnly:   false
     etcd-jwt-token:
       Type:        Secret (a volume populated by a Secret)
       SecretName:  apisix-etcd-jwt-token
       Optional:    false
     default-token-rdhrk:
       Type:        Secret (a volume populated by a Secret)
       SecretName:  default-token-rdhrk
       Optional:    false
   QoS Class:       BestEffort
   Node-Selectors:  <none>
   Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                    node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
   Events:
     Type     Reason     Age                  From               Message
     ----     ------     ----                 ----               -------
      Normal   Scheduled  2m14s                default-scheduler  Successfully assigned ingress-apisix/apisix-etcd-1 to ip-172-19-103-234.cn-northwest-1.compute.internal
      Normal   Pulled     22s (x5 over 116s)   kubelet            Container image "docker.io/bitnami/etcd:3.4.18-debian-10-r14" already present on machine
      Normal   Created    22s (x5 over 116s)   kubelet            Created container etcd
      Normal   Started    22s (x5 over 116s)   kubelet            Started container etcd
      Warning  BackOff    22s (x10 over 115s)  kubelet            Back-off restarting failed container
   ```
   
   Deleting the pod so that it gets recreated does not solve the problem either:
   ```
   kubectl delete pod apisix-etcd-1 -n ingress-apisix
   ```
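   Deleting only the pod likely has no effect because the StatefulSet re-attaches the same PersistentVolumeClaim (`data-apisix-etcd-1`, shown in the describe output above), so the stale membership data survives the restart. A possible workaround (an assumption on my part, not a fix confirmed by the chart maintainers) is to discard the stale volume so the member rejoins with a clean data directory:

   ```shell
   # Hypothetical workaround: remove the stale PVC together with the pod so the
   # StatefulSet provisions a fresh volume for apisix-etcd-1. This destroys that
   # member's local data, so only attempt it while the other two members are
   # healthy, as they are in the pod listing above.
   #
   # --wait=false is needed because the pvc-protection finalizer keeps the PVC
   # around until the pod that mounts it is gone; the claim is actually removed
   # once the pod deletion below releases it.
   kubectl delete pvc data-apisix-etcd-1 -n ingress-apisix --wait=false
   kubectl delete pod apisix-etcd-1 -n ingress-apisix
   ```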
   
   