BadTorro commented on issue #11338:
URL: https://github.com/apache/apisix/issues/11338#issuecomment-2259172299

   > @sudhir649 @BadTorro Deploying the new version of etcd worked for me initially, but whenever the node is scaled down and back up, one of the etcd pods goes into a crash loop. Since the other two etcd pods were still running, it wasn't affecting route and upstream creation. Still, we are about to use this in a production environment, so I wish there were a permanent fix. After reading the solution @sudhir649 pointed out, I have a few questions. 1. Won't deleting the PVC cause loss of data that APISIX needs? 2. The second solution seems worth a try. With (n-1)/2, and 3 etcd replicas in my case, even if one pod is crashing the disaster-recovery cron will still run and take a backup of the PVC. But as you mentioned, when two pods were crashing there was no change, and only when the third pod also crashed did all three pods come back to a running state. In my case only one, or rarely two, pods crash. If you have any inputs, let me know. Thanks in advance!
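
   For what it's worth, on the `(n-1)/2` point: an etcd cluster of `n` members needs a quorum of `floor(n/2) + 1` to stay writable, so it tolerates `floor((n-1)/2)` member failures. With `n = 3` that is exactly one pod, which is why a single crash-looping pod did not break route and upstream creation, while losing two of three would.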
   
   Regarding that, I managed to get it working by basically:
   
   - Deploying the [Longhorn](https://longhorn.io/) storage solution to the cluster (see the StorageClass sketch after this list)
   - Configuring Rancher Desktop based on [this guide](https://medium.com/@thizaom/rancher-desktop-with-rancher-longhorn-280e687f5022) so that open-iscsi is in place and usable
   - Changing the storageClass in the dedicated etcd chart's values.yaml to "longhorn":
   ```yaml
   persistence:
     enabled: true
     # use the Longhorn CSI driver instead of the cluster default
     storageClass: "longhorn"
   ```
   - Starting everything with `tilt up`
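
   For reference, this is roughly the StorageClass a stock Longhorn install creates; the parameters shown are assumptions based on Longhorn's defaults, so check `kubectl get storageclass longhorn -o yaml` on your own cluster:

   ```yaml
   # Sketch of the StorageClass the etcd PVCs bind to; values assume Longhorn defaults.
   apiVersion: storage.k8s.io/v1
   kind: StorageClass
   metadata:
     name: longhorn
   provisioner: driver.longhorn.io    # Longhorn's CSI driver
   allowVolumeExpansion: true
   reclaimPolicy: Delete
   volumeBindingMode: Immediate
   parameters:
     numberOfReplicas: "3"            # Longhorn-level volume replication, separate from etcd's own replicas
     staleReplicaTimeout: "2880"      # minutes before a failed replica is cleaned up
   ```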
   
   It currently keeps running and has not crashed since.
   However, we are also checking whether the [Bitnami chart](https://artifacthub.io/packages/helm/bitnami/apisix) runs out of the box...
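
   If anyone else wants to try that route, here is a minimal values sketch for pointing the chart's bundled etcd at Longhorn. The `etcd` key layout is an assumption based on how Bitnami subcharts are usually exposed, so verify it against the chart's own values.yaml:

   ```yaml
   # Assumed values.yaml for the Bitnami APISIX chart; key names unverified.
   etcd:
     replicaCount: 3
     persistence:
       enabled: true
       storageClass: "longhorn"
   ```

   Installed with e.g. `helm install apisix bitnami/apisix -f values.yaml`.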

