coffeebe4code opened a new issue, #487:
URL: https://github.com/apache/apisix-helm-chart/issues/487
I would be happy to do a PR for this. There was an issue with deploying the
helm chart a second time. After digging, it appeared to be related to a setting
in the `etcd` configuration. Helm would time out waiting for the new rollout to
succeed, and the etcd deployment was emitting hundreds of warnings every
millisecond. I'm not entirely sure what caused the `helm upgrade` to time out,
but the etcd cluster state had something to do with it, judging by the warnings
and by how the pods stayed in a constant warning state.
Looking at the etcd deployment, there is a value that describes the cluster
state. The default value is set to `etcd.initialClusterState: "new"`. What I
found fixed this issue and allowed helm installs to work was to first delete
everything, since the etcd cluster keeps looking for nodes or PVCs that no
longer exist.
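A rough sketch of that cleanup, assuming a release named `apisix` in namespace `ingress-apisix` (both names are hypothetical; adjust to your deployment) and that the etcd pods carry the usual `app.kubernetes.io/name=etcd` label:

```shell
# Hypothetical release and namespace names; substitute your own.
helm uninstall apisix -n ingress-apisix

# helm uninstall does not remove the etcd PVCs; delete them so the next
# install bootstraps a fresh cluster instead of looking for old members.
kubectl delete pvc -n ingress-apisix -l app.kubernetes.io/name=etcd
```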
Then, for every helm install after the first, the cluster state must be
`existing`:
```yaml
etcd:
  enabled: true
  ### THIS CANNOT BE MESSED UP: if already deployed, the state must be "existing".
  initialClusterState: "new"
```
Then, after deploying this to every environment, I updated
`etcd.initialClusterState` to `"existing"`.
This has fixed the issue.
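As a sketch, the values override for every install after the first would look like this (same keys as the snippet above):

```yaml
etcd:
  enabled: true
  # After the first successful deploy, point etcd at the existing cluster;
  # leaving this at "new" makes it try to bootstrap from scratch.
  initialClusterState: "existing"
```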
Let me know if there is something else I am missing, or if this is actually
required for upgrading successfully.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]