Hi,

We have an automated deployment flow that uses kubectl apply -f to upgrade the version of services running in our test and production environments. The file applied is templated with a version that is injected into the container image path, so the only difference between the files used in successive deployments is the version/tag part of the Docker image.
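For context, the flow is essentially the following sketch (the names, registry path, and templating mechanism are illustrative placeholders, not our exact tooling):

    # deployment.yaml.tmpl -- only the image tag differs between runs
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-service
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: my-service
      template:
        metadata:
          labels:
            app: my-service
        spec:
          containers:
          - name: my-service
            image: registry.example.com/my-service:${VERSION}

    # deploy step: substitute the version and apply
    export VERSION=1.4.2
    envsubst < deployment.yaml.tmpl > deployment.yaml
    kubectl apply -f deployment.yaml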
Normally this runs without any problems, but occasionally the change just isn't applied: the command executes successfully, yet the pods are not recreated and the old ones are left running the old version. This behaviour is especially confusing because no error is reported - all scripts report success, but the wrong version of the service is left running in the cluster. It appears to be deterministic, in that rolling back to an older version and then applying the same updated file always fails silently in the same way. If I were to guess, it looks like some kind of hash collision where the change is not detected.

Has anyone else observed this behaviour?

Cheers,
Kristian
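P.S. In case it helps anyone reproduce this, the mismatch shows up if you compare the image tag in the applied file against what the live object reports (deployment/container names as in the sketch above, i.e. placeholders):

    # what the applied file says
    grep 'image:' deployment.yaml

    # what the live Deployment actually has after the apply
    kubectl get deployment my-service \
        -o jsonpath='{.spec.template.spec.containers[0].image}'

    # whether a new rollout was triggered at all
    kubectl rollout status deployment/my-service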