This is an automated email from the ASF dual-hosted git repository.

fanningpj pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/pekko-management.git


The following commit(s) were added to refs/heads/main by this push:
     new cf436f21 doc workaround for k8s pod deletion order (#476)
cf436f21 is described below

commit cf436f21957b4bcad3e951c83dc9a513941b6edc
Author: PJ Fanning <[email protected]>
AuthorDate: Sat Aug 23 10:39:50 2025 +0100

    doc workaround for k8s pod deletion order (#476)
---
 .../paradox/kubernetes-deployment/preparing-for-production.md  | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/docs/src/main/paradox/kubernetes-deployment/preparing-for-production.md b/docs/src/main/paradox/kubernetes-deployment/preparing-for-production.md
index 8e1e0b8c..18374b39 100644
--- a/docs/src/main/paradox/kubernetes-deployment/preparing-for-production.md
+++ b/docs/src/main/paradox/kubernetes-deployment/preparing-for-production.md
@@ -89,8 +89,14 @@ version number off the current git commit hash, which is great especially for co
 need to be involved in selecting a unique version number. After building the image, you can take the version number generated in that step 
 and update the image referenced in the spec accordingly.
 
+## Rolling Updates and Scaling Down
 
+Prior to Kubernetes 1.22, scale-down removed the youngest nodes first, and Pekko Singletons are usually deployed
+on the oldest node.
+This meant less disruption, as the Singletons would only be moved once during a Rolling Update.
+The Eclipse Ditto team have provided documentation on how to adjust the Pod Deletion Cost so that the oldest node is the
+last one to be scaled down.
+We hope to adjust Pekko Management to do this automatically in a future release.
 
-
-
+The GitHub Issue is [#414](https://github.com/apache/pekko-management/issues/414) and this [comment](https://github.com/apache/pekko-management/issues/414#issuecomment-3112361378) contains the workaround.
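For context on the workaround the diff refers to: it builds on the Kubernetes `controller.kubernetes.io/pod-deletion-cost` annotation (beta since 1.22), where the ReplicaSet controller prefers deleting pods with a lower cost first. A minimal sketch, assuming an illustrative pod name and cost value (the actual values come from the linked comment, not from this commit):

```yaml
# Hypothetical example: raise the deletion cost of the pod hosting the
# Cluster Singleton so the ReplicaSet controller scales it down last.
# Pods with a higher pod-deletion-cost are deleted later; the pod name
# and the value "10000" are assumptions for illustration only.
apiVersion: v1
kind: Pod
metadata:
  name: my-pekko-app-0
  annotations:
    controller.kubernetes.io/pod-deletion-cost: "10000"
```

The annotation can also be set on a running pod, e.g. `kubectl annotate pod <name> controller.kubernetes.io/pod-deletion-cost=10000 --overwrite`.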
 


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
