Package: release.debian.org
Severity: normal
Tags: trixie
User: [email protected]
Usertags: pu

Hi,

I would like to propose updating patroni in trixie from the current 4.0.6 to
the 4.0.7 upstream point release, which has been in testing for three weeks.

The debdiff is attached; I have reviewed the upstream changes.


Michael
diff -Nru patroni-4.0.6/debian/changelog patroni-4.0.7/debian/changelog
--- patroni-4.0.6/debian/changelog      2025-06-08 08:52:19.000000000 +0200
+++ patroni-4.0.7/debian/changelog      2025-10-20 14:55:24.000000000 +0200
@@ -1,3 +1,35 @@
+patroni (4.0.7-3+deb13u1) trixie; urgency=medium
+
+  * Upload to stable.
+
+ -- Michael Banck <[email protected]>  Mon, 20 Oct 2025 14:55:24 +0200
+
+patroni (4.0.7-3) unstable; urgency=medium
+
+  * debian/tests/acceptance: Really make the changes from the last revision
+    this time.
+
+ -- Michael Banck <[email protected]>  Thu, 25 Sep 2025 15:01:53 +0200
+
+patroni (4.0.7-2) unstable; urgency=medium
+
+  * debian/tests/acceptance: Further changes to stopping etcd. If the init
+    script exists, use it. Otherwise, if systemd is available, use that. If
+    neither are available, do not try to stop etcd.
+
+ -- Michael Banck <[email protected]>  Thu, 25 Sep 2025 14:20:40 +0200
+
+patroni (4.0.7-1) unstable; urgency=medium
+
+  * New upstream release.
+  * debian/patches/startup_scripts.patch: Refreshed.
+  * debian/patches/avoid_overwriting_configuration_during_boostrap.patch:
+    Likewise.
+  * debian/patches/replslot-cluster-type-attribute.patch: Likewise.
+  * debian/tests/acceptance: Only stop etcd if init script exists.
+
+ -- Michael Banck <[email protected]>  Wed, 24 Sep 2025 16:58:33 +0200
+
 patroni (4.0.6-1) unstable; urgency=medium
 
   * New upstream release.
diff -Nru patroni-4.0.6/debian/patches/avoid_overwriting_configuration_during_boostrap.patch patroni-4.0.7/debian/patches/avoid_overwriting_configuration_during_boostrap.patch
--- patroni-4.0.6/debian/patches/avoid_overwriting_configuration_during_boostrap.patch  2025-03-15 11:57:39.000000000 +0100
+++ patroni-4.0.7/debian/patches/avoid_overwriting_configuration_during_boostrap.patch  2025-09-24 08:10:12.000000000 +0200
@@ -21,7 +21,7 @@
 ===================================================================
 --- patroni.orig/patroni/postgresql/config.py
 +++ patroni/patroni/postgresql/config.py
-@@ -510,7 +510,7 @@ class ConfigHandler(object):
+@@ -513,7 +513,7 @@ class ConfigHandler(object):
              try:
                  for f in self._configuration_to_save:
                      config_file = os.path.join(self._config_dir, f)
@@ -30,7 +30,7 @@
                      if os.path.isfile(config_file):
                          shutil.copy(config_file, backup_file)
                          self.set_file_permissions(backup_file)
-@@ -523,7 +523,7 @@ class ConfigHandler(object):
+@@ -526,7 +526,7 @@ class ConfigHandler(object):
          try:
              for f in self._configuration_to_save:
                  config_file = os.path.join(self._config_dir, f)
diff -Nru patroni-4.0.6/debian/patches/replslot-cluster-type-attribute.patch patroni-4.0.7/debian/patches/replslot-cluster-type-attribute.patch
--- patroni-4.0.6/debian/patches/replslot-cluster-type-attribute.patch  2025-02-20 22:32:02.000000000 +0100
+++ patroni-4.0.7/debian/patches/replslot-cluster-type-attribute.patch  2025-10-20 13:59:09.000000000 +0200
@@ -10,11 +10,11 @@
  patroni/dcs/__init__.py        | 4 +++-
  2 files changed, 5 insertions(+), 2 deletions(-)
 
-diff --git a/docs/dynamic_configuration.rst b/docs/dynamic_configuration.rst
-index 09413312d..908bc3f68 100644
---- a/docs/dynamic_configuration.rst
-+++ b/docs/dynamic_configuration.rst
-@@ -61,9 +61,10 @@ In order to change the dynamic configuration you can use either :ref:`patronictl
+Index: patroni/docs/dynamic_configuration.rst
+===================================================================
+--- patroni.orig/docs/dynamic_configuration.rst
++++ patroni/docs/dynamic_configuration.rst
+@@ -61,9 +61,10 @@ In order to change the dynamic configura
  
     -  **my\_slot\_name**: the name of the permanent replication slot. If the 
permanent slot name matches with the name of the current node it will not be 
created on this node. If you add a permanent physical replication slot which 
name matches the name of a Patroni member, Patroni will ensure that the slot 
that was created is not removed even if the corresponding member becomes 
unresponsive, situation which would normally result in the slot's removal by 
Patroni. Although this can be useful in some situations, such as when you want 
replication slots used by members to persist during temporary failures or when 
importing existing members to a new Patroni cluster (see :ref:`Convert a 
Standalone to a Patroni Cluster <existing_data>` for details), caution should 
be exercised by the operator that these clashes in names are not persisted in 
the DCS, when the slot is no longer required, due to its effect on normal 
functioning of Patroni.
  
@@ -26,11 +26,11 @@
  
  -  **ignore\_slots**: list of sets of replication slot properties for which 
Patroni should ignore matching slots. This configuration/feature/etc. is useful 
when some replication slots are managed outside of Patroni. Any subset of 
matching properties will cause a slot to be ignored.
  
-diff --git a/patroni/dcs/__init__.py b/patroni/dcs/__init__.py
-index 4a3516426..19d3c1aa1 100644
---- a/patroni/dcs/__init__.py
-+++ b/patroni/dcs/__init__.py
-@@ -998,7 +998,9 @@ def __permanent_slots(self) -> Dict[str, Union[Dict[str, Any], Any]]:
+Index: patroni/patroni/dcs/__init__.py
+===================================================================
+--- patroni.orig/patroni/dcs/__init__.py
++++ patroni/patroni/dcs/__init__.py
+@@ -1011,7 +1011,9 @@ class Cluster(NamedTuple('Cluster',
      @property
      def permanent_physical_slots(self) -> Dict[str, Any]:
          """Dictionary of permanent ``physical`` replication slots."""
diff -Nru patroni-4.0.6/debian/patches/startup_scripts.patch patroni-4.0.7/debian/patches/startup_scripts.patch
--- patroni-4.0.6/debian/patches/startup_scripts.patch  2024-11-09 10:47:13.000000000 +0100
+++ patroni-4.0.7/debian/patches/startup_scripts.patch  2025-10-20 13:59:09.000000000 +0200
@@ -55,8 +55,8 @@
 -
  # Pre-commands to start watchdog device
  # Uncomment if watchdog is part of your patroni setup
- #ExecStartPre=-/usr/bin/sudo /sbin/modprobe softdog
- #ExecStartPre=-/usr/bin/sudo /bin/chown postgres /dev/watchdog
+ #ExecStartPre=-+/sbin/modprobe softdog
+ #ExecStartPre=-+/bin/chown postgres /dev/watchdog
  
  # Start the patroni process
 -ExecStart=/bin/patroni /etc/patroni.yml
diff -Nru patroni-4.0.6/debian/patroni.service patroni-4.0.7/debian/patroni.service
--- patroni-4.0.6/debian/patroni.service        2025-06-06 19:27:48.000000000 +0200
+++ patroni-4.0.7/debian/patroni.service        2025-09-22 18:07:47.000000000 +0200
@@ -23,8 +23,8 @@
 
 # Pre-commands to start watchdog device
 # Uncomment if watchdog is part of your patroni setup
-#ExecStartPre=-/usr/bin/sudo /sbin/modprobe softdog
-#ExecStartPre=-/usr/bin/sudo /bin/chown postgres /dev/watchdog
+#ExecStartPre=-+/sbin/modprobe softdog
+#ExecStartPre=-+/bin/chown postgres /dev/watchdog
 
 # Start the patroni process
 ExecStart=/bin/patroni /etc/patroni.yml
diff -Nru patroni-4.0.6/debian/tests/acceptance patroni-4.0.7/debian/tests/acceptance
--- patroni-4.0.6/debian/tests/acceptance       2025-02-20 22:32:02.000000000 +0100
+++ patroni-4.0.7/debian/tests/acceptance       2025-09-25 14:18:30.000000000 +0200
@@ -49,8 +49,16 @@
        # ensure no etcd server is running.
        if [ "$DCS" = "etcd" -o "$DCS" = "etcd3" ]
        then
-               service etcd stop
-               service etcd status || true
+               if [ -x /etc/init.d/etcd ]
+               then
+                       service etcd stop
+                       service etcd status || true
+               else
+                       if [ -d /run/systemd/system ]
+                       then
+                               systemctl stop etcd
+                       fi
+               fi
        fi
        # make sure various directories are writable by the postgres user
        for dir in features/output data /tmp/pgpass_postgres-?; do
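
For reference, the new fallback above, restated as a minimal Python sketch
(the test itself is shell; paths and commands as in the hunk, the function
name is illustrative):

    import os
    import subprocess

    def stop_etcd() -> None:
        """Stop a running etcd, preferring the init script, then systemd."""
        if os.access('/etc/init.d/etcd', os.X_OK):
            subprocess.run(['service', 'etcd', 'stop'], check=True)
            subprocess.run(['service', 'etcd', 'status'])  # status may be non-zero
        elif os.path.isdir('/run/systemd/system'):         # systemd is running
            subprocess.run(['systemctl', 'stop', 'etcd'], check=True)
        # neither init script nor systemd available: nothing to stop
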
diff -Nru patroni-4.0.6/docs/index.rst patroni-4.0.7/docs/index.rst
--- patroni-4.0.6/docs/index.rst        2025-06-06 19:27:48.000000000 +0200
+++ patroni-4.0.7/docs/index.rst        2025-09-22 18:07:47.000000000 +0200
@@ -10,7 +10,7 @@
 
 We call Patroni a "template" because it is far from being a one-size-fits-all 
or plug-and-play replication system. It will have its own caveats. Use wisely. 
There are many ways to run high availability with PostgreSQL; for a list, see 
the `PostgreSQL Documentation 
<https://wiki.postgresql.org/wiki/Replication,_Clustering,_and_Connection_Pooling>`__.
 
-Currently supported PostgreSQL versions: 9.3 to 17.
+Currently supported PostgreSQL versions: 9.3 to 18.
 
 **Note to Citus users**: Starting from 3.0 Patroni nicely integrates with the 
`Citus <https://github.com/citusdata/citus>`__ database extension to Postgres. 
Please check the :ref:`Citus support page <citus>` in the Patroni documentation 
for more info about how to use Patroni high availability together with a Citus 
distributed cluster.
 
diff -Nru patroni-4.0.6/docs/README.rst patroni-4.0.7/docs/README.rst
--- patroni-4.0.6/docs/README.rst       2025-06-06 19:27:48.000000000 +0200
+++ patroni-4.0.7/docs/README.rst       2025-09-22 18:07:47.000000000 +0200
@@ -34,6 +34,10 @@
 there is no requirement on the minimal number of nodes. Running a cluster 
consisting of one primary and one standby is
 perfectly fine. You can add more standby nodes later.
 
+**2-node clusters** (primary + standby) are common and provide automatic 
failover with high availability. Note that during failover, you'll temporarily 
have no redundancy until the failed node rejoins.
+
+**DCS requirements**: Your DCS (etcd, ZooKeeper, Consul) has to run with **3 
or 5 nodes** for proper consensus and fault tolerance. A single DCS cluster can 
store information for hundreds or thousands of Patroni clusters using different 
namespace/scope combinations.
+
 Running and Configuring
 -----------------------
 
diff -Nru patroni-4.0.6/docs/releases.rst patroni-4.0.7/docs/releases.rst
--- patroni-4.0.6/docs/releases.rst     2025-06-06 19:27:48.000000000 +0200
+++ patroni-4.0.7/docs/releases.rst     2025-09-22 18:07:47.000000000 +0200
@@ -3,6 +3,56 @@
 Release notes
 =============
 
+Version 4.0.7
+-------------
+
+Released 2025-09-22
+
+**New features**
+
+- Add support for PostgreSQL 18 RC1 (Alexander Kukushkin)
+
+  GUC's validator rules were extended. Patroni now properly handles the new 
background I/O worker.
+
+
+**Bugfixes**
+
+- Fix potential issue around resolving localhost to IPv6 on Windows (András 
Váczi)
+
+  When configuring ``listen_addresses`` in PostgreSQL, using ``0.0.0.0`` or 
``127.0.0.1`` will restrict listening to IPv4 only, excluding IPv6. On typical 
Windows systems, however, ``localhost`` often resolves to the IPv6 address 
``::1`` by default. To ensure compatibility, Patroni now configures PostgreSQL 
to listen on ``127.0.0.1``, instead of ``localhost``, on Windows systems.
+
+- Return global config only when ``/config`` key exists in DCS (Alexander 
Kukushkin)
+
+  Patroni REST API was returning an empty configuration instead of raising an 
error if the ``/config`` key was missing in DCS.
+
+- Fix the issue of failsafe mode not being triggered in case of Etcd 
unavailability (Alexander Kukushkin)
+
+  Patroni was not always properly handling ``etcd3`` exceptions, which 
resulted in failsafe mode not being triggered.
+
+- Fix signal handler reentrancy deadlock (Waynerv)
+
+  Patroni running in a Docker container with ``PID=1`` in some special cases 
was experiencing deadlock after receiving ``SIGCHLD``.
+
+- Recreate (permanent) physical slot when it doesn't reserve WAL (Israel Barth 
Rubio)
+
+  Permanent physical replication slots created outside of Patroni scope 
without reserving WALs were causing a ``replication slot cannot be advanced`` 
error. To avoid this, Patroni now recreates such slots.
+
+- Handle watch cancelation messages in ``etcd3`` properly (Alexander Kukushkin)
+
+  When ``etcd3`` sends a cancelation message to the watch channel, it doesn't 
close the connection. This results in Patroni using stale data. Patroni now 
solves it by breaking a loop of reading chunked response and closing the 
connection on the Patroni side.
+
+- Handle case when ``HTTPConnection`` socket is wrapped with ``pyopenssl`` 
(Alexander Kukushkin)
+
+  Patroni was not correctly using ``pyopenssl`` interfaces, enforced in 
``python-etcd``.
+
+
+**Documentation improvements**
+
+- Improve 2-node cluster guidance (Nikolay Samokhvalov)
+
+  Clarify behaviour during failover and DCS requirements.
+
+
 Version 4.0.6
 -------------
 
diff -Nru patroni-4.0.6/extras/startup-scripts/patroni.service patroni-4.0.7/extras/startup-scripts/patroni.service
--- patroni-4.0.6/extras/startup-scripts/patroni.service        2025-06-06 19:27:48.000000000 +0200
+++ patroni-4.0.7/extras/startup-scripts/patroni.service        2025-09-22 18:07:47.000000000 +0200
@@ -23,8 +23,8 @@
 
 # Pre-commands to start watchdog device
 # Uncomment if watchdog is part of your patroni setup
-#ExecStartPre=-/usr/bin/sudo /sbin/modprobe softdog
-#ExecStartPre=-/usr/bin/sudo /bin/chown postgres /dev/watchdog
+#ExecStartPre=-+/sbin/modprobe softdog
+#ExecStartPre=-+/bin/chown postgres /dev/watchdog
 
 # Start the patroni process
 ExecStart=/bin/patroni /etc/patroni.yml
diff -Nru patroni-4.0.6/features/steps/basic_replication.py patroni-4.0.7/features/steps/basic_replication.py
--- patroni-4.0.6/features/steps/basic_replication.py   2025-06-06 19:27:48.000000000 +0200
+++ patroni-4.0.7/features/steps/basic_replication.py   2025-09-22 18:07:47.000000000 +0200
@@ -102,7 +102,7 @@
         assert False, "Error loading test data on {0}: {1}".format(pg_name, e)
 
 
-@then('Table {table_name:w} is present on {pg_name:name} after {max_replication_delay:d} seconds')
+@then('table {table_name:w} is present on {pg_name:name} after {max_replication_delay:d} seconds')
 def table_is_present_on(context, table_name, pg_name, max_replication_delay):
     max_replication_delay *= context.timeout_multiplier
     for _ in range(int(max_replication_delay)):
@@ -124,7 +124,7 @@
 @step('replication works from {primary:name} to {replica:name} after {time_limit:d} seconds')
 @then('replication works from {primary:name} to {replica:name} after {time_limit:d} seconds')
 def replication_works(context, primary, replica, time_limit):
-    context.execute_steps(u"""
+    context.execute_steps("""
         When I add the table test_{0} to {1}
         Then table test_{0} is present on {2} after {3} seconds
     """.format(str(time()).replace('.', '_').replace(',', '_'), primary, 
replica, time_limit))
diff -Nru patroni-4.0.6/features/steps/cascading_replication.py patroni-4.0.7/features/steps/cascading_replication.py
--- patroni-4.0.6/features/steps/cascading_replication.py       2025-06-06 19:27:48.000000000 +0200
+++ patroni-4.0.7/features/steps/cascading_replication.py       2025-09-22 18:07:47.000000000 +0200
@@ -9,7 +9,7 @@
     return context.pctl.start(name, custom_config={'tags': {tag_name: tag_value}})
 
 
-@then('There is a {label} with "{content}" in {name:name} data directory')
+@then('there is a {label} with "{content}" in {name:name} data directory')
 def check_label(context, label, content, name):
     value = (context.pctl.read_label(name, label) or '').replace('\n', '\\n')
     assert content in value, "\"{0}\" in {1} doesn't contain {2}".format(value, label, content)
diff -Nru patroni-4.0.6/.github/workflows/install_deps.py patroni-4.0.7/.github/workflows/install_deps.py
--- patroni-4.0.6/.github/workflows/install_deps.py     2025-06-06 19:27:48.000000000 +0200
+++ patroni-4.0.7/.github/workflows/install_deps.py     2025-09-22 18:07:47.000000000 +0200
@@ -10,7 +10,7 @@
 
 def install_requirements(what):
     subprocess.call([sys.executable, '-m', 'pip', 'install', '--upgrade', 'pip'])
-    s = subprocess.call([sys.executable, '-m', 'pip', 'install', '--upgrade', 'wheel', 'setuptools'])
+    s = subprocess.call([sys.executable, '-m', 'pip', 'install', '--upgrade', 'wheel', 'setuptools', 'distlib'])
     if s != 0:
         return s
 
@@ -25,11 +25,11 @@
     requirements += ['coverage']
     # try to split tests between psycopg2 and psycopg3
     requirements += ['psycopg[binary]'] if sys.version_info >= (3, 8, 0) and\
-        (sys.platform != 'darwin' or what == 'etcd3') else ['psycopg2-binary==2.9.9' 
+        (sys.platform != 'darwin' or what == 'etcd3') else ['psycopg2-binary==2.9.9'
                                                             if sys.platform == 'darwin' else 'psycopg2-binary']
 
-    from pip._vendor.distlib.markers import evaluator, DEFAULT_CONTEXT
-    from pip._vendor.distlib.util import parse_requirement
+    from distlib.markers import evaluator, DEFAULT_CONTEXT
+    from distlib.util import parse_requirement
 
     for r in read('requirements.txt').split('\n'):
         r = parse_requirement(r)
diff -Nru patroni-4.0.6/.github/workflows/release.yaml patroni-4.0.7/.github/workflows/release.yaml
--- patroni-4.0.6/.github/workflows/release.yaml        2025-06-06 19:27:48.000000000 +0200
+++ patroni-4.0.7/.github/workflows/release.yaml        2025-09-22 18:07:47.000000000 +0200
@@ -34,10 +34,10 @@
 
     - name: Publish distribution to Test PyPI
       if: github.event_name == 'push'
-      uses: pypa/[email protected]
+      uses: pypa/[email protected]
       with:
         repository_url: https://test.pypi.org/legacy/
 
     - name: Publish distribution to PyPI
       if: github.event_name == 'release'
-      uses: pypa/[email protected]
+      uses: pypa/[email protected]
diff -Nru patroni-4.0.6/.github/workflows/tests.yaml patroni-4.0.7/.github/workflows/tests.yaml
--- patroni-4.0.6/.github/workflows/tests.yaml  2025-06-06 19:27:48.000000000 +0200
+++ patroni-4.0.7/.github/workflows/tests.yaml  2025-09-22 18:07:47.000000000 +0200
@@ -198,7 +198,7 @@
 
     - uses: jakebailey/pyright-action@v2
       with:
-        version: 1.1.401
+        version: 1.1.405
 
   ydiff:
     name: Test compatibility with the latest version of ydiff
diff -Nru patroni-4.0.6/patroni/api.py patroni-4.0.7/patroni/api.py
--- patroni-4.0.6/patroni/api.py        2025-06-06 19:27:48.000000000 +0200
+++ patroni-4.0.7/patroni/api.py        2025-09-22 18:07:47.000000000 +0200
@@ -517,7 +517,7 @@
         ``502`` instead.
         """
         cluster = self.server.patroni.dcs.cluster or self.server.patroni.dcs.get_cluster()
-        if cluster.config:
+        if cluster.config and cluster.config.modify_version:
             self._write_json_response(200, cluster.config.data)
         else:
             self.send_error(502)
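
The user-visible effect of this hunk: GET /config now answers 502 until the
configuration has actually been written to the DCS (cluster.config present
with a nonzero modify_version), instead of returning an empty document. A
hedged sketch of a client-side check, not part of the debdiff (host and port
are assumptions; 8008 is Patroni's default REST API port):

    import json
    from urllib.error import HTTPError
    from urllib.request import urlopen

    try:
        with urlopen('http://localhost:8008/config', timeout=5) as resp:
            print(json.load(resp))            # dynamic configuration as JSON
    except HTTPError as e:
        if e.code == 502:
            print('/config key not present in DCS yet')
        else:
            raise
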
diff -Nru patroni-4.0.6/patroni/dcs/etcd3.py patroni-4.0.7/patroni/dcs/etcd3.py
--- patroni-4.0.6/patroni/dcs/etcd3.py  2025-06-06 19:27:48.000000000 +0200
+++ patroni-4.0.7/patroni/dcs/etcd3.py  2025-09-22 18:07:47.000000000 +0200
@@ -21,7 +21,7 @@
 from ..collections import EMPTY_DICT
 from ..exceptions import DCSError, PatroniException
 from ..postgresql.mpp import AbstractMPP
-from ..utils import deep_compare, enable_keepalive, iter_response_objects, RetryFailedError, USER_AGENT
+from ..utils import deep_compare, enable_keepalive, iter_response_objects, parse_bool, RetryFailedError, USER_AGENT
 from . import catch_return_false_exception, Cluster, ClusterConfig, \
     Failover, Leader, Member, Status, SyncState, TimelineHistory
 from .etcd import AbstractEtcd, AbstractEtcdClientWithFailover, catch_etcd_errors, \
@@ -66,6 +66,10 @@
     pass
 
 
+class Etcd3WatchCanceled(Etcd3Exception):
+    pass
+
+
 class Etcd3ClientError(Etcd3Exception):
 
     def __init__(self, code: Optional[int] = None, error: Optional[str] = None, status: Optional[int] = None) -> None:
@@ -356,7 +360,6 @@
                 exc = e
             self._reauthenticate = True
             if retry:
-                logger.error('retry = %s', retry)
                 retry.ensure_deadline(0.5, exc)
             elif reauthenticated:
                 raise exc
@@ -508,6 +511,8 @@
         if 'error' in message:
             raise _raise_for_data(message)
         result = message.get('result', EMPTY_DICT)
+        if parse_bool(result.get('canceled')):
+            raise Etcd3WatchCanceled('Watch canceled')
         header = result.get('header', EMPTY_DICT)
         self._check_cluster_raft_term(header.get('cluster_id'), header.get('raft_term'))
         events: List[Dict[str, Any]] = result.get('events', [])
@@ -555,8 +560,10 @@
 
         try:
             self._do_watch(result['header']['revision'])
+        except Etcd3WatchCanceled:
+            logger.info('Watch request canceled')
         except Exception as e:
-            # Following exceptions are expected on Windows because the /watch request  is done with `read_timeout`
+            # Following exceptions are expected on Windows because the /watch request is done with `read_timeout`
             if not (os.name == 'nt' and isinstance(e, (ReadTimeoutError, ProtocolError))):
                 logger.error('watchprefix failed: %r', e)
         finally:
@@ -576,16 +583,21 @@
                 time.sleep(1)
 
     def kill_stream(self) -> None:
-        sock = None
+        conn_sock: Any = None
         with self._response_lock:
             if isinstance(self._response, urllib3.response.HTTPResponse):
                 try:
-                    sock = self._response.connection.sock if self._response.connection else None
+                    conn_sock = self._response.connection.sock if self._response.connection else None
                 except Exception:
-                    sock = None
+                    conn_sock = None
             else:
                 self._response = False
-        if sock:
+        if conn_sock:
+            # python-etcd forces usage of pyopenssl if the last one is available.
+            # In this case HTTPConnection.socket is not inherited from socket.socket, but urllib3 uses custom
+            # class `WrappedSocket`, which shutdown() method could be incompatible with socket.shutdown().
+            # Therefore we use WrappedSocket.socket, which points to original `socket` object.
+            sock: socket.socket = conn_sock.socket if conn_sock.__class__.__name__ == 'WrappedSocket' else conn_sock
             try:
                 sock.shutdown(socket.SHUT_RDWR)
                 sock.close()
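
The cancelation handling boils down to this: an etcd3 watch stream can
deliver {"result": {"canceled": true}} without the server closing the
connection, so the reader has to bail out and close it itself. A
stripped-down sketch of that message handling (names illustrative, not the
exact Patroni internals):

    import json

    class WatchCanceled(Exception):
        """Server canceled the watch; the connection stays open."""

    def handle_watch_message(raw: bytes) -> list:
        message = json.loads(raw)
        if 'error' in message:
            raise RuntimeError(message['error'])
        result = message.get('result', {})
        if result.get('canceled'):
            raise WatchCanceled('Watch canceled')  # caller closes the connection
        return result.get('events', [])

    try:
        handle_watch_message(b'{"result": {"canceled": true}}')
    except WatchCanceled:
        pass  # close the HTTP connection and re-establish the watch
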
diff -Nru patroni-4.0.6/patroni/dcs/etcd.py patroni-4.0.7/patroni/dcs/etcd.py
--- patroni-4.0.6/patroni/dcs/etcd.py   2025-06-06 19:27:48.000000000 +0200
+++ patroni-4.0.7/patroni/dcs/etcd.py   2025-09-22 18:07:47.000000000 +0200
@@ -317,7 +317,12 @@
 
         # Update machines_cache if previous attempt of update has failed
         if self._update_machines_cache:
-            self._load_machines_cache()
+            try:
+                self._load_machines_cache()
+            except etcd.EtcdException as e:
+                # If etcd cluster isn't accessible _load_machines_cache() -> _refresh_machines_cache() may raise
+                # etcd.EtcdException. We need to convert it to etcd.EtcdConnectionFailed for failsafe_mode to work.
+                raise etcd.EtcdConnectionFailed('No more machines in the cluster') from e
         elif not self._use_proxies and time.time() - self._machines_cache_updated > self._machines_cache_ttl:
             self._refresh_machines_cache()
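
Why the conversion matters: per the comment in the hunk, failsafe mode only
reacts to etcd.EtcdConnectionFailed, so a bare etcd.EtcdException escaping
from the machines-cache refresh used to bypass it. A self-contained sketch
of the chaining pattern (stand-in exception classes, not the python-etcd
ones):

    class EtcdException(Exception): ...
    class EtcdConnectionFailed(EtcdException): ...

    def load_machines_cache():
        raise EtcdException('etcd cluster is not accessible')

    try:
        try:
            load_machines_cache()
        except EtcdException as e:
            # re-raise as the narrower type the failsafe path catches,
            # keeping the original error as __cause__
            raise EtcdConnectionFailed('No more machines in the cluster') from e
    except EtcdConnectionFailed as e:
        print(type(e).__name__, 'caused by', type(e.__cause__).__name__)
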
 
diff -Nru patroni-4.0.6/patroni/dcs/__init__.py patroni-4.0.7/patroni/dcs/__init__.py
--- patroni-4.0.6/patroni/dcs/__init__.py       2025-06-06 19:27:48.000000000 +0200
+++ patroni-4.0.7/patroni/dcs/__init__.py       2025-09-22 18:07:47.000000000 +0200
@@ -18,7 +18,7 @@
 
 from .. import global_config
 from ..dynamic_loader import iter_classes, iter_modules
-from ..exceptions import PatroniFatalException
+from ..exceptions import PatroniAssertionError, PatroniFatalException
 from ..tags import Tags
 from ..utils import deep_compare, parse_int, uri
 
@@ -198,10 +198,13 @@
             data = {'conn_url': conn_url, 'api_url': api_url}
         else:
             try:
-                data = json.loads(value)
-                assert isinstance(data, dict)
-            except (AssertionError, TypeError, ValueError):
-                data: Dict[str, Any] = {}
+                json_data = json.loads(value)
+                if isinstance(json_data, dict):
+                    data = cast(Dict[str, Any], json_data)
+                else:
+                    raise PatroniAssertionError('not a dict')
+            except (PatroniAssertionError, TypeError, ValueError):
+                data = {}
         return Member(version, name, session, data)
 
     @property
@@ -480,9 +483,12 @@
             data: Dict[str, Any] = value
         elif value:
             try:
-                data = json.loads(value)
-                assert isinstance(data, dict)
-            except AssertionError:
+                json_data = json.loads(value)
+                if isinstance(json_data, dict):
+                    data = cast(Dict[str, Any], json_data)
+                else:
+                    raise PatroniAssertionError('not a dict')
+            except PatroniAssertionError:
                 data = {}
             except ValueError:
                 t = [a.strip() for a in value.split(':')]
@@ -547,10 +553,13 @@
             False
         """
         try:
-            data = json.loads(value)
-            assert isinstance(data, dict)
-        except (AssertionError, TypeError, ValueError):
-            data: Dict[str, Any] = {}
+            json_data = json.loads(value)
+            if isinstance(json_data, dict):
+                data = cast(Dict[str, Any], json_data)
+            else:
+                raise PatroniAssertionError('not a dict')
+        except (PatroniAssertionError, TypeError, ValueError):
+            data = {}
             modify_version = 0
         return ClusterConfig(version, data, version if modify_version is None else modify_version)
 
@@ -603,11 +612,12 @@
         try:
             if value and isinstance(value, str):
                 value = json.loads(value)
-            assert isinstance(value, dict)
+            if not isinstance(value, dict):
+                raise PatroniAssertionError('not a dict')
             leader = value.get('leader')
             quorum = value.get('quorum')
             return SyncState(version, leader, value.get('sync_standby'), int(quorum) if leader and quorum else 0)
-        except (AssertionError, TypeError, ValueError):
+        except (PatroniAssertionError, TypeError, ValueError):
             return SyncState.empty(version)
 
     @staticmethod
@@ -737,10 +747,13 @@
             []
         """
         try:
-            lines = json.loads(value)
-            assert isinstance(lines, list)
-        except (AssertionError, TypeError, ValueError):
-            lines: List[_HistoryTuple] = []
+            json_lines = json.loads(value)
+            if isinstance(json_lines, list):
+                lines = cast(List[_HistoryTuple], json_lines)
+            else:
+                raise PatroniAssertionError('not a list')
+        except (PatroniAssertionError, TypeError, ValueError):
+            lines = []
         return TimelineHistory(version, value, lines)
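
The pattern behind all of these hunks: `assert isinstance(...)` disappears
under `python -O`, so the type checks silently stop running; raising an
explicit exception keeps them active in all modes. A minimal sketch of the
new pattern (function name illustrative):

    import json
    from typing import Any, Dict

    class PatroniAssertionError(Exception):
        """Any issue related to type/value validation."""

    def parse_member_data(value: str) -> Dict[str, Any]:
        try:
            json_data = json.loads(value)
            if not isinstance(json_data, dict):
                raise PatroniAssertionError('not a dict')
            return json_data
        except (PatroniAssertionError, TypeError, ValueError):
            return {}

    print(parse_member_data('{"a": 1}'))  # {'a': 1}
    print(parse_member_data('[1, 2]'))    # {} -- rejected even under python -O
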
 
 
diff -Nru patroni-4.0.6/patroni/exceptions.py patroni-4.0.7/patroni/exceptions.py
--- patroni-4.0.6/patroni/exceptions.py 2025-06-06 19:27:48.000000000 +0200
+++ patroni-4.0.7/patroni/exceptions.py 2025-09-22 18:07:47.000000000 +0200
@@ -53,3 +53,9 @@
     """Any issue identified while loading or validating the Patroni 
configuration."""
 
     pass
+
+
+class PatroniAssertionError(PatroniException):
+    """Any issue related to type/value validation."""
+
+    pass
diff -Nru patroni-4.0.6/patroni/__main__.py patroni-4.0.7/patroni/__main__.py
--- patroni-4.0.6/patroni/__main__.py   2025-06-06 19:27:48.000000000 +0200
+++ patroni-4.0.7/patroni/__main__.py   2025-09-22 18:07:47.000000000 +0200
@@ -395,6 +395,12 @@
         if pid:
             os.kill(pid, signo)
 
+    import multiprocessing
+    patroni = multiprocessing.Process(target=patroni_main, args=(args.configfile,))
+    patroni.start()
+    pid = patroni.pid
+
+    # Set up signal handlers after fork to prevent child from inheriting them
     if os.name != 'nt':
         signal.signal(signal.SIGCHLD, sigchld_handler)
         signal.signal(signal.SIGHUP, passtochild)
@@ -404,11 +410,6 @@
     signal.signal(signal.SIGINT, passtochild)
     signal.signal(signal.SIGABRT, passtochild)
     signal.signal(signal.SIGTERM, passtochild)
-
-    import multiprocessing
-    patroni = multiprocessing.Process(target=patroni_main, args=(args.configfile,))
-    patroni.start()
-    pid = patroni.pid
     patroni.join()
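
The fix above is purely an ordering change: spawn the child first, install
the parent's signal handlers afterwards, so the child does not inherit
handlers that forward signals back to itself (the reported SIGCHLD deadlock
with PID 1 in Docker). A hedged, Unix-only sketch of the ordering:

    import multiprocessing
    import os
    import signal

    def worker():
        signal.pause()   # stand-in for patroni_main()

    if __name__ == '__main__':
        child = multiprocessing.Process(target=worker)
        child.start()    # fork first: the child keeps default handlers

        def passtochild(signo, frame):
            os.kill(child.pid, signo)

        # only now does the parent start forwarding signals
        signal.signal(signal.SIGTERM, passtochild)
        signal.signal(signal.SIGINT, passtochild)
        child.join()
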
 
 
diff -Nru patroni-4.0.6/patroni/postgresql/available_parameters/0_postgres.yml patroni-4.0.7/patroni/postgresql/available_parameters/0_postgres.yml
--- patroni-4.0.6/patroni/postgresql/available_parameters/0_postgres.yml        2025-06-06 19:27:48.000000000 +0200
+++ patroni-4.0.7/patroni/postgresql/available_parameters/0_postgres.yml        2025-09-22 18:07:47.000000000 +0200
@@ -121,6 +121,11 @@
     version_from: 130000
     min_val: -1
     max_val: 2147483647
+  autovacuum_vacuum_max_threshold:
+  - type: Integer
+    version_from: 180000
+    min_val: -1
+    max_val: 2147483647
   autovacuum_vacuum_scale_factor:
   - type: Real
     version_from: 90300
@@ -137,6 +142,11 @@
     min_val: -1
     max_val: 2147483647
     unit: kB
+  autovacuum_worker_slots:
+  - type: Integer
+    version_from: 180000
+    min_val: 1
+    max_val: 262143
   backend_flush_after:
   - type: Integer
     version_from: 90600
@@ -434,6 +444,9 @@
   enable_bitmapscan:
   - type: Bool
     version_from: 90300
+  enable_distinct_reordering:
+  - type: Bool
+    version_from: 180000
   enable_gathermerge:
   - type: Bool
     version_from: 100000
@@ -485,6 +498,9 @@
   enable_presorted_aggregate:
   - type: Bool
     version_from: 160000
+  enable_self_join_elimination:
+  - type: Bool
+    version_from: 180000
   enable_seqscan:
   - type: Bool
     version_from: 90300
@@ -506,9 +522,13 @@
   exit_on_error:
   - type: Bool
     version_from: 90300
+  extension_control_path:
+  - type: String
+    version_from: 180000
   extension_destdir:
   - type: String
     version_from: 140000
+    version_till: 180000
   external_pid_file:
   - type: String
     version_from: 90300
@@ -517,6 +537,12 @@
     version_from: 90300
     min_val: -15
     max_val: 3
+  file_copy_method:
+  - type: Enum
+    version_from: 180000
+    possible_values:
+    - copy
+    - clone
   force_parallel_mode:
   - type: EnumBool
     version_from: 90600
@@ -629,6 +655,12 @@
     min_val: 0
     max_val: 2147483647
     unit: ms
+  idle_replication_slot_timeout:
+  - type: Integer
+    version_from: 180000
+    min_val: 0
+    max_val: 2147483647
+    unit: s
   idle_session_timeout:
   - type: Integer
     version_from: 140000
@@ -647,9 +679,38 @@
   io_combine_limit:
   - type: Integer
     version_from: 170000
+    version_till: 180000
     min_val: 1
     max_val: 32
     unit: 8kB
+  - type: Integer
+    version_from: 180000
+    min_val: 1
+    max_val: 128
+    unit: 8kB
+  io_max_combine_limit:
+  - type: Integer
+    version_from: 180000
+    min_val: 1
+    max_val: 128
+    unit: 8kB
+  io_max_concurrency:
+  - type: Integer
+    version_from: 180000
+    min_val: -1
+    max_val: 1024
+  io_method:
+  - type: Enum
+    version_from: 180000
+    possible_values:
+    - sync
+    - worker
+    - io_uring
+  io_workers:
+  - type: Integer
+    version_from: 180000
+    min_val: 1
+    max_val: 32
   IntervalStyle:
   - type: Enum
     version_from: 90300
@@ -748,6 +809,9 @@
   log_connections:
   - type: Bool
     version_from: 90300
+    version_till: 180000
+  - type: String
+    version_from: 180000
   log_destination:
   - type: String
     version_from: 90300
@@ -793,6 +857,9 @@
   log_line_prefix:
   - type: String
     version_from: 90300
+  log_lock_failures:
+  - type: Bool
+    version_from: 180000
   log_lock_waits:
   - type: Bool
     version_from: 90300
@@ -873,9 +940,15 @@
   log_rotation_size:
   - type: Integer
     version_from: 90300
+    version_till: 180000
     min_val: 0
     max_val: 2097151
     unit: kB
+  - type: Integer
+    version_from: 180000
+    min_val: 0
+    max_val: 2147483647
+    unit: kB
   log_startup_progress_interval:
   - type: Integer
     version_from: 150000
@@ -930,9 +1003,20 @@
   maintenance_work_mem:
   - type: Integer
     version_from: 90300
+    version_till: 180000
     min_val: 1024
     max_val: 2147483647
     unit: kB
+  - type: Integer
+    version_from: 180000
+    min_val: 64
+    max_val: 2147483647
+    unit: kB
+  max_active_replication_origins:
+  - type: Integer
+    version_from: 180000
+    min_val: 0
+    max_val: 262143
   max_connections:
   - type: Integer
     version_from: 90300
@@ -1084,6 +1168,9 @@
     version_from: 90600
     min_val: 0
     max_val: 262143
+  md5_password_warnings:
+  - type: Bool
+    version_from: 180000
   min_dynamic_shared_memory:
   - type: Integer
     version_from: 140000
@@ -1139,6 +1226,9 @@
     min_val: 16
     max_val: 131072
     unit: 8kB
+  oauth_validator_libraries:
+  - type: String
+    version_from: 180000
   old_snapshot_threshold:
   - type: Integer
     version_from: 90600
@@ -1336,6 +1426,10 @@
   ssl_ecdh_curve:
   - type: String
     version_from: 90400
+    version_till: 180000
+  ssl_groups:
+  - type: String
+    version_from: 180000
   ssl_key_file:
   - type: String
     version_from: 90300
@@ -1372,6 +1466,9 @@
     min_val: 0
     max_val: 2147483647
     unit: kB
+  ssl_tls13_ciphers:
+  - type: String
+    version_from: 180000
   standard_conforming_strings:
   - type: Bool
     version_from: 90300
@@ -1547,6 +1644,9 @@
   track_commit_timestamp:
   - type: Bool
     version_from: 90500
+  track_cost_delay_timing:
+  - type: Bool
+    version_from: 180000
   track_counts:
   - type: Bool
     version_from: 90300
@@ -1671,6 +1771,11 @@
     version_from: 90300
     min_val: 0
     max_val: 2000000000
+  vacuum_max_eager_freeze_failure_rate:
+  - type: Real
+    version_from: 180000
+    min_val: 0
+    max_val: 1
   vacuum_multixact_failsafe_age:
   - type: Integer
     version_from: 140000
@@ -1686,6 +1791,9 @@
     version_from: 90300
     min_val: 0
     max_val: 2000000000
+  vacuum_truncate:
+  - type: Bool
+    version_from: 180000
   wal_buffers:
   - type: Integer
     version_from: 90300
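
Several parameters now carry two entries split by version_from/version_till
(io_combine_limit, log_rotation_size, maintenance_work_mem, log_connections).
A hedged sketch of how such a list resolves to one validator for a given
server version (integer versions as in the file, 180000 == PostgreSQL 18;
the helper name is illustrative):

    from typing import Any, Dict, List, Optional

    IO_COMBINE_LIMIT: List[Dict[str, Any]] = [
        {'type': 'Integer', 'version_from': 170000, 'version_till': 180000,
         'min_val': 1, 'max_val': 32, 'unit': '8kB'},
        {'type': 'Integer', 'version_from': 180000,
         'min_val': 1, 'max_val': 128, 'unit': '8kB'},
    ]

    def pick_validator(entries: List[Dict[str, Any]], version: int) -> Optional[Dict[str, Any]]:
        for entry in entries:
            till = entry.get('version_till')
            if entry['version_from'] <= version and (till is None or version < till):
                return entry
        return None

    print(pick_validator(IO_COMBINE_LIMIT, 170000))  # max_val 32 (PG 17)
    print(pick_validator(IO_COMBINE_LIMIT, 180000))  # max_val 128 (PG 18)
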
diff -Nru patroni-4.0.6/patroni/postgresql/config.py patroni-4.0.7/patroni/postgresql/config.py
--- patroni-4.0.6/patroni/postgresql/config.py  2025-06-06 19:27:48.000000000 +0200
+++ patroni-4.0.7/patroni/postgresql/config.py  2025-09-22 18:07:47.000000000 +0200
@@ -1102,8 +1102,11 @@
         listen_addresses = self._server_parameters['listen_addresses'].split(',')
 
         for la in listen_addresses:
-            if la.strip().lower() in ('*', '0.0.0.0', '127.0.0.1', 'localhost'):  # we are listening on '*' or localhost
+            if la.strip().lower() in ('*', 'localhost'):  # we are listening on '*' or localhost
                 return 'localhost'  # connection via localhost is preferred
+            if la.strip() in ('0.0.0.0', '127.0.0.1'):  # Postgres listens only on IPv4
+                # localhost, but don't allow Windows to resolve to IPv6
+                return '127.0.0.1' if os.name == 'nt' else 'localhost'
         return listen_addresses[0].strip()  # can't use localhost, take first address from listen_addresses
 
     def resolve_connection_addresses(self) -> None:
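
Background for the Windows branch above: with listen_addresses = '127.0.0.1'
Postgres listens on IPv4 only, while "localhost" on typical Windows systems
resolves to ::1 first, so clients would target an address nobody listens on.
A quick probe (output is system-dependent):

    import socket

    for family, _, _, _, sockaddr in socket.getaddrinfo('localhost', 5432):
        print(socket.AddressFamily(family).name, sockaddr[0])
    # On many Windows systems AF_INET6 ::1 is returned before AF_INET
    # 127.0.0.1, which is why Patroni now pins the address to 127.0.0.1 there.
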
diff -Nru patroni-4.0.6/patroni/postgresql/postmaster.py patroni-4.0.7/patroni/postgresql/postmaster.py
--- patroni-4.0.6/patroni/postgresql/postmaster.py      2025-06-06 19:27:48.000000000 +0200
+++ patroni-4.0.7/patroni/postgresql/postmaster.py      2025-09-22 18:07:47.000000000 +0200
@@ -177,7 +177,7 @@
             return not self.is_running()
 
     def wait_for_user_backends_to_close(self, stop_timeout: Optional[float]) -> None:
-        # These regexps are cross checked against versions PostgreSQL 9.1 .. 17
+        # These regexps are cross checked against versions PostgreSQL 9.1 .. 18
         aux_proc_re = re.compile("(?:postgres:)( .*:)? (?:(?:archiver|startup|autovacuum launcher|autovacuum worker|"
                                  "checkpointer|logger|stats collector|wal receiver|wal writer|writer)(?: process  )?|"
                                  "walreceiver|wal sender process|walsender|walwriter|background writer|"
@@ -185,7 +185,7 @@
                                  "logical replication tablesync worker for subscription|"
                                  "logical replication parallel apply worker for subscription|"
                                  "logical replication apply worker for subscription|"
-                                 "slotsync worker|walsummarizer|bgworker:) ")
+                                 "slotsync worker|walsummarizer|io worker|bgworker:) ")
 
         try:
             children = self.children()
diff -Nru patroni-4.0.6/patroni/postgresql/slots.py patroni-4.0.7/patroni/postgresql/slots.py
--- patroni-4.0.6/patroni/postgresql/slots.py   2025-06-06 19:27:48.000000000 +0200
+++ patroni-4.0.7/patroni/postgresql/slots.py   2025-09-22 18:07:47.000000000 +0200
@@ -329,6 +329,26 @@
                             ' FULL OUTER JOIN dropped ON true'), name)
         return (rows[0][0], rows[0][1]) if rows else (False, False)
 
+    def _drop_replication_slot(self, name: str) -> None:
+        """Drop replication slot by name.
+
+        .. note::
+            If not able to drop the slot, it will log a message and set the flag to reload slots.
+
+        :param name: name of the slot to be dropped.
+        """
+        active, dropped = self.drop_replication_slot(name)
+        if dropped:
+            logger.info("Dropped replication slot '%s'", name)
+            if name in self._replication_slots:
+                del self._replication_slots[name]
+        else:
+            self._schedule_load_slots = True
+            if active:
+                logger.warning("Unable to drop replication slot '%s', slot is 
active", name)
+            else:
+                logger.error("Failed to drop replication slot '%s'", name)
+
     def _drop_incorrect_slots(self, cluster: Cluster, slots: Dict[str, Any]) -> None:
         """Compare required slots and configured as permanent slots with those found, dropping extraneous ones.
 
@@ -344,15 +364,8 @@
         # drop old replication slots which are not presented in desired slots.
         for name in set(self._replication_slots) - set(slots):
             if not global_config.is_paused and not self.ignore_replication_slot(cluster, name):
-                active, dropped = self.drop_replication_slot(name)
-                if dropped:
-                    logger.info("Dropped unknown replication slot '%s'", name)
-                else:
-                    self._schedule_load_slots = True
-                    if active:
-                        logger.debug("Unable to drop unknown replication slot '%s', slot is still active", name)
-                    else:
-                        logger.error("Failed to drop replication slot '%s'", name)
+                logger.info("Trying to drop unknown replication slot '%s'", name)
+                self._drop_replication_slot(name)
 
         # drop slots with matching names but attributes that do not match, e.g. `plugin` or `database`.
         for name, value in slots.items():
@@ -391,15 +404,7 @@
                 if clean_inactive_physical_slots and value.get('expected_active') is False and value['xmin']:
                     logger.warning('Dropping physical replication slot %s because of its xmin value %s',
                                    name, value['xmin'])
-                    active, dropped = self.drop_replication_slot(name)
-                    if dropped:
-                        self._replication_slots.pop(name)
-                    else:
-                        self._schedule_load_slots = True
-                        if active:
-                            logger.warning("Unable to drop replication slot '%s', slot is active", name)
-                        else:
-                            logger.error("Failed to drop replication slot '%s'", name)
+                    self._drop_replication_slot(name)
 
             # Now we will create physical replication slots that are missing.
             if name not in self._replication_slots:
@@ -414,8 +419,18 @@
             # And advance restart_lsn on physical replication slots that are not expected to be active.
             elif self._postgresql.can_advance_slots and self._replication_slots[name]['type'] == 'physical' and\
                     value.get('expected_active') is not True and not value['xmin']:
+                restart_lsn = value.get('restart_lsn')
+                if not restart_lsn:
+                    # This slot either belongs to a member or was configured as a permanent slot. However, for some
+                    # reason the slot was created by an external agent instead of by Patroni, and it was created without
+                    # reserving the LSN. We drop the slot here, as we cannot advance it, and let Patroni recreate and
+                    # manage it in the next cycle.
+                    logger.warning('Dropping physical replication slot %s because it has no restart_lsn. '
+                                   'This slot was probably not created by Patroni, but by an external agent.', name)
+                    self._drop_replication_slot(name)
+                    continue
                 lsn = value.get('lsn')
-                if lsn and lsn > value['restart_lsn']:  # The slot has feedback in DCS and needs to be advanced
+                if lsn and lsn > restart_lsn:  # The slot has feedback in DCS and needs to be advanced
                     try:
                         lsn = format_lsn(lsn)
                         self._query("SELECT pg_catalog.pg_replication_slot_advance(%s, %s)", name, lsn)
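
Condensed, the new branch implements this decision for an inactive physical
slot (a hedged restatement of the logic above, not the code itself):

    from typing import Any, Dict

    def plan_slot_action(value: Dict[str, Any]) -> str:
        restart_lsn = value.get('restart_lsn')
        if not restart_lsn:
            return 'drop'      # no WAL reserved (external agent?): recreate next cycle
        lsn = value.get('lsn')
        if lsn and lsn > restart_lsn:
            return 'advance'   # feedback in DCS is ahead of the slot
        return 'keep'

    print(plan_slot_action({'restart_lsn': None}))              # drop
    print(plan_slot_action({'restart_lsn': 100, 'lsn': 200}))   # advance
    print(plan_slot_action({'restart_lsn': 100, 'lsn': 100}))   # keep
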
diff -Nru patroni-4.0.6/patroni/validator.py patroni-4.0.7/patroni/validator.py
--- patroni-4.0.6/patroni/validator.py  2025-06-06 19:27:48.000000000 +0200
+++ patroni-4.0.7/patroni/validator.py  2025-09-22 18:07:47.000000000 +0200
@@ -13,7 +13,7 @@
 
 from .collections import CaseInsensitiveSet, EMPTY_DICT
 from .dcs import dcs_modules
-from .exceptions import ConfigParseError
+from .exceptions import ConfigParseError, PatroniAssertionError
 from .log import type_logformat
 from .utils import data_directory_is_empty, get_major_version, parse_int, split_host_port
 
@@ -173,9 +173,12 @@
     :param value: list of host(s) and port items to be validated.
 
     :returns: ``True`` if all items are valid.
+
+    .. note::
+        :func:`validate_host_port` will raise an exception if validation failed.
     """
-    assert all([validate_host_port(v) for v in value]), "didn't pass the validation"
-    return True
+
+    return all(validate_host_port(v) for v in value)
 
 
 def comma_separated_host_port(string: str) -> bool:
@@ -870,7 +873,8 @@
     :param condition: result of a condition to be asserted.
     :param message: message to be thrown if the condition is ``False``.
     """
-    assert condition, message
+    if not condition:
+        raise PatroniAssertionError(message)
 
 
 class IntValidator(object):
diff -Nru patroni-4.0.6/patroni/version.py patroni-4.0.7/patroni/version.py
--- patroni-4.0.6/patroni/version.py    2025-06-06 19:27:48.000000000 +0200
+++ patroni-4.0.7/patroni/version.py    2025-09-22 18:07:47.000000000 +0200
@@ -2,4 +2,4 @@
 
 :var __version__: the current Patroni version.
 """
-__version__ = '4.0.6'
+__version__ = '4.0.7'
diff -Nru patroni-4.0.6/README.rst patroni-4.0.7/README.rst
--- patroni-4.0.6/README.rst    2025-06-06 19:27:48.000000000 +0200
+++ patroni-4.0.7/README.rst    2025-09-22 18:07:47.000000000 +0200
@@ -12,7 +12,7 @@
 
 We call Patroni a "template" because it is far from being a one-size-fits-all 
or plug-and-play replication system. It will have its own caveats. Use wisely.
 
-Currently supported PostgreSQL versions: 9.3 to 17.
+Currently supported PostgreSQL versions: 9.3 to 18.
 
 **Note to Citus users**: Starting from 3.0 Patroni nicely integrates with the 
`Citus <https://github.com/citusdata/citus>`__ database extension to Postgres. 
Please check the `Citus support page 
<https://github.com/patroni/patroni/blob/master/docs/citus.rst>`__ in the 
Patroni documentation for more info about how to use Patroni high availability 
together with a Citus distributed cluster.
 
diff -Nru patroni-4.0.6/setup.py patroni-4.0.7/setup.py
--- patroni-4.0.6/setup.py      2025-06-06 19:27:48.000000000 +0200
+++ patroni-4.0.7/setup.py      2025-09-22 18:07:47.000000000 +0200
@@ -35,7 +35,6 @@
     'Environment :: Console',
     'Intended Audience :: Developers',
     'Intended Audience :: System Administrators',
-    'License :: OSI Approved :: MIT License',
     'Operating System :: MacOS',
     'Operating System :: POSIX :: Linux',
     'Operating System :: POSIX :: BSD :: FreeBSD',
@@ -205,6 +204,7 @@
         author_email=AUTHOR_EMAIL,
         description=DESCRIPTION,
         license=LICENSE,
+        license_files=('LICENSE',),
         keywords=KEYWORDS,
         long_description=read('README.rst'),
         classifiers=CLASSIFIERS,
diff -Nru patroni-4.0.6/tests/test_etcd3.py patroni-4.0.7/tests/test_etcd3.py
--- patroni-4.0.6/tests/test_etcd3.py   2025-06-06 19:27:48.000000000 +0200
+++ patroni-4.0.7/tests/test_etcd3.py   2025-09-22 18:07:47.000000000 +0200
@@ -96,8 +96,12 @@
 
     @patch.object(urllib3.PoolManager, 'urlopen', mock_urlopen)
     @patch.object(Etcd3Client, 'watchprefix', Mock(return_value=urllib3.response.HTTPResponse()))
+    @patch.object(urllib3.response.HTTPResponse, 'read_chunked',
+                  Mock(return_value=[b'{"result":{"canceled":true}}']))
     def test__build_cache(self):
-        self.kv_cache._build_cache()
+        with patch('patroni.dcs.etcd3.logger') as mock_logger:
+            self.kv_cache._build_cache()
+            mock_logger.info.assert_called_once_with('Watch request canceled')
 
     def test__do_watch(self):
         self.client.watchprefix = Mock(return_value=False)
diff -Nru patroni-4.0.6/tests/test_etcd.py patroni-4.0.7/tests/test_etcd.py
--- patroni-4.0.6/tests/test_etcd.py    2025-06-06 19:27:48.000000000 +0200
+++ patroni-4.0.7/tests/test_etcd.py    2025-09-22 18:07:47.000000000 +0200
@@ -211,6 +211,9 @@
                 patch.object(EtcdClient, '_load_machines_cache', Mock(return_value=True)):
             self.assertRaises(etcd.EtcdException, rtry, self.client.api_execute, '/', 'GET', params={'retry': rtry})
 
+        with patch.object(EtcdClient, '_get_machines_list', Mock(side_effect=etcd.EtcdConnectionFailed)):
+            self.assertRaises(etcd.EtcdConnectionFailed, self.client.api_execute, '/', 'GET')
+
         with patch.object(EtcdClient, '_do_http_request', Mock(side_effect=etcd.EtcdException)):
             self.client._read_timeout = 0.01
             self.assertRaises(etcd.EtcdException, self.client.api_execute, '/', 'GET')
diff -Nru patroni-4.0.6/tests/test_kubernetes.py patroni-4.0.7/tests/test_kubernetes.py
--- patroni-4.0.6/tests/test_kubernetes.py      2025-06-06 19:27:48.000000000 +0200
+++ patroni-4.0.7/tests/test_kubernetes.py      2025-09-22 18:07:47.000000000 +0200
@@ -23,7 +23,7 @@
 def mock_list_namespaced_config_map(*args, **kwargs):
     k8s_group_label = get_mpp({'citus': {'group': 0, 'database': 'postgres'}}).k8s_group_label
     metadata = {'resource_version': '1', 'labels': {'f': 'b'}, 'name': 'test-config',
-                'annotations': {'initialize': '123', 'config': '{}'}}
+                'annotations': {'initialize': '123', 'config': '[]', 'history': '{}'}}
     items = [k8s_client.V1ConfigMap(metadata=k8s_client.V1ObjectMeta(**metadata))]
     metadata.update({'name': 'test-leader',
                      'annotations': {'optime': '1234x', 'leader': 'p-0', 'ttl': '30s',
diff -Nru patroni-4.0.6/tests/test_patroni.py patroni-4.0.7/tests/test_patroni.py
--- patroni-4.0.6/tests/test_patroni.py 2025-06-06 19:27:48.000000000 +0200
+++ patroni-4.0.7/tests/test_patroni.py 2025-09-22 18:07:47.000000000 +0200
@@ -136,9 +136,13 @@
         def mock_signal(signo, handler):
             handler(signo, None)
 
-        with patch('signal.signal', mock_signal):
-            with patch('os.waitpid', Mock(side_effect=[(1, 0), (0, 0)])):
+        with patch('signal.signal', mock_signal), patch('os.kill') as mock_kill:
+            with patch('os.waitpid', Mock(side_effect=[(1, 0), (0, 0)])), \
+                 patch('patroni.__main__.logger') as mock_logger:
                 _main()
+                mock_kill.assert_called_with(mock_process.return_value.pid, signal.SIGTERM)
+                if os.name != 'nt':
+                    mock_logger.info.assert_called_with('Reaped pid=%s, exit status=%s', 1, 0)
             with patch('os.waitpid', Mock(side_effect=OSError)):
                 _main()
 
diff -Nru patroni-4.0.6/tests/test_slots.py patroni-4.0.7/tests/test_slots.py
--- patroni-4.0.6/tests/test_slots.py   2025-06-06 19:27:48.000000000 +0200
+++ patroni-4.0.7/tests/test_slots.py   2025-09-22 18:07:47.000000000 +0200
@@ -55,13 +55,13 @@
         self.p.set_role('standby_leader')
         with patch.object(SlotsHandler, 'drop_replication_slot', Mock(return_value=(True, False))), \
                 patch.object(global_config.__class__, 'is_standby_cluster', PropertyMock(return_value=True)), \
-                patch('patroni.postgresql.slots.logger.debug') as mock_debug:
+                patch('patroni.postgresql.slots.logger.warning') as mock_warning:
             self.s.sync_replication_slots(cluster, self.tags)
-            mock_debug.assert_called_once()
+            mock_warning.assert_called_once_with("Unable to drop replication slot '%s', slot is active", 'foobar')
         self.p.set_role('replica')
         with patch.object(Postgresql, 'is_primary', Mock(return_value=False)), 
\
                 patch.object(global_config.__class__, 'is_paused', 
PropertyMock(return_value=True)), \
-                patch.object(SlotsHandler, 'drop_replication_slot') as 
mock_drop:
+                patch.object(SlotsHandler, '_drop_replication_slot') as 
mock_drop:
             config.data['slots'].pop('ls')
             self.s.sync_replication_slots(cluster, self.tags)
             mock_drop.assert_not_called()
@@ -342,6 +342,7 @@
                           [self.me, self.other, self.leadermem], None, SyncState.empty(), None, None)
         global_config.update(cluster)
         self.s.sync_replication_slots(cluster, self.tags)
+
         with patch.object(SlotsHandler, '_query', Mock(side_effect=[[('blabla', 'physical', None, 12345, None, None,
                                                                       None, None, None)], Exception])) as mock_query, \
                 patch('patroni.postgresql.slots.logger.error') as mock_error:
@@ -353,7 +354,7 @@
 
         with patch.object(SlotsHandler, '_query', Mock(side_effect=[[('test_1', 'physical', 1, 12345, None, None,
                                                                       None, None, None)], Exception])), \
-                patch.object(SlotsHandler, 'drop_replication_slot', Mock(return_value=(False, True))):
+                patch.object(SlotsHandler, '_drop_replication_slot', Mock(return_value=(True))):
             self.s.sync_replication_slots(cluster, self.tags)
 
         with patch.object(SlotsHandler, '_query', Mock(side_effect=[[('test_1', 'physical', 1, 12345, None, None,
@@ -366,16 +367,31 @@
 
         with patch.object(SlotsHandler, '_query', Mock(side_effect=[[('test_1', 'physical', 1, 12345, None, None,
                                                                       None, None, None)], Exception])), \
-                patch.object(SlotsHandler, 'drop_replication_slot', Mock(return_value=(False, False))):
+                patch.object(SlotsHandler, '_drop_replication_slot', Mock(return_value=(False))):
             self.s.sync_replication_slots(cluster, self.tags)
 
         with patch.object(SlotsHandler, '_query', Mock(side_effect=[[('test_1', 'physical', 1, 12345, None, None,
                                                                       None, None, None)], Exception])), \
                 patch.object(Cluster, 'is_unlocked', Mock(return_value=True)), \
-                patch.object(SlotsHandler, 'drop_replication_slot') as mock_drop:
+                patch.object(SlotsHandler, '_drop_replication_slot') as mock_drop:
             self.s.sync_replication_slots(cluster, self.tags)
             mock_drop.assert_not_called()
 
+        # If the slot has no restart_lsn, we should not try to advance it, and only warn the user that this is not an
+        # expected situation.
+        with patch.object(SlotsHandler, '_query', Mock(side_effect=[[('blabla', 'physical', None, None, None, None,
+                                                                      None, None, None)], Exception])) as mock_query, \
+                patch('patroni.postgresql.slots.logger.warning') as mock_warning, \
+                patch.object(SlotsHandler, '_drop_replication_slot') as mock_drop:
+            self.s.sync_replication_slots(cluster, self.tags)
+            for mock_call in mock_query.call_args_list:
+                self.assertNotIn("pg_catalog.pg_replication_slot_advance", mock_call[0][0])
+            self.assertEqual(mock_warning.call_args[0][0],
+                             'Dropping physical replication slot %s because it has no restart_lsn. '
+                             'This slot was probably not created by Patroni, but by an external agent.')
+            self.assertEqual(mock_warning.call_args[0][1], 'blabla')
+            mock_drop.assert_called_once_with('blabla')
+
+
     @patch.object(Postgresql, 'is_primary', Mock(return_value=False))
     @patch.object(Postgresql, 'role', PropertyMock(return_value='replica'))
     @patch.object(TestTags, 'tags', PropertyMock(return_value={'nofailover': True}))
@@ -389,3 +405,50 @@
                                                                       None, None, None)], Exception])) as mock_query:
             self.s.sync_replication_slots(cluster, self.tags)
             self.assertTrue(mock_query.call_args[0][0].startswith('SELECT slot_name, slot_type, xmin, '))
+
+    def test__drop_replication_slot(self):
+        """Test the :meth:~SlotsHandler._drop_replication_slot` method."""
+        # Should log info and remove the slot from the list when the slot is dropped
+        self.s._replication_slots['testslot'] = {'type': 'physical'}
+        self.s._schedule_load_slots = False
+        with patch.object(self.s, 'drop_replication_slot', return_value=(False, True)) as mock_drop, \
+                patch('patroni.postgresql.slots.logger.info') as mock_info, \
+                patch('patroni.postgresql.slots.logger.warning') as mock_warning, \
+                patch('patroni.postgresql.slots.logger.error') as mock_error:
+            self.s._drop_replication_slot('testslot')
+            mock_drop.assert_called_once_with('testslot')
+            mock_info.assert_called_once_with("Dropped replication slot '%s'", 'testslot')
+            mock_warning.assert_not_called()
+            mock_error.assert_not_called()
+            self.assertFalse(self.s._schedule_load_slots)
+            self.assertNotIn('testslot', self.s._replication_slots)
+
+        # Should log warning and keep slot in the list when the slot is active and not dropped
+        self.s._replication_slots['testslot'] = {'type': 'physical'}
+        self.s._schedule_load_slots = False
+        with patch.object(self.s, 'drop_replication_slot', return_value=(True, False)) as mock_drop, \
+                patch('patroni.postgresql.slots.logger.info') as mock_info, \
+                patch('patroni.postgresql.slots.logger.warning') as mock_warning, \
+                patch('patroni.postgresql.slots.logger.error') as mock_error:
+            self.s._drop_replication_slot('testslot')
+            mock_drop.assert_called_once_with('testslot')
+            mock_info.assert_not_called()
+            mock_warning.assert_called_once_with("Unable to drop replication slot '%s', slot is active", 'testslot')
+            mock_error.assert_not_called()
+            self.assertTrue(self.s._schedule_load_slots)
+            self.assertIn('testslot', self.s._replication_slots)
+
+        # Should log error and keep the slot in the list when the slot is not active and not dropped
+        self.s._replication_slots['testslot'] = {'type': 'physical'}
+        self.s._schedule_load_slots = False
+        with patch.object(self.s, 'drop_replication_slot', return_value=(False, False)) as mock_drop, \
+                patch('patroni.postgresql.slots.logger.info') as mock_info, \
+                patch('patroni.postgresql.slots.logger.warning') as mock_warning, \
+                patch('patroni.postgresql.slots.logger.error') as mock_error:
+            self.s._drop_replication_slot('testslot')
+            mock_drop.assert_called_once_with('testslot')
+            mock_info.assert_not_called()
+            mock_warning.assert_not_called()
+            mock_error.assert_called_once_with("Failed to drop replication slot '%s'", 'testslot')
+            self.assertTrue(self.s._schedule_load_slots)
+            self.assertIn('testslot', self.s._replication_slots)
