Martin Kosek wrote:
On 08/31/2012 07:40 PM, Rob Crittenden wrote:
Rob Crittenden wrote:
It was possible to use ipa-replica-manage connect/disconnect/del to end up
orphaning one or more IPA masters. This is an attempt to catch and
prevent that case.

I tested with this topology, trying to delete B.

A <-> B <-> C

I got here by creating B and C from A, connecting B to C then deleting
the link from A to B, so it went from A -> B and A -> C to the above.

What I do is look up the servers that the delete candidate host has
connections to and see if we're the last link.

I added an escape clause if there are only two masters.


Oh, this relies on my cleanruv patch 1031.


1) When I run ipa-replica-manage del --force against an already uninstalled host,
the new code prevents the deletion because it cannot connect to the host. It
also crashes with an UnboundLocalError:

# ipa-replica-manage del --force

Unable to connect to replica, forcing removal
Traceback (most recent call last):
   File "/sbin/ipa-replica-manage", line 708, in <module>
   File "/sbin/ipa-replica-manage", line 677, in main
     del_master(realm, args[1], options)
   File "/sbin/ipa-replica-manage", line 476, in del_master
     sys.exit("Failed read master data from '%s': %s" % (delrepl.hostname, 
UnboundLocalError: local variable 'delrepl' referenced before assignment


I also hit this error when removing a winsync replica.


2) As I wrote before, I think having the --force option override the user
prompts would benefit test automation:

+            if not ipautil.user_input("Continue to delete?", False):
+                sys.exit("Aborted")


3) I don't think this code will cover this topology:

A - B - C - D - E

It would allow you to delete replica C even though that would separate A-B from
D-E. Though we may not want to cover this situation now, what you have is
definitely helping.

I think you may be right. I only tested with 4 servers. With this topology, B and D would both still have 2 agreements, so they wouldn't be caught by the last-link test.
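A general fix would treat the agreements as a graph and verify that removing the candidate leaves the remaining masters connected. A minimal sketch (the `would_orphan` helper is hypothetical, not part of the patch; in the real code the peer sets would come from find_ipa_replication_agreements on each master):

```python
from collections import deque

def would_orphan(agreements, candidate):
    """agreements: dict mapping each master to the set of masters it has
    agreements with; candidate: the hostname being deleted.
    Returns the set of masters cut off from the rest, empty if none."""
    # Drop the candidate and its agreements from the graph.
    remaining = {h: {p for p in peers if p != candidate}
                 for h, peers in agreements.items() if h != candidate}
    if not remaining:
        return set()
    # BFS from an arbitrary remaining master; anything unreached is orphaned.
    start = next(iter(remaining))
    seen = {start}
    queue = deque([start])
    while queue:
        for peer in remaining[queue.popleft()]:
            if peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return set(remaining) - seen

# A - B - C - D - E: deleting C separates {A, B} from {D, E},
# even though B and D each still have two agreements.
topo = {'A': {'B'}, 'B': {'A', 'C'}, 'C': {'B', 'D'},
        'D': {'C', 'E'}, 'E': {'D'}}
```

The last-link test in the patch is the special case where the orphaned component is a single master.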


From 66217dd61b8271d6282eaad729c92e6bf961123a Mon Sep 17 00:00:00 2001
From: Rob Crittenden <>
Date: Fri, 31 Aug 2012 11:56:58 -0400
Subject: [PATCH] When deleting a master, try to prevent orphaning other

If you have a replication topology like A <-> B <-> C and you try
to delete server B that will leave A and C orphaned. It may also
prevent re-installation of a new master on B because the cn=masters
entry for it probably still exists on at least one of the other masters.

Check each master that the deletion candidate connects to, to ensure the
candidate isn't its last link, and fail if it is. If any of the masters
are not up then warn that this could be a bad thing but let the user
continue if they want.

Document how to remove a cn=masters entry in the man page.
 install/tools/ipa-replica-manage       | 66 ++++++++++++++++++++++++++++++++++
 install/tools/man/ipa-replica-manage.1 | 12 +++++++
 2 files changed, 78 insertions(+)

diff --git a/install/tools/ipa-replica-manage b/install/tools/ipa-replica-manage
index c6ef51b7215164c9538afae942e3d42285ca860b..24a33bfb5ea51035eb12eaf7944a7e566640c2ff 100755
--- a/install/tools/ipa-replica-manage
+++ b/install/tools/ipa-replica-manage
@@ -398,9 +398,52 @@ def clean_ruv(realm, ruv, options):
     print "Cleanup task created"
+def check_last_link(delrepl, realm, dirman_passwd, force):
+    """
+    We don't want to orphan a server when deleting another one. If you have
+    a topology that looks like this:
+             A     B
+             |     |
+             |     |
+             |     |
+             C---- D
+    If we try to delete host D it will orphan host B.
+    What we need to do is if the master being deleted has only a single
+    agreement, connect to that master and make sure it has agreements with
+    more than just this master.
+    @delrepl: a ReplicationManager object of the master being deleted
+    returns: hostname of orphaned server or None
+    """
+    replica_names = delrepl.find_ipa_replication_agreements()
+    orphaned = []
+    # Connect to each remote server and see what agreements it has
+    for replica in replica_names:
+        try:
+            repl = replication.ReplicationManager(realm, replica, dirman_passwd)
+        except ldap.SERVER_DOWN, e:
+            print "Unable to validate that '%s' will not be orphaned." % replica
+            if not force and not ipautil.user_input("Continue to delete?", False):
+                sys.exit("Aborted")
+            continue
+        names = repl.find_ipa_replication_agreements()
+        if len(names) == 1 and names[0] == delrepl.hostname:
+            orphaned.append(replica)
+    if len(orphaned):
+        return ', '.join(orphaned)
+    else:
+        return None
 def del_master(realm, hostname, options):
     force_del = False
+    delrepl = None
     # 1. Connect to the local server
@@ -451,6 +494,29 @@ def del_master(realm, hostname, options):
         if not ipautil.user_input("Continue to delete?", False):
             sys.exit("Deletion aborted")
+    # Check for orphans if the remote server is up.
+    if delrepl and not winsync:
+        masters_dn = DN(('cn', 'masters'), ('cn', 'ipa'), ('cn', 'etc'), ipautil.realm_to_suffix(realm))
+        try:
+            masters = delrepl.conn.getList(masters_dn, ldap.SCOPE_ONELEVEL)
+        except Exception, e:
+            masters = []
+            print "Failed to read masters data from '%s': %s" % (delrepl.hostname, convert_error(e))
+            print "Skipping calculation to determine if one or more masters would be orphaned."
+            if not options.force:
+                sys.exit(1)
+        # This only applies if we have more than 2 IPA servers, otherwise
+        # there is no chance of an orphan.
+        if len(masters) > 2:
+            orphaned_server = check_last_link(delrepl, realm, options.dirman_passwd, options.force)
+            if orphaned_server is not None:
+                print "Deleting this server will orphan '%s'. " % orphaned_server
+                print "You will need to reconfigure your replication topology to delete this server."
+                sys.exit(1)
+    else:
+        print "Skipping calculation to determine if one or more masters would be orphaned."
     # 4. Remove each agreement
     for r in replica_names:
diff --git a/install/tools/man/ipa-replica-manage.1 b/install/tools/man/ipa-replica-manage.1
index 4a1c489f33591ff6ac98fe7f9a16ebb6a52ee28a..3eeadd8d6f5af61d9890994f7cadf3acfdc2f3e0 100644
--- a/install/tools/man/ipa-replica-manage.1
+++ b/install/tools/man/ipa-replica-manage.1
@@ -59,6 +59,18 @@ Each IPA master server has a unique replication ID. This ID is used by 389\-ds\-
 When a master is removed, all other masters need to remove its replication ID from the list of masters. Normally this occurs automatically when a master is deleted with ipa\-replica\-manage. If one or more masters was down or unreachable when ipa\-replica\-manage was executed then this replica ID may still exist. The clean\-ruv command may be used to clean up an unused replication ID.
 \fBNOTE\fR: clean\-ruv is \fBVERY DANGEROUS\fR. Execution against the wrong replication ID can result in inconsistent data on that master. The master should be re\-initialized from another if this happens.
+The replication topology is examined when a master is deleted and will attempt to prevent a master from being orphaned. For example, if your topology is A <\-> B <\-> C and you attempt to delete master B it will fail because that would leave masters A and C orphaned.
+The list of masters is stored in cn=masters,cn=ipa,cn=etc,dc=example,dc=com. These entries should be cleaned up automatically when a master is deleted. If the master and all of its agreements have been deleted but the entries still exist, you will not be able to re-install IPA on that host; the installation will fail with:
+An IPA master host cannot be deleted or disabled
+Use ldapdelete to remove these entries:
+ $ kinit admin
+ $ ldapdelete -vr -Y GSSAPI cn=masters,cn=ipa,cn=etc,dc=example,dc=com
+This should only be used as a last resort.
 \fB\-H\fR \fIHOST\fR, \fB\-\-host\fR=\fIHOST\fR

Freeipa-devel mailing list
