On 01/14/2016 04:59 PM, Petr Vobornik wrote:
On 01/14/2016 04:16 PM, Ludwig Krispenz wrote:


On 01/14/2016 03:59 PM, Stanislav Laznicka wrote:
On 01/14/2016 03:21 PM, Rob Crittenden wrote:
Stanislav Laznicka wrote:
Please see the rebased patches attached.

On 01/13/2016 02:01 PM, Martin Basti wrote:

On 18.12.2015 12:46, Stanislav Laznicka wrote:
Hi,

Attached are the patches for automatically finding and cleaning dangling
(CS-)RUVs. Currently, the cleaning of an RUV waits for all replicas to
be online, even with --force. If that is an issue, I can make the
command fail before trying to clean any of the RUVs. However, the user is shown that a replica is offline and is prompted to confirm the cleaning, so
I believe the possible wait should not be a problem.

Standa L.


Hello,

the patches need a rebase, I cannot apply them.
Will this confuse people? Currently, for better or worse, there are two
commands for managing the two different topologies. This mixes some CA
work into ipa-replica-manage.

rob

Well, in the patch I was just following the discussion at
https://fedorahosted.org/freeipa/ticket/5411. Ludwig mentions that
ipa-csreplica-manage should be deprecated and does not want to enhance
it. Also, the only thing the code does is remove leftover data from the
DS, so it makes sense to me to do it in just one command; users might
expect that, too.

I guess it would be possible to add an option selecting which of the
subtrees should be cleaned of RUVs, but it should stay one command
nonetheless. Adding such an option to this command would then probably
mean all the commands should have it, though, as that would be more
consistent.

Let me add Petr and Ludwig to CC: as they both had input on keeping
the command in ipa-replica-manage only.
Yes, the idea was to keep ipa-csreplica-manage (which does not have
clean-ruv, ..) for domain level 0, but not to add new features. Also,
"ipa-replica-manage del" now triggers the RUV cleaning for ipaca.


Yes, ipa-csreplica-manage should be deprecated.

I think that one of the reasons why dangling CA RUVs are not uncommon is that users forget about the `ipa-csreplica-manage del` command when removing a replica.

The new `ipa-replica-manage del` also removes replication agreements and therefore cleans RUVs of the CA suffix (on domain level 1). In this context it is not inconsistent.
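To make the discussion concrete, here is a minimal sketch of the detection idea behind the proposed clean-dangling-ruv command: an RUV entry whose (hostname, replica ID) pair does not belong to any configured master is dangling and can be cleaned. The function and data here are illustrative, not the patch's actual API.

```python
# Hedged sketch: a dangling RUV is one reported in nsds50ruv that no
# configured master actually owns. Names and data are illustrative.
def find_dangling(valid_ruvs, seen_ruvs):
    """valid_ruvs: (hostname, rid) pairs read from each master's replica
    entry; seen_ruvs: pairs parsed from the nsds50ruv attribute."""
    valid = set(valid_ruvs)
    return [ruv for ruv in seen_ruvs if ruv not in valid]

valid = [('a.example.com', 4), ('b.example.com', 5)]
seen = [('a.example.com', 4), ('b.example.com', 5), ('gone.example.com', 9)]
print(find_dangling(valid, seen))  # [('gone.example.com', 9)]
```

The same comparison is done twice in the patch, once for the domain suffix and once for o=ipaca (the CS-RUVs).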

Btw, a good example of why this command will be helpful is the following bz, especially a sentence in https://bugzilla.redhat.com/show_bug.cgi?id=1295971#c5:
"""
I had some mistakes to clean some valid RUV, for example, 52 for eupre1
"""

We should think about the list-clean-ruv and abort-clean-ruv commands; there is no counterpart for the CA suffix now. This could be in a different patch.
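A CA-aware counterpart could reuse the replica-base-dn attribute that the first patch already writes into the CLEANALLRUV task entry, and filter task entries by suffix when listing. A rough sketch with illustrative entry dicts (not the real LDAPEntry objects):

```python
# Hedged sketch: filter CLEANALLRUV task entries by their
# replica-base-dn attribute so list-clean-ruv can show only the tasks
# for a given suffix (domain vs. o=ipaca). Entry dicts are illustrative.
def tasks_for_suffix(task_entries, suffix):
    return [e for e in task_entries
            if e.get('replica-base-dn') == suffix]

tasks = [
    {'cn': 'clean 9', 'replica-base-dn': 'dc=example,dc=com'},
    {'cn': 'clean 97', 'replica-base-dn': 'o=ipaca'},
]
print(tasks_for_suffix(tasks, 'o=ipaca'))  # prints the 'clean 97' task only
```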

With the clean-dangling-ruv command it would be good to deprecate the clean-ruv command of ipa-replica-manage; this should be a different patch.

I'm not sure whether it should abort if some replica is down. Maybe yes, until https://fedorahosted.org/freeipa/ticket/5396 is fixed.

The patch set is missing a man page update.
Attached are the patches with the man page description added. The clean-dangling-ruv operation now also aborts if any replica is offline.
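The offline-abort behaviour added here boils down to a simple guard: collect the masters that could not be contacted and bail out before creating any CLEANALLRUV task (a workaround until ticket #5396 is fixed). A minimal sketch, with an illustrative status dict rather than the patch's full info structure:

```python
# Hedged sketch of the offline-abort guard: if any master is
# unreachable, abort before any CLEANALLRUV task is created.
import sys

def check_all_online(info):
    """info maps master hostname -> {'online': bool}."""
    offlines = [m for m, state in info.items() if not state['online']]
    if offlines:
        sys.exit("ERROR: All replicas need to be online to proceed.")

info = {'a.example.com': {'online': True},
        'b.example.com': {'online': True}}
check_all_online(info)  # returns silently when everything is online
```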
From 3fe1ec52fb222f4b6e3066e61bfd5e3c0f9b7bd7 Mon Sep 17 00:00:00 2001
From: Stanislav Laznicka <slazn...@redhat.com>
Date: Fri, 18 Dec 2015 10:30:44 +0100
Subject: [PATCH 1/2] Listing and cleaning RUV extended for CA suffix

https://fedorahosted.org/freeipa/ticket/5411
---
 install/tools/ipa-replica-manage | 36 +++++++++++++++++++++++-------------
 ipaserver/install/replication.py |  2 +-
 2 files changed, 24 insertions(+), 14 deletions(-)

diff --git a/install/tools/ipa-replica-manage b/install/tools/ipa-replica-manage
index e4af7b2fd9a40482dfa75d275d528221a1bc22ad..188e2c73a41aa1fd476475f74128b85b7383b09e 100755
--- a/install/tools/ipa-replica-manage
+++ b/install/tools/ipa-replica-manage
@@ -345,7 +345,7 @@ def del_link(realm, replica1, replica2, dirman_passwd, force=False):
 
     return True
 
-def get_ruv(realm, host, dirman_passwd, nolookup=False):
+def get_ruv(realm, host, dirman_passwd, nolookup=False, ca=False):
     """
     Return the RUV entries as a list of tuples: (hostname, rid)
     """
@@ -354,7 +354,10 @@ def get_ruv(realm, host, dirman_passwd, nolookup=False):
         enforce_host_existence(host)
 
     try:
-        thisrepl = replication.ReplicationManager(realm, host, dirman_passwd)
+        if ca:
+            thisrepl = replication.get_cs_replication_manager(realm, host, dirman_passwd)
+        else:
+            thisrepl = replication.ReplicationManager(realm, host, dirman_passwd)
     except Exception as e:
         print("Failed to connect to server %s: %s" % (host, e))
         sys.exit(1)
@@ -362,7 +365,7 @@ def get_ruv(realm, host, dirman_passwd, nolookup=False):
     search_filter = '(&(nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff)(objectclass=nstombstone))'
     try:
         entries = thisrepl.conn.get_entries(
-            api.env.basedn, thisrepl.conn.SCOPE_SUBTREE, search_filter,
+            thisrepl.db_suffix, thisrepl.conn.SCOPE_SUBTREE, search_filter,
             ['nsds50ruv'])
     except errors.NotFound:
         print("No RUV records found.")
@@ -402,7 +405,7 @@ def get_rid_by_host(realm, sourcehost, host, dirman_passwd, nolookup=False):
         if '%s:389' % host == netloc:
             return int(rid)
 
-def clean_ruv(realm, ruv, options):
+def clean_ruv(realm, ruv, options, ca=False):
     """
     Given an RID create a CLEANALLRUV task to clean it up.
     """
@@ -412,7 +415,7 @@ def clean_ruv(realm, ruv, options):
         sys.exit("Replica ID must be an integer: %s" % ruv)
 
     servers = get_ruv(realm, options.host, options.dirman_passwd,
-                      options.nolookup)
+                      options.nolookup, ca=ca)
     found = False
     for (netloc, rid) in servers:
         if ruv == int(rid):
@@ -424,14 +427,21 @@ def clean_ruv(realm, ruv, options):
         sys.exit("Replica ID %s not found" % ruv)
 
     print("Clean the Replication Update Vector for %s" % hostname)
-    print()
-    print("Cleaning the wrong replica ID will cause that server to no")
-    print("longer replicate so it may miss updates while the process")
-    print("is running. It would need to be re-initialized to maintain")
-    print("consistency. Be very careful.")
-    if not options.force and not ipautil.user_input("Continue to clean?", False):
-        sys.exit("Aborted")
-    thisrepl = replication.ReplicationManager(realm, options.host,
+
+    if not options.force:
+        print()
+        print("Cleaning the wrong replica ID will cause that server to no")
+        print("longer replicate so it may miss updates while the process")
+        print("is running. It would need to be re-initialized to maintain")
+        print("consistency. Be very careful.")
+        if not ipautil.user_input("Continue to clean?", False):
+            sys.exit("Aborted")
+
+    if ca:
+        thisrepl = replication.get_cs_replication_manager(realm, options.host,
+                                                        options.dirman_passwd)
+    else:
+        thisrepl = replication.ReplicationManager(realm, options.host,
                                               options.dirman_passwd)
     thisrepl.cleanallruv(ruv)
     print("Cleanup task created")
diff --git a/ipaserver/install/replication.py b/ipaserver/install/replication.py
index 19592e21f32b2013225036b3ce692f6cdee15a73..3221a1bd00bf9375d4348e5ba44d1645f0911b3e 100644
--- a/ipaserver/install/replication.py
+++ b/ipaserver/install/replication.py
@@ -1343,7 +1343,7 @@ class ReplicationManager(object):
             {
                 'objectclass': ['top', 'extensibleObject'],
                 'cn': ['clean %d' % replicaId],
-                'replica-base-dn': [api.env.basedn],
+                'replica-base-dn': [self.db_suffix],
                 'replica-id': [replicaId],
             }
         )
-- 
2.5.0
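For reviewers unfamiliar with the RUV tombstone entry this patch searches for: each nsds50ruv value has the 389-ds form "{replica <rid> ldap://<host>:<port>} <mincsn> <maxcsn>", and get_ruv() turns these into (host:port, rid) tuples. A standalone sketch of that parsing step (the regex and names are illustrative, not copied from the patch):

```python
# Hedged sketch: parse nsds50ruv values into (host:port, rid) tuples,
# mirroring what get_ruv() returns. The "{replicageneration}" value
# carries no replica ID and is skipped.
import re

RUV_RE = re.compile(r'\{replica (\d+) ldap://([^}]+)\}')

def parse_ruv_values(values):
    ruvs = []
    for val in values:
        m = RUV_RE.search(val)
        if m:
            rid, netloc = m.group(1), m.group(2)
            ruvs.append((netloc, int(rid)))
    return ruvs

values = [
    '{replicageneration} 55e8f2a3000000040000',
    '{replica 4 ldap://master.example.com:389} 55e8f2a3000000040000 55e8f2b0000000040000',
    '{replica 97 ldap://old-replica.example.com:389}',
]
print(parse_ruv_values(values))
# [('master.example.com:389', 4), ('old-replica.example.com:389', 97)]
```

The second patch then strips the ":389" suffix from the netloc before comparing against master hostnames.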

From 34d6dfe19336ab2ad8a90620ac8d97e6d59aa859 Mon Sep 17 00:00:00 2001
From: Stanislav Laznicka <slazn...@redhat.com>
Date: Fri, 18 Dec 2015 10:34:52 +0100
Subject: [PATCH 2/2] Automatically detect and remove dangling RUVs

https://fedorahosted.org/freeipa/ticket/5411
---
 install/tools/ipa-replica-manage       | 162 +++++++++++++++++++++++++++++++++
 install/tools/man/ipa-replica-manage.1 |   3 +
 2 files changed, 165 insertions(+)

diff --git a/install/tools/ipa-replica-manage b/install/tools/ipa-replica-manage
index 188e2c73a41aa1fd476475f74128b85b7383b09e..36bccff4df68ddaf39e5d056a5a8ec55527057b6 100755
--- a/install/tools/ipa-replica-manage
+++ b/install/tools/ipa-replica-manage
@@ -60,6 +60,7 @@ commands = {
     "clean-ruv":(1, 1, "Replica ID of to clean", "must provide replica ID to clean"),
     "abort-clean-ruv":(1, 1, "Replica ID to abort cleaning", "must provide replica ID to abort cleaning"),
     "list-clean-ruv":(0, 0, "", ""),
+    "clean-dangling-ruv":(0, 0, "", ""),
     "dnarange-show":(0, 1, "[master fqdn]", ""),
     "dnanextrange-show":(0, 1, "", ""),
     "dnarange-set":(2, 2, "<master fqdn> <range>", "must provide a master and ID range"),
@@ -528,6 +529,165 @@ def list_clean_ruv(realm, host, dirman_passwd, verbose, nolookup=False):
                 print(str(dn))
                 print(entry.single_value.get('nstasklog'))
 
+
+def clean_dangling_ruvs(realm, host, options):
+    """
+    Cleans all RUVs and CS-RUVs that are left in the system from uninstalled replicas
+    """
+    # get the Directory Manager password
+    if options.dirman_passwd:
+        dirman_passwd = options.dirman_passwd
+    else:
+        dirman_passwd = installutils.read_password('Directory Manager',
+            confirm=False, validate=False, retry=False)
+        if dirman_passwd is None:
+            sys.exit('Directory Manager password is required')
+
+    options.dirman_passwd = dirman_passwd
+
+    try:
+        conn = ipaldap.IPAdmin(host, 636, cacert=CACERT)
+        conn.do_simple_bind(bindpw=dirman_passwd)
+
+        # get all masters
+        masters_dn = DN(('cn', 'masters'), ('cn', 'ipa'), ('cn', 'etc'),
+                        ipautil.realm_to_suffix(realm))
+        masters = conn.get_entries(masters_dn, conn.SCOPE_ONELEVEL)
+        info = dict()
+
+        # check whether CAs are configured on those masters
+        for master in masters:
+            info[master.single_value['cn']] = {
+                    'online': False, 'ca': False, 'ruvs': list(),
+                    'csruvs': list(), 'clean_ruv': list(),
+                    'clean_csruv': list()
+                    }
+            try:
+                ca_dn = DN(('cn', 'ca'), DN(master.dn))
+                entry = conn.get_entry(ca_dn)
+                info[master.single_value['cn']]['ca'] = True
+            except errors.NotFound:
+                continue
+
+    except Exception as e:
+        sys.exit(
+            "Failed to get data from '%s' while trying to list replicas: %s" %
+            (host, e)
+        )
+    finally:
+        conn.unbind()
+
+    # Get realm string for config tree
+    s = realm.split('.')
+    s = ['dc={dc},'.format(dc=x.lower()) for x in s]
+    realm_config = DN(('cn', ''.join(s)[0:-1]))
+
+    replica_dn = DN(('cn', 'replica'), realm_config,
+                    ('cn', 'mapping tree'), ('cn', 'config'))
+
+    csreplica_dn = DN(('cn', 'replica'), ('cn', 'o=ipaca'),
+                      ('cn', 'mapping tree'), ('cn', 'config'))
+
+    masters = [x.single_value['cn'] for x in masters]
+
+    ruvs = list()
+    csruvs = list()
+    offlines = list()
+    for master in masters:
+        try:
+            conn = ipaldap.IPAdmin(master, 636, cacert=CACERT)
+            conn.do_simple_bind(bindpw=dirman_passwd)
+            info[master]['online'] = True
+        except:
+            print("The server '%s' appears to be offline." % master)
+            offlines.append(master)
+            continue
+
+        try:
+            entry = conn.get_entry(replica_dn)
+            ruv = (master, entry.single_value.get('nsDS5ReplicaID'))
+            if ruv not in ruvs:
+                ruvs.append(ruv)
+
+            if(info[master]['ca']):
+                entry = conn.get_entry(csreplica_dn)
+                csruv = (master, entry.single_value.get('nsDS5ReplicaID'))
+                if csruv not in csruvs:
+                    csruvs.append(csruv)
+
+            # get_ruv returns server names with :port which needs to be split off
+            ruv_list = get_ruv(realm, master, dirman_passwd, options.nolookup)
+            info[master]['ruvs'] = [
+                (re.sub(':\d+', '', x), y)
+                for (x, y) in ruv_list
+                ]
+
+            ruv_list = get_ruv(realm, master, dirman_passwd, options.nolookup,
+                               ca=True)
+            info[master]['csruvs'] = [
+                (re.sub(':\d+', '', x), y)
+                for (x, y) in ruv_list
+                ]
+        except Exception as e:
+            sys.exit("Failed to obtain information from '%s': %s" %
+                     (master, str(e)))
+        finally:
+            conn.unbind()
+
+    clean_list = list()
+    dangles = False
+    # get the dangling RUVs
+    for master in masters:
+        if info[master]['online']:
+            for ruv in info[master]['ruvs']:
+                if (ruv not in ruvs) and (ruv[0] not in offlines):
+                    info[master]['clean_ruv'].append(ruv)
+                    dangles = True
+
+            if info[master]['ca']:
+                for csruv in info[master]['csruvs']:
+                    if (csruv not in csruvs) and (csruv[0] not in offlines):
+                        info[master]['clean_csruv'].append(csruv)
+                        dangles = True
+
+    if not dangles:
+        print('No dangling RUVs found')
+        sys.exit(0)
+
+    print('These RUVs are dangling and will be removed:')
+    for master in masters:
+        if info[master]['online'] and (info[master]['clean_ruv'] or
+                                       info[master]['clean_csruv']):
+            print('Host: {m}'.format(m=master))
+            print('\tRUVs:')
+            for ruv in info[master]['clean_ruv']:
+                print('\t\tid: {id}, hostname: {host}'.format(id=ruv[1], host=ruv[0]))
+
+            print('\tCS-RUVs:')
+            for csruv in info[master]['clean_csruv']:
+                print('\t\tid: {id}, hostname: {host}'.format(id=csruv[1], host=csruv[0]))
+
+    # TODO: this can be removed when #5396 is fixed
+    if offlines:
+        sys.exit("ERROR: All replicas need to be online to proceed.")
+
+    if not options.force and not ipautil.user_input("Proceed with cleaning?", False):
+        sys.exit("Aborted")
+
+    options.force = True
+    cleaned = list()
+    for master in masters:
+        options.host = master
+        for ruv in info[master]['clean_ruv']:
+            if ruv[1] not in cleaned:
+                cleaned.append(ruv[1])
+                clean_ruv(realm, ruv[1], options)
+        for csruv in info[master]['clean_csruv']:
+            if csruv[1] not in cleaned:
+                cleaned.append(csruv[1])
+                clean_ruv(realm, csruv[1], options, ca=True)
+
+
 def check_last_link(delrepl, realm, dirman_passwd, force):
     """
     We don't want to orphan a server when deleting another one. If you have
@@ -1460,6 +1620,8 @@ def main():
     elif args[0] == "list-clean-ruv":
         list_clean_ruv(realm, host, dirman_passwd, options.verbose,
                        options.nolookup)
+    elif args[0] == "clean-dangling-ruv":
+        clean_dangling_ruvs(realm, host, options)
     elif args[0] == "dnarange-show":
         if len(args) == 2:
             master = args[1]
diff --git a/install/tools/man/ipa-replica-manage.1 b/install/tools/man/ipa-replica-manage.1
index 3ed1c5734e3054501f39ffb4346f05c22361a584..ae109c4c5ff4720eb70b06c6f14a791088696d47 100644
--- a/install/tools/man/ipa-replica-manage.1
+++ b/install/tools/man/ipa-replica-manage.1
@@ -53,6 +53,9 @@ The available commands are:
 \fBclean\-ruv\fR [REPLICATION_ID]
 \- Run the CLEANALLRUV task to remove a replication ID.
 .TP
+\fBclean\-dangling\-ruv\fR
+\- Cleans all RUVs and CS\-RUVs that are left in the system from uninstalled replicas.
+.TP
 \fBabort\-clean\-ruv\fR [REPLICATION_ID]
 \- Abort a running CLEANALLRUV task. With \-\-force option the task does not wait for all the replica servers to have been sent the abort task, or be online, before completing.
 .TP
-- 
2.5.0

-- 
Manage your subscription for the Freeipa-devel mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-devel
Contribute to FreeIPA: http://www.freeipa.org/page/Contribute/Code
