On Wed, 2012-02-29 at 07:10 +0000, JR Aquino wrote:
> On Feb 28, 2012, at 10:44 AM, JR Aquino wrote:
> 
> >
> >
> > On Feb 24, 2012, at 3:09 PM, JR Aquino wrote:
> >
> >> ipa-replica-manage del causes tombstone entries to remain in 389 DS.  This 
> >> has proven to be problematic.
> >> We can automatically perform the cleanup task at the deletion time to 
> >> minimize orphans and ghosts in the directory.
> >>
> >> This patch runs the cleanruv action on all masters following a delete.
> >> https://fedorahosted.org/freeipa/ticket/2303
> >
> > Rebased and corrected patch attached
> 
> NACK
> 
> Testing turned up a missed case for handling an exception during --force.
> 
> Corrected patch attached

Hello JR,

I reviewed and tested your patch and have a few comments.

1) It needs rebasing, otherwise indentation of this block is weird:

+    # Get list of Masters
+    dn = 'cn=masters,cn=ipa,cn=etc,%s' % thisrepl.suffix
+    res = thisrepl.conn.search_s(dn, ldap.SCOPE_ONELEVEL)
+    master_names = []
+    for entry in res:
+        master_names.append(entry.cn)
+
         if force_del: 
-            dn = 'cn=masters,cn=ipa,cn=etc,%s' % thisrepl.suffix
-            res = thisrepl.conn.search_s(dn, ldap.SCOPE_ONELEVEL)
-            replica_names = []
-            for entry in res:
-                replica_names.append(entry.cn)
+            replica_names = master_names

We don't need to build replica_names inside the "for" loop (i.e. once
per master). The master_names list could also be created with a simpler
construct:

master_names = [ entry.cn for entry in res ]
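For illustration, a self-contained sketch of the two equivalent forms
(the Entry class below is just a stand-in for the entry objects that
conn.search_s() returns in the patch):

```python
# Stand-in for the entry objects returned by conn.search_s() in the patch.
class Entry(object):
    def __init__(self, cn):
        self.cn = cn

res = [Entry('vm-022'), Entry('vm-071'), Entry('vm-115')]

# Explicit loop, as in the patch:
master_names = []
for entry in res:
    master_names.append(entry.cn)

# Equivalent list comprehension:
master_names = [entry.cn for entry in res]

print(master_names)  # ['vm-022', 'vm-071', 'vm-115']
```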


2) Error message improvement

When I tried to remove a replication agreement while one of the replicas
(vm-062.idm.lab.bos.redhat.com) was down, the Clean RUV task on that
replica failed as expected, but I never got a message saying which
replica it was:

# ipa-replica-manage del vm-071.idm.lab.bos.redhat.com
Deleting a master is irreversible.
To reconnect to the remote master you will need to prepare a new replica file
and re-install.
Continue to delete? [no]: y
Deleted replication agreement from 'vm-022.idm.lab.bos.redhat.com' to 
'vm-071.idm.lab.bos.redhat.com'
Deleted replication agreement from 'vm-115.idm.lab.bos.redhat.com' to 
'vm-071.idm.lab.bos.redhat.com'
Cleaned RUV entry for 'vm-071.idm.lab.bos.redhat.com' on 
'vm-022.idm.lab.bos.redhat.com'
Cleaned RUV entry for 'vm-071.idm.lab.bos.redhat.com' on 
'vm-115.idm.lab.bos.redhat.com'
{'desc': "Can't contact LDAP server"}
Please refer to http://directory.fedoraproject.org/wiki/Howto:CLEANRUV for 
manual cleanup.

I just got:
{'desc': "Can't contact LDAP server"}
Please refer to http://directory.fedoraproject.org/wiki/Howto:CLEANRUV for 
manual cleanup.

I would expect a more descriptive error, something like:
Error: Can't contact LDAP server
Failed to clean RUV entry for 'vm-071.idm.lab.bos.redhat.com' on 
'vm-062.idm.lab.bos.redhat.com':
Please refer to http://directory.fedoraproject.org/wiki/Howto:CLEANRUV for 
manual cleanup.
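Purely as a sketch of what I mean (clean_ruv() below is a hypothetical
stand-in for the code that adds the CLEANRUV task on one master, and
the host names are the ones from my test run), the reporting loop could
name the failing replica in the except branch:

```python
def clean_ruv(master, deleted_replica):
    # Stand-in for the CLEANRUV task creation; simulate one master being down.
    if master == 'vm-062.idm.lab.bos.redhat.com':
        raise RuntimeError("Can't contact LDAP server")

def clean_all_ruvs(masters, deleted_replica):
    messages = []
    for master in masters:
        try:
            clean_ruv(master, deleted_replica)
            messages.append("Cleaned RUV entry for '%s' on '%s'"
                            % (deleted_replica, master))
        except Exception as e:
            # Name the replica that failed, not just the bare LDAP error.
            messages.append("Failed to clean RUV entry for '%s' on '%s': %s"
                            % (deleted_replica, master, e))
    return messages

msgs = clean_all_ruvs(['vm-022.idm.lab.bos.redhat.com',
                       'vm-062.idm.lab.bos.redhat.com'],
                      'vm-071.idm.lab.bos.redhat.com')
```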


3) Redundant exception

I think this try..except statement is redundant, as you don't do
anything in the except part besides re-raising:

+    try:
+        repl1 = replication.ReplicationManager(realm, replica1,
dirman_passwd)
+
+        replica_dn = repl1.replica_dn()
+
+        attr = 'CLEANRUV%s' % delrepl_id
+        mod = [(ldap.MOD_REPLACE, 'nsds5task', attr)]
+        repl1.conn.modify_s(replica_dn, mod)
+
+    except Exception, e:
+        raise e


4) Bad tombstone replication

When I started the replica that was down during RUV cleanup, tombstone
was replicated again to all replicas. This would mean that manual
CleanRUV would have to be run on _all_ replicas again and not just the
one that was down. Are we OK with this? If yes, then it would be nice to
mention this information in the error message.

Martin

_______________________________________________
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel
