Now, with some effort this can be resolved, e.g. if the server is
removed, keep it internally as a removed server and, for segments
connecting this server, trigger removal of the replication
agreements; or mark the last segment as pending when its removal is
attempted, and once the server is removed also remove the
corresponding replication agreements.
Why should we "keep it internally"?
If you mark the agreements as managed by setting an attribute on
them, then you will never have any issue recognizing a "managed"
agreement in cn=config, and you will also immediately find out that
it is "old" when it is not backed by a segment, so you can safely
remove it.
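
(Purely as illustration: a minimal python-ldap sketch of that check. The
marker attribute "managedByTopologyPlugin", the segment container DN and the
"topologySegment" object class are made-up placeholders, not actual plugin
schema.)

    import ldap

    # Placeholder names, not the real schema.
    MARKER_ATTR = "managedByTopologyPlugin"
    SEGMENT_BASE = "cn=topology,dc=example,dc=com"

    conn = ldap.initialize("ldap://localhost:389")
    conn.simple_bind_s("cn=Directory Manager", "password")

    # Agreements in cn=config that carry the marker attribute.
    managed = conn.search_s(
        "cn=config", ldap.SCOPE_SUBTREE,
        "(&(objectClass=nsds5ReplicationAgreement)(%s=*))" % MARKER_ATTR,
        ["nsDS5ReplicaHost"])

    # Remote hosts that still have a segment in the shared tree.
    segments = conn.search_s(
        SEGMENT_BASE, ldap.SCOPE_SUBTREE,
        "(objectClass=topologySegment)", ["cn"])
    segment_hosts = {e[1]["cn"][0].decode() for e in segments}

    # A marked agreement with no backing segment is "old" and can go.
    for dn, attrs in managed:
        host = attrs["nsDS5ReplicaHost"][0].decode()
        if host not in segment_hosts:
            conn.delete_s(dn)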
I didn't want to add new flags/fields to the replication agreements
as long as everything can be handled by the data in the shared tree.
We have to. I think it is a must, or we will find numerous corner cases.
Is there a specific reason why you do not want to add flags to
replication agreements in cn=config?
Simo and I had a discussion on this and agreed that the "marking" of a replication agreement as controlled by the plugin could be done through a naming convention on the replication agreements. They are originally created as "cn=meTo<remote host>,..." and would be renamed to something like "cn=<local host>-to-<remote host>,...", avoiding the need to add a new attribute to the replication agreement schema.

Unfortunately this does not work out of the box. I only discovered after implementing and testing (I was not aware of this before :-) that DS does not implement the modrdn operation for internal backends; it just returns unwilling_to_perform. And if it ever is implemented, the replication plugin will have to be changed as well, to catch the modrdn and update the in-memory
objects with the new name (which is used in logging).
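
(For reference, this is roughly the rename the naming convention would have
required; against cn=config the server refuses modrdn, so python-ldap raises
UNWILLING_TO_PERFORM. The DNs below are just examples.)

    import ldap

    conn = ldap.initialize("ldap://localhost:389")
    conn.simple_bind_s("cn=Directory Manager", "password")

    old_dn = ("cn=meToreplica2.example.com,cn=replica,"
              "cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config")
    try:
        # Rename to the "<local host>-to-<remote host>" convention.
        conn.rename_s(old_dn,
                      "cn=replica1.example.com-to-replica2.example.com")
    except ldap.UNWILLING_TO_PERFORM:
        # cn=config entries cannot be renamed, so the operation is refused.
        pass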

So, if there is no objection, I will go back to the "flag" solution.
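
(A minimal sketch of what the flag could amount to, again with a placeholder
attribute name rather than the final schema.)

    import ldap

    conn = ldap.initialize("ldap://localhost:389")
    conn.simple_bind_s("cn=Directory Manager", "password")

    agreement_dn = ("cn=meToreplica2.example.com,cn=replica,"
                    "cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config")

    # Mark the agreement as managed by the topology plugin
    # (placeholder attribute; the real name/schema would differ).
    conn.modify_s(agreement_dn,
                  [(ldap.MOD_ADD, "managedByTopologyPlugin", [b"on"])])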
