On 01/09/2015 03:50 PM, Simo Sorce wrote:
On Fri, 09 Jan 2015 15:29:02 +0100
Ludwig Krispenz <lkris...@redhat.com> wrote:

On 01/07/2015 05:35 PM, Simo Sorce wrote:
On Wed, 07 Jan 2015 17:23:08 +0100
Ludwig Krispenz <lkris...@redhat.com> wrote:

On 01/07/2015 05:13 PM, Simo Sorce wrote:
On Wed, 07 Jan 2015 17:11:53 +0100
Ludwig Krispenz <lkris...@redhat.com> wrote:

Now, with some effort this can be resolved, e.g.
if the server is removed, keep it internally as a removed
server and, for segments connecting this server, trigger
removal of replication agreements, or mark the last
segment as pending when its removal is attempted and, once the
server is removed, also remove the corresponding repl
agreements
Why should we "keep it internally" ?
If you mark the agreements as managed by setting an attribute
on them, then you will never have any issue recognizing a
"managed" agreement in cn=config, and you will also
immediately find out that it is "old", as it is not backed by a
segment, so you can safely remove it.
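
For illustration, marking boils down to one extra attribute on the existing
agreement entry under cn=mapping tree,cn=config. A minimal sketch with
python-ldap; the attribute name "topologyManaged" and the host, suffix and
credentials are placeholders, not anything agreed on in this thread:

    import ldap

    # Placeholder DN; "topologyManaged" is a hypothetical marker attribute
    # that would have to be added to the schema first.
    AGMT_DN = ("cn=meToreplica2.example.com,cn=replica,"
               "cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config")

    conn = ldap.initialize("ldap://replica1.example.com")
    conn.simple_bind_s("cn=Directory Manager", "password")

    # mark the agreement as managed by the topology plugin
    conn.modify_s(AGMT_DN, [(ldap.MOD_ADD, "topologyManaged", [b"on"])])

    # recognizing managed agreements later is a plain equality search
    managed = conn.search_s(
        "cn=mapping tree,cn=config", ldap.SCOPE_SUBTREE,
        "(&(objectclass=nsds5ReplicationAgreement)(topologyManaged=on))")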
I didn't want to add new flags/fields to the replication
agreements as long as everything can be handled by the data in
the shared tree.
We have to. I think it is a must or we will find numerous
corner cases. Is there a specific reason why you do not want to
add flags to replication agreements in cn=config ?
Simo and I had a discussion on this and had agreed that the
"marking" of a replication agreement
as controlled by the plugin could be done by a naming convention
on the replication agreements.
They are originally created as "cn=meTo<remote host>,..." and
would be renamed to something like
"cn=<local host>-to-<remote host>,....." which avoids adding a new
attribute to the repl agreement schema.
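
Just to make the intended operation concrete: the rename itself is a plain
modrdn of the existing agreement entry. A sketch with python-ldap over a
normal connection (host names, suffix and credentials are placeholders); the
plugin would have to do the equivalent as an internal operation, which is
where the problem below shows up:

    import ldap

    # Placeholder DN of an agreement as originally created ("cn=meTo<remote host>").
    OLD_DN = ("cn=meToreplica2.example.com,cn=replica,"
              "cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config")
    # the naming convention discussed above: "<local host>-to-<remote host>"
    NEW_RDN = "cn=replica1.example.com-to-replica2.example.com"

    conn = ldap.initialize("ldap://replica1.example.com")
    conn.simple_bind_s("cn=Directory Manager", "password")
    # modrdn within the same parent entry, dropping the old RDN value
    conn.rename_s(OLD_DN, NEW_RDN, delold=1)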

Unfortunately this does not work out of the box. I only
discovered after implementing and testing (I was not aware of it
before :-) that DS does not implement the modrdn operation for
internal backends; it just returns unwilling_to_perform.
And if it is ever implemented, the replication plugin will have to
be changed as well to catch the modrdn and update the in-memory
objects with the new name (which is used in logging).

So, if there is no objection, I will go back to the "flag"
solution.
What about simply deleting the agreement and adding it back with
the new name ?
It will stop replication and then restart it, unnecessarily
interrupting replication for some time.
Assume you have a working topology and then raise the domain level
and the plugin becomes active,
creates segments and "marks" agreements as controlled. This should
happen as smoothly as
possible.
While this is true, it is also a rare operation. I do not see it as
a big deal to be honest.
However if you prefer to add a flag attribute that is fine by me
too.
after thinking a bit more about it, I don't think we need the mark at
all.
We discussed this already and we came to the conclusion we need it :)
OK, I think the discussion in which we came up with the mark was driven by my concerns
about removing a replica, and you more or less dispelled those concerns.
I agree there could be some corner cases like the ones you sketch below.
But it will still be difficult to do the right thing in all cases.
Take your example of a marked agreement and no segment. We could arrive at this state
in two different scenarios:
1]
- take a backup of the shared database
- add a segment; an RA will be created and marked
- the database gets corrupted and you restore the backup
- the agmt is marked and no segment exists - should we really delete the agmt ?
2]
- have a segment and a marked agreement
- take a backup of the dse.ldif
- delete the segment; the agmt is removed
- restore the dse.ldif
- the agmt is marked, no segment exists

The agreement would have been marked in two scenarios:
- the agreement exists and the dom level is raised, so that a segment
is created from the agreement
- the dom level is set, the plugin is active and a segment is added to
the shared tree so that a replication agreement is generated.
In all cases where an agreement is marked, there is a 1:1
corresponding segment, so the existence of a segment should be
marking enough. I will make mark_agreement and check_mark
no-ops, so if we really run into a
scenario where a mark is required, it can be added in one of
the methods discussed
so far.
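
As a sketch (the field names for the agreement/segment endpoints are
illustrative, not the plugin's real attributes), the proposal amounts to:

    def mark_agreement(agmt_dn):
        # intentionally a no-op: nothing extra is written to cn=config
        pass

    def check_mark(agmt_dn):
        # intentionally a no-op: a mark is never consulted
        return False

    def agreement_is_managed(agmt, segments):
        """Treat an agreement as plugin-managed iff some segment connects the
        same pair of hosts -- the 1:1 correspondence argued above. The
        'from'/'to' and 'left'/'right' keys are illustrative field names."""
        return any({agmt["from"], agmt["to"]} == {seg["left"], seg["right"]}
                   for seg in segments)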
I recall the problems came up in corner cases, like when a replica starts
and finds an agreement.
If it is marked and a segment does not exist, we simply delete it.
However, if it is not marked and a segment does not exist, what do you do ?
Without markings you can only import it, and this way you may end up
reviving deleted agreements by mistake.

The reason this may happen is that cn=config and the main database are
separate, and a backup dse.ldif may be used upon start even though the
shared database has already been updated, so we cannot count on the
replica starting and finding the segment-to-be-deleted-via-replication
still there.

The marking was needed to make sure that once an agreement was imported
we wouldn't do it again later on.
I still maintain that marking it is safer and will lead to fewer issues
with phantom segments coming back to life.
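
As a sketch of the startup handling described above (function and value
names are illustrative, not the plugin's API), with a mark the decision is
unambiguous, and the import branch can only fire once per agreement:

    def reconcile_agreement_on_startup(marked, segment_exists):
        """Sketch of the startup decision: 'marked' would come from a marker
        attribute on the agreement in cn=config, 'segment_exists' from a
        lookup in the shared topology subtree."""
        if marked and segment_exists:
            return "keep"         # normal managed state
        if marked and not segment_exists:
            return "delete"       # the segment is gone, the agreement is stale
        if not marked and segment_exists:
            return "mark"         # adopt an agreement that already has a segment
        # not marked, no segment: import it into the shared tree once and mark
        # it, so a later restart cannot re-import (revive) it by mistake
        return "import_and_mark"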

Simo.



_______________________________________________
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel
