On 06/02/2014 08:38 AM, Simo Sorce wrote:
On Mon, 2014-06-02 at 10:08 -0400, Rob Crittenden wrote:
Simo Sorce wrote:
However, we may want to be able to mark a topology for 'multiple' sets.
For example, we may want by default the same topology for both the main
database and the CA database.
I think we should store them separately; making them the "same" would
be done by a tool, but the data would just reflect the connections.

I was thinking the object DN would contain the LDAP database name (or
some rough equivalent), so we would store the IPA connections separately
from the CA connections.
Ok, we can debate this; maybe we simply have a flag in the framework
that 'links' two topologies and replicates any change from one to the
other.

The only reason I had to 'mark' stuff in a single topology in order to
share it is that this way any change is atomic and the two topologies
cannot diverge, as the objects are the same. If we think the chance of
divergence is low, or that it is not important because the topology
plugin will always prevent disconnected states anyway, then we may avoid
it and let the framework try to keep the topologies in sync, just loudly
warning if they somehow get out of sync (which will happen briefly every
time replication of the topology objects happens :).
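For concreteness, storing them separately could mean two parallel
containers in the shared tree, with the framework's 'link' flag
mirroring changes between them. The container DNs below are purely
hypothetical, just to illustrate the per-database split the object DN
would encode:

  # main database topology (hypothetical container DN)
  dn: cn=master-x-to-master-y,cn=realm,cn=topology,dc=example,dc=com
  ...

  # CA database topology, kept in sync by the framework
  dn: cn=master-x-to-master-y,cn=ca,cn=topology,dc=example,dc=com
  ...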

ad 2] the data required are available in the replicationAgreement (and
eventually replica) entries, but the question is whether there should be
a 1:1 relationship to entries in the shared tree or a condensed
representation, and whether there should be a server- or
connection-oriented view.
My answer is no, we need only one object per connection, but config
entries are per direction (and different ones on different servers).
We also need to store the type (MMR, read-only, etc.) for future-proofing.
Store where?

One entry per connection would mirror what we have now in the mapping
tree (which is generally ok). I wonder if this would be limiting with
other agreement types depending on the schema we use.
My idea is that on the connection object you have a set of attributes
that tell you how replication happens.

So normally you'll have:
dn: uuid?
objectclass: ipaReplicationTopologySegment
left: master-X
right: master-Y
direction: both || left-right || right-left (|| none ?)

If we have other special types we change direction accordingly or add
another attribute.
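For illustration, a concrete entry following that sketch might look like
this (the DN and values are made up; whether the RDN should be a uuid or
a descriptive name is exactly the open question above):

  dn: cn=master-x-to-master-y,cn=topology,dc=example,dc=com
  objectclass: top
  objectclass: ipaReplicationTopologySegment
  left: master-x.example.com
  right: master-y.example.com
  direction: both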

We already have the list of servers, so we only need to add the list of
connections in the topology view. We may need to amend the server
objects to add additional data in some cases, for example to indicate
whether a server is fully installed or not (on creation the topology
plugin would complain that the server is disconnected until we create
the first segment, but that may actually be a good thing :-)
Not sure I grok the fully installed part. A server isn't added as a
master until it is actually installed, so a prepared master shouldn't
show here.
Uhmm, you may be right; if we can make this a non-problem, all the
better.

Next question: how to handle changes made directly in dse.ldif? If
everything should be done by the topology plugin, it would have to
verify and compare the info in cn=config and in the shared tree at every
startup of the directory server. This might be complicated by the fact
that the replication plugin might already be started, and repl
agreements active, before the topology plugin is started and could do
its work (plugin startup order and dependencies need to be checked).
Why do we care which one starts first?
We can simply change replication agreements at any time, so the fact
that the replication topology (and therefore the agreements) can change
after startup should not be an issue.
Someone could delete an agreement, or worse, add one we don't know
about. Does that matter?
Someone can do this at any time after startup, so we already need to
handle it; why should it be a problem?

It shouldn't be a problem for replication, since everything is dynamic.



However, I agree we want to avoid churn, so to answer Ludwig as well: I
guess we just want to make sure the topology plugin always starts before
the replication plugin and amends the replication agreements accordingly.

+1 - I think this will avoid some problems.


What happens to values in the mapping tree that aren't represented in
our own topology view?
I think we should ignore them if they reference a machine that is not a
recognized master. I guess the main issue here is the case where a
master got deleted but somehow the cn=config entry was not, and we end
up with an orphan agreement that the topology plugin initially created
but no longer recognizes as its own.
I see 2 options here:
1) We ignore it, and let the admin deal with the issue.
2) We mark agreements with a special attribute that indicates they have
been generated by the topology plugin, so the plugin can delete any it
does not recognize as currently valid. The only problem here is the
initial migration, but that is not a huge issue IMO: the topology plugin
may simply recognize that the two sides of an existing agreement are two
IPA masters, and just take over those entries.
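Option 2 could be as simple as one extra attribute on the standard
agreement entry in cn=config. Everything below except the marker is the
stock 389 DS agreement schema; the marker attribute name is hypothetical:

  dn: cn=meTomaster-y,cn=replica,cn="dc=example,dc=com",cn=mapping tree,cn=config
  objectclass: top
  objectclass: nsDS5ReplicationAgreement
  cn: meTomaster-y
  nsDS5ReplicaHost: master-y.example.com
  nsDS5ReplicaPort: 389
  nsDS5ReplicaBindMethod: SASL/GSSAPI
  nsDS5ReplicaTransportInfo: LDAP
  nsDS5ReplicaRoot: dc=example,dc=com
  # hypothetical marker, actual name to be defined
  ipaTopologyManaged: TRUE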

[..]

So when we do the migration to this version, some script will be needed
to create the initial topology from the agreements. Could we have a race
condition?
If we do the takeover thing I describe above, maybe not?
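The takeover would then be a plain modify on each pre-existing agreement
whose two endpoints are recognized IPA masters: stamp it with the marker
from option 2 instead of deleting and recreating it, so replication
never stops during the migration (marker attribute hypothetical, as
above):

  dn: cn=meTomaster-y,cn=replica,cn="dc=example,dc=com",cn=mapping tree,cn=config
  changetype: modify
  add: ipaTopologyManaged
  ipaTopologyManaged: TRUE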

I really do not want to touch the replication plugin. It works just fine
as it is, and handling the topology has nothing to do with handling the
low-level details of replication. To each its own.
If other deployments want to use the topology plugin, we can later move
it to the 389ds codebase and generalize it.
My memory is fuzzy but I think that restarts are required when
adding/deleting agreements. Is that right? What implications would that
have for this?
We create (and change) agreements on the fly without ever restarting the
server right now, so I would say not?

Correct.  Replication configuration is dynamic.
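For instance, triggering an online re-initialization over an existing
agreement is a plain ldapmodify against the live server; the DN matches
the hypothetical agreement sketched above, while the attribute is the
stock 389 DS one:

  dn: cn=meTomaster-y,cn=replica,cn="dc=example,dc=com",cn=mapping tree,cn=config
  changetype: modify
  replace: nsds5BeginReplicaRefresh
  nsds5BeginReplicaRefresh: start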


Simo.

