Thanks,

I will look into it and try to add what's missing, and also try to make the design a bit clearer.


Ludwig

On 05/07/2015 04:32 PM, thierry bordaz wrote:
On 04/29/2015 11:18 AM, Ludwig Krispenz wrote:
Hi, thanks again, so there is some work to do, but see some answers inline
On 04/27/2015 10:18 AM, thierry bordaz wrote:
On 04/21/2015 10:49 AM, Ludwig Krispenz wrote:

On 04/20/2015 06:00 PM, thierry bordaz wrote:
On 04/13/2015 10:56 AM, Ludwig Krispenz wrote:
Hi,

in the attachment you find the latest state of the "topology plugin", it implements what is defined in the design page: http://www.freeipa.org/page/V4/Manage_replication_topology (which is also waiting for a reviewer)

It contains the plugin itself and a core set of ipa commands to manage a topology. To be really applicable, some work outside the plugin is required, e.g. the management of the domain level and a decision on where the binddn group should be maintained.

Thanks,
Ludwig


Hello Ludwig,

Quite a long review to do. So far I have only looked at the startup phase, and I have only a few questions and comments.
Thanks for your time, and I'm looking forward to your review of the other parts; you raise some valid points. I'll try to answer some of them inline, but will integrate others into the next version of the patch.

In ipa_topo_start, do you need to get argc/argv, as you are not using the plugin-argxx attributes?
no. It was a leftover from a "standard" plugin


topo_plugin_conf configuration parameters are not freed when the plugin is closed. Is it closed only at shutdown?
Also, I would initialize it to {NULL}.
So far it is not planned to be dynamic, but I will address the memory management

In case the config does not contain any nsslapd-topo-plugin-shared-replica-root, I wonder if ipa_topo_apply_shared_config may crash, as shared_replica_root will be NULL, or at least in ipa_topo_apply_shared_replica_config/ipa_topo_util_get_replica_conf.

Also if nsslapd-topo-plugin-shared-replica-root contains an invalid root suffix (typo), topoRepl remains NULL and ipa_topo_util_get_replica_conf/ipa_topo_cfg_replica_add can crash.
For the two comments above: I was assuming that the plugin config and shared tree would be set up by the ipa tools and server setup, so I assumed only valid data, but you are right, checking for bad data doesn't hurt.

In ipa_topo_util_segment_from_entry, if the config entry has no direction/left/right, it will crash. Shouldn't it return an error if the config is invalid?
Adding a segment should be done with the ipa command 'ipa topologysegment-add ...', and this always provides a direction (param or default). If you try to add a segment directly, direction is a required attribute of the segment objectclass, so it should be rejected.

The update of domainLevel may start the plugin. If two mods update the domainLevel, they could be done in parallel.
yes :-(


In ipa_topo_util_update_agmt_list, if there is a marked agreement but no segment, it deletes the agreement. Is it possible there is a segment but no agreement? For example, if the server was stopped or crashed after the segment was created but before the local config was updated.
then it should be created from the segment


Hosts are taken from the shared config tree (cn=masters,<cfg>). Is it possible to have a replica agreement to a host that is not under 'cn=masters,<cfg>'?
yes, it will be ignored by the plugin


thanks
thierry




Hi Ludwig,

I continued the review of the design/topology plugin code. This is a really interesting plugin, but unfortunately I have not yet reviewed all the parts.

I went through the design and dug into the related parts of the code. I still need to review the rest, starting at http://www.freeipa.org/page/V4/Manage_replication_topology#connectivity_management.

I think I covered ~50% of the design, but maybe more than 50% of the code.

Here are additional points:


    in ipa_topo_set_domain_level, you may record the new Domain
    level value as FATAL (it is already recorded in case of online
    import)

this just records the actual domain level. I don't think we need to log it every time; logging only when it changes should be sufficient (to verify)


    ipa_topo_be_state_change is called for any backend going online.
    Domain level and start should be done only for a backend mapping
    a shared-replica-root.

yes, the comment is already there, but the actual check isn't, so it would be recreated at online init of any backend. I think it is not too bad, but I will change it

    Also, the plugin can be started many times (each online init):
        ipa_topo_util_start is not protected by a lock
        Some fields will leak (in ipa_topo_init_shared_config)
    Also, I wonder: if you reinit the same replica-root several times,
    its previous config will leak (replica->repl_segments).

you shouldn't :-), but it needs to be made safer


    In ipa_topo_apply_shared_replica_config,
    I do not see where replica_config is kept (leak?)

it is created and set in the shared_conf data, but if it already exists, it will leak (that's probably the case above); it should be freed when the plugin is stopped


    ipa_topo_util_start/ipa_topo_apply_shared_config is called at
    startup or during online-init.
    For online-init, if the plugin was already active, what is the
    need of calling ipa_topo_util_start?

you don't know if the data in the shared tree are the same as before

    For online-init, it initializes all the replica-roots. Could it
    init only the reinitialized replica-root?

yes, it could (should).


    in
    http://www.freeipa.org/page/V4/Manage_replication_topology#shared_configuration:_database,
    it mentions ipaReplTopoConfigMaster.
    Is it implemented?

it is there as a concept, not completed


    in
    http://www.freeipa.org/page/V4/Manage_replication_topology#shared_configuration:_segment,
    what happens if a server under cn=masters is removed?

for all its managed suffixes, the marked segments connecting it are removed, and the ldap principal is removed from the binddn groups


    in
    http://www.freeipa.org/page/V4/Manage_replication_topology#shared_configuration:_example,
    there is a segment cn=111_to_102. For example, it was created by
    vm-111 when the topology plugin started with 'dc=example,dc=com'.
    What prevents the vm-102 topology plugin from creating the segment
    cn=102_to_111?

the pre-add check should prevent this, but if you do it simultaneously, two segments will be added and one will be ignored when the internal segments are updated (to double-check)


    in ipa_topo_post_mod:
    Is it 100% certain that if we have 'cn=replica
    example,cn=topology,dc=example,dc=com', then the related config
    exists in topo_shared_conf.replicas?
    In ipa_topo_util_get_replica_conf, it looks like the entry
    can exist before the related config is added.
    In that case, when modifying a segment,
    ipa_topo_util_get_conf_for_segment will return a 'tconf' config
    that is not linked in topo_shared_conf.replicas; tconf will leak,
    and I am unsure the post_mod is fully processed.

    In ipa_topo_post_mod /
    ipa_topo_util_segment_update: if the
    segment.ipaReplTopoSegmentDirection was "none" and a MOD set it to
    "both", segment->right/left are not set, but it is said to be
    bidirectional

I'm no longer sure if we have a valid scenario, need to think about it again


    ipaReplTopoSegmentStatus: I cannot find it in the design

it was forgotten when the method to mark agreements was last changed :-)


    in ipa_topo_util_existing_agmts_update, my understanding is that
    a host only updates its local replica agreement.

that's the only agreements it can update with internal ops

    So even if the segment update is replicated, other hosts will
    not skip updates where ->origin is not themselves.

you mean they will skip/ignore?

    I think you should add a comment about this, as it looks like an
    important point.
    Also, I did not find this point in the design, but maybe I
    missed it.

ok


    in ipa_topo_util_existing_agmts_update, it applies the mods on
    the left or on the right. That means we do not support several
    instances on the same machine. I also missed that point in the design.

    in ipa_topo_agmt_mod, it does nothing when deleting a managed
    attribute?

    in ipa_topo_agmt_mod, if the update of the replica agreement fails
    (ipa_topo_agreement_dn or ipa_topo_util_modify), you may log a
    message

    in ipa_topo_agmt_mod, if the mod is not related to any managed
    attribute, there is no replica agreement update, but the 'dn' is
    not freed.

    in ipa_topo_post_mod, I do not see 'domainLevel' in the schema.
    Is it stored in an extensible object?

the mod of an agreement via the plugin was a bit neglected in my tests; I will check again and answer later






Hi Ludwig,

Last (late) part of the review :-)

In ipa_topo_agmt_setup, the 'cn' of the replica agreement needs to be freed.

My understanding is that ipa_topo_apply_shared_config will not be called twice in parallel. Correct?

In ipa_topo_util_update_agmt_list, 'filter' will leak.

In ipa_topo_util_update_agmt_list, shouldn't 'smods' be freed?

In ipa_topo_util_segment_merge... difficult to understand, but nice :-)
When merging, depending on which server the merge occurs, we may decide to duplicate the first replica agreement's managed attributes or to keep the values given in the added segment (at least that is what I understand :-\ ).


In ipa_topo_util_update_segments_for_host, when adding a replica we look up all the RAs toward that replica. But if an RA is related to an unmanaged repl_root, I think it may crash in ipa_topo_util_segment_write/ipa_topo_segment_dn (conf=NULL).


ipa_topo_connection_fanout returns allocated targets/node_fanout. I do not see where they are freed in ipa_topo_check_disconnect_reject.

In the design, it is said that in preop, internal operations by the topology plugin are successful. Where is this tested?

Regarding the design:
I have not found the list of operations that the administrator has to do before activating the plugin. For example, a non-marked replica agreement is deleted if there is no segment to the replica host. Does that mean that all segments must be created, or the RAs marked, before activating the plugin?

If some attributes are skipped from replication (to reduce replication traffic, or for a DMZ), checking the connectivity becomes complex. For example, one replica may receive only part of the attributes and another all the attributes. The decision to create a replica agreement to a new replica depends on whether we want fractional or full replication.

Thanks
thierry

-- 
Manage your subscription for the Freeipa-devel mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-devel
Contribute to FreeIPA: http://www.freeipa.org/page/Contribute/Code
