Auerbach, Steven wrote:
> We have IPA set up in active-active mode.  The first node (ipa01) logs
> errors regularly (every few minutes) that seem to be based upon an
> attempt to communicate with a replica that no longer exists.
> 
>  
> 
> Feb 25 14:38:04 ipa01 named[2161]: LDAP query timed out. Try to adjust
> "timeout" parameter
> 
> Feb 25 14:38:04 ipa01 named[2161]: LDAP query timed out. Try to adjust
> "timeout" parameter
> 
> Feb 25 14:38:14 ipa01 named[2161]: LDAP query timed out. Try to adjust
> "timeout" parameter
> 
> Feb 25 14:38:14 ipa01 named[2161]: LDAP query timed out. Try to adjust
> "timeout" parameter
> 
> Feb 25 14:38:22 ipa01 ns-slapd: GSSAPI Error: Unspecified GSS failure. 
> Minor code may provide more information (Cannot contact any KDC for
> <<REALM>> '<<REALM>>.LOCAL')
> 
> Feb 25 14:38:35 ipa01 named[2161]: LDAP query timed out. Try to adjust
> "timeout" parameter
> 
> Feb 25 14:38:35 ipa01 named[2161]: LDAP query timed out. Try to adjust
> "timeout" parameter
> 
> Feb 25 14:38:45 ipa01 named[2161]: LDAP query timed out. Try to adjust
> "timeout" parameter
> 
> Feb 25 14:38:45 ipa01 named[2161]: LDAP query timed out. Try to adjust
> "timeout" parameter
> 
> Feb 25 14:38:45 ipa01 ns-slapd: GSSAPI Error: Unspecified GSS failure. 
> Minor code may provide more information (Server
> ldap/ipa02.<<REALM>>.local@<<REALM>>.LOCAL not found in Kerberos database)
> 
>  
> 
> The only place I found any references to the server ipa02 is in dse.ldif
> files in the /etc/dirsrv/slapd-<<REALM>>-LOCAL/ folders
> 
>  
> 
> Quoted from dse.ldif:
> 
> dn: cn=replica,cn=dc\3D<<REALM>>\2Cdc\3Dlocal,cn=mapping tree,cn=config
> 
> cn: replica
> 
> nsDS5Flags: 1
> 
> objectClass: top
> 
> objectClass: nsds5replica
> 
> objectClass: extensibleobject
> 
> nsDS5ReplicaType: 3
> 
> nsDS5ReplicaRoot: dc=<<REALM>>,dc=local
> 
> nsds5ReplicaLegacyConsumer: off
> 
> nsDS5ReplicaId: 4
> 
> nsDS5ReplicaBindDN: cn=replication manager,cn=config
> 
> nsDS5ReplicaBindDN:
> krbprincipalname=ldap/ipa02.<<REALM>>.local@<<REALM>>.LOCAL,cn=services,cn=accounts,dc=<<REALM>>,dc=local
> 
> nsDS5ReplicaBindDN:
> krbprincipalname=ldap/ipa-r02.<<REALM>>.local@<<REALM>>.LOCAL,cn=services,cn=accounts,dc=<<REALM>>,dc=local
> 
> creatorsName: cn=directory manager
> 
> modifiersName: cn=Multimaster Replication Plugin,cn=plugins,cn=config
> 
> createTimestamp: 20130924144354Z
> 
> modifyTimestamp: 20160225194116Z
> 
> nsState:: BAAAAAAAAADcWM9WAAAAAAEAAAAAAAAAZQAAAAAAAAADAAAAAAAAAA==
> 
> nsDS5ReplicaName: a5641a0e-252711e3-96afcc83-6ff9b802
> 
> numSubordinates: 1
> 
>  
> 
>  
> 
> When I execute “ipa-replica-manage list” from either the master or
> replica server I get the same response:
> 
> ipa01.<<REALM>>.local: master
> 
> ipa-r02.<<REALM>>.local: master

You should run it like this on each host:

$ ipa-replica-manage list -v `hostname`

This will show that host's current replication agreements and their status.

>  
> 
> and when I execute “ipa-csreplica-manage list” from either the master or
> the replica server I get the same response:
> 
> ipa01.<<REALM>>.local: master
> 
> ipa-r02.<<REALM>>.local: CA not configured

You should strongly consider adding a second CA. Right now you have a
single point of failure.
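If you go that route, the rough shape of it (assuming this deployment was
built from a replica info file, as was standard at the time; the .gpg path
below is hypothetical and would normally have been generated on ipa01 with
ipa-replica-prepare) would be:

```shell
# On ipa-r02: add a CA clone to the existing replica by re-running
# ipa-ca-install against the replica info file the host was installed
# from (path is an assumption -- use your actual file):
ipa-ca-install /var/lib/ipa/replica-info-ipa-r02.<<REALM>>.local.gpg
```

After that, "ipa-csreplica-manage list" should show both hosts as CA masters.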


> 
> I would have expected one of these commands to include the “ipa02”
> server as well since it is in the dse.ldif file.
> 
>  
> 
> I know we are configured in “active-active” mode and that the CA is only
> on ipa01.

389-ds uses multi-master replication. "Active-active" is typically a term
used with load balancers and clusters, and this isn't really that.

>  
> 
> From an operating perspective, identity management operations (including
> signing on to the browser-based interface and updates made on one
> server showing up on the other) from the replica (ipa-r02) are much faster than
> from the master (ipa01). I am guessing that this is because any task
> executing on the replica has only a replica pointer to the master,
> whereas any operation on the master that tries to replicate has to
> timeout on the invalid pointer to “ipa02” before it can actually
> communicate with the replica (ipa-r02).  Of course my intuition could be
> completely wrong and my actual understanding of how this process works
> is nil.

I'm not intimately familiar with low-level 389-ds replication, but I
don't believe it is done serially.

> 
> I would like to clean up this environment before I hand the reins over
> to the next person on my team.
> 
>  
> 
> So my questions are:
> 
> 1)    Is there a way to remove the invalid pointer without having to
> disrupt services on the ipa01?

The ipa-replica-manage command will show the current agreements.
Removing a stale one won't affect operations.
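As a sketch (hostname taken from the dse.ldif you quoted), removing the
stale agreement and checking for any leftover RUV entry would look
roughly like:

```shell
# Remove the stale agreement for the decommissioned host. --force is
# needed because ipa02 no longer exists and can't be contacted:
ipa-replica-manage del ipa02.<<REALM>>.local --force

# List the replica update vectors; if an entry for ipa02 remains,
# clean it using the replica ID shown in the list output:
ipa-replica-manage list-ruv
ipa-replica-manage clean-ruv <replica-ID-shown-for-ipa02>
```

The clean-ruv step is only needed if list-ruv still shows an entry for
the dead host after the agreement is removed.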

> 2)    Do I need to clean this up in this location at all?

If there is a bogus agreement, then yes. It is a resource drag, as the
server needs to calculate and store changes for a consumer that will
never receive them.

rob

> 
>  
> 
> Thanks for your interest.
> 
>  
> 
>  
> 
> *Steven Auerbach, Systems Administrator*
> 
> 
> 

-- 
Manage your subscription for the Freeipa-users mailing list:
https://www.redhat.com/mailman/listinfo/freeipa-users
Go to http://freeipa.org for more info on the project