On 11/05/2013 06:04 AM, Alexander Bokovoy wrote:
On Tue, 05 Nov 2013, Tamas Papp wrote:

The systems are up-to-date F19 KVM guests.

I'm trying to log in to the web UI with no success:

"Your session has expired. Please re-login.

To login with Kerberos, please make sure you have valid tickets
(obtainable via kinit) and configured the browser correctly
(<http://ipa31.bph.cxn/ipa/config/unauthorized.html>), then click Login.

To login with username and password, enter them in the fields below then
click Login."

Then after a while something happens and it starts working.

In logs:

On the "primary" node:

[05/Nov/2013:12:19:06 +0100] NSMMReplicationPlugin -
agmt="cn=meToipa12.bpo.cxn" (ipa12:389): Replication bind with GSSAPI
auth resumed

On the "secondary" node:

[05/Nov/2013:12:31:25 +0100] csngen_new_csn - Warning: too much time
skew (-1658 secs). Current seqnum=3
[05/Nov/2013:12:45:33 +0100] csngen_new_csn - Warning: too much time
skew (-811 secs). Current seqnum=a
[05/Nov/2013:12:45:33 +0100] csngen_new_csn - Warning: too much time
skew (-812 secs). Current seqnum=1
[05/Nov/2013:12:45:35 +0100] csngen_new_csn - Warning: too much time
skew (-811 secs). Current seqnum=1
[05/Nov/2013:12:45:47 +0100] csngen_new_csn - Warning: too much time
skew (-800 secs). Current seqnum=4
[05/Nov/2013:12:45:47 +0100] csngen_new_csn - Warning: too much time
skew (-801 secs). Current seqnum=1
[05/Nov/2013:12:45:49 +0100] csngen_new_csn - Warning: too much time
skew (-800 secs). Current seqnum=1


This has been fixed upstream and in some releases: replication now proceeds despite excessive clock skew. What is your 389-ds-base version and platform?
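If you want to see how large the skew is and whether it is shrinking, the values can be pulled straight out of the 389-ds errors log with a short script. This is just a sketch; the regexp is keyed to the exact warning text quoted above:

```python
import re

# Matches 389-ds errors-log warnings like:
#   csngen_new_csn - Warning: too much time skew (-1658 secs). Current seqnum=3
# \s+ also tolerates the line wrapping seen in pasted logs.
SKEW_RE = re.compile(r"csngen_new_csn - Warning: too much time\s+skew \((-?\d+) secs\)")

def skew_values(log_text):
    """Return the list of reported skew values, in seconds."""
    return [int(m.group(1)) for m in SKEW_RE.finditer(log_text)]

sample = """\
[05/Nov/2013:12:31:25 +0100] csngen_new_csn - Warning: too much time skew (-1658 secs). Current seqnum=3
[05/Nov/2013:12:45:33 +0100] csngen_new_csn - Warning: too much time skew (-811 secs). Current seqnum=a
"""

print(skew_values(sample))  # → [-1658, -811]
```

In your log the skew halves between 12:31 and 12:45, which matches "after a while it starts working": the generator catches up once the clocks converge.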

Running date shows the same system time on both machines:

Tue Nov  5 12:59:29 CET 2013

I called the machine that was installed initially the primary; the
secondary is the one that was deployed by replication.
Virtual machines are known to have issues keeping time in sync.

Finally, I have some questions :)

1. How can this happen, what's the problem? Is it something about the
design, did I screw something up, or is it the virtualization layer..?
How can I avoid it, and if it happens, how can I fix it immediately?
It is a virtualization/time issue.

2. What is the difference between 'primary' and 'secondary'? What
happens if the primary machine gets destroyed?
In IPA all replicas are the same; they differ only in the replication
paths they sync over and in the presence of an integrated CA (if any).

If you deployed the original IPA server with an integrated CA, then at
least one of your other replicas should also have a CA configured, to
allow proper recovery in case the primary one is destroyed.

4. How many "masters" can I use?
Technically there can be 65536 different masters in 389-ds replication.

5. If I have a network like this:

A1          B1
A2          B2

A2 and B1,2 are replicated from A1

If the connection gets lost between the A and B sites, are B1 and B2
(and A1, A2) still replicated fine?
I assume from the above that B1 does not know about B2 (and vice versa)?
Once connectivity between sites A and B is restored, all unreplicated
data will be replicated. There could be conflicts if there were changes
on both sides during the split, but the majority of them are resolved
automatically by 389-ds.
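The automatic resolution is essentially "the change with the larger CSN wins". Because the CSN fields are fixed-width hex, with the timestamp first, comparing two CSNs as plain strings orders them by time, then sequence number, then replica ID. A toy illustration (both CSNs are made up):

```python
def winning_change(csn_a, csn_b):
    """Pick the change that survives a conflict: the larger CSN wins.
    Fixed-width hex fields mean plain string comparison gives the
    correct timestamp/seqnum/replica-id ordering."""
    return max(csn_a, csn_b)

# say site A wrote during the split, and site B wrote 14 minutes later:
a = "5278c8ad000000030000"
b = "5278cbf1000000040000"
print(winning_change(a, b))  # → the later (larger-timestamp) change, b
```

This is also why the time-skew warnings above matter: a machine whose clock runs ahead can "win" conflicts it shouldn't.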

6. If a client is installed with ipa-client-install using A1 and A1 gets
lost, does the client know where it needs to connect (failover)?
The IPA server that was used to enroll the host will be the primary one
(A1 in your example). There is failover in sssd.conf: it falls back to
the SRV records of the domain and tries servers in the order the SRV
records return.
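The order clients are supposed to try SRV targets is defined by RFC 2782: ascending priority, and weighted-random selection among records of equal priority. A sketch of that selection (the hostnames and weights below are made up for illustration):

```python
import random

def srv_order(records, rng=None):
    """Order SRV (priority, weight, target) records the way a client
    tries them per RFC 2782: ascending priority; weighted-random
    selection by weight within the same priority."""
    rng = rng or random.Random()
    groups = {}
    for prio, weight, target in records:
        groups.setdefault(prio, []).append((weight, target))
    ordered = []
    for prio in sorted(groups):
        group = groups[prio]
        while group:
            total = sum(w for w, _ in group)
            pick = rng.uniform(0, total) if total else 0
            acc = 0.0
            chosen = len(group) - 1  # fall back to the last entry
            for i, (w, _) in enumerate(group):
                acc += w
                if pick <= acc:
                    chosen = i
                    break
            ordered.append(group.pop(chosen)[1])
    return ordered

# hypothetical records for _ldap._tcp of the IPA domain:
recs = [(0, 100, "a1.bph.cxn"), (0, 100, "a2.bph.cxn"), (10, 0, "b1.bph.cxn")]
print(srv_order(recs))  # a1/a2 in some order, b1.bph.cxn always last
```

So if A1 disappears, the client simply moves on to the next target the SRV lookup returned.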

7. Can I install slave (read-only) replicas so that clients access them
only for queries, and for changes (like password changes) they access
master servers?
No read-only replicas are available for IPA. All replicas are read-write
and propagate changes across the replication paths defined in the
replication agreements. All IPA servers are really masters; thus we have
multi-master replication rather than master-slave.

Freeipa-users mailing list
