On 06/12/2015 03:27 PM, Simo Sorce wrote:
About the ranges: each replica has a unique replicaID, so the selection of
the ranges could use this replicaID as the most significant digit.
Publishing the ranges to the shared tree looks good, but what is the benefit
of publishing dnaRemainingValues (either the exact value or a sample)?
----- Original Message -----
From: "Petr Spacek" <pspa...@redhat.com>
To: "Simo Sorce" <s...@redhat.com>
Cc: "freeipa-devel" <email@example.com>, "Tomas Capek" <tca...@redhat.com>,
<lkris...@redhat.com>, "Thierry Bordaz" <tbor...@redhat.com>
Sent: Friday, June 12, 2015 5:09:08 AM
Subject: Re: [Freeipa-devel] DNA range distribution to replicas by default
On 11.6.2015 16:11, Simo Sorce wrote:
On Thu, 2015-06-11 at 12:38 +0200, Petr Spacek wrote:
On 9.6.2015 15:06, Simo Sorce wrote:
On Tue, 2015-06-09 at 10:30 +0200, Petr Spacek wrote:
I would like to discuss
"Error creating a user when jumping from an original server to replica".
Currently the DNA ranges are distributed from the master to other replicas on
the first attempt to get a number from a particular range.
This works well as long as the original master is reachable but fails
miserably when the master is not reachable for any reason.
It is apparently confusing to users. They have created a replica to be sure
that everything will work when the first server is down, right?
Remediation is technically simple (just assign a range to the new replica),
but it is confusing to the users, error-prone, and personally I feel it is an
unnecessary obstacle.
It seems to me that the original motivation for this behavior was that
masters were not able to request a range back from other replicas when a
range was depleted.
This deficiency is tracked as
https://bugzilla.redhat.com/show_bug.cgi?id=1029640 and it is slated for the
4.2.x time frame.
Can we distribute ranges to the replicas during ipa-replica-install once we
fix bug 1029640?
That was not the only reason, another reason is that you do not want to
distribute and fragment ranges to replicas that will never be used to
create users. What we should do perhaps, is to automatically give a
range to CA enabled masters so that at least those servers have a range.
If all your CAs are unavailable you have major issues anyway.
Though it is a bit bad to have magic behaviors, maybe we should have a
"main DNA range holder" role that can be assigned to arbitrary servers
(maybe the first replica gets it by default); when that is done, the server
acquires part of the range if it has none.
This concept sounds good to me!
I would only reverse the default, i.e. distribute ranges by default to all
replicas and let the admin toggle a knob if he feels that his deployment
really needs to limit range distribution.
By the time you *feel* that it may be too late.
Another option is that a replica can instantiate a whole new range if
all the range bearing servers are not around, but that also comes with
its own issues.
In general I wouldn't want to split by default, because in domains with
*many* replicas most of them are used for load balancing and will never
be used to create users, so the range would be wasted.
This should not be an issue when
https://bugzilla.redhat.com/show_bug.cgi?id=1029640 is fixed, because masters
will be able to request a range back if the local chunk is depleted.
Is that correct?
To some degree, the main issue is when replicas get removed abruptly and
are not around to "give back" anything.
We would need to start working on a range-scavenging tool to reclaim
"lost" ranges if you go and automatically distribute ranges to every
replica that ever pops up.
Okay, I understand that.
I can't help myself, but it seems to me that this problem is inherent to the
current design and can always happen, because the range information is local
to the replica. As a result, if the replica with a range disappears, we
always need to do some sort of manual recovery to get the free numbers back.
Consequently, lowering the number of replicas with ranges just makes the
problem less common but does not eliminate it.
Let's look at: cn=posix-ids,cn=dna,cn=ipa,cn=etc,dc=ipa,dc=example
It seems that we already have information in the shared tree about which
replicas have free values - this is good, but not sufficient to eliminate the
problem.
The information about range start/end and the next free value is missing from
the shared tree and is stored only in cn=config on the particular replica.
It seems to me that adding these range start/end values to the shared tree
would help, because the information about the range would be preserved even
if the replica was deleted/lost.
Apparently the attribute dnaRemainingValues in the shared tree is updated
after each number allocation, so adding the next free value (as a new
attribute) to the shared tree would not add any significant replication
churn, because the object needs to be updated anyway.
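To make the proposal concrete, a shared-tree entry under cn=posix-ids with the suggested additions might look roughly like this. The dnaHostname/dnaPortNum/dnaRemainingValues attributes exist in the DNA shared configuration; the hostname, values, and the commented range attributes are hypothetical illustrations of the proposal, not existing schema:

```
dn: dnaHostname=replica1.ipa.example+dnaPortNum=389,cn=posix-ids,cn=dna,cn=ipa,cn=etc,dc=ipa,dc=example
objectClass: dnaSharedConfig
dnaHostname: replica1.ipa.example
dnaPortNum: 389
dnaRemainingValues: 487
# Hypothetical new attributes proposed above (not in the current schema):
# dnaRangeStart: 928400000
# dnaRangeEnd: 928599999
# dnaNextValue: 928413513
```

With the commented attributes replicated, another master could reconstruct and reclaim the range even after replica1 disappears.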
What did I miss?
We could publish the range there I guess.
But I'd rather keep the counters local and update the "available" values only
every 100 or so.
This is to reduce the number of replication messages going out. Even if you
do not know the exact starting point, that is not a huge deal, as DNA checks
that an ID is free before assigning it anyway.
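The "update only every 100 or so" idea could be sketched like this. This is a hypothetical illustration of the batching scheme, not DNA plugin code; RangeCounter and the publish callback are invented names:

```python
# Hypothetical sketch of batched publication: keep the exact counter
# local and push the "remaining values" estimate to the shared tree
# only every PUBLISH_EVERY allocations, to limit replication traffic.

PUBLISH_EVERY = 100  # assumption; the thread suggests "every 100 or so"

class RangeCounter:
    def __init__(self, first, last, publish):
        self.next_free = first      # exact counter, kept local
        self.last = last
        self.publish = publish      # callback that writes to the shared tree
        self.since_publish = 0

    def allocate(self):
        if self.next_free > self.last:
            raise RuntimeError("local range depleted")
        value = self.next_free
        self.next_free += 1
        self.since_publish += 1
        if self.since_publish >= PUBLISH_EVERY:
            # Publish an approximate remaining-values count, not the
            # exact next free value.
            self.publish(self.last - self.next_free + 1)
            self.since_publish = 0
        return value

published = []
counter = RangeCounter(1000, 1999, published.append)
for _ in range(250):
    counter.allocate()
print(published)  # two replicated updates instead of 250: [900, 800]
```

The published value lags the local counter by up to PUBLISH_EVERY allocations, which is tolerable precisely because DNA verifies that an ID is free before assigning it.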
Who is consuming it?