On 18.04.2016 15:22, Petr Spacek wrote:
On 6.4.2016 10:57, Petr Spacek wrote:
On 6.4.2016 10:50, Jan Cholasta wrote:
On 4.4.2016 13:51, Petr Spacek wrote:
On 4.4.2016 13:39, Martin Basti wrote:

On 31.03.2016 09:58, Petr Spacek wrote:
On 26.2.2016 15:37, Petr Spacek wrote:
On 25.2.2016 16:46, Simo Sorce wrote:
On Thu, 2016-02-25 at 15:54 +0100, Petr Spacek wrote:
On 25.2.2016 15:28, Simo Sorce wrote:
On Thu, 2016-02-25 at 14:45 +0100, Petr Spacek wrote:
Variant C
An alternative is to be lazy and dumb. Maybe it would be enough for the first round ...

We would retain
[first step - no change from variant A]
* create locations
* assign 'main' (aka 'primary' aka 'home') servers to locations
++ specify weights for the 'main' servers in given location, i.e.
input (server, weight) tuples

Then, backups would be an auto-generated set of all remaining servers from all other locations.

Additional storage complexity: 0

This covers the scenario "always prefer local servers and use remote only as fallback" easily. It does not cover any other scenario.

This might be sufficient for the first run and would allow us to
gather some
feedback from the field.
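As a rough sketch of what variant C implies (the data layout and names here are illustrative, not the actual IPA schema), the backup set for a location is simply every 'main' server assigned to any other location, so no extra storage is needed:

```python
# Illustrative sketch of variant C, not actual FreeIPA code.
# Each location lists its 'main' servers as (hostname, weight) tuples;
# backups are derived on the fly as all servers from all other locations.

locations = {
    "prague": [("server1.ipa.test", 100), ("server2.ipa.test", 50)],
    "brno": [("server3.ipa.test", 100)],
}

def backup_servers(location):
    """All servers outside `location` act as backups; storage cost is zero."""
    return [
        server
        for loc, servers in locations.items()
        if loc != location
        for server, _weight in servers
    ]

print(backup_servers("prague"))  # ['server3.ipa.test']
```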

Now I'm inclined to this variant :-)
To be honest, this is all I always had in mind, for the first step.

To recap:
- define a location with the list of servers (perhaps location is a property of server objects, so you can have only one location per server, and if you remove the server it is automatically removed from the location w/o additional work or referential integrity necessary); if weight is not defined (the default), then they all have the same weight.

- Allow specifying backup locations in the location object; priorities are calculated automatically and all backup locations have the same weight.
Hmm, weights have to be inherited from the original location in all cases. Did you mean that all backup locations have the same *priority*?
Yes, sorry.

Anyway, explicit configuration of backup locations introduces the API and schema for variant A, and that is what I'm questioning above. It is hard to make it extensible, so we do not have a headache in the future when somebody decides that more flexibility is needed OR that a link-based approach is better.
I think no matter what we do, we'll need to allow admins to override backup locations; in the future, if we can calculate them automatically, admins will simply not set any backup location explicitly (or set some special value like "autogenerate") and the system will do it for them.

Forcing admins to mentally calculate weights to force the system to
autogenerate the configuration they want would be a bad experience, I
personally would find it very annoying.

In other words, to do what you propose above we would have to design the complete schema and API for variant A anyway to make sure we do not lock ourselves in, so we are not getting any savings by doing so.
A seemed much more complicated to me, as you wanted to define a full matrix of weights for servers when they are served as backups, and so on.

- Define a *default* location, which is the backup for any other
location but always with lower priority to any other explicitly defined
backup locations.
I would rather *always* use the default location as backup for all other locations. It does not require any API or schema (as it equals "all servers" except "servers in this location", which can be easily calculated on the fly).
We can start with this, but it works well only in a star topology where all other locations connect to one central location. As soon as you have a multi-tier star topology, with hub locations to which regional locations connect, this is wasteful.

This can be later on extended in whatever direction we want without any
upgrade/migration problem.

More importantly, all the schema and API will be common for all other variants anyway, so we can start doing so and see how much time is left when it is done.
I am ok with this for the first step.
After all location is mostly about the "normal" case where clients want
to reach the local servers, the backup part is only an additional
feature we can keep simple for now. It's a degraded mode of operation
anyway so it is probably ok to have just one default backup location as
a starting point.
Okay, now we are in agreement. I will think about minimal schema and API over the weekend.
Well, it took longer than one weekend.

There were a couple of changes in the design document:
* Feature Management: CLI proposal
* Feature Management: web UI - idea with topology graph replaced the original complicated table
* Feature Management: described necessary configuration outside of IPA DNS
* Version 1 parts which were moved into a separate document:
* Assumptions: removed misleading reference to DHCP, clarified role of DNS
* Assumptions: removed misleading mention of 'different networks' and added a summary explaining how Location is defined
* Implementation: high-level outline added

Current version:

Full diff:

Practical usage is described in section How to test:

I will think about LDAP schema after we agree on CLI.

Petr^2 Spacek


- Weights for backup location servers are the same as the weight defined
within the backup location itself, so no additional weights are defined
for backups.
Yes, that was somehow implied in the variant A. Sorry for not
mentioning it.
Weight is always relative number for servers inside one location.
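To illustrate why weight only matters relative to other servers in the same location: per RFC 2782, a DNS client picks among SRV records of equal priority with probability proportional to their weights. A simplified sketch of that client-side selection (not the exact RFC 2782 ordering algorithm, and not FreeIPA code):

```python
# Simplified sketch of RFC 2782 weighted selection among SRV records
# that share one priority (i.e. the servers of a single location).
# A server is chosen with probability proportional to its weight,
# so only the ratios between weights in the group matter.
import random

def pick_server(records, rng=random):
    """records: list of (weight, hostname) tuples with equal priority."""
    total = sum(weight for weight, _ in records)
    threshold = rng.uniform(0, total)
    running = 0
    for weight, host in records:
        running += weight
        if running >= threshold:
            return host
    return records[-1][1]  # fallback, e.g. when all weights are 0
```

With weights (75, 25) the first server is chosen roughly three times as often as the second; multiplying all weights in a location by the same factor changes nothing.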
Ok, it looked a lot more complex from your description.

Design review:

You missed a warning for when there is no backup DNS server in a location.
Thanks, added.

"Number of IPA DNS servers <= number of configured IPA locations" - I don't think this is right.

You need at least one DNS server per location, thus DNS servers >= locations.
Good catch, fixed.

Design (Version 1: DNAME per client): the link to the design doesn't work for me.
Oh, my wiki-fu was weak. Fixed.

CLI looks good to me. Maybe we should explicitly write in the design that priorities of the SRV records will be set statically (what values? 0 - servers in location, 100 - backup?)
I've added a note about static priorities. Particular values are just an implementation detail, so I would not clutter the feature management section with them.
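For illustration only (the concrete priority values are an implementation detail; 0 for in-location servers and 100 for backups are just the example values floated above), the SRV records served to a client in one location might look roughly like:

```
; Hypothetical view for a client in location 'prague',
; assuming static priority 0 for in-location servers and 100 for backups.
; Format: priority weight port target
_ldap._tcp.ipa.test.    IN SRV 0   100 389 server1.ipa.test. ; local
_ldap._tcp.ipa.test.    IN SRV 0    50 389 server2.ipa.test. ; local
_ldap._tcp.ipa.test.    IN SRV 100 100 389 server3.ipa.test. ; backup
```

Clients try all priority-0 servers (weighted 100:50) before falling back to priority 100.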
If a server can be only in one location, why bother with
location-{add,mod,remove}-member and not use server-mod:

     server-mod <FQDN> --location=<NAME> [--location-weight=0..65535]

? This is the natural way to model one-to-many relationships in the API,
consistent with existing stuff.
I originally wanted to have a location-add-member command so (external) DNS
servers and IPA servers can be assigned to a location using the same command:
location-add-member     LOCATION_NAME --ipa-server=<FQDN>
location-add-member     LOCATION_NAME --advertising-server=<server/view ID>

Should I split this between
server-mod <FQDN> --location=<NAME> [--location-weight=0..65535]
dnsserver-mod <server/view ID> --type=external --advertise-location=...

I do not like splitting server-to-location assignment management between two
commands very much. The current proposal in the design page was inspired by the
group-add-member command, which has --users and --groups options that seemed
philosophically similar to me.

Anyway, I'm open to suggestions how to handle this.
Honza and I are playing with the idea that Server Roles can be re-used for
Locations, too.

The rough idea is that the 'advertising' server will have a role like 'DNS
Location XYZ DNS server' and that the member server will have a role like 'IPA
master in location XYZ'.

(Pick your own names, these are just examples.)

Obvious advantage is consistency in the user interface, which is something we
really need.

The question is where to put the equivalent of the --weight option.

This would make location-add-member command unnecessary.

Today I found out that I misunderstood how non-IPA SRV records will work with the DNS locations feature.

I expected that other SRV records stored in the IPA domain would be copied unchanged to locations, and that only SRV records for IPA services would have their priorities altered.

However, DNS locations *will not* handle SRV records other than IPA's own, which effectively means that custom user SRV records will disappear for hosts that belong to a location.

domain: ipa.test
server: server.ipa.test
custom SRV record in IPA domain: _userservice._udp SRV record: 0 100 123 server.ipa.test.

The record above will not be accessible from clients that connect to a server with locations enabled. I think that users may have their own services on an IPA server with custom SRV records; I don't consider this behavior user-friendly, and I consider it a blocker for deployment of this feature.

NACK to design from me. We should fix this.

