Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-04-22 Thread Aleš Mareček
Design doc reviewed. Some minor specifications were discussed with Petr and Martin
and added to the doc. UQE_ACK.
Thanks,
 - alich -

- Original Message -
> From: "Martin Basti" <mba...@redhat.com>
> To: "Simo Sorce" <s...@redhat.com>, "Petr Spacek" <pspa...@redhat.com>
> Cc: freeipa-devel@redhat.com
> Sent: Thursday, April 21, 2016 7:39:02 PM
> Subject: Re: [Freeipa-devel] Locations design v2: LDAP schema & user  
> interface
> 
> 
> 
> On 21.04.2016 18:58, Simo Sorce wrote:
> > On Thu, 2016-04-21 at 17:39 +0200, Petr Spacek wrote:
> >> On 19.4.2016 19:17, Simo Sorce wrote:
> >>> On Tue, 2016-04-19 at 11:11 +0200, Petr Spacek wrote:
> >>>> On 18.4.2016 21:33, Simo Sorce wrote:
> >>>>> On Mon, 2016-04-18 at 17:44 +0200, Petr Spacek wrote:
> >>>>>> * Find, filter and copy hand-made records from main tree into the
> >>>>>> _locations sub-trees. This means that every hand-made record
> >>>>>> needs to be copied and synchronized N-times where N = number of IPA
> >>>>>> locations.
> >>>>> This ^^ seems the one that provides the best semantics for admins and
> >>>>> the
> >>>>> least unexpected results.
> >>>>>
> >>>>>> My favorite option for the first version is 'document that enabling
> >>>>>> DNS location will hide hand-made records in IPA domain.'
> >>>>> I do not think this is acceptable, sorry.
> >>>>>
> >>>>>> The feature is disabled by default and needs additional configuration
> >>>>>> anyway so simply upgrading should not break anything.
> >>>>> It is also useless this way.
> >>>>>
> >>>>>> I'm eager to hear opinions and answers to questions above.
> >>>>> HTH,
> >>>> Well it does not help because you did not answer the questions listed in
> >>>> the
> >>>> design page.
> >>>>
> >>>> Anyway, here is the third version of the design. It avoids copying user-made
> >>>> records (basically 2 DNAMEs were replaced with a bunch of CNAMEs):
> >>>>
> >>>> http://www.freeipa.org/page/V4/DNS_Location_Mechanism#Design_.28Version_3:_CNAME_per_service_name.29
> >>>>
> >>>> It seems like a good middle ground:
> >>>> http://www.freeipa.org/page/V4/DNS_Location_Mechanism#Comparison_of_proposals
> >>> It does seem like a decent middle ground.
> >>> And I guess an admin would be able to add custom templates if he wants
> >>> to have specific services forwarded to the location-specific subtree?
> >> Yes, bind-dyndb-ldap's RecordGenerator and PerServerConfigInLDAP are
> >> generic enough. At the moment we do not plan to expose these mechanisms in the
> >> user interface; we might do that later on.
> >>
> >>
> >>>> This required changes in RecordGenerator design, too:
> >>>> https://fedorahosted.org/bind-dyndb-ldap/wiki/Design/RecordGenerator
> >>> I do not see where you specify the specific record names you forward to
> >>> the location trees here?
> >> I do not understand the question. Let's have a look at the example:
> >>
> >> # DN specifies DNS node name which will hold the generated record:
> >> dn: idnsName=_udp,idnsname=example.com.,cn=dns,dc=example,dc=com
> >> # this is equivalent to _udp.example.com.
> >>
> >> objectClass: idnsTemplateObject
> >> objectClass: top
> >> objectClass: idnsRecord
> >> idnsName: _udp
> >>
> >> # sub-type determines type of the generated record = DNAME
> >> idnsTemplateAttribute;dnamerecord:
> >> _udp.\{substitutionvariable_ipalocation\}._locations
> >> # generated value will be _udp.your-location._locations
> >> # it is a relative name so zone name (example.com) will be automatically
> >> appended
> >>
> >> The template is just a string, so you can specify an absolute name if you
> >> want:
> >> idnsTemplateAttribute;dnamerecord:
> >> _udp.\{substitutionvariable_ipalocation\}._locations.another.zone.example.
> >>
> >> Of course 'ipalocation' is just a variable name, so the user can define his
> >> own in PerServerConfigInLDAP.
> >>
> >> Is it clearer now?
> > Sorry I thought you said in option 3 that you would only create records
> > for specific 

Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-04-21 Thread Martin Basti



On 21.04.2016 18:58, Simo Sorce wrote:

On Thu, 2016-04-21 at 17:39 +0200, Petr Spacek wrote:

On 19.4.2016 19:17, Simo Sorce wrote:

On Tue, 2016-04-19 at 11:11 +0200, Petr Spacek wrote:

On 18.4.2016 21:33, Simo Sorce wrote:

On Mon, 2016-04-18 at 17:44 +0200, Petr Spacek wrote:

* Find, filter and copy hand-made records from main tree into the
_locations sub-trees. This means that every hand-made record
needs to be copied and synchronized N-times where N = number of IPA
locations.

This ^^ seems the one that provides the best semantics for admins and the
least unexpected results.


My favorite option for the first version is 'document that enabling
DNS location will hide hand-made records in IPA domain.'

I do not think this is acceptable, sorry.


The feature is disabled by default and needs additional configuration
anyway so simply upgrading should not break anything.

It is also useless this way.


I'm eager to hear opinions and answers to questions above.

HTH,

Well it does not help because you did not answer the questions listed in the
design page.

Anyway, here is the third version of the design. It avoids copying user-made
records (basically 2 DNAMEs were replaced with a bunch of CNAMEs):

http://www.freeipa.org/page/V4/DNS_Location_Mechanism#Design_.28Version_3:_CNAME_per_service_name.29

It seems like a good middle ground:
http://www.freeipa.org/page/V4/DNS_Location_Mechanism#Comparison_of_proposals

It does seem like a decent middle ground.
And I guess an admin would be able to add custom templates if he wants
to have specific services forwarded to the location-specific subtree?

Yes, bind-dyndb-ldap's RecordGenerator and PerServerConfigInLDAP are
generic enough. At the moment we do not plan to expose these mechanisms in the
user interface; we might do that later on.



This required changes in RecordGenerator design, too:
https://fedorahosted.org/bind-dyndb-ldap/wiki/Design/RecordGenerator

I do not see where you specify the specific record names you forward to
the location trees here?

I do not understand the question. Let's have a look at the example:

# DN specifies DNS node name which will hold the generated record:
dn: idnsName=_udp,idnsname=example.com.,cn=dns,dc=example,dc=com
# this is equivalent to _udp.example.com.

objectClass: idnsTemplateObject
objectClass: top
objectClass: idnsRecord
idnsName: _udp

# sub-type determines type of the generated record = DNAME
idnsTemplateAttribute;dnamerecord:
_udp.\{substitutionvariable_ipalocation\}._locations
# generated value will be _udp.your-location._locations
# it is a relative name so zone name (example.com) will be automatically 
appended

The template is just a string, so you can specify an absolute name if you want:
idnsTemplateAttribute;dnamerecord:
_udp.\{substitutionvariable_ipalocation\}._locations.another.zone.example.

Of course 'ipalocation' is just a variable name, so the user can define his own
in PerServerConfigInLDAP.

Is it clearer now?

Sorry, I thought you said in option 3 that you would only create records
for specific services using CNAMEs.
I was looking for how you configure which services you are going to pick
in that case and couldn't see it.
This example is a DNAME one and looks to me like it is about option 2?

I put an image for version 3 there and fixed some implementation details.
I will add more implementation details tomorrow.


Basically, IPA knows which services run on which server (except NTP, which will
be fixed), so based on this we are able to generate proper SRV records
in all locations and mark the original one with the attribute
'idnsTemplateAttribute;cnamerecord'. Please see the example here; I will
refer to it later:
http://www.freeipa.org/page/V4/DNS_Location_Mechanism#CNAME_data_generation



If a server is not configured to provide location-specific data,
or the server is old, the original SRV record (marked with
'idnsTemplateAttribute') will be used. If a server is configured
to provide location-specific data, bind-dyndb-ldap will replace the
original SRV record with a CNAME according to the location.


Other SRV records (those not marked by 'idnsTemplateAttribute') are 
untouched.
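
For illustration, here is a minimal LDIF sketch of such a marked record (the
service name, SRV data and DN below are hypothetical; they only mirror the shape
of the DNAME example quoted earlier in this thread and the CNAME section of the
design page):

# hypothetical SRV record marked for location-aware CNAME generation
dn: idnsname=_kerberos._udp,idnsname=example.com.,cn=dns,dc=example,dc=com
objectClass: idnsTemplateObject
objectClass: top
objectClass: idnsRecord
idnsName: _kerberos._udp
# original SRV record; old or unconfigured servers keep serving this value
sRVRecord: 0 100 88 server1.example.com.
# location-aware servers replace the SRV record above with this generated CNAME
idnsTemplateAttribute;cnamerecord: _kerberos._udp.\{substitutionvariable_ipalocation\}._locations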


Martin



Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-04-21 Thread Simo Sorce
On Thu, 2016-04-21 at 17:39 +0200, Petr Spacek wrote:
> On 19.4.2016 19:17, Simo Sorce wrote:
> > On Tue, 2016-04-19 at 11:11 +0200, Petr Spacek wrote:
> >> On 18.4.2016 21:33, Simo Sorce wrote:
> >>> On Mon, 2016-04-18 at 17:44 +0200, Petr Spacek wrote:
>  * Find, filter and copy hand-made records from main tree into the
>  _locations sub-trees. This means that every hand-made record
>  needs to be copied and synchronized N-times where N = number of IPA
>  locations.
> >>>
> >>> This ^^ seems the one that provides the best semantics for admins and the
> >>> least unexpected results.
> >>>
>  My favorite option for the first version is 'document that enabling
>  DNS location will hide hand-made records in IPA domain.'
> >>>
> >>> I do not think this is acceptable, sorry.
> >>>
>  The feature is disabled by default and needs additional configuration
>  anyway so simply upgrading should not break anything.
> >>>
> >>> It is also useless this way.
> >>>
>  I'm eager to hear opinions and answers to questions above.
> >>>
> >>> HTH,
> >>
> >> Well it does not help because you did not answer the questions listed in 
> >> the
> >> design page.
> >>
> >> Anyway, here is the third version of the design. It avoids copying user-made
> >> records (basically 2 DNAMEs were replaced with a bunch of CNAMEs):
> >>
> >> http://www.freeipa.org/page/V4/DNS_Location_Mechanism#Design_.28Version_3:_CNAME_per_service_name.29
> >>
> >> It seems like a good middle ground:
> >> http://www.freeipa.org/page/V4/DNS_Location_Mechanism#Comparison_of_proposals
> > 
> > It does seem like a decent middle ground.
> > And I guess an admin would be able to add custom templates if he wants
> > to have specific services forwarded to the location-specific subtree?
> 
> Yes, bind-dyndb-ldap's RecordGenerator and PerServerConfigInLDAP are
> generic enough. At the moment we do not plan to expose these mechanisms in the
> user interface; we might do that later on.
> 
> 
> >> This required changes in RecordGenerator design, too:
> >> https://fedorahosted.org/bind-dyndb-ldap/wiki/Design/RecordGenerator
> > 
> > I do not see where you specify the specific record names you forward to
> > the location trees here?
> 
> I do not understand the question. Let's have a look at the example:
> 
> # DN specifies DNS node name which will hold the generated record:
> dn: idnsName=_udp,idnsname=example.com.,cn=dns,dc=example,dc=com
> # this is equivalent to _udp.example.com.
> 
> objectClass: idnsTemplateObject
> objectClass: top
> objectClass: idnsRecord
> idnsName: _udp
> 
> # sub-type determines type of the generated record = DNAME
> idnsTemplateAttribute;dnamerecord:
> _udp.\{substitutionvariable_ipalocation\}._locations
> # generated value will be _udp.your-location._locations
> # it is a relative name so zone name (example.com) will be automatically 
> appended
> 
> The template is just a string, so you can specify an absolute name if you want:
> idnsTemplateAttribute;dnamerecord:
> _udp.\{substitutionvariable_ipalocation\}._locations.another.zone.example.
> 
> Of course 'ipalocation' is just a variable name, so the user can define his own
> in PerServerConfigInLDAP.
> 
> Is it clearer now?

Sorry, I thought you said in option 3 that you would only create records
for specific services using CNAMEs.
I was looking for how you configure which services you are going to pick
in that case and couldn't see it.
This example is a DNAME one and looks to me like it is about option 2?

-- 
Simo Sorce * Red Hat, Inc * New York



Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-04-21 Thread Petr Spacek
On 19.4.2016 19:17, Simo Sorce wrote:
> On Tue, 2016-04-19 at 11:11 +0200, Petr Spacek wrote:
>> On 18.4.2016 21:33, Simo Sorce wrote:
>>> On Mon, 2016-04-18 at 17:44 +0200, Petr Spacek wrote:
 * Find, filter and copy hand-made records from main tree into the
 _locations sub-trees. This means that every hand-made record
 needs to be copied and synchronized N-times where N = number of IPA
 locations.
>>>
>>> This ^^ seems the one that provides the best semantics for admins and the
>>> least unexpected results.
>>>
 My favorite option for the first version is 'document that enabling
 DNS location will hide hand-made records in IPA domain.'
>>>
>>> I do not think this is acceptable, sorry.
>>>
 The feature is disabled by default and needs additional configuration
 anyway so simply upgrading should not break anything.
>>>
>>> It is also useless this way.
>>>
 I'm eager to hear opinions and answers to questions above.
>>>
>>> HTH,
>>
>> Well it does not help because you did not answer the questions listed in the
>> design page.
>>
>> Anyway, here is the third version of the design. It avoids copying user-made
>> records (basically 2 DNAMEs were replaced with a bunch of CNAMEs):
>>
>> http://www.freeipa.org/page/V4/DNS_Location_Mechanism#Design_.28Version_3:_CNAME_per_service_name.29
>>
>> It seems like a good middle ground:
>> http://www.freeipa.org/page/V4/DNS_Location_Mechanism#Comparison_of_proposals
> 
> It does seem like a decent middle ground.
> And I guess an admin would be able to add custom templates if he wants
> to have specific services forwarded to the location-specific subtree?

Yes, bind-dyndb-ldap's RecordGenerator and PerServerConfigInLDAP are
generic enough. At the moment we do not plan to expose these mechanisms in the
user interface; we might do that later on.


>> This required changes in RecordGenerator design, too:
>> https://fedorahosted.org/bind-dyndb-ldap/wiki/Design/RecordGenerator
> 
> I do not see where you specify the specific record names you forward to
> the location trees here?

I do not understand the question. Let's have a look at the example:

# DN specifies DNS node name which will hold the generated record:
dn: idnsName=_udp,idnsname=example.com.,cn=dns,dc=example,dc=com
# this is equivalent to _udp.example.com.

objectClass: idnsTemplateObject
objectClass: top
objectClass: idnsRecord
idnsName: _udp

# sub-type determines type of the generated record = DNAME
idnsTemplateAttribute;dnamerecord:
_udp.\{substitutionvariable_ipalocation\}._locations
# generated value will be _udp.your-location._locations
# it is a relative name so zone name (example.com) will be automatically 
appended
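
For illustration, assuming a server assigned to a location named 'prague' (a
hypothetical location name), the record generated from this template would be
roughly:

# sketch of the generated record; the owner name comes from the DN above
_udp.example.com. IN DNAME _udp.prague._locations.example.com.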

The template is just a string, so you can specify an absolute name if you want:
idnsTemplateAttribute;dnamerecord:
_udp.\{substitutionvariable_ipalocation\}._locations.another.zone.example.

Of course 'ipalocation' is just a variable name, so the user can define his own
in PerServerConfigInLDAP.
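
For illustration, a per-server definition of that variable might look roughly
like this (a sketch only; the DN layout, objectClass and attribute names are
assumptions based on the linked PerServerConfigInLDAP design, not taken from
this thread):

# hypothetical per-server entry defining the 'ipalocation' substitution variable
dn: idnsServerId=server1.example.com,cn=dns,dc=example,dc=com
objectClass: idnsServerConfigObject
idnsServerId: server1.example.com
idnsSubstitutionVariable;ipalocation: prague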

Is it clearer now?

Petr^2 Spacek


>> Also, CLI was updated to follow Honza's recommendations from previous 
>> e-mails:
>> http://www.freeipa.org/page/V4/DNS_Location_Mechanism#CLI
> 
> Thanks for updating all designs in concert.
> 
> Simo.



Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-04-19 Thread Simo Sorce
On Tue, 2016-04-19 at 11:11 +0200, Petr Spacek wrote:
> On 18.4.2016 21:33, Simo Sorce wrote:
> > On Mon, 2016-04-18 at 17:44 +0200, Petr Spacek wrote:
> >> * Find, filter and copy hand-made records from main tree into the
> >> _locations sub-trees. This means that every hand-made record
> >> needs to be copied and synchronized N-times where N = number of IPA
> >> locations.
> > 
> > This ^^ seems the one that provides the best semantics for admins and the
> > least unexpected results.
> > 
> >> My favorite option for the first version is 'document that enabling
> >> DNS location will hide hand-made records in IPA domain.'
> > 
> > I do not think this is acceptable, sorry.
> > 
> >> The feature is disabled by default and needs additional configuration
> >> anyway so simply upgrading should not break anything.
> > 
> > It is also useless this way.
> > 
> >> I'm eager to hear opinions and answers to questions above.
> > 
> > HTH,
> 
> Well it does not help because you did not answer the questions listed in the
> design page.
> 
> Anyway, here is the third version of the design. It avoids copying user-made
> records (basically 2 DNAMEs were replaced with a bunch of CNAMEs):
> 
> http://www.freeipa.org/page/V4/DNS_Location_Mechanism#Design_.28Version_3:_CNAME_per_service_name.29
> 
> It seems like a good middle ground:
> http://www.freeipa.org/page/V4/DNS_Location_Mechanism#Comparison_of_proposals

It does seem like a decent middle ground.
And I guess an admin would be able to add custom templates if he wants
to have specific services forwarded to the location-specific subtree?

> This required changes in RecordGenerator design, too:
> https://fedorahosted.org/bind-dyndb-ldap/wiki/Design/RecordGenerator

I do not see where you specify the specific record names you forward to
the location trees here?

> Also, CLI was updated to follow Honza's recommendations from previous e-mails:
> http://www.freeipa.org/page/V4/DNS_Location_Mechanism#CLI

Thanks for updating all designs in concert.

Simo.

-- 
Simo Sorce * Red Hat, Inc * New York



Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-04-19 Thread Petr Spacek
On 18.4.2016 21:33, Simo Sorce wrote:
> On Mon, 2016-04-18 at 17:44 +0200, Petr Spacek wrote:
>> * Find, filter and copy hand-made records from main tree into the
>> _locations sub-trees. This means that every hand-made record
>> needs to be copied and synchronized N-times where N = number of IPA
>> locations.
> 
> This ^^ seems the one that provides the best semantics for admins and the
> least unexpected results.
> 
>> My favorite option for the first version is 'document that enabling
>> DNS location will hide hand-made records in IPA domain.'
> 
> I do not think this is acceptable, sorry.
> 
>> The feature is disabled by default and needs additional configuration
>> anyway so simply upgrading should not break anything.
> 
> It is also useless this way.
> 
>> I'm eager to hear opinions and answers to questions above.
> 
> HTH,

Well it does not help because you did not answer the questions listed in the
design page.

Anyway, here is the third version of the design. It avoids copying user-made
records (basically 2 DNAMEs were replaced with a bunch of CNAMEs):

http://www.freeipa.org/page/V4/DNS_Location_Mechanism#Design_.28Version_3:_CNAME_per_service_name.29

It seems like a good middle ground:
http://www.freeipa.org/page/V4/DNS_Location_Mechanism#Comparison_of_proposals

This required changes in RecordGenerator design, too:
https://fedorahosted.org/bind-dyndb-ldap/wiki/Design/RecordGenerator

Also, CLI was updated to follow Honza's recommendations from previous e-mails:
http://www.freeipa.org/page/V4/DNS_Location_Mechanism#CLI


Please review.

-- 
Petr^2 Spacek



Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-04-18 Thread Simo Sorce
On Mon, 2016-04-18 at 17:44 +0200, Petr Spacek wrote:
> * Find, filter and copy hand-made records from main tree into the
> _locations sub-trees. This means that every hand-made record
> needs to be copied and synchronized N-times where N = number of IPA
> locations.

This ^^ seems the one that provides the best semantics for admins and the
least unexpected results.

> My favorite option for the first version is 'document that enabling
> DNS location will hide hand-made records in IPA domain.'

I do not think this is acceptable, sorry.

> The feature is disabled by default and needs additional configuration
> anyway so simply upgrading should not break anything.

It is also useless this way.

> I'm eager to hear opinions and answers to questions above.

HTH,
Simo.

-- 
Simo Sorce * Red Hat, Inc * New York



Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-04-18 Thread Petr Spacek
On 18.4.2016 17:44, Petr Spacek wrote:
> On 18.4.2016 16:42, Martin Basti wrote:
>>
>>
>> On 18.04.2016 15:22, Petr Spacek wrote:
>>> On 6.4.2016 10:57, Petr Spacek wrote:
 On 6.4.2016 10:50, Jan Cholasta wrote:
> On 4.4.2016 13:51, Petr Spacek wrote:
>> On 4.4.2016 13:39, Martin Basti wrote:
>>>
>>> On 31.03.2016 09:58, Petr Spacek wrote:
 On 26.2.2016 15:37, Petr Spacek wrote:
> On 25.2.2016 16:46, Simo Sorce wrote:
>> On Thu, 2016-02-25 at 15:54 +0100, Petr Spacek wrote:
>>> On 25.2.2016 15:28, Simo Sorce wrote:
 On Thu, 2016-02-25 at 14:45 +0100, Petr Spacek wrote:
> Variant C
> -
> An alternative is to be lazy and dumb. Maybe it would be enough 
> for
> the first
> round ...
>
> We would retain
> [first step - no change from variant A]
> * create locations
> * assign 'main' (aka 'primary' aka 'home') servers to locations
> ++ specify weights for the 'main' servers in given location, i.e.
> manually
> input (server, weight) tuples
>
> Then, backups would be auto-generated set of all remaining servers
> from all
> other locations.
>
> Additional storage complexity: 0
>
> This covers the scenario "always prefer local servers and use 
> remote
> only as
> fallback" easily. It does not cover any other scenario.
>
> This might be sufficient for the first run and would allow us to
> gather some
> feedback from the field.
>
> Now I'm inclined to this variant :-)
 To be honest, this is all I always had in mind, for the first step.

 To recap:
 - define a location with the list of servers (perhaps location is a
 property of server objects so you can have only one location per
 server,
 and if you remove the server it is automatically removed from the
 location w/o additional work or referential integrity necessary), 
 if
 weight is not defined (default) then they all have the same weight.
>>> Agreed.
>>>
>>>
 - Allow to specify backup locations in the location object, 
 priorities
 are calculated automatically and all backup locations have same
 weight.
>>> Hmm, weights have to be inherited from the original location in all
>>> cases. Did
>>> you mean that all backup locations have the same *priority*?
>> Yes, sorry.
>>
>>> Anyway, explicit configuration of backup locations is introducing
>>> API and
>>> schema for variant A and that is what I'm questioning above. It is
>>> hard to
>>> make it extensible so we do not have headache in future when 
>>> somebody
>>> decides
>>> that more flexibility is needed OR that link-based approach is 
>>> better.
>> I think no matter we do we'll need to allow admins to override backup
>> locations, in future if we can calculate them automatically admins 
>> will
>> simply not set any backup location explicitly (or set some special 
>> value
>> like "autogenerate" and the system will do it for them.
>>
>> Forcing admins to mentally calculate weights to force the system to
>> autogenerate the configuration they want would be a bad experience, I
>> personally would find it very annoying.
>>
>>> In other words, for doing what you propose above we would have to
>>> design
>>> complete schema and API for variant A anyway to make sure we do not
>>> lock
>>> ourselves, so we are not getting any saving by doing so.
>> Variant A seemed much more complicated to me, as you wanted to define a full
>> matrix for weights of servers when they are served as backups and all
>> that.
>>
 - Define a *default* location, which is the backup for any other
 location but always with lower priority to any other explicitly
 defined
 backup locations.
>>> I would rather *always* use the default location as backup for all
>>> other
>>> locations. It does not require any API or schema (as it equals to 
>>> "all
>>> servers" except "servers in this location" which can be easily
>>> calculated
>>> on fly).
>> We can start with this, but it works well only in a stellar topology
>> where you have a central location all other location connect to.
>> As soon as you have a super-stellar topology where 

Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-04-18 Thread Martin Basti



On 18.04.2016 15:22, Petr Spacek wrote:

On 6.4.2016 10:57, Petr Spacek wrote:

On 6.4.2016 10:50, Jan Cholasta wrote:

On 4.4.2016 13:51, Petr Spacek wrote:

On 4.4.2016 13:39, Martin Basti wrote:


On 31.03.2016 09:58, Petr Spacek wrote:

On 26.2.2016 15:37, Petr Spacek wrote:

On 25.2.2016 16:46, Simo Sorce wrote:

On Thu, 2016-02-25 at 15:54 +0100, Petr Spacek wrote:

On 25.2.2016 15:28, Simo Sorce wrote:

On Thu, 2016-02-25 at 14:45 +0100, Petr Spacek wrote:

Variant C
-
An alternative is to be lazy and dumb. Maybe it would be enough for
the first
round ...

We would retain
[first step - no change from variant A]
* create locations
* assign 'main' (aka 'primary' aka 'home') servers to locations
++ specify weights for the 'main' servers in given location, i.e.
manually
input (server, weight) tuples

Then, backups would be auto-generated set of all remaining servers
from all
other locations.

Additional storage complexity: 0

This covers the scenario "always prefer local servers and use remote
only as
fallback" easily. It does not cover any other scenario.

This might be sufficient for the first run and would allow us to
gather some
feedback from the field.

Now I'm inclined to this variant :-)
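
For illustration, a sketch of the DNS answer clients in one location could get
under variant C (the location name 'prague', the service, port, priorities and
weights below are all made up for this example):

# local 'main' servers: preferred (lowest priority), admin-specified weights
_ldap._tcp.example.com. IN SRV 0 100 389 prague-server1.example.com.
_ldap._tcp.example.com. IN SRV 0 50 389 prague-server2.example.com.
# auto-generated backups: all remaining servers from all other locations
_ldap._tcp.example.com. IN SRV 50 100 389 brno-server1.example.com.
_ldap._tcp.example.com. IN SRV 50 100 389 paris-server1.example.com.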

To be honest, this is all I always had in mind, for the first step.

To recap:
- define a location with the list of servers (perhaps location is a
property of server objects so you can have only one location per server,
and if you remove the server it is automatically removed from the
location w/o additional work or referential integrity necessary), if
weight is not defined (default) then they all have the same weight.

Agreed.



- Allow to specify backup locations in the location object, priorities
are calculated automatically and all backup locations have same weight.

Hmm, weights have to be inherited from the original location in all
cases. Did
you mean that all backup locations have the same *priority*?

Yes, sorry.


Anyway, explicit configuration of backup locations is introducing API and
schema for variant A and that is what I'm questioning above. It is hard to
make it extensible so we do not have a headache in the future when somebody
decides that more flexibility is needed OR that a link-based approach is better.

I think no matter what we do we'll need to allow admins to override backup
locations; in the future, if we can calculate them automatically, admins will
simply not set any backup location explicitly (or set some special value
like "autogenerate" and the system will do it for them).

Forcing admins to mentally calculate weights to force the system to
autogenerate the configuration they want would be a bad experience; I
personally would find it very annoying.


In other words, for doing what you propose above we would have to design
complete schema and API for variant A anyway to make sure we do not lock
ourselves, so we are not getting any saving by doing so.

Variant A seemed much more complicated to me, as you wanted to define a full
matrix for weights of servers when they are served as backups and all
that.


- Define a *default* location, which is the backup for any other
location but always with lower priority to any other explicitly defined
backup locations.

I would rather *always* use the default location as the backup for all other
locations. It does not require any API or schema (as it equals "all
servers" except "servers in this location", which can be easily calculated
on the fly).

We can start with this, but it works well only in a stellar topology
where you have a central location that all other locations connect to.
As soon as you have a super-stellar topology where you have a hub location
to which regional locations connect, this is wasteful.


This can be later on extended in whatever direction we want without any
upgrade/migration problem.

More importantly, all the schema and API will be common for all other
variants
anyway so we can start doing so and see how much time is left when it is
done.

I am ok with this for the first step.
After all location is mostly about the "normal" case where clients want
to reach the local servers, the backup part is only an additional
feature we can keep simple for now. It's a degraded mode of operation
anyway so it is probably ok to have just one default backup location as
a starting point.

Okay, now we are in agreement. I will think about minimal schema and API
over
the weekend.

Well, it took longer than one weekend.

There were a couple of changes in the design document:
* Feature Management: CLI proposal
* Feature Management: web UI - idea with topology graph replaced original
complicated table
* Feature Management: described necessary configuration outside of IPA DNS
* Version 1 parts which were moved into separate document:
V4/DNS_Location_Mechanism_with_per_client_override
* Assumptions: removed misleading reference to DHCP, clarified role of DNS
views
* Assumptions: removed misleading mention of 'different networks' and added
summary explaining how 

Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-04-18 Thread Petr Spacek
On 6.4.2016 10:57, Petr Spacek wrote:
> On 6.4.2016 10:50, Jan Cholasta wrote:
>> On 4.4.2016 13:51, Petr Spacek wrote:
>>> On 4.4.2016 13:39, Martin Basti wrote:


 On 31.03.2016 09:58, Petr Spacek wrote:
> On 26.2.2016 15:37, Petr Spacek wrote:
>> On 25.2.2016 16:46, Simo Sorce wrote:
>>> On Thu, 2016-02-25 at 15:54 +0100, Petr Spacek wrote:
 On 25.2.2016 15:28, Simo Sorce wrote:
> On Thu, 2016-02-25 at 14:45 +0100, Petr Spacek wrote:
>> Variant C
>> -
>> An alternative is to be lazy and dumb. Maybe it would be enough for
>> the first
>> round ...
>>
>> We would retain
>> [first step - no change from variant A]
>> * create locations
>> * assign 'main' (aka 'primary' aka 'home') servers to locations
>> ++ specify weights for the 'main' servers in given location, i.e.
>> manually
>> input (server, weight) tuples
>>
>> Then, backups would be auto-generated set of all remaining servers
>> from all
>> other locations.
>>
>> Additional storage complexity: 0
>>
>> This covers the scenario "always prefer local servers and use remote
>> only as
>> fallback" easily. It does not cover any other scenario.
>>
>> This might be sufficient for the first run and would allow us to
>> gather some
>> feedback from the field.
>>
>> Now I'm inclined to this variant :-)
> To be honest, this is all I always had in mind, for the first step.
>
> To recap:
> - define a location with the list of servers (perhaps location is a
> property of server objects so you can have only one location per 
> server,
> and if you remove the server it is automatically removed from the
> location w/o additional work or referential integrity necessary), if
> weight is not defined (default) then they all have the same weight.
 Agreed.


> - Allow to specify backup locations in the location object, priorities
> are calculated automatically and all backup locations have same 
> weight.
Hmm, weights have to be inherited from the original location in all
 cases. Did
 you mean that all backup locations have the same *priority*?
>>> Yes, sorry.
>>>
 Anyway, explicit configuration of backup locations is introducing API 
 and
 schema for variant A and that is what I'm questioning above. It is 
 hard to
 make it extensible so we do not have headache in future when somebody
 decides
 that more flexibility is needed OR that link-based approach is better.
>>> I think no matter we do we'll need to allow admins to override backup
>>> locations, in future if we can calculate them automatically admins will
>>> simply not set any backup location explicitly (or set some special value
>>> like "autogenerate" and the system will do it for them.
>>>
>>> Forcing admins to mentally calculate weights to force the system to
>>> autogenerate the configuration they want would be a bad experience; I
>>> personally would find it very annoying.
>>>
 In other words, for doing what you propose above we would have to 
 design
 complete schema and API for variant A anyway to make sure we do not 
 lock
 ourselves, so we are not getting any saving by doing so.
>>> Variant A seemed much more complicated to me, as you wanted to define a full
>>> matrix for weights of servers when they are served as backups and all
>>> that.
>>>
> - Define a *default* location, which is the backup for any other
> location but always with lower priority to any other explicitly 
> defined
> backup locations.
 I would rather *always* use the default location as backup for all 
 other
 locations. It does not require any API or schema (as it equals to "all
 servers" except "servers in this location" which can be easily 
 calculated
 on fly).
>>> We can start with this, but it works well only in a stellar topology
>>> where you have a central location that all other locations connect to.
>>> As soon as you have a super-stellar topology where you have a hub location
>>> to which regional locations connect, this is wasteful.
>>>
 This can be later on extended in whatever direction we want without any
 upgrade/migration problem.

 More importantly, all the schema and API will be common for all other
 variants
 anyway so we can start doing so and see how much time is left when it 
 is
 done.
>>> I am ok with this for the first step.
>>> After all location is mostly about the "normal" 

Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-04-14 Thread Jan Cholasta

On 6.4.2016 10:57, Petr Spacek wrote:

On 6.4.2016 10:50, Jan Cholasta wrote:

On 4.4.2016 13:51, Petr Spacek wrote:

On 4.4.2016 13:39, Martin Basti wrote:



On 31.03.2016 09:58, Petr Spacek wrote:

On 26.2.2016 15:37, Petr Spacek wrote:

On 25.2.2016 16:46, Simo Sorce wrote:

On Thu, 2016-02-25 at 15:54 +0100, Petr Spacek wrote:

On 25.2.2016 15:28, Simo Sorce wrote:

On Thu, 2016-02-25 at 14:45 +0100, Petr Spacek wrote:

Variant C
-
An alternative is to be lazy and dumb. Maybe it would be enough for
the first
round ...

We would retain
[first step - no change from variant A]
* create locations
* assign 'main' (aka 'primary' aka 'home') servers to locations
++ specify weights for the 'main' servers in given location, i.e.
manually
input (server, weight) tuples

Then, backups would be auto-generated set of all remaining servers
from all
other locations.

Additional storage complexity: 0

This covers the scenario "always prefer local servers and use remote
only as
fallback" easily. It does not cover any other scenario.

This might be sufficient for the first run and would allow us to
gather some
feedback from the field.

Now I'm inclined to this variant :-)

To be honest, this is all I always had in mind, for the first step.

To recap:
- define a location with the list of servers (perhaps location is a
property of server objects so you can have only one location per server,
and if you remove the server it is automatically removed from the
location w/o additional work or referential integrity necessary), if
weight is not defined (default) then they all have the same weight.

Agreed.



- Allow to specify backup locations in the location object, priorities
are calculated automatically and all backup locations have same weight.

Hmm, weights have to be inherited from the original location in all
cases. Did
you mean that all backup locations have the same *priority*?

Yes, sorry.


Anyway, explicit configuration of backup locations is introducing API and
schema for variant A and that is what I'm questioning above. It is hard to
make it extensible so we do not have a headache in the future when somebody
decides that more flexibility is needed OR that a link-based approach is better.

I think no matter what we do we'll need to allow admins to override backup
locations; in the future, if we can calculate them automatically, admins will
simply not set any backup location explicitly (or set some special value
like "autogenerate" and the system will do it for them).

Forcing admins to mentally calculate weights to force the system to
autogenerate the configuration they want would be a bad experience; I
personally would find it very annoying.


In other words, for doing what you propose above we would have to design
complete schema and API for variant A anyway to make sure we do not lock
ourselves, so we are not getting any saving by doing so.

Variant A seemed much more complicated to me, as you wanted to define a full
matrix for weights of servers when they are served as backups and all
that.


- Define a *default* location, which is the backup for any other
location but always with lower priority to any other explicitly defined
backup locations.

I would rather *always* use the default location as the backup for all other
locations. It does not require any API or schema (as it equals "all
servers" except "servers in this location", which can be easily calculated
on the fly).

We can start with this, but it works well only in a stellar topology
where you have a central location that all other locations connect to.
As soon as you have a super-stellar topology where you have a hub location
to which regional locations connect, this is wasteful.


This can be later on extended in whatever direction we want without any
upgrade/migration problem.

More importantly, all the schema and API will be common for all other
variants
anyway so we can start doing so and see how much time is left when it is
done.

I am ok with this for the first step.
After all location is mostly about the "normal" case where clients want
to reach the local servers, the backup part is only an additional
feature we can keep simple for now. It's a degraded mode of operation
anyway so it is probably ok to have just one default backup location as
a starting point.

Okay, now we are in agreement. I will think about minimal schema and API
over
the weekend.

Well, it took longer than one weekend.

There were a couple of changes in the design document:
* Feature Management: CLI proposal
* Feature Management: web UI - idea with topology graph replaced original
complicated table
* Feature Management: described necessary configuration outside of IPA DNS
* Version 1 parts which were moved into separate document:
V4/DNS_Location_Mechanism_with_per_client_override
* Assumptions: removed misleading reference to DHCP, clarified role of DNS
views
* Assumptions: removed misleading mention of 'different networks' and added
summary explaining how Location is defined
* Implementation: 

Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-04-06 Thread Petr Spacek
On 6.4.2016 10:50, Jan Cholasta wrote:
> On 4.4.2016 13:51, Petr Spacek wrote:
>> On 4.4.2016 13:39, Martin Basti wrote:
>>>
>>>
>>> On 31.03.2016 09:58, Petr Spacek wrote:
 On 26.2.2016 15:37, Petr Spacek wrote:
> On 25.2.2016 16:46, Simo Sorce wrote:
>> On Thu, 2016-02-25 at 15:54 +0100, Petr Spacek wrote:
>>> On 25.2.2016 15:28, Simo Sorce wrote:
 On Thu, 2016-02-25 at 14:45 +0100, Petr Spacek wrote:
> Variant C
> -
> An alternative is to be lazy and dumb. Maybe it would be enough for
> the first
> round ...
>
> We would retain
> [first step - no change from variant A]
> * create locations
> * assign 'main' (aka 'primary' aka 'home') servers to locations
> ++ specify weights for the 'main' servers in given location, i.e.
> manually
> input (server, weight) tuples
>
> Then, backups would be auto-generated set of all remaining servers
> from all
> other locations.
>
> Additional storage complexity: 0
>
> This covers the scenario "always prefer local servers and use remote
> only as
> fallback" easily. It does not cover any other scenario.
>
> This might be sufficient for the first run and would allow us to
> gather some
> feedback from the field.
>
> Now I'm inclined to this variant :-)
 To be honest, this is all I always had in mind, for the first step.

 To recap:
 - define a location with the list of servers (perhaps location is a
 property of server objects so you can have only one location per 
 server,
 and if you remove the server it is automatically removed from the
 location w/o additional work or referential integrity necessary), if
 weight is not defined (default) then they all have the same weight.
>>> Agreed.
>>>
>>>
 - Allow to specify backup locations in the location object, priorities
 are calculated automatically and all backup locations have same weight.
>>> Hmm, weights have to be inherited from the original location in all
>>> cases. Did
>>> you mean that all backup locations have the same *priority*?
>> Yes, sorry.
>>
>>> Anyway, explicit configuration of backup locations is introducing API 
>>> and
>>> schema for variant A and that is what I'm questioning above. It is hard 
>>> to
>>> make it extensible so we do not have headache in future when somebody
>>> decides
>>> that more flexibility is needed OR that link-based approach is better.
>> I think no matter what we do we'll need to allow admins to override backup
>> locations; in the future, if we can calculate them automatically, admins will
>> simply not set any backup location explicitly (or set some special value
>> like "autogenerate" and the system will do it for them).
>>
>> Forcing admins to mentally calculate weights to force the system to
>> autogenerate the configuration they want would be a bad experience; I
>> personally would find it very annoying.
>>
>>> In other words, for doing what you propose above we would have to design
>>> complete schema and API for variant A anyway to make sure we do not lock
>>> ourselves, so we are not getting any saving by doing so.
>> Variant A seemed much more complicated to me, as you wanted to define a full
>> matrix for weights of servers when they are served as backups and all
>> that.
>>
 - Define a *default* location, which is the backup for any other
 location but always with lower priority to any other explicitly defined
 backup locations.
>>> I would rather *always* use the default location as the backup for all other
>>> locations. It does not require any API or schema (as it equals "all servers"
>>> except "servers in this location", which can be easily calculated on the fly).
>> We can start with this, but it works well only in a stellar topology
>> where you have a central location that all other locations connect to.
>> As soon as you have a super-stellar topology where you have a hub location
>> to which regional locations connect, this is wasteful.
>>
>>> This can be later on extended in whatever direction we want without any
>>> upgrade/migration problem.
>>>
>>> More importantly, all the schema and API will be common for all other
>>> variants
>>> anyway so we can start doing so and see how much time is left when it is
>>> done.
>> I am ok with this for the first step.
>> After all location is mostly about the "normal" case where clients want
>> to reach the local servers, the backup part is only an additional
>> feature we can keep simple for now. It's a degraded mode of operation
>> anyway so it is probably 

Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-04-06 Thread Jan Cholasta

On 4.4.2016 13:51, Petr Spacek wrote:

On 4.4.2016 13:39, Martin Basti wrote:



On 31.03.2016 09:58, Petr Spacek wrote:

On 26.2.2016 15:37, Petr Spacek wrote:

On 25.2.2016 16:46, Simo Sorce wrote:

On Thu, 2016-02-25 at 15:54 +0100, Petr Spacek wrote:

On 25.2.2016 15:28, Simo Sorce wrote:

On Thu, 2016-02-25 at 14:45 +0100, Petr Spacek wrote:

Variant C
-
An alternative is to be lazy and dumb. Maybe it would be enough for
the first
round ...

We would retain
[first step - no change from variant A]
* create locations
* assign 'main' (aka 'primary' aka 'home') servers to locations
++ specify weights for the 'main' servers in given location, i.e.
manually
input (server, weight) tuples

Then, backups would be auto-generated set of all remaining servers
from all
other locations.

Additional storage complexity: 0

This covers the scenario "always prefer local servers and use remote
only as
fallback" easily. It does not cover any other scenario.

This might be sufficient for the first run and would allow us to
gather some
feedback from the field.

Now I'm inclined to this variant :-)

To be honest, this is all I always had in mind, for the first step.

To recap:
- define a location with the list of servers (perhaps location is a
property of server objects so you can have only one location per server,
and if you remove the server it is automatically removed from the
location w/o additional work or referential integrity necessary), if
weight is not defined (default) then they all have the same weight.

Agreed.



- Allow to specify backup locations in the location object, priorities
are calculated automatically and all backup locations have same weight.

Hmm, weights have to be inherited from the original location in all
cases. Did
you mean that all backup locations have the same *priority*?

Yes, sorry.


Anyway, explicit configuration of backup locations is introducing API and
schema for variant A and that is what I'm questioning above. It is hard to
make it extensible so we do not have a headache in the future when somebody
decides that more flexibility is needed OR that a link-based approach is better.

I think no matter what we do we'll need to allow admins to override backup
locations; in the future, if we can calculate them automatically, admins will
simply not set any backup location explicitly (or set some special value
like "autogenerate" and the system will do it for them).

Forcing admins to mentally calculate weights to force the system to
autogenerate the configuration they want would be a bad experience; I
personally would find it very annoying.


In other words, for doing what you propose above we would have to design
complete schema and API for variant A anyway to make sure we do not lock
ourselves, so we are not getting any saving by doing so.

Variant A seemed much more complicated to me, as you wanted to define a full
matrix for weights of servers when they are served as backups and all
that.


- Define a *default* location, which is the backup for any other
location but always with lower priority to any other explicitly defined
backup locations.

I would rather *always* use the default location as the backup for all other
locations. It does not require any API or schema (as it equals "all
servers" except "servers in this location", which can be easily calculated
on the fly).

We can start with this, but it works well only in a stellar topology
where you have a central location that all other locations connect to.
As soon as you have a super-stellar topology where you have a hub location
to which regional locations connect, this is wasteful.


This can be later on extended in whatever direction we want without any
upgrade/migration problem.

More importantly, all the schema and API will be common for all other
variants
anyway so we can start doing so and see how much time is left when it is
done.

I am ok with this for the first step.
After all location is mostly about the "normal" case where clients want
to reach the local servers, the backup part is only an additional
feature we can keep simple for now. It's a degraded mode of operation
anyway so it is probably ok to have just one default backup location as
a starting point.

Okay, now we are in agreement. I will think about minimal schema and API over
the weekend.

Well, it took longer than one weekend.

There were a couple of changes in the design document:
* Feature Management: CLI proposal
* Feature Management: web UI - idea with topology graph replaced original
complicated table
* Feature Management: described necessary configuration outside of IPA DNS
* Version 1 parts which were moved into separate document:
V4/DNS_Location_Mechanism_with_per_client_override
* Assumptions: removed misleading reference to DHCP, clarified role of DNS
views
* Assumptions: removed misleading mention of 'different networks' and added
summary explaining how Location is defined
* Implementation: high-level outline added

Current version:

Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-04-04 Thread Petr Spacek
On 4.4.2016 13:39, Martin Basti wrote:
> 
> 
> On 31.03.2016 09:58, Petr Spacek wrote:
>> On 26.2.2016 15:37, Petr Spacek wrote:
>>> On 25.2.2016 16:46, Simo Sorce wrote:
 On Thu, 2016-02-25 at 15:54 +0100, Petr Spacek wrote:
> On 25.2.2016 15:28, Simo Sorce wrote:
>> On Thu, 2016-02-25 at 14:45 +0100, Petr Spacek wrote:
>>> Variant C
>>> -
>>> An alternative is to be lazy and dumb. Maybe it would be enough for
>>> the first
>>> round ...
>>>
>>> We would retain
>>> [first step - no change from variant A]
>>> * create locations
>>> * assign 'main' (aka 'primary' aka 'home') servers to locations
>>> ++ specify weights for the 'main' servers in given location, i.e.
>>> manually
>>> input (server, weight) tuples
>>>
>>> Then, backups would be auto-generated set of all remaining servers
>>> from all
>>> other locations.
>>>
>>> Additional storage complexity: 0
>>>
>>> This covers the scenario "always prefer local servers and use remote
>>> only as
>>> fallback" easily. It does not cover any other scenario.
>>>
>>> This might be sufficient for the first run and would allow us to
>>> gather some
>>> feedback from the field.
>>>
>>> Now I'm inclined to this variant :-)
>> To be honest, this is all I always had in mind, for the first step.
>>
>> To recap:
>> - define a location with the list of servers (perhaps location is a
>> property of server objects so you can have only one location per server,
>> and if you remove the server it is automatically removed from the
>> location w/o additional work or referential integrity necessary), if
>> weight is not defined (default) then they all have the same weight.
> Agreed.
>
>
>> - Allow to specify backup locations in the location object, priorities
>> are calculated automatically and all backup locations have same weight.
> Hmm, weights have to be inherited from the original location in all
> cases. Did
> you mean that all backup locations have the same *priority*?
 Yes, sorry.

> Anyway, explicit configuration of backup locations is introducing API and
> schema for variant A and that is what I'm questioning above. It is hard to
> make it extensible so we do not have headache in future when somebody
> decides
> that more flexibility is needed OR that link-based approach is better.
I think no matter what we do we'll need to allow admins to override backup
locations; in the future, if we can calculate them automatically, admins will
simply not set any backup location explicitly (or set some special value
like "autogenerate" and the system will do it for them).

 Forcing admins to mentally calculate weights to force the system to
autogenerate the configuration they want would be a bad experience; I
 personally would find it very annoying.

> In other words, for doing what you propose above we would have to design
> complete schema and API for variant A anyway to make sure we do not lock
> ourselves, so we are not getting any saving by doing so.
Variant A seemed much more complicated to me, as you wanted to define a full
 matrix for weights of servers when they are served as backups and all
 that.

>> - Define a *default* location, which is the backup for any other
>> location but always with lower priority to any other explicitly defined
>> backup locations.
> I would rather *always* use the default location as the backup for all other
> locations. It does not require any API or schema (as it equals "all
> servers" except "servers in this location", which can be easily calculated
> on the fly).
 We can start with this, but it works well only in a stellar topology
where you have a central location that all other locations connect to.
As soon as you have a super-stellar topology where you have a hub location
to which regional locations connect, this is wasteful.

> This can be later on extended in whatever direction we want without any
> upgrade/migration problem.
>
> More importantly, all the schema and API will be common for all other
> variants
> anyway so we can start doing so and see how much time is left when it is
> done.
 I am ok with this for the first step.
 After all location is mostly about the "normal" case where clients want
 to reach the local servers, the backup part is only an additional
 feature we can keep simple for now. It's a degraded mode of operation
 anyway so it is probably ok to have just one default backup location as
 a starting point.
>>> Okay, now we are in agreement. I will think about minimal schema and API 
>>> over
>>> the weekend.
>> Well, it took longer than one weekend.
>>
>> There were a couple of changes in the design document:
>> * Feature Management: CLI proposal
>> 

Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-04-04 Thread Martin Basti



On 31.03.2016 09:58, Petr Spacek wrote:

On 26.2.2016 15:37, Petr Spacek wrote:

On 25.2.2016 16:46, Simo Sorce wrote:

On Thu, 2016-02-25 at 15:54 +0100, Petr Spacek wrote:

On 25.2.2016 15:28, Simo Sorce wrote:

On Thu, 2016-02-25 at 14:45 +0100, Petr Spacek wrote:

Variant C
-
An alternative is to be lazy and dumb. Maybe it would be enough for
the first
round ...

We would retain
[first step - no change from variant A]
* create locations
* assign 'main' (aka 'primary' aka 'home') servers to locations
++ specify weights for the 'main' servers in given location, i.e.
manually
input (server, weight) tuples

Then, backups would be auto-generated set of all remaining servers
from all
other locations.

Additional storage complexity: 0

This covers the scenario "always prefer local servers and use remote
only as
fallback" easily. It does not cover any other scenario.

This might be sufficient for the first run and would allow us to
gather some
feedback from the field.

Now I'm inclined to this variant :-)

To be honest, this is all I always had in mind, for the first step.

To recap:
- define a location with the list of servers (perhaps location is a
property of server objects so you can have only one location per server,
and if you remove the server it is automatically removed from the
location w/o additional work or referential integrity necessary), if
weight is not defined (default) then they all have the same weight.

Agreed.



- Allow to specify backup locations in the location object, priorities
are calculated automatically and all backup locations have same weight.

Hmm, weights have to be inherited from the original location in all cases. Did
you mean that all backup locations have the same *priority*?

Yes, sorry.


Anyway, explicit configuration of backup locations is introducing API and
schema for variant A and that is what I'm questioning above. It is hard to
make it extensible so we do not have a headache in the future when somebody decides
that more flexibility is needed OR that a link-based approach is better.

I think no matter what we do we'll need to allow admins to override backup
locations; in the future, if we can calculate them automatically, admins will
simply not set any backup location explicitly (or set some special value
like "autogenerate" and the system will do it for them).

Forcing admins to mentally calculate weights to force the system to
autogenerate the configuration they want would be a bad experience; I
personally would find it very annoying.


In other words, for doing what you propose above we would have to design
complete schema and API for variant A anyway to make sure we do not lock
ourselves, so we are not getting any saving by doing so.

Variant A seemed much more complicated to me, as you wanted to define a full
matrix for weights of servers when they are served as backups and all
that.


- Define a *default* location, which is the backup for any other
location but always with lower priority to any other explicitly defined
backup locations.

I would rather *always* use the default location as backup for all other
locations. It does not require any API or schema (as it equals "all
servers" except "servers in this location", which can be easily calculated on
the fly).

We can start with this, but it works well only in a star topology
where you have a central location that all other locations connect to.
As soon as you have a multi-level star topology where regional locations
connect to intermediate hub locations, this becomes wasteful.


This can be later on extended in whatever direction we want without any
upgrade/migration problem.

More importantly, all the schema and API will be common for all other variants
anyway so we can start doing so and see how much time is left when it is done.

I am ok with this for the first step.
After all location is mostly about the "normal" case where clients want
to reach the local servers, the backup part is only an additional
feature we can keep simple for now. It's a degraded mode of operation
anyway so it is probably ok to have just one default backup location as
a starting point.

Okay, now we are in agreement. I will think about minimal schema and API over
the weekend.

Well, it took longer than one weekend.

There were a couple of changes in the design document:
* ‎Feature Management: CLI proposal
* ‎Feature Management: web UI - idea with topology graph replaced original
complicated table
* Feature Management: described necessary configuration outside of IPA DNS
* Version 1 parts which were moved into separate document:
V4/DNS_Location_Mechanism_with_per_client_override
* ‎Assumptions: removed misleading reference to DHCP, clarified role of DNS 
views
* Assumptions: removed misleading mention of 'different networks' and added
summary explaining how Location is defined
* Implementation: high-level outline added

Current version:
http://www.freeipa.org/page/V4/DNS_Location_Mechanism

Full diff:

Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-03-31 Thread Petr Spacek
On 26.2.2016 15:37, Petr Spacek wrote:
> On 25.2.2016 16:46, Simo Sorce wrote:
>> On Thu, 2016-02-25 at 15:54 +0100, Petr Spacek wrote:
>>> On 25.2.2016 15:28, Simo Sorce wrote:
 On Thu, 2016-02-25 at 14:45 +0100, Petr Spacek wrote:
> Variant C
> -
> An alternative is to be lazy and dumb. Maybe it would be enough for
> the first
> round ...
>
> We would retain
> [first step - no change from variant A]
> * create locations
> * assign 'main' (aka 'primary' aka 'home') servers to locations
> ++ specify weights for the 'main' servers in given location, i.e.
> manually
> input (server, weight) tuples
>
> Then, backups would be auto-generated set of all remaining servers
> from all
> other locations.
>
> Additional storage complexity: 0
>
> This covers the scenario "always prefer local servers and use remote
> only as
> fallback" easily. It does not cover any other scenario.
>
> This might be sufficient for the first run and would allow us to
> gather some
> feedback from the field.
>
> Now I'm inclined to this variant :-)

 To be honest, this is all I always had in mind, for the first step.

 To recap:
 - define a location with the list of servers (perhaps location is a
 property of server objects so you can have only one location per server,
 and if you remove the server it is automatically removed from the
 location w/o additional work or referential integrity necessary), if
 weight is not defined (default) then they all have the same weight.
>>>
>>> Agreed.
>>>
>>>
 - Allow to specify backup locations in the location object, priorities
 are calculated automatically and all backup locations have same weight.
>>>
>>> Hmm, weights have to be inherited from the original location in all cases. 
>>> Did
>>> you mean that all backup locations have the same *priority*?
>>
>> Yes, sorry.
>>
>>> Anyway, explicit configuration of backup locations is introducing API and
>>> schema for variant A and that is what I'm questioning above. It is hard to
>>> make it extensible so we do not have headaches in the future when somebody 
>>> decides
>>> that more flexibility is needed OR that link-based approach is better.
>>
>> I think no matter what we do we'll need to allow admins to override backup
>> locations; in the future, if we can calculate them automatically, admins will
>> simply not set any backup location explicitly (or set some special value
>> like "autogenerate" and the system will do it for them).
>>
>> Forcing admins to mentally calculate weights to force the system to
>> autogenerate the configuration they want would be a bad experience, I
>> personally would find it very annoying.
>>
>>> In other words, for doing what you propose above we would have to design
>>> complete schema and API for variant A anyway to make sure we do not lock
>>> ourselves in, so we are not getting any savings by doing so.
>>
>> Variant A seemed much more complicated to me, as you wanted to define a full
>> matrix for weights of servers when they are served as backups and all
>> that.
>>
 - Define a *default* location, which is the backup for any other
 location but always with lower priority to any other explicitly defined
 backup locations.
>>>
>>> I would rather *always* use the default location as backup for all other
>>> locations. It does not require any API or schema (as it equals to "all
>>> servers" except "servers in this location" which can be easily calculated 
>>> on the fly).
>>
>> We can start with this, but it works well only in a star topology
>> where you have a central location that all other locations connect to.
>> As soon as you have a multi-level star topology where regional locations
>> connect to intermediate hub locations, this becomes wasteful.
>>
>>> This can be later on extended in whatever direction we want without any
>>> upgrade/migration problem.
>>>
>>> More importantly, all the schema and API will be common for all other 
>>> variants
>>> anyway so we can start doing so and see how much time is left when it is 
>>> done.
>>
>> I am ok with this for the first step.
>> After all location is mostly about the "normal" case where clients want
>> to reach the local servers, the backup part is only an additional
>> feature we can keep simple for now. It's a degraded mode of operation
>> anyway so it is probably ok to have just one default backup location as
>> a starting point.
> 
> Okay, now we are in agreement. I will think about minimal schema and API over
> the weekend.

Well, it took longer than one weekend.

There were a couple of changes in the design document:
* ‎Feature Management: CLI proposal
* ‎Feature Management: web UI - idea with topology graph replaced original
complicated table
* Feature Management: described necessary configuration outside of IPA DNS
* Version 1 parts which were moved into separate document:

Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-02-26 Thread Petr Spacek
On 25.2.2016 16:46, Simo Sorce wrote:
> On Thu, 2016-02-25 at 15:54 +0100, Petr Spacek wrote:
>> On 25.2.2016 15:28, Simo Sorce wrote:
>>> On Thu, 2016-02-25 at 14:45 +0100, Petr Spacek wrote:
 Variant C
 -
 An alternative is to be lazy and dumb. Maybe it would be enough for
 the first
 round ...

 We would retain
 [first step - no change from variant A]
 * create locations
 * assign 'main' (aka 'primary' aka 'home') servers to locations
 ++ specify weights for the 'main' servers in given location, i.e.
 manually
 input (server, weight) tuples

 Then, backups would be auto-generated set of all remaining servers
 from all
 other locations.

 Additional storage complexity: 0

 This covers the scenario "always prefer local servers and use remote
 only as
 fallback" easily. It does not cover any other scenario.

 This might be sufficient for the first run and would allow us to
 gather some
 feedback from the field.

 Now I'm inclined to this variant :-)
>>>
>>> To be honest, this is all I always had in mind, for the first step.
>>>
>>> To recap:
>>> - define a location with the list of servers (perhaps location is a
>>> property of server objects so you can have only one location per server,
>>> and if you remove the server it is automatically removed from the
>>> location w/o additional work or referential integrity necessary), if
>>> weight is not defined (default) then they all have the same weight.
>>
>> Agreed.
>>
>>
>>> - Allow to specify backup locations in the location object, priorities
>>> are calculated automatically and all backup locations have same weight.
>>
>> Hmm, weights have to be inherited from the original location in all cases. 
>> Did
>> you mean that all backup locations have the same *priority*?
> 
> Yes, sorry.
> 
>> Anyway, explicit configuration of backup locations is introducing API and
>> schema for variant A and that is what I'm questioning above. It is hard to
>> make it extensible so we do not have headaches in the future when somebody decides
>> that more flexibility is needed OR that link-based approach is better.
> 
> I think no matter what we do we'll need to allow admins to override backup
> locations; in the future, if we can calculate them automatically, admins will
> simply not set any backup location explicitly (or set some special value
> like "autogenerate" and the system will do it for them).
> 
> Forcing admins to mentally calculate weights to force the system to
> autogenerate the configuration they want would be a bad experience, I
> personally would find it very annoying.
> 
>> In other words, for doing what you propose above we would have to design
>> complete schema and API for variant A anyway to make sure we do not lock
>> ourselves in, so we are not getting any savings by doing so.
> 
> Variant A seemed much more complicated to me, as you wanted to define a full
> matrix for weights of servers when they are served as backups and all
> that.
> 
>>> - Define a *default* location, which is the backup for any other
>>> location but always with lower priority to any other explicitly defined
>>> backup locations.
>>
>> I would rather *always* use the default location as backup for all other
>> locations. It does not require any API or schema (as it equals to "all
>> servers" except "servers in this location" which can be easily calculated on 
>> fly).
> 
> We can start with this, but it works well only in a star topology
> where you have a central location that all other locations connect to.
> As soon as you have a multi-level star topology where regional locations
> connect to intermediate hub locations, this becomes wasteful.
> 
>> This can be later on extended in whatever direction we want without any
>> upgrade/migration problem.
>>
>> More importantly, all the schema and API will be common for all other 
>> variants
>> anyway so we can start doing so and see how much time is left when it is 
>> done.
> 
> I am ok with this for the first step.
> After all location is mostly about the "normal" case where clients want
> to reach the local servers, the backup part is only an additional
> feature we can keep simple for now. It's a degraded mode of operation
> anyway so it is probably ok to have just one default backup location as
> a starting point.

Okay, now we are in agreement. I will think about minimal schema and API over
the weekend.

Petr^2 Spacek


>>> - Weights for backup location servers are the same as the weight defined
>>> within the backup location itself, so no additional weights are defined
>>> for backups.
>>
>> Yes, that was implicitly assumed in variant A. Sorry for not mentioning it.
>> Weight is always a relative number among servers inside one location.
> 
> Ok it looked a lot more complex from your description.
> 
> Simo.


Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-02-25 Thread Simo Sorce
On Thu, 2016-02-25 at 15:54 +0100, Petr Spacek wrote:
> On 25.2.2016 15:28, Simo Sorce wrote:
> > On Thu, 2016-02-25 at 14:45 +0100, Petr Spacek wrote:
> >> Variant C
> >> -
> >> An alternative is to be lazy and dumb. Maybe it would be enough for
> >> the first
> >> round ...
> >>
> >> We would retain
> >> [first step - no change from variant A]
> >> * create locations
> >> * assign 'main' (aka 'primary' aka 'home') servers to locations
> >> ++ specify weights for the 'main' servers in given location, i.e.
> >> manually
> >> input (server, weight) tuples
> >>
> >> Then, backups would be auto-generated set of all remaining servers
> >> from all
> >> other locations.
> >>
> >> Additional storage complexity: 0
> >>
> >> This covers the scenario "always prefer local servers and use remote
> >> only as
> >> fallback" easily. It does not cover any other scenario.
> >>
> >> This might be sufficient for the first run and would allow us to
> >> gather some
> >> feedback from the field.
> >>
> >> Now I'm inclined to this variant :-)
> > 
> > To be honest, this is all I always had in mind, for the first step.
> > 
> > To recap:
> > - define a location with the list of servers (perhaps location is a
> > property of server objects so you can have only one location per server,
> > and if you remove the server it is automatically removed from the
> > location w/o additional work or referential integrity necessary), if
> > weight is not defined (default) then they all have the same weight.
> 
> Agreed.
> 
> 
> > - Allow to specify backup locations in the location object, priorities
> > are calculated automatically and all backup locations have same weight.
> 
> Hmm, weights have to be inherited from the original location in all cases. Did
> you mean that all backup locations have the same *priority*?

Yes, sorry.

> Anyway, explicit configuration of backup locations is introducing API and
> schema for variant A and that is what I'm questioning above. It is hard to
> make it extensible so we do not have headaches in the future when somebody decides
> that more flexibility is needed OR that link-based approach is better.

I think no matter what we do we'll need to allow admins to override backup
locations; in the future, if we can calculate them automatically, admins will
simply not set any backup location explicitly (or set some special value
like "autogenerate" and the system will do it for them).

Forcing admins to mentally calculate weights to force the system to
autogenerate the configuration they want would be a bad experience, I
personally would find it very annoying.

> In other words, for doing what you propose above we would have to design
> complete schema and API for variant A anyway to make sure we do not lock
> ourselves in, so we are not getting any savings by doing so.

Variant A seemed much more complicated to me, as you wanted to define a full
matrix for weights of servers when they are served as backups and all
that.

> > - Define a *default* location, which is the backup for any other
> > location but always with lower priority to any other explicitly defined
> > backup locations.
> 
> I would rather *always* use the default location as backup for all other
> locations. It does not require any API or schema (as it equals to "all
> servers" except "servers in this location" which can be easily calculated on 
> the fly).

We can start with this, but it works well only in a star topology
where you have a central location that all other locations connect to.
As soon as you have a multi-level star topology where regional locations
connect to intermediate hub locations, this becomes wasteful.

> This can be later on extended in whatever direction we want without any
> upgrade/migration problem.
> 
> More importantly, all the schema and API will be common for all other variants
> anyway so we can start doing so and see how much time is left when it is done.

I am ok with this for the first step.
After all location is mostly about the "normal" case where clients want
to reach the local servers, the backup part is only an additional
feature we can keep simple for now. It's a degraded mode of operation
anyway so it is probably ok to have just one default backup location as
a starting point.

> > - Weights for backup location servers are the same as the weight defined
> > within the backup location itself, so no additional weights are defined
> > for backups.
> 
> Yes, that was implicitly assumed in variant A. Sorry for not mentioning it.
> Weight is always a relative number among servers inside one location.

Ok it looked a lot more complex from your description.

Simo.

-- 
Simo Sorce * Red Hat, Inc * New York



Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-02-25 Thread Petr Spacek
On 25.2.2016 15:28, Simo Sorce wrote:
> On Thu, 2016-02-25 at 14:45 +0100, Petr Spacek wrote:
>> Variant C
>> -
>> An alternative is to be lazy and dumb. Maybe it would be enough for
>> the first
>> round ...
>>
>> We would retain
>> [first step - no change from variant A]
>> * create locations
>> * assign 'main' (aka 'primary' aka 'home') servers to locations
>> ++ specify weights for the 'main' servers in given location, i.e.
>> manually
>> input (server, weight) tuples
>>
>> Then, backups would be auto-generated set of all remaining servers
>> from all
>> other locations.
>>
>> Additional storage complexity: 0
>>
>> This covers the scenario "always prefer local servers and use remote
>> only as
>> fallback" easily. It does not cover any other scenario.
>>
>> This might be sufficient for the first run and would allow us to
>> gather some
>> feedback from the field.
>>
>> Now I'm inclined to this variant :-)
> 
> To be honest, this is all I always had in mind, for the first step.
> 
> To recap:
> - define a location with the list of servers (perhaps location is a
> property of server objects so you can have only one location per server,
> and if you remove the server it is automatically removed from the
> location w/o additional work or referential integrity necessary), if
> weight is not defined (default) then they all have the same weight.

Agreed.


> - Allow to specify backup locations in the location object, priorities
> are calculated automatically and all backup locations have same weight.

Hmm, weights have to be inherited from the original location in all cases. Did
you mean that all backup locations have the same *priority*?

Anyway, explicit configuration of backup locations introduces API and
schema for variant A, and that is what I'm questioning above. It is hard to
make it extensible so we do not have headaches in the future when somebody decides
that more flexibility is needed OR that the link-based approach is better.

In other words, to do what you propose above we would have to design the
complete schema and API for variant A anyway to make sure we do not lock
ourselves in, so we are not getting any savings by doing so.


> - Define a *default* location, which is the backup for any other
> location but always with lower priority to any other explicitly defined
> backup locations.

I would rather *always* use the default location as backup for all other
locations. It does not require any API or schema (as it equals "all
servers" except "servers in this location", which can be easily calculated on
the fly).
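A tiny sketch of that on-the-fly computation (illustrative Python with made-up
names, not IPA code): the default backup set for a location is simply every
known server that is not already a member of that location.

def default_backup_servers(all_servers, location_members):
    return sorted(set(all_servers) - set(location_members))

print(default_backup_servers(
    ["czserver1", "czserver2", "ukserver1", "usserver1"],
    ["czserver1", "czserver2"]))        # ['ukserver1', 'usserver1']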

This can be later on extended in whatever direction we want without any
upgrade/migration problem.

More importantly, all the schema and API will be common for all other variants
anyway so we can start doing so and see how much time is left when it is done.


> - Weights for backup location servers are the same as the weight defined
> within the backup location itself, so no additional weights are defined
> for backups.

Yes, that was implicitly assumed in variant A. Sorry for not mentioning it.
Weight is always a relative number among servers inside one location.

-- 
Petr^2 Spacek



Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-02-25 Thread Simo Sorce
On Thu, 2016-02-25 at 14:45 +0100, Petr Spacek wrote:
> Variant C
> -
> An alternative is to be lazy and dumb. Maybe it would be enough for
> the first
> round ...
> 
> We would retain
> [first step - no change from variant A]
> * create locations
> * assign 'main' (aka 'primary' aka 'home') servers to locations
> ++ specify weights for the 'main' servers in given location, i.e.
> manually
> input (server, weight) tuples
> 
> Then, backups would be auto-generated set of all remaining servers
> from all
> other locations.
> 
> Additional storage complexity: 0
> 
> This covers the scenario "always prefer local servers and use remote
> only as
> fallback" easily. It does not cover any other scenario.
> 
> This might be sufficient for the first run and would allow us to
> gather some
> feedback from the field.
> 
> Now I'm inclined to this variant :-)

To be honest, this is all I always had in mind, for the first step.

To recap:
- define a location with the list of servers (perhaps location is a
property of server objects so you can have only one location per server,
and if you remove the server it is automatically removed from the
location w/o additional work or referential integrity necessary), if
weight is not defined (default) then they all have the same weight.

- Allow to specify backup locations in the location object, priorities
are calculated automatically and all backup locations have same weight.

- Define a *default* location, which is the backup for any other
location but always with lower priority to any other explicitly defined
backup locations.

- Weights for backup location servers are the same as the weight defined
within the backup location itself, so no additional weights are defined
for backups.

Simo.

-- 
Simo Sorce * Red Hat, Inc * New York



Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-02-25 Thread Petr Spacek
On 24.2.2016 15:25, Simo Sorce wrote:
> On Wed, 2016-02-24 at 10:00 +0100, Martin Kosek wrote:
>> On 02/23/2016 06:59 PM, Petr Spacek wrote:
>>> On 23.2.2016 18:14, Simo Sorce wrote:
>> ...
 More seriously I think it is a great idea, but too premature to get all
 the way there now. We need to build schema and CLI that will allow us to
 get there without having to completely change interfaces if at all
 possible or minimizing any disruption in the tools.
>>>
>>> Actually the backwards compatibility is the main worry which led to this 
>>> idea
>>> with links.
>>>
>>> If we release the first version of locations with custom priorities etc. we will
>>> have to support the schema (which will be different) and API (which will be 
>>> later
>>> unnecessary) forever.
>>>
>>> If we skip this intermediate phase with hand-made configuration we can save
>>> all the headache with upgrades to more automatic solution later on.
>>>
>>>
>>> Maybe we should invert the order:
>>> Start with locations + links with administrative metric and add 
>>> hand-tweaking
>>> capabilities later (if necessary).
>>>
>>> IMHO locations + links with administrative metric will be easier to 
>>> implement
>>> than the first version.
>>>
>>> Just thinking aloud ...
>>
>> Makes sense to me, I would have the same worry as Petr, that we would break
>> something if we decide moving to links based solution later.
> 
> Maybe I am missing something, but in order to generate the proper SRV
> records we need priority and weights anyway, either by entering them
> manually or by autogenerating them from some other piece of information
> in the framework. So given this information is needed anyway why would
> it become a problem to retain it in the future if we enable a tool that
> simply autogenerates this information?

Let me clarify this:
You are right, in the end we always somehow get to priorities and weights.

TL;DR version
=
The difference is in the subtle details of how we get priorities and whether we
store them in LDAP and represent them in the API (or not). It will simplify things
if we do not expose them. I'm not convinced that we *need* to expose them in the first round.


TL version
==

At a high level the process is always as follows:
1. input tuples (location, server, weight) for all primary servers assigned to
locations
2. input or derive (location, server, priority) for all backups
3. generate SRV records using priority groups combined from the previous two 
steps

Now we are trying to decide whether step (2) should "input" or "derive" the
priorities for backup servers.
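To make step (3) concrete, here is a minimal sketch (illustrative Python, not
the FreeIPA implementation; all names are assumptions) that turns the
(server, weight) tuples from step (1) plus already-resolved backup priority
groups from step (2) into SRV-style (priority, weight, server) tuples for one
location:

def srv_records_for_location(primary, backups):
    # primary: list of (server, weight) tuples for the location's own servers
    # backups: list of (priority, [(server, weight), ...]) groups, already
    #          entered or derived in step (2)
    records = [(1, weight, server) for server, weight in primary]
    for priority, servers in backups:
        records.extend((priority, weight, server) for server, weight in servers)
    return records

# example: local CZ servers first, UK servers as the only backup group
cz_primary = [("czserver1", 50), ("czserver2", 50)]
cz_backups = [(2, [("ukserver1", 75), ("ukserver2", 25)])]
for prio, weight, srv in srv_records_for_location(cz_primary, cz_backups):
    print("_kerberos._udp SRV %d %d %s" % (prio, weight, srv))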


Variants


Variant A
-
If we let the user do everything manually (no links etc.) we need to
provide the following schema + API + user interface:
[first step - same in both variants]
* create locations
* assign 'main' (aka 'primary' aka 'home') servers to locations
++ specify weights for the 'main' servers in given location, i.e. manually
input (server, weight) tuples

[second step]
* specify backup servers for each location
++ assign (server, priority, weight) information for each non-main server
++ for S servers and L locations we need to represent up to
   S * L tuples (server, priority, weight) and provide means to manage it
++ most importantly, maintenance complexity of backups grows any time you add
one of (server OR location)
++ this would be a nightmare to manage. For simple cases this requires some
'include' mechanism to declare one location as backup for another location.
This 'include' complicates things significantly as it has a lot of corner cases
and requires a different LDAP schema when compared to direct server assignment.



Variant B
-
If we let the user specify only locations + links with costs we need to
provide the following schema + API + user interface:
[first step - no change from variant A]
* create locations
* assign 'main' (aka 'primary' aka 'home') servers to locations
++ specify weights for the 'main' servers in given location, i.e. manually
input (server, weight) tuples

[second step]
* create links between locations
++ manually assign point-to-point information + administrative cost
++ for S servers and L locations we need to represent up to
   L^2 tuples (from, to, cost) and provide means to manage it
++ storage can be optimized to a great extent if there are a lot of links with
equal cost; typically a full-mesh interconnection can be represented by a
single object in LDAP
* generate backups (i.e. priority assignment) using the usual routing algorithms
(see the sketch after this list). Priority needs neither to be exposed to the
user nor stored in LDAP at all.
++ most importantly, maintenance complexity of backups grows as you add
locations *but* you do not need to manually go through backup configuration for
(potentially) all locations every time you add/change/remove servers in
existing locations (which you have to do with variant A, unless you use some
smart includes ...).
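A rough sketch of the 'derive' step referenced above ("generate backups ...
using the usual routing algorithms"): plain Dijkstra over the (from, to, cost)
links, then ranking the other locations by distance. This is purely
illustrative; the function names and the exact priority numbering (closest
location gets SRV priority 2, the next one 3, ...) are assumptions, not part
of the design.

import heapq

def shortest_paths(links, start):
    graph = {}
    for a, b, cost in links:
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))   # links are bidirectional
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue
        for neighbour, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(queue, (nd, neighbour))
    return dist

def backup_priorities(links, location):
    dist = shortest_paths(links, location)
    others = sorted((d, loc) for loc, d in dist.items() if loc != location)
    # primary servers keep priority 1; closer locations get lower priority numbers
    return {loc: 2 + rank for rank, (_, loc) in enumerate(others)}

links = [("cz", "uk", 10), ("uk", "us", 50), ("cz", "us", 70)]
print(backup_priorities(links, "cz"))   # {'uk': 2, 'us': 3}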


Please note that variant B with (links, costs) does not use explicit priority

Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-02-24 Thread Simo Sorce
On Wed, 2016-02-24 at 10:00 +0100, Martin Kosek wrote:
> On 02/23/2016 06:59 PM, Petr Spacek wrote:
> > On 23.2.2016 18:14, Simo Sorce wrote:
> ...
> >> More seriously I think it is a great idea, but too premature to get all
> >> the way there now. We need to build schema and CLI that will allow us to
> >> get there without having to completely change interfaces if at all
> >> possible or minimizing any disruption in the tools.
> > 
> > Actually the backwards compatibility is the main worry which led to this 
> > idea
> > with links.
> > 
> > If we release first version of locations with custom priorities etc. we will
> > have to support the schema (which will be different) and API (which will be 
> > later
> > unnecessary) forever.
> > 
> > If we skip this intermediate phase with hand-made configuration we can save
> > all the headache with upgrades to more automatic solution later on.
> > 
> > 
> > Maybe we should invert the order:
> > Start with locations + links with administrative metric and add 
> > hand-tweaking
> > capabilities later (if necessary).
> > 
> > IMHO locations + links with administrative metric will be easier to 
> > implement
> > than the first version.
> > 
> > Just thinking aloud ...
> 
> Makes sense to me, I would have the same worry as Petr, that we would break
> something if we decide moving to links based solution later.

Maybe I am missing something, but in order to generate the proper SRV
records we need priority and weights anyway, either by entering them
manually or by autogenerating them from some other piece of information
in the framework. So given this information is needed anyway why would
it become a problem to retain it in the future if we enable a tool that
simply autogenerates this information?

Simo.

-- 
Simo Sorce * Red Hat, Inc * New York



Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-02-24 Thread Martin Kosek
On 02/23/2016 06:59 PM, Petr Spacek wrote:
> On 23.2.2016 18:14, Simo Sorce wrote:
...
>> More seriously I think it is a great idea, but too premature to get all
>> the way there now. We need to build schema and CLI that will allow us to
>> get there without having to completely change interfaces if at all
>> possible or minimizing any disruption in the tools.
> 
> Actually the backwards compatibility is the main worry which led to this idea
> with links.
> 
> If we release first version of locations with custom priorities etc. we will
> have to support the schema (which will be different) and API (which will be later
> unnecessary) forever.
> 
> If we skip this intermediate phase with hand-made configuration we can save
> all the headache with upgrades to more automatic solution later on.
> 
> 
> Maybe we should invert the order:
> Start with locations + links with administrative metric and add hand-tweaking
> capabilities later (if necessary).
> 
> IMHO locations + links with administrative metric will be easier to implement
> than the first version.
> 
> Just thinking aloud ...

Makes sense to me. I would have the same worry as Petr, that we would break
something if we decide to move to a link-based solution later.



Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-02-23 Thread Petr Spacek
On 23.2.2016 18:14, Simo Sorce wrote:
>> > Petr Vobornik mentioned an important question:
>> > Should we care about non-IPA services?
>> > 
>> > IMHO it is a valid point. It complicates things a lot as soon as we start
>> > introducing 'locations per service'. It is certainly doable but I would 
>> > like
>> > to avoid it.
>> > 
>> > It seems easy enough to support custom services as long as there is only 
>> > one
>> > set of locations (which match IPA locations). It would be a management
>> > nightmare to support N parallel locations for distinct sets of services.
>> > 
>> > As far as I can tell, AD can live with only 1 set of locations and that 
>> > sounds
>> > reasonable thing to support in the IPA management interface to me.
> I think one set of Locations is fine, but we need to be able to assign
> services to locations independently from "servers" in some cases, I
> think.
> 
> Mostly because a server can have service 1 and 3 but not service 2 and
> another server can have service 2 and 3 but not 1.

Hmm, I do not follow. Where is the problem? You can assign both servers to the
location and get all three services available; the SRV records from all
servers will simply be combined.
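To illustrate the point (a sketch only, with made-up server and service names,
not IPA code): records are generated per service for every server in the
location, so servers offering different subsets of services simply combine
into one SRV record set.

SERVICES = {"server-a": {"_ldap._tcp", "_kerberos._udp"},
            "server-b": {"_kerberos._udp", "_http._tcp"}}

def location_srv(servers, priority=1, weight=50):
    records = []
    for server in servers:
        for service in sorted(SERVICES[server]):
            records.append("%s SRV %d %d %s" % (service, priority, weight, server))
    return sorted(records)

for line in location_srv(["server-a", "server-b"]):
    print(line)
# all three services are advertised even though no single server runs them all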

We should have checks in IPA so it will not allow you to create a Frankenstein
IPA server which is missing LDAP or the like, but this will be needed only when
we decide to containerize - or when we allow selecting individual services
instead of servers :-)

For custom services, you are on your own. Still, I would say that assigning
services (instead of servers) to a location makes it *more* error prone, as
there is a bigger chance of omitting something (like selecting only service 1
from server A), not the other way around.

What did I miss?

[...]

 > >> Priority groups are harder because they express metric based on:
 > >> * communication costs,
 > >> * fail-over requirements,
 > >> * other political requirements in given deployment.
 > >> These are hard things to see from layer 7.
 > >>
 > >> Theoretically we can provide ipa-advise plugin to generate some 
 > >> initial set of
 > >> groups but this is going to be complicated and error prone.
 > >>
 > >> E.g. we can use ICMP ping or LDAP base DN search timings and use some
 > >> clustering algorithm to create priority groups using measured values.
 > >> This could work if we use some smart-enough clustering algorithm (= AI
 > >> library). And of course, we would have to do measurements from at 
 > >> least one
 > >> server in each location to properly define groups for each location 
 > >> ...
 > >>
 > >> It is not that easy as it might seem and I do not see an easy 
 > >> solution.
 > >>
 > >>
 > >> Maybe we should take evolutional approach:
 > >> Implement this 'expert' UI which exposes groups & weights to the user 
 > >> first.
 > >> (It will be necessary for special cases anyway.) When this is done, 
 > >> we can
 > >> play with it, do some usability testing (we can ask RH IT to see if 
 > >> it makes
 > >> sense to them, for example.)
 > >>
 > >> Later we can extend this with a 'simple' variant of UI based on 
 > >> feedback or
 > >> add the generator). This does not even need to happen in the same 
 > >> release.
 > >>
 > >> IMHO it would be better to start with something and refine it later 
 > >> on because
 > >> right now we are just hand-waving and have no idea what users 
 > >> actually do and
 > >> want.
>>> > > 
>>> > > As long as we establish a proper CLI I am ok with implementing a very
>>> > > bare bone UI first and improving it only later.
>>> > > 
>>> > > Btw we probably want to have this information reported by the topology
>>> > > view, and used to automatically group servers there based on location,
>>> > > so I CCed Petr to see if there is anything that would make that job
>>> > > easier/harder.
>> > 
>> > 
>> > We were kicking ideas around the drawing board in the Brno office. Finally,
>> > after many iterations we arrived to this:
>> > 
>> > Wouldn't it be easier to implement concept of sites and links between 
>> > sites at
>> > the same time? (In the AD spirit.)
>> > 
>> > If we knew the locations/sites and links between them, we could compute
>> > priority groups etc. algorithmically. Then the only remaining thing is weight,
>> > which can have a default so the admin does not have to touch it if not necessary.
> You would need to add weights to links, because just the fact there is a
> link tells you nothing about how big the link is between 2 locations, it

Oh, sure, I was thinking about link metric implicitly :-)


> also tells you nothing about the number of clients in a location which
> may influence how you want to distribute them around.

Do you have an example in mind? It sounds weird to me that you want to
distribute clients outside of the local site. If I understand you correctly it
means that the 

Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-02-23 Thread Simo Sorce
On Tue, 2016-02-23 at 18:04 +0100, Petr Spacek wrote:
> On 23.2.2016 15:19, Simo Sorce wrote:
> > On Tue, 2016-02-23 at 12:43 +0100, Petr Spacek wrote:
> >> On 23.2.2016 11:00, Jan Cholasta wrote:
> >>> Hi,
> >>>
> >>> On 19.2.2016 16:31, Simo Sorce wrote:
>  On Fri, 2016-02-19 at 08:58 +0100, Petr Spacek wrote:
> > On 4.2.2016 18:21, Petr Spacek wrote:
> >> On 3.2.2016 18:41, Petr Spacek wrote:
> >>> Hello,
> >>>
> >>> I've updated the design page
> >>> http://www.freeipa.org/page/V4/DNS_Location_Mechanism
> >>>
> >>> Namely it now contains 'Version 2'.
> >>
> >> Okay, here is the idea how we can make it flexible:
> >> http://www.freeipa.org/page/V4/DNS_Location_Mechanism#Implementation
> >
> > Hello,
> >
> > I'm thinking about LDAP schema for DNS locations.
> >
> > Purpose
> > ===
> > * Allow admins to define any number of locations.
> > * 1 DNS server advertises at most 1 location.
> > * 1 location generally contains set of services with different 
> > priorities and
> > weights (in DNS SRV terms).
> > * Express server & service priority for each defined location in a way 
> > which
> > is granular and flexible and at the same time easy to manage.
> >
> >
> > Proposal
> > 
> > a) Container for locations
> > --
> > cn=locations,cn=ipa,cn=etc,dc=example,dc=com
> >
> >
> > b) 1 location
> > -
> > Attributes:
> > 2.16.840.1.113730.3.8.5.32 idnsLocationMember
> > Server/service assigned to a DNS Location. Usually used to define 'main'
> > servers for that location. Should it point to service DNs to be sure we 
> > have
> > smooth upgrade to containers?
> >>>
> >>> Services always live on a host (call it server or not), so IMO it makes 
> >>> sense
> >>> to point to servers.
> >>
> >> Fine with me. We just need something which will be able to accommodate
> >> containerization without upgrade headache.
> >>
> >> Do I understand correctly that 1 container is going to have 1 host object 
> >> with
> >> one service object inside it?
> >>
> >> Like:
> >> cn=container
> >> - cn=DNS, cn=container
> >> ?
> > 
> > Do we think we will ever need to define different locations on a per
> > service basis ?
> 
> No, I hope that we will avoid this.
> 
> I suppose that when we containerize IPA things will get 'more interesting',
> but hopefully not in direction 'location per service'. If we have the
> possibility to have 1 DNS container and 2 CA containers attached to 1 LDAP
> container things will be complicated ...
> 
> I would expect that we will need some additional logic to ensure that one
> location advertises all the services (so e.g. LDAP is not missing in that
> particular location). I would not go beyond that.
> 
> > We based our hypothesis on the fact we only have one location and at most
> > different weights per service?
> > Is there anything in here that will make it hard for us should we change
> > our mind in future ? (I think the single _tcp DNAME may be an
> > architectural issue anyway, but that could be resolved perhaps by moving
> > the location DNAME on a per service basis in future should we need it?)
> 
> Yes, that is a possible approach but I hope it will not be necessary.
> 
> Hopefully the logic for assigning containers/server to locations can be made
> smart enough to guarantee that we do not need more layers of CNAME/DNAME hacks
> or so.
> 
> The DNAME trick should be seen as a cheap way to emulate views without
> actually using views. If you want more fancy things, go and use views on
> full-featured DNS server ...
> 
> 
> > 2.16.840.1.113730.3.8.5.33 idnsBackupLocation
> > Pointer to another location. Sucks in all servers from that location as 
> > one
> > group with the same priority. Easy to use with _default location where 
> > all
> > 'other' servers are used as backup.
> >
> > These two attributes use sub-type priority and 
> > relativeweight.
> > This is the only way I could express all the information without need 
> > for
> > separate objects.
> >>>
> >>> I don't see the benefit here. What is wrong with separate objects? Why is 
> >>> it
> >>> necessary to reinvent the wheel and abuse attribute sub-types for this, 
> >>> losing
> >>> schema integrity checks provided by DS and making the implementation more
> >>> complex along the way?
> >>
> >> AFAIK Simo did not like separate objects because we could not use 
> >> referential
> >> integrity plugin to prune references to removed servers.
> >>
> >> This can surely be done in framework, I do not insist on subtypes.
> >>
> >>
> >> Talk is cheap, show me your schema :-)
> > 
> > I had a preference, but I'm ok also with multiple objects, one per
> > server/service if we think this will make things easier to handle.
> > 
> > Given we are going to need a Server object 

Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-02-23 Thread Petr Spacek
On 23.2.2016 15:19, Simo Sorce wrote:
> On Tue, 2016-02-23 at 12:43 +0100, Petr Spacek wrote:
>> On 23.2.2016 11:00, Jan Cholasta wrote:
>>> Hi,
>>>
>>> On 19.2.2016 16:31, Simo Sorce wrote:
 On Fri, 2016-02-19 at 08:58 +0100, Petr Spacek wrote:
> On 4.2.2016 18:21, Petr Spacek wrote:
>> On 3.2.2016 18:41, Petr Spacek wrote:
>>> Hello,
>>>
>>> I've updated the design page
>>> http://www.freeipa.org/page/V4/DNS_Location_Mechanism
>>>
>>> Namely it now contains 'Version 2'.
>>
>> Okay, here is the idea how we can make it flexible:
>> http://www.freeipa.org/page/V4/DNS_Location_Mechanism#Implementation
>
> Hello,
>
> I'm thinking about LDAP schema for DNS locations.
>
> Purpose
> ===
> * Allow admins to define any number of locations.
> * 1 DNS server advertises at most 1 location.
> * 1 location generally contains set of services with different priorities 
> and
> weights (in DNS SRV terms).
> * Express server & service priority for each defined location in a way 
> which
> is granular and flexible and at the same time easy to manage.
>
>
> Proposal
> 
> a) Container for locations
> --
> cn=locations,cn=ipa,cn=etc,dc=example,dc=com
>
>
> b) 1 location
> -
> Attributes:
> 2.16.840.1.113730.3.8.5.32 idnsLocationMember
> Server/service assigned to a DNS Location. Usually used to define 'main'
> servers for that location. Should it point to service DNs to be sure we 
> have
> smooth upgrade to containers?
>>>
>>> Services always live on a host (call it server or not), so IMO it makes 
>>> sense
>>> to point to servers.
>>
>> Fine with me. We just need something which will be able to accommodate
>> containerization without upgrade headache.
>>
>> Do I understand correctly that 1 container is going to have 1 host object with
>> one service object inside it?
>>
>> Like:
>> cn=container
>> - cn=DNS, cn=container
>> ?
> 
> Do we think we will ever need to define different locations on a per
> service basis ?

No, I hope that we will avoid this.

I suppose that when we containerize IPA things will get 'more interesting',
but hopefully not in direction 'location per service'. If we have the
possibility to have 1 DNS container and 2 CA containers attached to 1 LDAP
container things will be complicated ...

I would expect that we will need some additional logic to ensure that one
location advertises all the services (so e.g. LDAP is not missing in that
particular location). I would not go beyond that.

> We based our hypothesis on the fact we only have one location and at most
> different weights per service?
> Is there anything in here that will make it hard for us should we change
> our mind in future ? (I think the single _tcp DNAME may be an
> architectural issue anyway, but that could be resolved perhaps by moving
> the location DNAME on a per service basis in future should we need it?)

Yes, that is a possible approach but I hope it will not be necessary.

Hopefully the logic for assigning containers/server to locations can be made
smart enough to guarantee that we do not need more layers of CNAME/DNAME hacks
or so.

The DNAME trick should be seen as a cheap way to emulate views without
actually using views. If you want more fancy things, go and use views on
full-featured DNS server ...


> 2.16.840.1.113730.3.8.5.33 idnsBackupLocation
> Pointer to another location. Sucks in all servers from that location as 
> one
> group with the same priority. Easy to use with _default location where all
> 'other' servers are used as backup.
>
> These two attributes use sub-type priority and 
> relativeweight.
> This is the only way I could express all the information without need for
> separate objects.
>>>
>>> I don't see the benefit here. What is wrong with separate objects? Why is it
>>> necessary to reinvent the wheel and abuse attribute sub-types for this, 
>>> losing
>>> schema integrity checks provided by DS and making the implementation more
>>> complex along the way?
>>
>> AFAIK Simo did not like separate objects because we could not use referential
>> integrity plugin to prune references to removed servers.
>>
>> This can surely be done in framework, I do not insist on subtypes.
>>
>>
>> Talk is cheap, show me your schema :-)
> 
> I had a preference, but I'm ok also with multiple objects, one per
> server/service if we think this will make things easier to handle.
> 
> Given we are going to need a Server object in DNS anyway (so that things
> are self contained for non IPA use cases) then I think the referential
> integrity thing goes out the window.

Back to the drawing board! :-)


> [...]
> 
> Attributes:
> 2.16.840.1.113730.3.8.5.34 idnsAdvertisedLocation
> Pointer to a idnsLocation object. On DNS service object / external server.

Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-02-23 Thread Petr Vobornik

On 02/19/2016 04:31 PM, Simo Sorce wrote:

On Fri, 2016-02-19 at 08:58 +0100, Petr Spacek wrote:

On 4.2.2016 18:21, Petr Spacek wrote:

On 3.2.2016 18:41, Petr Spacek wrote:

Hello,

I've updated the design page
http://www.freeipa.org/page/V4/DNS_Location_Mechanism

Namely it now contains 'Version 2'.


Okay, here is the idea how we can make it flexible:
http://www.freeipa.org/page/V4/DNS_Location_Mechanism#Implementation


Hello,

I'm thinking about LDAP schema for DNS locations.

Purpose
===
* Allow admins to define any number of locations.
* 1 DNS server advertises at most 1 location.
* 1 location generally contains set of services with different priorities and
weights (in DNS SRV terms).
* Express server & service priority for each defined location in a way which
is granular and flexible and at the same time easy to manage.


Proposal

a) Container for locations
--
cn=locations,cn=ipa,cn=etc,dc=example,dc=com


b) 1 location
-
Attributes:
2.16.840.1.113730.3.8.5.32 idnsLocationMember
Server/service assigned to a DNS Location. Usually used to define 'main'
servers for that location. Should it point to service DNs to be sure we have
smooth upgrade to containers?

2.16.840.1.113730.3.8.5.33 idnsBackupLocation
Pointer to another location. Sucks in all servers from that location as one
group with the same priority. Easy to use with _default location where all
'other' servers are used as backup.

These two attributes use sub-type priority and relativeweight.
This is the only way I could express all the information without need for
separate objects.


Object classes:
2.16.840.1.113730.3.8.6.7  idnsLocation
MAY ( idnsLocationMember $ idnsBackupLocation )


1st example:
Location CZ:
- servers czserver1, czserver2
- priority=1
- relative weight = 50 % each
- if both CZ servers fail, use servers in location UK as backup (priority 2)
- if all CZ and UK servers fail, use servers in location US as backup
(priority 3) - servers on the other continent are used only as option of last
resort
DN: cn=cz,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
objectClass: idnsLocation
idnsLocationMember;priority1;relativeweight50:
cn=czserver1,cn=masters,cn=ipa,cn=etc,dc=example,dc=com
idnsLocationMember;priority1;relativeweight50:
cn=czserver2,cn=masters,cn=ipa,cn=etc,dc=example,dc=com
idnsBackupLocation;priority2: cn=uk,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
idnsBackupLocation;priority3: cn=us,cn=locations,cn=ipa,cn=etc,dc=example,dc=com

Location UK:
- servers ukserver1, ukserver2
- priority=1
- server ukserver1 is a new beefy machine so it can handle 3 times more load
than ukserver2, thus relative weights 75 % and 25 %
- if both UK servers fail, use servers in location CZ as backup (priority 2)
- if all CZ and UK servers fail, use servers in location US as backup
(priority 3) - servers on the other continent are used only as option of last
resort
DN: cn=uk,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
objectClass: idnsLocation
idnsLocationMember;priority1;relativeweight3:
cn=ukserver1,cn=masters,cn=ipa,cn=etc,dc=example,dc=com
idnsLocationMember;priority1;relativeweight1:
cn=ukserver2,cn=masters,cn=ipa,cn=etc,dc=example,dc=com
idnsBackupLocation;priority2: cn=cz,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
idnsBackupLocation;priority3: cn=us,cn=locations,cn=ipa,cn=etc,dc=example,dc=com

Location US:
- servers usserver1, usserver2
- priority=1
- relative weight = 50 % each
- if both US servers fail, use servers in location CZ and UK as backup
(priority 2) - it is over ocean anyway, so US clients will not make any
difference between CZ and UK locations
DN: cn=us,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
objectClass: idnsLocation
idnsLocationMember;priority1;relativeweight50:
cn=usserver1,cn=masters,cn=ipa,cn=etc,dc=example,dc=com
idnsLocationMember;priority1;relativeweight50:
cn=usserver2,cn=masters,cn=ipa,cn=etc,dc=example,dc=com
idnsBackupLocation;priority2: cn=cz,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
idnsBackupLocation;priority2: cn=uk,cn=locations,cn=ipa,cn=etc,dc=example,dc=com


Resulting DNS SRV records (generated by the FreeIPA framework). Please note that
the numbers in SRV records matter only relatively. Priorities work as groups;
weights are relative only inside a group. The absolute values above are used
only in the algorithm which generates the SRV records:
Location CZ:
_kerberos._udp SRV 1 50 czserver1
_kerberos._udp SRV 1 50 czserver2
_kerberos._udp SRV 2 75 ukserver1
_kerberos._udp SRV 2 25 ukserver2
_kerberos._udp SRV 3 50 usserver1
_kerberos._udp SRV 3 50 usserver2

Location UK:
_kerberos._udp SRV 1 75 ukserver1
_kerberos._udp SRV 1 25 ukserver2
_kerberos._udp SRV 2 50 czserver1
_kerberos._udp SRV 2 50 czserver2
_kerberos._udp SRV 3 50 usserver1
_kerberos._udp SRV 3 50 usserver2

Location US:
_kerberos._udp SRV 1 50 usserver1
_kerberos._udp SRV 1 50 usserver2
_kerberos._udp SRV 2 250 czserver1
_kerberos._udp SRV 2 250 czserver2
_kerberos._udp SRV 2 

Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-02-23 Thread Simo Sorce
On Tue, 2016-02-23 at 12:43 +0100, Petr Spacek wrote:
> On 23.2.2016 11:00, Jan Cholasta wrote:
> > Hi,
> > 
> > On 19.2.2016 16:31, Simo Sorce wrote:
> >> On Fri, 2016-02-19 at 08:58 +0100, Petr Spacek wrote:
> >>> On 4.2.2016 18:21, Petr Spacek wrote:
>  On 3.2.2016 18:41, Petr Spacek wrote:
> > Hello,
> >
> > I've updated the design page
> > http://www.freeipa.org/page/V4/DNS_Location_Mechanism
> >
> > Namely it now contains 'Version 2'.
> 
>  Okay, here is the idea how we can make it flexible:
>  http://www.freeipa.org/page/V4/DNS_Location_Mechanism#Implementation
> >>>
> >>> Hello,
> >>>
> >>> I'm thinking about LDAP schema for DNS locations.
> >>>
> >>> Purpose
> >>> ===
> >>> * Allow admins to define any number of locations.
> >>> * 1 DNS server advertises at most 1 location.
> >>> * 1 location generally contains set of services with different priorities 
> >>> and
> >>> weights (in DNS SRV terms).
> >>> * Express server & service priority for each defined location in a way 
> >>> which
> >>> is granular and flexible and at the same time easy to manage.
> >>>
> >>>
> >>> Proposal
> >>> 
> >>> a) Container for locations
> >>> --
> >>> cn=locations,cn=ipa,cn=etc,dc=example,dc=com
> >>>
> >>>
> >>> b) 1 location
> >>> -
> >>> Attributes:
> >>> 2.16.840.1.113730.3.8.5.32 idnsLocationMember
> >>> Server/service assigned to a DNS Location. Usually used to define 'main'
> >>> servers for that location. Should it point to service DNs to be sure we 
> >>> have
> >>> smooth upgrade to containers?
> > 
> > Services always live on a host (call it server or not), so IMO it makes 
> > sense
> > to point to servers.
> 
> Fine with me. We just need something which will be able to accommodate
> containerization without upgrade headache.
> 
> Do I understand correctly that 1 container is going to have 1 host object with
> one service object inside it?
> 
> Like:
> cn=container
> - cn=DNS, cn=container
> ?

Do we think we will ever need to define different locations on a per
service basis ?
We based our hypothesis on the fact we only have one location and at most
different weights per service?
Is there anything in here that will make it hard for us should we change
our mind in future ? (I think the single _tcp DNAME may be an
architectural issue anyway, but that could be resolved perhaps by moving
the location DNAME on a per service basis in future should we need it?)

> >>> 2.16.840.1.113730.3.8.5.33 idnsBackupLocation
> >>> Pointer to another location. Sucks in all servers from that location as 
> >>> one
> >>> group with the same priority. Easy to use with _default location where all
> >>> 'other' servers are used as backup.
> >>>
> >>> These two attributes use sub-type priority and 
> >>> relativeweight.
> >>> This is the only way I could express all the information without need for
> >>> separate objects.
> > 
> > I don't see the benefit here. What is wrong with separate objects? Why is it
> > necessary to reinvent the wheel and abuse attribute sub-types for this, 
> > losing
> > schema integrity checks provided by DS and making the implementation more
> > complex along the way?
> 
> AFAIK Simo did not like separate objects because we could not use referential
> integrity plugin to prune references to removed servers.
> 
> This can surely be done in framework, I do not insist on subtypes.
> 
> 
> Talk is cheap, show me your schema :-)

I had a preference, but I'm ok also with multiple objects, one per
server/service if we think this will make things easier to handle.

Given we are going to need a Server object in DNS anyway (so that things
are self contained for non IPA use cases) then I think the referential
integrity thing goes out the window.

[...]

> >>> Attributes:
> >>> 2.16.840.1.113730.3.8.5.34 idnsAdvertisedLocation
> >>> Pointer to a idnsLocation object. On DNS service object / external server.
> >>> Single-valued.
> > 
> > IMO this should be attribute of server rather than service,
> > given that
> > idnsLocationMember points to servers rather than services.
> 
> The main reason why idnsAdvertisedLocation is tied to the DNS service is that an
> IPA server without a DNS server cannot advertise anything, so the attribute
> does not make sense on all server objects.
> 
> Also, the attribute can be (in future) used on external DNS server. The
> external server is not going to be an IPA server, it will be just a
> representation of a DNS endpoint. Likely this external DNS server is not going
> to reside in cn=masters at all. It might be in cn=dns or somewhere else.

If possible I'd tie it to the DNS server's DNS object, or a new object
in the DNS hierarchy.
We may want to have locations for servers that are not IPA Server at
all, like the preferred local XMPP server or other things like that.

> So it seemed to me that it would be good to tie this to 'DNS endpoint' object
> instead of IPA server object.
> 
> 

Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-02-23 Thread Petr Spacek
On 23.2.2016 11:00, Jan Cholasta wrote:
> Hi,
> 
> On 19.2.2016 16:31, Simo Sorce wrote:
>> On Fri, 2016-02-19 at 08:58 +0100, Petr Spacek wrote:
>>> On 4.2.2016 18:21, Petr Spacek wrote:
 On 3.2.2016 18:41, Petr Spacek wrote:
> Hello,
>
> I've updated the design page
> http://www.freeipa.org/page/V4/DNS_Location_Mechanism
>
> Namely it now contains 'Version 2'.

 Okay, here is the idea how we can make it flexible:
 http://www.freeipa.org/page/V4/DNS_Location_Mechanism#Implementation
>>>
>>> Hello,
>>>
>>> I'm thinking about LDAP schema for DNS locations.
>>>
>>> Purpose
>>> ===
>>> * Allow admins to define any number of locations.
>>> * 1 DNS server advertises at most 1 location.
>>> * 1 location generally contains set of services with different priorities 
>>> and
>>> weights (in DNS SRV terms).
>>> * Express server & service priority for each defined location in a way which
>>> is granular and flexible and at the same time easy to manage.
>>>
>>>
>>> Proposal
>>> 
>>> a) Container for locations
>>> --
>>> cn=locations,cn=ipa,cn=etc,dc=example,dc=com
>>>
>>>
>>> b) 1 location
>>> -
>>> Attributes:
>>> 2.16.840.1.113730.3.8.5.32 idnsLocationMember
>>> Server/service assigned to a DNS Location. Usually used to define 'main'
>>> servers for that location. Should it point to service DNs to be sure we have
>>> smooth upgrade to containers?
> 
> Services always live on a host (call it server or not), so IMO it makes sense
> to point to servers.

Fine with me. We just need something which will be able to accommodate
containerization without upgrade headaches.

Do I understand correctly that 1 container is going to have 1 host object with
one service object inside it?

Like:
cn=container
- cn=DNS, cn=container
?

>>> 2.16.840.1.113730.3.8.5.33 idnsBackupLocation
>>> Pointer to another location. Sucks in all servers from that location as one
>>> group with the same priority. Easy to use with _default location where all
>>> 'other' servers are used as backup.
>>>
>>> These two attributes use sub-type priority and 
>>> relativeweight.
>>> This is the only way I could express all the information without need for
>>> separate objects.
> 
> I don't see the benefit here. What is wrong with separate objects? Why is it
> necessary to reinvent the wheel and abuse attribute sub-types for this, losing
> schema integrity checks provided by DS and making the implementation more
> complex along the way?

AFAIK Simo did not like separate objects because we could not use referential
integrity plugin to prune references to removed servers.

This can surely be done in framework, I do not insist on subtypes.


Talk is cheap, show me your schema :-)

>>> Object classes:
>>> 2.16.840.1.113730.3.8.6.7  idnsLocation
>>> MAY ( idnsLocationMember $ idnsBackupLocation )
>>>
>>>
>>> 1st example:
>>> Location CZ:
>>> - servers czserver1, czserver2
>>> - priority=1
>>> - relative weight = 50 % each
>>> - if both CZ servers fail, use servers in location UK as backup (priority 2)
>>> - if all CZ and UK servers fail, use servers in location US as backup
>>> (priority 3) - servers on the other continent are used only as a last
>>> resort
>>> DN: cn=cz,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
>>> objectClass: idnsLocation
>>> idnsLocationMember;priority1;relativeweight50:
>>> cn=czserver1,cn=masters,cn=ipa,cn=etc,dc=example,dc=com
>>> idnsLocationMember;priority1;relativeweight50:
>>> cn=czserver2,cn=masters,cn=ipa,cn=etc,dc=example,dc=com
>>> idnsBackupLocation;priority2:
>>> cn=uk,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
>>> idnsBackupLocation;priority3:
>>> cn=us,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
>>>
>>> Location UK:
>>> - servers ukserver1, ukserver2
>>> - priority=1
>>> - server ukserver1 is a new beefy machine so it can handle 3 times more load
>>> than ukserver2, thus relative weights 75 % and 25 %
>>> - if both UK servers fail, use servers in location CZ as backup (priority 2)
>>> - if all CZ and UK servers fail, use servers in location US as backup
>>> (priority 3) - servers on the other continent are used only as a last
>>> resort
>>> DN: cn=uk,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
>>> objectClass: idnsLocation
>>> idnsLocationMember;priority1;relativeweight3:
>>> cn=ukserver1,cn=masters,cn=ipa,cn=etc,dc=example,dc=com
>>> idnsLocationMember;priority1;relativeweight1:
>>> cn=ukserver2,cn=masters,cn=ipa,cn=etc,dc=example,dc=com
>>> idnsBackupLocation;priority2:
>>> cn=cz,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
>>> idnsBackupLocation;priority3:
>>> cn=us,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
>>>
>>> Location US:
>>> - servers usserver1, usserver2
>>> - priority=1
>>> - relative weight = 50 % each
>>> - if both US servers fail, use servers in location CZ and UK as backup
>>> (priority 2) - it is over ocean anyway, so US clients will not make any
>>> difference between CZ and UK 

Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-02-23 Thread Jan Cholasta

Hi,

On 19.2.2016 16:31, Simo Sorce wrote:

On Fri, 2016-02-19 at 08:58 +0100, Petr Spacek wrote:

On 4.2.2016 18:21, Petr Spacek wrote:

On 3.2.2016 18:41, Petr Spacek wrote:

Hello,

I've updated the design page
http://www.freeipa.org/page/V4/DNS_Location_Mechanism

Namely it now contains 'Version 2'.


Okay, here is the idea how we can make it flexible:
http://www.freeipa.org/page/V4/DNS_Location_Mechanism#Implementation


Hello,

I'm thinking about LDAP schema for DNS locations.

Purpose
===
* Allow admins to define any number of locations.
* 1 DNS server advertises at most 1 location.
* 1 location generally contains a set of services with different priorities and
weights (in DNS SRV terms).
* Express server & service priority for each defined location in a way which
is granular and flexible and at the same time easy to manage.


Proposal

a) Container for locations
--
cn=locations,cn=ipa,cn=etc,dc=example,dc=com


b) 1 location
-
Attributes:
2.16.840.1.113730.3.8.5.32 idnsLocationMember
Server/service assigned to a DNS Location. Usually used to define 'main'
servers for that location. Should it point to service DNs to be sure we have
smooth upgrade to containers?


Services always live on a host (call it server or not), so IMO it makes 
sense to point to servers.




2.16.840.1.113730.3.8.5.33 idnsBackupLocation
Pointer to another location. Sucks in all servers from that location as one
group with the same priority. Easy to use with _default location where all
'other' servers are used as backup.

These two attributes use sub-type priority and relativeweight.
This is the only way I could express all the information without need for
separate objects.


I don't see the benefit here. What is wrong with separate objects? Why 
is it necessary to reinvent the wheel and abuse attribute sub-types for 
this, losing schema integrity checks provided by DS and making the 
implementation more complex along the way?





Object classes:
2.16.840.1.113730.3.8.6.7  idnsLocation
MAY ( idnsLocationMember $ idnsBackupLocation )


1st example:
Location CZ:
- servers czserver1, czserver2
- priority=1
- relative weight = 50 % each
- if both CZ servers fail, use servers in location UK as backup (priority 2)
- if all CZ and UK servers fail, use servers in location US as backup
(priority 3) - servers on the other continent are used only as a last
resort
DN: cn=cz,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
objectClass: idnsLocation
idnsLocationMember;priority1;relativeweight50:
cn=czserver1,cn=masters,cn=ipa,cn=etc,dc=example,dc=com
idnsLocationMember;priority1;relativeweight50:
cn=czserver2,cn=masters,cn=ipa,cn=etc,dc=example,dc=com
idnsBackupLocation;priority2: cn=uk,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
idnsBackupLocation;priority3: cn=us,cn=locations,cn=ipa,cn=etc,dc=example,dc=com

Location UK:
- servers ukserver1, ukserver2
- priority=1
- server ukserver1 is a new beefy machine so it can handle 3 times more load
than ukserver2, thus relative weights 75 % and 25 %
- if both UK servers fail, use servers in location CZ as backup (priority 2)
- if all CZ and UK servers fail, use servers in location US as backup
(priority 3) - servers on the other continent are used only as a last
resort
DN: cn=uk,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
objectClass: idnsLocation
idnsLocationMember;priority1;relativeweight3:
cn=ukserver1,cn=masters,cn=ipa,cn=etc,dc=example,dc=com
idnsLocationMember;priority1;relativeweight1:
cn=ukserver2,cn=masters,cn=ipa,cn=etc,dc=example,dc=com
idnsBackupLocation;priority2: cn=cz,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
idnsBackupLocation;priority3: cn=us,cn=locations,cn=ipa,cn=etc,dc=example,dc=com

Location US:
- servers usserver1, usserver2
- priority=1
- relative weight = 50 % each
- if both US servers fail, use servers in locations CZ and UK as backup
(priority 2) - it is over the ocean anyway, so US clients will not see any
difference between the CZ and UK locations
DN: cn=us,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
objectClass: idnsLocation
idnsLocationMember;priority1;relativeweight50:
cn=usserver1,cn=masters,cn=ipa,cn=etc,dc=example,dc=com
idnsLocationMember;priority1;relativeweight50:
cn=usserver2,cn=masters,cn=ipa,cn=etc,dc=example,dc=com
idnsBackupLocation;priority2: cn=cz,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
idnsBackupLocation;priority2: cn=uk,cn=locations,cn=ipa,cn=etc,dc=example,dc=com


Resulting DNS SRV records (generated by the FreeIPA framework). Please note that
the numbers in SRV records matter only relatively. Priorities work as groups;
weights are relative only inside a group. The absolute values above are used
only in the algorithm which generates the SRV records:
Location CZ:
_kerberos._udp SRV 1 50 czserver1
_kerberos._udp SRV 1 50 czserver2
_kerberos._udp SRV 2 75 ukserver1
_kerberos._udp SRV 2 25 ukserver2
_kerberos._udp SRV 3 50 usserver1
_kerberos._udp SRV 3 50 usserver2

Location UK:

Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-02-19 Thread Simo Sorce
On Fri, 2016-02-19 at 08:58 +0100, Petr Spacek wrote:
> On 4.2.2016 18:21, Petr Spacek wrote:
> > On 3.2.2016 18:41, Petr Spacek wrote:
> >> Hello,
> >>
> >> I've updated the design page
> >> http://www.freeipa.org/page/V4/DNS_Location_Mechanism
> >>
> >> Namely it now contains 'Version 2'.
> > 
> > Okay, here is the idea how we can make it flexible:
> > http://www.freeipa.org/page/V4/DNS_Location_Mechanism#Implementation
> 
> Hello,
> 
> I'm thinking about LDAP schema for DNS locations.
> 
> Purpose
> ===
> * Allow admins to define any number of locations.
> * 1 DNS server advertises at most 1 location.
> * 1 location generally contains a set of services with different priorities and
> weights (in DNS SRV terms).
> * Express server & service priority for each defined location in a way which
> is granular and flexible and at the same time easy to manage.
> 
> 
> Proposal
> 
> a) Container for locations
> --
> cn=locations,cn=ipa,cn=etc,dc=example,dc=com
> 
> 
> b) 1 location
> -
> Attributes:
> 2.16.840.1.113730.3.8.5.32 idnsLocationMember
> Server/service assigned to a DNS Location. Usually used to define 'main'
> servers for that location. Should it point to service DNs to be sure we have
> smooth upgrade to containers?
> 
> 2.16.840.1.113730.3.8.5.33 idnsBackupLocation
> Pointer to another location. Sucks in all servers from that location as one
> group with the same priority. Easy to use with _default location where all
> 'other' servers are used as backup.
> 
> These two attributes use sub-type priority and relativeweight.
> This is the only way I could express all the information without need for
> separate objects.
> 
> 
> Object classes:
> 2.16.840.1.113730.3.8.6.7  idnsLocation
> MAY ( idnsLocationMember $ idnsBackupLocation )
> 
> 
> 1st example:
> Location CZ:
> - servers czserver1, czserver2
> - priority=1
> - relative weight = 50 % each
> - if both CZ servers fail, use servers in location UK as backup (priority 2)
> - if all CZ and UK servers fail, use servers in location US as backup
> (priority 3) - servers on the other continent are used only as a last
> resort
> DN: cn=cz,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
> objectClass: idnsLocation
> idnsLocationMember;priority1;relativeweight50:
> cn=czserver1,cn=masters,cn=ipa,cn=etc,dc=example,dc=com
> idnsLocationMember;priority1;relativeweight50:
> cn=czserver2,cn=masters,cn=ipa,cn=etc,dc=example,dc=com
> idnsBackupLocation;priority2: 
> cn=uk,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
> idnsBackupLocation;priority3: 
> cn=us,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
> 
> Location UK:
> - servers ukserver1, ukserver2
> - priority=1
> - server ukserver1 is a new beefy machine so it can handle 3 times more load
> than ukserver2, thus relative weights 75 % and 25 %
> - if both UK servers fail, use servers in location CZ as backup (priority 2)
> - if all CZ and UK servers fail, use servers in location US as backup
> (priority 3) - servers on the other continent are used only as a last
> resort
> DN: cn=uk,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
> objectClass: idnsLocation
> idnsLocationMember;priority1;relativeweight3:
> cn=ukserver1,cn=masters,cn=ipa,cn=etc,dc=example,dc=com
> idnsLocationMember;priority1;relativeweight1:
> cn=ukserver2,cn=masters,cn=ipa,cn=etc,dc=example,dc=com
> idnsBackupLocation;priority2: 
> cn=cz,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
> idnsBackupLocation;priority3: 
> cn=us,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
> 
> Location US:
> - servers usserver1, usserver2
> - priority=1
> - relative weight = 50 % each
> - if both US servers fail, use servers in locations CZ and UK as backup
> (priority 2) - it is over the ocean anyway, so US clients will not see any
> difference between the CZ and UK locations
> DN: cn=us,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
> objectClass: idnsLocation
> idnsLocationMember;priority1;relativeweight50:
> cn=usserver1,cn=masters,cn=ipa,cn=etc,dc=example,dc=com
> idnsLocationMember;priority1;relativeweight50:
> cn=usserver2,cn=masters,cn=ipa,cn=etc,dc=example,dc=com
> idnsBackupLocation;priority2: 
> cn=cz,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
> idnsBackupLocation;priority2: 
> cn=uk,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
> 
> 
> Resulting DNS SRV records (generated by the FreeIPA framework). Please note that
> the numbers in SRV records matter only relatively. Priorities work as groups;
> weights are relative only inside a group. The absolute values above are used
> only in the algorithm which generates the SRV records:
> Location CZ:
> _kerberos._udp SRV 1 50 czserver1
> _kerberos._udp SRV 1 50 czserver2
> _kerberos._udp SRV 2 75 ukserver1
> _kerberos._udp SRV 2 25 ukserver2
> _kerberos._udp SRV 3 50 usserver1
> _kerberos._udp SRV 3 50 usserver2
> 
> Location UK:
> _kerberos._udp SRV 1 75 ukserver1
> _kerberos._udp SRV 1 25 ukserver2
> _kerberos._udp SRV 2 50 czserver1
> _kerberos._udp SRV 2 50 

Re: [Freeipa-devel] Locations design v2: LDAP schema & user interface

2016-02-18 Thread Petr Spacek
On 4.2.2016 18:21, Petr Spacek wrote:
> On 3.2.2016 18:41, Petr Spacek wrote:
>> Hello,
>>
>> I've updated the design page
>> http://www.freeipa.org/page/V4/DNS_Location_Mechanism
>>
>> Namely it now contains 'Version 2'.
> 
> Okay, here is the idea how we can make it flexible:
> http://www.freeipa.org/page/V4/DNS_Location_Mechanism#Implementation

Hello,

I'm thinking about LDAP schema for DNS locations.

Purpose
===
* Allow admins to define any number of locations.
* 1 DNS server advertises at most 1 location.
* 1 location generally contains a set of services with different priorities and
weights (in DNS SRV terms).
* Express server & service priority for each defined location in a way which
is granular and flexible and at the same time easy to manage.


Proposal

a) Container for locations
--
cn=locations,cn=ipa,cn=etc,dc=example,dc=com


b) 1 location
-
Attributes:
2.16.840.1.113730.3.8.5.32 idnsLocationMember
Server/service assigned to a DNS Location. Usually used to define 'main'
servers for that location. Should it point to service DNs to be sure we have
smooth upgrade to containers?

2.16.840.1.113730.3.8.5.33 idnsBackupLocation
Pointer to another location. Sucks in all servers from that location as one
group with the same priority. Easy to use with _default location where all
'other' servers are used as backup.

These two attributes use sub-type priority and relativeweight.
This is the only way I could express all the information without need for
separate objects.


Object classes:
2.16.840.1.113730.3.8.6.7  idnsLocation
MAY ( idnsLocationMember $ idnsBackupLocation )


1st example:
Location CZ:
- servers czserver1, czserver2
- priority=1
- relative weight = 50 % each
- if both CZ servers fail, use servers in location UK as backup (priority 2)
- if all CZ and UK servers fail, use servers in location US as backup
(priority 3) - servers on the other continent are used only as a last
resort
DN: cn=cz,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
objectClass: idnsLocation
idnsLocationMember;priority1;relativeweight50:
cn=czserver1,cn=masters,cn=ipa,cn=etc,dc=example,dc=com
idnsLocationMember;priority1;relativeweight50:
cn=czserver2,cn=masters,cn=ipa,cn=etc,dc=example,dc=com
idnsBackupLocation;priority2: cn=uk,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
idnsBackupLocation;priority3: cn=us,cn=locations,cn=ipa,cn=etc,dc=example,dc=com

Location UK:
- servers ukserver1, ukserver2
- priority=1
- server ukserver1 is a new beefy machine so it can handle 3 times more load
than ukserver2, thus relative weights 75 % and 25 %
- if both UK servers fail, use servers in location CZ as backup (priority 2)
- if all CZ and UK servers fail, use servers in location US as backup
(priority 3) - servers on the other continent are used only as a last
resort
DN: cn=uk,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
objectClass: idnsLocation
idnsLocationMember;priority1;relativeweight3:
cn=ukserver1,cn=masters,cn=ipa,cn=etc,dc=example,dc=com
idnsLocationMember;priority1;relativeweight1:
cn=ukserver2,cn=masters,cn=ipa,cn=etc,dc=example,dc=com
idnsBackupLocation;priority2: cn=cz,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
idnsBackupLocation;priority3: cn=us,cn=locations,cn=ipa,cn=etc,dc=example,dc=com

Location US:
- servers usserver1, usserver2
- priority=1
- relative weight = 50 % each
- if both US servers fail, use servers in locations CZ and UK as backup
(priority 2) - it is over the ocean anyway, so US clients will not see any
difference between the CZ and UK locations
DN: cn=us,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
objectClass: idnsLocation
idnsLocationMember;priority1;relativeweight50:
cn=usserver1,cn=masters,cn=ipa,cn=etc,dc=example,dc=com
idnsLocationMember;priority1;relativeweight50:
cn=usserver2,cn=masters,cn=ipa,cn=etc,dc=example,dc=com
idnsBackupLocation;priority2: cn=cz,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
idnsBackupLocation;priority2: cn=uk,cn=locations,cn=ipa,cn=etc,dc=example,dc=com
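
For illustration, a minimal Python sketch (hypothetical helper, not actual
FreeIPA code) of how a consumer could split such sub-typed attribute
descriptions back into a priority and a relative weight; the defaults used for
missing options are assumptions, not part of this proposal:

import re

def parse_location_attr(attr_desc, default_priority=1, default_weight=100):
    # attr_desc is an LDAP attribute description, e.g.
    # 'idnsLocationMember;priority1;relativeweight50'
    parts = attr_desc.split(';')
    attr_type, priority, weight = parts[0], default_priority, default_weight
    for option in parts[1:]:
        match = re.match(r'priority(\d+)$', option)
        if match:
            priority = int(match.group(1))
        match = re.match(r'relativeweight(\d+)$', option)
        if match:
            weight = int(match.group(1))
    return attr_type, priority, weight

# parse_location_attr('idnsLocationMember;priority1;relativeweight50')
# -> ('idnsLocationMember', 1, 50)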


Resulting DNS SRV records (generated by the FreeIPA framework). Please note that
the numbers in SRV records matter only relatively. Priorities work as groups;
weights are relative only inside a group. The absolute values above are used
only in the algorithm which generates the SRV records:
Location CZ:
_kerberos._udp SRV 1 50 czserver1
_kerberos._udp SRV 1 50 czserver2
_kerberos._udp SRV 2 75 ukserver1
_kerberos._udp SRV 2 25 ukserver2
_kerberos._udp SRV 3 50 usserver1
_kerberos._udp SRV 3 50 usserver2

Location UK:
_kerberos._udp SRV 1 75 ukserver1
_kerberos._udp SRV 1 25 ukserver2
_kerberos._udp SRV 2 50 czserver1
_kerberos._udp SRV 2 50 czserver2
_kerberos._udp SRV 3 50 usserver1
_kerberos._udp SRV 3 50 usserver2

Location US:
_kerberos._udp SRV 1 50 usserver1
_kerberos._udp SRV 1 50 usserver2
_kerberos._udp SRV 2 250 czserver1
_kerberos._udp SRV 2 250 czserver2
_kerberos._udp SRV 2 375 ukserver1
_kerberos._udp SRV 2 125 ukserver2
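
To make the weight arithmetic above concrete, here is a minimal Python sketch
(hypothetical, not the actual generator) of how one priority group of backup
locations could be expanded into SRV weights. Since SRV weights matter only
relatively, the absolute numbers may differ from the listing above as long as
the ratios stay the same:

def expand_backup_group(backup_locations, group_total=1000):
    # backup_locations: one inner list per location referenced by
    # idnsBackupLocation with the same ;priorityN sub-type; each inner
    # list holds (hostname, relative weight) pairs.
    # Every location gets an equal share of group_total and member
    # weights keep their ratio inside that share.
    srv_weights = []
    share = group_total // len(backup_locations)
    for members in backup_locations:
        location_sum = sum(weight for _, weight in members)
        for host, weight in members:
            srv_weights.append((host, share * weight // location_sum))
    return srv_weights

# Location US, priority-2 backups CZ and UK:
# expand_backup_group([[('czserver1', 50), ('czserver2', 50)],
#                      [('ukserver1', 75), ('ukserver2', 25)]])
# -> [('czserver1', 250), ('czserver2', 250),
#     ('ukserver1', 375), ('ukserver2', 125)]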


2nd example:
- 10 locations