On 04/17/2012 06:42 AM, Simo Sorce wrote:
On Tue, 2012-04-17 at 01:13 +0200, Ondrej Hamada wrote:
Sorry for inactivity, I was struggling with a lot of school stuff.

I've summed up the main goals, do you agree on them or should I
add/remove any?

Create Hub and Consumer types of replica with the following features:

* Hub is read-only

* Hub interconnects Masters with Consumers or Masters with Hubs
     or Hubs with other Hubs

* Hub is hidden in the network topology

* Consumer is read-only

* Consumer interconnects Masters/Hubs with clients

* Write operations should be forwarded to Master

* Consumer should be able to log users into system without
     communication with master
We need to define how this can be done. It will almost certainly mean
part of the consumer is writable, plus it also means you need additional
access control and policies on what the Consumer should be allowed to
write.

* Consumer should cache user's credentials
OK, what credentials? As I explained earlier, Kerberos creds cannot
really be cached. Either they are transferred with replication or the
KDC needs to be changed to do chaining; I would consider neither of
those 'caching'. A password obtained through an LDAP bind could be
cached, but I am not sure it is worth it.
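For illustration, a minimal sketch of what caching an LDAP bind password could mean in practice, assuming the consumer stores only a salted hash and verifies offline binds against it (similar in spirit to SSSD's offline-auth cache; all names here are invented):

```python
import hashlib
import hmac
import os

def cache_password(password: str):
    """Store only a salted PBKDF2 hash of the bind password, never the
    plaintext, so a compromised consumer does not leak the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_cached(password: str, salt: bytes, digest: bytes) -> bool:
    """Check an offline bind attempt against the cached hash."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

# Consumer caches the hash after a successful online bind ...
salt, digest = cache_password("Secret123")
# ... and can later verify the same password while the master is unreachable.
```

This only covers LDAP simple binds; as noted above, Kerberos credentials cannot be handled this way.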

* Caching of credentials should be configurable
See above.

* CA server should not be allowed on Hubs and Consumers
Missing points:
- Masters should not transfer KRB keys to HUBs/Consumers by default.

- We need selective replication if you want to allow distributing a
partial set of Kerberos credentials to consumers. With Hubs it becomes
complicated to decide what to replicate about credentials.


Can you please have a look at this draft and comment on it?

Design document draft: More types of replicas in FreeIPA


Create Hub and Consumer types of replica with the following features:

* Hub is read-only

* Hub interconnects Masters with Consumers or Masters with Hubs
    or Hubs with other Hubs

* Hub is hidden in the network topology

* Consumer is read-only

* Consumer interconnects Masters/Hubs with clients

* Write operations should be forwarded to Master
Do we need to specify how this is done? Referrals vs chain-on-update?

* Consumer should be able to log users into system without
    communication with master

* Consumer should be able to store user's credentials
Can you expand on this? Do you mean user keys?

* Storing of credentials should be configurable and disabled by default

* Credentials expiration on replica should be configurable
What does this mean?

* CA server should not be allowed on Hubs and Consumers


- SSSD is currently supposed to cooperate with one LDAP server only
Is this a problem in having an LDAP server that doesn't also have a KDC
on the same host? Or something else?

- OpenLDAP client and its support for referrals
Should we avoid referrals and use chain-on-update?
What does it mean for access control?
How do consumers authenticate to masters?
Should we use s4u2proxy?

- 389-DS allows replication of whole suffix only
What kind of filters do we think we need? We can already exclude
specific attributes from replication.

Fractional replication had originally planned to support search filters
in addition to attribute lists; I think Ondrej wants to include or
exclude certain entries from being replicated.

- Storing credentials and allowing authentication against Consumer server


389-DS allows replication of whole suffix only:

* Rich said that they are planning to allow fractional replication in DS to
    use LDAP filters. This will allow us to do selective replication, which is
    mainly important for replication of users' credentials.
I guess we want to do this to selectively prevent replication of only
some Kerberos keys? Based on groups? Would filters allow that using
memberof?

Using filters with fractional replication would allow you to include or exclude anything that can be expressed as an LDAP search filter.
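As a toy illustration of the idea, here is a sketch of how a memberof-based filter could gate which entries (and hence which Kerberos keys) replicate to a given consumer. The group DN, attribute names, and entry layout are invented for the example; this only models filter evaluation, not 389-DS itself:

```python
# Hypothetical group that marks entries allowed to replicate to consumer1.
CONSUMER_GROUP = "cn=replicate-to-consumer1,cn=groups,dc=example,dc=com"

def matches_replication_filter(entry, group_dn=CONSUMER_GROUP):
    """Mimic evaluating an '(memberOf=<group_dn>)' filter on an entry dict."""
    return group_dn in entry.get("memberOf", [])

users = [
    {"dn": "uid=alice,cn=users,dc=example,dc=com",
     "memberOf": [CONSUMER_GROUP]},
    {"dn": "uid=bob,cn=users,dc=example,dc=com",
     "memberOf": []},
]

# Only entries matching the filter would have their Kerberos keys
# replicated to this consumer; bob's keys stay on the masters.
replicated = [u["dn"] for u in users if matches_replication_filter(u)]
```

The same mechanism generalizes to any selection expressible as an LDAP search filter, as noted above.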


Forwarding of requests in LDAP:

* use the existing 389-DS plugin "Chain-on-update" - we can try it as a
proof-of-concept solution, but for real deployment it won't be a very good
solution, as it will increase the demands on Hubs.
Why do you think it would increase demands on hubs? Doesn't the
consumer directly contact the masters, skipping the hubs?

Yeah, not sure what you mean here, unless you are taking the document http://port389.org/wiki/Howto:ChainOnUpdate as the only way to implement chain-on-update. It is not; that document was taken from an early proof-of-concept for a planned deployment at a customer many years ago.

* better way is to use the referrals. The master server(s) to be referred
    might be:
     1) specified at install time
This is not really useful, as it would break updates every time the
specified master is offline. It would also require some work to
reconfigure stuff if the master is retired.

     2) looked up in DNS records
Probably easier to look it up in LDAP; we have a record for each master in
the domain.

     3) find master dynamically - Consumers and Hubs will in fact be master
        servers (from the 389-DS point of view); this means that every
        consumer or hub knows its direct suppliers, and they know their
        suppliers ...
Not clear what this means, can you elaborate?

    ISSUE: support for referrals in OpenLDAP client
We've had quite a few issues with referrals indeed, and a lot of client
software does not properly handle referrals. That would leave a bunch
of clients unable to modify the Directory. OTOH very few clients need to
modify the directory, so maybe that's good enough.
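To make concrete what referral chasing asks of a client, here is a minimal simulation of the retry logic a write-capable client must implement. The server classes and URLs are invented stand-ins, not 389-DS or OpenLDAP APIs:

```python
class ReferralError(Exception):
    """Raised by a read-only server to redirect a write elsewhere."""
    def __init__(self, url):
        self.url = url

class ReadOnlyConsumer:
    def __init__(self, master_url):
        self.master_url = master_url
    def modify(self, dn, changes):
        # A read-only consumer answers every write with a referral.
        raise ReferralError(self.master_url)

class Master:
    def __init__(self):
        self.entries = {}
    def modify(self, dn, changes):
        self.entries.setdefault(dn, {}).update(changes)
        return "success"

def modify_chasing_referrals(server, directory, dn, changes):
    """What a referral-aware client does: catch the referral, re-bind to
    the referred server, and retry the operation there."""
    try:
        return server.modify(dn, changes)
    except ReferralError as ref:
        return directory[ref.url].modify(dn, changes)

master = Master()
directory = {"ldap://master1.example.com": master}
consumer = ReadOnlyConsumer("ldap://master1.example.com")
result = modify_chasing_referrals(consumer, directory,
                                  "uid=alice,cn=users,dc=example,dc=com",
                                  {"telephoneNumber": "123"})
```

Clients that skip the `except` branch, as many do, simply fail the write; that is the interoperability risk raised above.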

* SSSD must be improved to allow cooperation with more than one LDAP server
Can you elaborate on what you think is missing in SSSD? Is it about the
need to fix referrals handling? Or something else?


Authentication and replication of credentials:

* authentication policies: every user must authenticate against the master
server by default
If users always contact the master, what are the consumers for?
Need to elaborate on this and explain.

    - if the authentication is successful and a proper policy is set for him,
      the user will be added into a specific user group. Each consumer will
      have one of these groups. These groups will be used by LDAP filters in
      replication to distribute the Krb creds to the chosen Consumers only.
Why should this depend on authentication??
Keep in mind that changing filters will not cause any replication to
occur; replication would occur only when a change happens. Therefore
placing a user in one group should happen before the Kerberos keys are
changed.
Also, in order to move a user from one group to another, which would
theoretically cause deletion of credentials from a group of servers and
distribution to another, we will probably need a plugin.
This plugin would take care of intercepting this special membership
change.
On servers that lose membership this plugin would go and delete the
locally stored keys of the user(s) that lost membership.
On servers that gained membership it would have to go to one of the
masters, fetch the keys and store them locally; this would need to be done
in a way that prevents replication and retains the master modification
time so that later replication events will not conflict in any way.
There is also the problem of rekeying and having different master keys on
hubs/consumers; not an easy problem, and it would require quite some
custom changes to the replication protocol for these special entries.
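A rough sketch of the membership-interception logic such a plugin might implement (all names are hypothetical; a real 389-DS plugin would be written in C against the slapi API, and the fetch would also have to preserve the master's modification time, which this toy model omits):

```python
# Hypothetical group marking users whose keys this consumer may hold.
CONSUMER_GROUP = "cn=replicate-to-consumer1,cn=groups,dc=example,dc=com"

def on_membership_change(user, old_groups, new_groups,
                         local_keys, fetch_keys):
    """React to a replicated membership change on a consumer/hub:
    purge local keys on loss, pull keys from a master on gain."""
    had = CONSUMER_GROUP in old_groups
    has = CONSUMER_GROUP in new_groups
    if had and not has:
        # Membership lost: delete the locally stored Kerberos keys.
        local_keys.pop(user, None)
    elif has and not had:
        # Membership gained: fetch keys from a master and store them
        # locally, outside of normal replication.
        local_keys[user] = fetch_keys(user)
    return local_keys

keys = {"alice": b"krb-key-blob"}
# alice removed from the group: her keys disappear from this server.
keys = on_membership_change("alice", [CONSUMER_GROUP], [], keys,
                            lambda u: b"")
# bob added to the group: his keys are fetched from a master.
keys = on_membership_change("bob", [], [CONSUMER_GROUP], keys,
                            lambda u: b"fetched-key")
```

This only illustrates the two transitions Simo describes; rekeying and per-server master keys are the genuinely hard part and are not modeled here.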

    - The groups will be created and modified on the master, so they will get
      replicated to all Hubs and Consumers. Hubs make it more complicated, as
      they must know which groups are relevant for them. Because of that I
      suppose that every Hub will have to know about all its 'subordinates' -
      this will have to be generated dynamically - probably on every change to
      the replication topology (adding/removing replicas is usually not a very
      frequent operation)
Hubs will simply be made members of these groups just like consumers.
All members of a group are authorized to do something with that group
membership. The grouping part doesn't seem complicated to me, but I may
have missed a detail; care to elaborate on what you see as difficult?

    - The policy must also specify the credentials expiration time. If the
      user tries to authenticate with an expired credential, he will be
      refused and redirected to a Master server for authentication.
How is this different from the current status? All accounts already have
password expiration times and account expiration times. What am I
missing?

ISSUE: How to deal with creds. expiration in replication? The replication of
      credentials to the Consumer could be stopped by removing the user from
      the Consumer-specific user group (mentioned above). The easiest way
      would be to delete him when he tries to auth.
See above, we need a plugin IMO.

  with expired credentials, or do a
      check (at intervals specified in the policy) and delete all expired creds.
It's not clear to me what we mean by expired creds, what am I missing ?

      For the removal of expired creds, we will have to grant the Consumer the
      permission to delete users from the Consumer-specific user group (but
      only deleting; adding users will be possible on Masters only).
I do not understand this.

Offline authentication:

* Consumer (and Hub) must allow write operations just for a small set of
    attributes: last login date and time, count of unsuccessful logins and
    the lockout of the account
This shouldn't be a problem, we already do that with masters, the trick
is in non replicating those attributes so that they never conflict.

    - to be able to do that, both Consumers and Hubs must be Masters (from
      the 389-DS point of view).
This doesn't sound right at all. All servers can always write locally;
what prevents them from doing so are referrals/configuration. Consumers
and hubs do not and cannot be masters.

  When the Master<->Consumer connection is broken, the
    lockout information is saved only locally and will be pushed to the Master
    on connection restoration. I suppose that only the lockout information will
    be replicated. In case of lockout the user will have to authenticate
    against the Master server only.
What is the lockout information? What connection is broken? There
aren't persistent connections between masters and consumers (esp. when
hubs are in between there are none).
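A toy model of the locally written lockout tracking and push-on-reconnect behaviour proposed above (the attribute names mirror the list in the proposal, but the API and threshold are invented):

```python
class Consumer:
    def __init__(self):
        self.local = {}       # locally written, non-replicated attributes
        self.pending = set()  # users whose updates queued while master is down

    def record_failed_login(self, user, master_online, push_to_master):
        attrs = self.local.setdefault(user, {"failedLogins": 0,
                                             "locked": False})
        attrs["failedLogins"] += 1
        if attrs["failedLogins"] >= 3:
            attrs["locked"] = True       # lock the account locally
        if master_online:
            push_to_master(user, attrs)
        else:
            self.pending.add(user)       # remember to sync later

    def on_reconnect(self, push_to_master):
        # Connection to the master restored: push the queued lockout state.
        for user in self.pending:
            push_to_master(user, self.local[user])
        self.pending.clear()

master_state = {}

def push(user, attrs):
    master_state[user] = dict(attrs)

c = Consumer()
# Three failed logins arrive while the master is unreachable ...
for _ in range(3):
    c.record_failed_login("alice", master_online=False, push_to_master=push)
# ... then the lockout state is pushed when the link comes back.
c.on_reconnect(push)
```

As Simo notes, whether a "connection" in this sense even exists, and how these attributes avoid replication conflicts, are open questions this sketch does not answer.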

Transfer of Krb keys:

* Consumer server will have to have realm krbtgt.
I guess you mean "a krbtgt usable in the realm", not 'the' realm
krbtgt, right?

  This means that we will have
    to distribute every Consumer's krbtgt to the Master servers.
It's the other way around. All keys are generated on the masters just
like with any other principal key, and then replicated to consumers.

Masters will
    need to have logic for using those keys instead of the normal krbtgt to
    perform operations when a user's krbtgt is presented to a different server.
Yes, we will need potentially quite invasive changes to the KDC when the
'krbtgt' is involved. We will need to plan this ahead with MIT to
validate our idea or see if they have different ideas on how to solve
this problem.


Freeipa-devel mailing list
