On 04/19/2012 05:28 PM, Simo Sorce wrote:
> On Thu, 2012-04-19 at 16:29 -0400, Dmitri Pal wrote:
>> On 04/19/2012 03:44 PM, Simo Sorce wrote:
>>> On Thu, 2012-04-19 at 15:00 -0400, Dmitri Pal wrote:
>>>> Local server is the central hub for the authentications in the remote
>>>> office. The client machines with SSSD or LDAP clients might not have
>>>> access to the central datacenter directly. Another reason for having
>>>> such login server is to reduce the traffic between the remote office
>>>> and
>>>> central data center by pointing the clients to the login server for
>>>> identity lookups and cached authentication via LDAP, with a PAM proxy
>>>> and SSSD under it.
>>>>
>>>> The SSSD will be on the server.
>>>>
>>>>> Also we need to keep in mind that we cannot assume SSSD to be always
>>>>> available on clients.
>>>> We can assume that SSSD can be used on this login server via pam
>>>> proxy.
>>> Sorry I do not get how SSSD is going to make any difference on the
>>> server.
>>>
>>>> I agree with the questions and challenges. This is something for
>>>> Ondrej
>>>> to research as an owner of the thesis but I would think that it should
>>>> be up to the admin to define what to sync. We have a finite list of
>>>> the things that we can pre-create filters for.
>>>>
>>>> Think about some kind of container that would define what to
>>>> replicate. The object would contain the following attributes:
>>>>
>>>> uuid - unique rule id
>>>> name - descriptive name
>>>> description - comment
>>>> filter - ldap filter
>>>> base DN - base DN
>>>> scope - scope of the search
>>>> attrs - list of attributes to sync
>>>> enabled - whether the rule is enabled
>>>>
>>>> we can define (but disable by default) rules for SUDO, HBAC, SELinux,
>>>> Hosts, etc., and tell the admin to enable and tweak them.
>>>>
>>> This would be really complicated to manage. At the very least we need to
>>> define what's the expected user experience and data set access, then we
>>> can start devising technical solutions.
>>>
>>> What we really want to protect here, at the core, is user keys, for user
>>> and services not located in the branch office. All the info that is
>>> accessible from the branch office and is not super sensitive can always
>>> be replicated. The cost of choosing what to replicate and what not
>>> should be deferred to a later "enhancement". I would try to get the
>>> mechanism right for the authentication subset first, and deal with more
>>> fine-grained 'exclusions' later, once we solved that problem and gained
>>> experience with at least one aspect of data segregation.
>>>
>>> Simo.
>>>
>> I do not see a problem and it looks like you do not see the solution
>> that I see. So let me try again:
>>
>> 1) Consumer (remote office server) will sync in whatever user and other
>> data that we decide is safe to sync (may be configurable but this is a
>> separate question).
>> 2) It will not sync the Kerberos keys.
>> 3) The authentication between clients and the Consumer will be done
>> using LDAP; since eSSO is not a requirement, Kerberos tickets are not
>> needed.
>> 4) The Consumer would use a PAM proxy to do the authentication against
>> the real master on the client's behalf.
>> 5) We will configure SSSD under the PAM proxy to point to the real
>> master. If access to the real master is broken, authentications in the
>> remote office would still succeed, as they would be performed against
>> credentials cached by SSSD.
>> Online case: client -> ldap auth -> Consumer -> pam proxy -> PAM ->
>> pam_sss -> SSSD -> real master
>> Offline case: client -> ldap auth -> Consumer -> pam proxy -> PAM ->
>> pam_sss -> SSSD -> SSSD cached credentials
>> 6) We can assume that the Consumer runs the latest RHEL and thus can
>> leverage SSSD capabilities regardless of what OS or version the actual
>> clients in the remote office run.
> Well if we have to go through all these steps just to cache the user
> password we can also simply replicate the userPassword attribute for
> select users and just not the kerberos keys. Much simpler architecture
> and gives you the same results w/o any chaining at all.

Even better.

>> If we need Kerberos SSO in the remote office this is a different use case.
>> Does this make sense now?
> Yes, but I was assuming Kerberos to be naturally necessary. If an
> organization deploys kerberized services then naturally users will have
> to use them. Case in point the Web UI for FreeIPA itself.
>
> Proxying pure LDAP is a solved problem, there are a number of
> meta-directories that could probably also allow caching LDAP binds, I
> think we want to concentrate on a solution that gives you the full
> functionality of a freeIPA realm, but that's just me. I am not opposed
> to a lesser solution, just not sure we should concentrate too many
> energies into it.
>
> Simo.
>
I asked this question earlier in the thread. I said that there are two
options: a login server and an eSSO server. Ondrej indicated that he
wants to focus on the pure login server, which is why I continued
exploring this option. IMO these are two different use cases and two
different solutions that we should provide. One solution is a
stripped-down DS replica like the one we discussed above.

The eSSO solution is more challenging, I agree.
So far I think I have a way to solve the problem if we allow storing
user passwords encrypted with a symmetric key in LDAP. This would allow
us to decrypt a password in memory and generate a Kerberos hash on
demand. If this is a non-starter, I do not see a good way to provide a
solution that allows someone to get Kerberos tickets in the remote
office without replicating Kerberos hashes.

But let us assume that it is OK to encrypt user passwords with a long
key (like the master Kerberos key) and store the encrypted passwords in
another subtype of the userPassword attribute. Whenever a user password
is set or updated, this attribute is created or refreshed.
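A minimal sketch of the derive-on-demand idea, assuming the stored password can be decrypted in memory. The string-to-key below follows only the PBKDF2 portion of RFC 3962 (the final DK/AES step is omitted), so it is illustrative, not the real KDC derivation:

```python
import hashlib

def string_to_key(password: str, realm: str, principal: str) -> bytes:
    """Simplified aes256 string-to-key: PBKDF2-HMAC-SHA1 with
    salt = realm + principal and 4096 iterations (per RFC 3962),
    minus the final DK/AES step -- enough to show hash-on-demand."""
    salt = (realm + principal).encode()
    return hashlib.pbkdf2_hmac("sha1", password.encode(), salt, 4096, dklen=32)

def key_on_demand(decrypt, blob: bytes, realm: str, principal: str) -> bytes:
    """The KDC decrypts the stored password in memory and derives the
    Kerberos key only when it is actually needed."""
    cleartext = decrypt(blob)   # symmetric decryption; key held by the KDC
    return string_to_key(cleartext, realm, principal)
```

Nothing long-lived is written to disk here: the cleartext exists only for the duration of the derivation.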

Now the admin decides to create a special "remote" replica. As part of
this operation a new master key, specific to that replica, is generated.
It is a different Kerberos key from the one used by the rest of the
masters in the domain. This key is installed on the remote replica.
When replication starts (first push) between some normal master and the
remote replica, the replication plugin will detect that one side of the
agreement is a normal master while the other side is a special remote
server. In this case, instead of taking the userPassword attribute as is
from the normal master, the plugin will take the decryptable
userPassword, use the remote replica's master key to generate a password
hash on the fly, and inject that hash into the replication stream in
place of the userPassword attribute stored on the real master.
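The plugin decision described above could be sketched as follows; the entry layout, attribute names, and the HMAC-based stand-in for the real hash derivation are all assumptions:

```python
import hmac, hashlib

def transform_entry(entry: dict, remote_consumer: bool,
                    decrypt, replica_master_key: bytes) -> dict:
    """Hypothetical replication-plugin hook: for a normal consumer, pass
    the entry through unchanged; for a remote replica, replace
    userPassword with a hash generated on the fly under that replica's
    own master key."""
    if not remote_consumer:
        return entry
    out = dict(entry)
    cleartext = decrypt(entry["encryptedUserPassword"])  # reversible copy
    out["userPassword"] = hmac.new(replica_master_key,
                                   cleartext.encode(),
                                   hashlib.sha256).hexdigest()
    del out["encryptedUserPassword"]   # never ship the reversible form
    return out
```

The key property is that the value shipped to the remote replica is bound to that replica's key and useless against the rest of the domain.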

This approach gives us several benefits:

1) The remote office can enjoy eSSO.
2) If the remote replica is compromised, there is nothing in it that can
be used to attack the central server without cracking the Kerberos
hashes.
3) Tickets obtained from the remote server would not be honored by
resources in the central server, creating a confined environment.
4) The remote office can operate with eSSO even if there is no
connection to the central server.

So we get security and convenience at the same time.

For password changes, kpasswd on the remote server should be updated to
proxy the password change to the real master. The change will thus
happen on the real master and then be replicated back to the remote
server using the logic described above. To avoid the latency, we can
generate the new password hash right away and save it locally without
waiting for the full replication cycle to get back to the remote server.
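A sketch of that kpasswd flow; the `master` interface, the local hash store, and the HMAC-based hash derivation are hypothetical stand-ins:

```python
import hmac, hashlib

def handle_password_change(principal: str, old_pw: str, new_pw: str,
                           master, local_hashes: dict,
                           replica_master_key: bytes) -> None:
    """Proxy the change to the real master first (authoritative), then
    pre-seed the local hash so the user can authenticate in the remote
    office before the replication cycle brings the change back."""
    master.change_password(principal, old_pw, new_pw)
    local_hashes[principal] = hmac.new(replica_master_key,
                                       new_pw.encode(),
                                       hashlib.sha256).hexdigest()
```

If the proxied change fails (wrong old password, policy violation), the exception propagates and the local store is never touched.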
The remote server would not have a CA. It might have DNS. There will be
a one-way replication agreement from the central server to the remote
server, and the ACIs will be configured in such a way that no change can
be made locally except, probably, local failure counts and password
updates by the KDC. The data replicated will be controlled by the
flexible set of filters that the admin can adjust when the replication
agreement is defined, as discussed earlier in the thread.
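Tying this back to the rule container proposed earlier in the thread, such a filter object might look like the following LDIF; every object class, attribute name, and DN here is a hypothetical placeholder, not an existing schema:

```
dn: cn=sync-users,cn=replication filters,cn=etc,dc=example,dc=com
objectClass: ipaReplicationFilterRule
cn: sync-users
description: Replicate plain user entries to the remote office
syncFilter: (objectClass=posixAccount)
syncBaseDN: cn=users,cn=accounts,dc=example,dc=com
syncScope: subtree
syncAttrs: uid cn uidNumber gidNumber homeDirectory loginShell
syncEnabled: TRUE
```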

If the remote office needs more than one server for redundancy purposes,
another remote server can be created in the same way. Since there are no
data updates, there is no need to replicate anything between the two
remote servers.

Seems like a decent approach.
In the future there might also be an option to use one-way domain trusts
and subdomains, but I am not exactly sure how that would work.

-- 
Thank you,
Dmitri Pal

Sr. Engineering Manager IPA project,
Red Hat Inc.




_______________________________________________
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel
