On Thu, 2012-04-19 at 18:25 -0400, Dmitri Pal wrote:
> On 04/19/2012 05:28 PM, Simo Sorce wrote:
> > On Thu, 2012-04-19 at 16:29 -0400, Dmitri Pal wrote:
> >> On 04/19/2012 03:44 PM, Simo Sorce wrote:
> >>> On Thu, 2012-04-19 at 15:00 -0400, Dmitri Pal wrote:
> >>>> Local server is the central hub for the authentications in the remote
> >>>> office. The client machines with SSSD or LDAP clients might not have
> >>>> access to the central datacenter directly. Another reason for having
> >>>> such login server is to reduce the traffic between the remote office
> >>>> and
> >>>> central data center by pointing the clients to the login server for
> >>>> the identity lookups and cached authentication via LDAP with pam proxy
> >>>> and SSSD under it.
> >>>>
> >>>> The SSSD will be on the server.
> >>>>
> >>>>> Also we need to keep in mind that we cannot assume SSSD to be always
> >>>>> available on clients.
> >>>> We can assume that SSSD can be used on this login server via pam
> >>>> proxy.
> >>> Sorry I do not get how SSSD is going to make any difference on the
> >>> server.
> >>>
> >>>> I agree with the questions and challenges. This is something for
> >>>> Ondrej
> >>>> to research as the owner of the thesis, but I would think that it should
> >>>> be up to the admin to define what to synch. We have a finite list of
> >>>> the
> >>>> things that we can pre-create filter for.
> >>>>
> >>>> Think about some kind of container that would define what to
> >>>> replicate.
> >>>> The object would contain the following attributes:
> >>>>
> >>>> uuid - unique rule id
> >>>> name - descriptive name
> >>>> description - comment
> >>>> filter - ldap filter
> >>>> base DN - base DN
> >>>> scope - scope of the search
> >>>> attrs - list of attributes to sync
> >>>> enabled - whether the rule is enabled
> >>>>
> >>>> we can define (but disable by default) rules for SUDO, HBAC, SELinux,
> >>>> Hosts etc. and tell the admin to enable and tweak.
> >>>>
> >>> This would be really complicated to manage. At the very least we need to
> >>> define what's the expected user experience and data set access, then we
> >>> can start devising technical solutions.
> >>>
> >>> What we really want to protect here, at the core, is user keys, for user
> >>> and services not located in the branch office. All the info that is
> >>> accessible from the branch office and is not super sensitive can always
> >>> be replicated. The cost of choosing what to replicate and what not
> >>> should be deferred to a later "enhancement". I would try to get the
> >>> mechanism right for the authentication subset first, and deal with more
> >>> fine-grained 'exclusions' later, once we solved that problem and gained
> >>> experience with at least one aspect of data segregation.
> >>>
> >>> Simo.
> >>>
> >> I do not see a problem and it looks like you do not see the solution
> >> that I see. So let me try again:
> >>
> >> 1) Consumer (remote office server) will sync in whatever user and other
> >> data that we decide is safe to sync (may be configurable but this is a
> >> separate question).
> >> 2) It will not sync the kerberos keys.
> >> 3) The authentication between clients and the Consumer will be done
> >> using LDAP; since eSSO is not a requirement, Kerberos tickets are not needed.
> >> 4) The consumer would use pam proxy to do the authentication against the
> >> real master on the client's behalf.
> >> 5) We will configure SSSD under the pam proxy to point to the real
> >> master. If access to the realm master is broken, authentications in
> >> the remote office would still succeed, as they would be performed
> >> against credentials cached by SSSD.
> >> Online case: client -> ldap auth -> Consumer -> pam proxy -> PAM ->
> >> pam_sss -> SSSD -> real master
> >> Offline case: client -> ldap auth -> Consumer -> pam proxy -> PAM ->
> >> pam_sss -> SSSD -> SSSD cached credentials
> >> 6) We can assume that the Consumer is the latest RHEL and thus it can
> >> leverage SSSD capabilities regardless of what OS or versions the actual
> >> clients in the remote office run.
> > Well, if we have to go through all these steps just to cache the user
> > password, we can as well simply replicate the userPassword attribute
> > for select users and just not the kerberos keys. Much simpler
> > architecture, and it gives you the same results w/o any chaining at all.
> 
> Even better.
> 
> >> If we need Kerberos SSO in the remote office this is a different use case.
> >> Does this make sense now?
> > Yes, but I was assuming Kerberos to be naturally necessary. If an
> > organization deploys kerberized services then naturally users will have
> > to use them. Case in point the Web UI for FreeIPA itself.
> >
> > Proxying pure LDAP is a solved problem, there are a number of
> > meta-directories that could probably also allow caching LDAP binds. I
> > think we want to concentrate on a solution that gives you the full
> > functionality of a freeIPA realm, but that's just me. I am not opposed
> > to a lesser solution, just not sure we should concentrate too many
> > energies into it.
> >
> > Simo.
> >
> I asked the question earlier in the thread. I said that there are two
> options: login server and eSSO server. Ondrej indicated that he wants to
> focus on the pure login server. This is why I continued exploring this
> option. IMO these are two different use cases and two different
> solutions that we should provide. One solution is a stripped-down DS replica
> like we discussed above.

Ok.

> The eSSO solution is more challenging. I agree.
> So far I think I have a way to solve the problem if we allow storing
> user passwords encrypted with symmetric keys in LDAP. This would allow
> us to decrypt them in memory and generate a kerberos hash as needed.
> If this is a non-starter I do not see a good way to provide a solution
> that allows someone to get Kerberos tickets in the remote office and does
> not replicate kerberos hashes.

I think it is a non-starter to store a clear-text password even if
reversibly encrypted, but I also think we do not need to do that at all.

> But let us assume that it is OK to encrypt user passwords with a long
> key (like the kerberos master key) and store encrypted passwords in another
> subtype of the userPassword attribute.
> So when a user password is set or updated such attribute is created.
> 
> Now admin decides to create a special "remote" replica. As a part of
> this operation a new master key, specific for that replica, will be
> generated. It is a different Kerberos key from the Kerberos key used for
> the rest of the masters in the domain.  This key is installed on the remote
> replica.
> As replication starts (first push) between some normal master and the
> remote replica, the replication plugin will detect that one side of this
> agreement is a normal master while the other side is a special remote
> server. In this case, instead of taking the userPassword attribute as is
> from the normal master, the replication plugin would take the decryptable
> userPassword, use the master key for the remote replica to generate a
> password hash on the fly, and inject it into the replication stream in
> place of the userPassword attribute stored on the real master.

Ok, this comes close to a proper solution, but not quite.
First of all, kerberos keys are already available on the master, so we do
not need to also store the clear-text password and regenerate them, but
we do need to be able to convey them to the replica wrapped in a
different master key.

Let me explain what I had in mind to attack this problem (I hinted at it
in a previous email).

The simplest way is to use a principal named after the branch office
replica as the 'local' master key and (possibly a different one as) the
'local' krbtgt key; this way the keys are available both to the FreeIPA
masters and to the branch replica.

Assuming a realm named IPA.COM, we would have principals named something
like krbtgt-branch1.ipa....@ipa.com

As for transmitting these keys we have 2 options.
a) similar to what you describe above, unwrap kerberos keys before
replication and re-wrap them in the branch office master key.
b) store multiple copies of the keys already wrapped with the various
branch office master keys so that the replication plugin doesn't need to
do expensive crypto but only select the right pair

With a) the disadvantages are that it is an expensive operation, and it
also makes it hard to deal with hubs (if we want to prevent hubs from
getting access to krb keys).

With b) the disadvantage is that you have to create all the extra copies
and keep them around. So any principal may end up having as many copies
as there are branch offices in the degenerate case. It also makes it
difficult to deal with master key changes (both the main master key and
the branch office master keys).

Both approaches complicate a bit the initial replica setup, but not
terribly so.
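
A minimal sketch of option b), under an assumed entry layout (the names
`wrapped_keys` and `select_keys_for_replica` are illustrative, not the
FreeIPA schema or API): the replication plugin does no crypto at all, it
only selects the pre-wrapped copy matching the target replica.

```python
# Sketch of option (b): each principal entry stores its Kerberos key
# pre-wrapped under every branch office master key; the replication
# plugin merely selects the right copy, doing no expensive crypto.
# Entry layout and attribute names are hypothetical.

def select_keys_for_replica(entry, replica_id):
    """Return a copy of the entry carrying only the key material the
    target replica is allowed to receive."""
    wrapped = entry.get("wrapped_keys", {})   # {wrapping-key id: opaque blob}
    out = {k: v for k, v in entry.items() if k != "wrapped_keys"}
    if replica_id in wrapped:
        # ship only the copy wrapped with this replica's own master key
        out["krbPrincipalKey"] = wrapped[replica_id]
    # hubs or unknown replicas receive no key material at all
    return out
```

In the degenerate case each entry carries one wrapped copy per branch
office, which is exactly the storage cost described above.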

> Using this approach we have several benefits:
> 
> 1) Remote office can enjoy the eSSO
> 2) If the remote replica is compromised there is nothing in it that can
> be used to attack the central server without cracking the kerberos hashes.
> 3) Authentication against the remote server would not be honored by
> resources in the central server, creating a confined environment
> 4) Remote office can operate with eSSO even if there is no connection to
> the central server.

Not so fast! :-)
User keys are not all that is needed.
Re-encrypting user keys is only the first (and simplest) step.
Then you have to deal with a) the 'krbtgt' key problem and b) the Ticket
Granting Service problem. They are solvable problems (also because MS
neatly demonstrated it can be solved in their RODC), but they require
some more work on the KDC side.

So problem (a): the krbtgt.
When a user in a branch office obtains a ticket from the branch office
KDC, it will get this ticket encrypted with the branch office krbtgt key,
which is different from the krbtgt key of the other branch offices or of
the main ipa servers.
This means that if the user then tries to obtain a ticket from another
KDC, this other KDC needs to be able to recognize that a different
krbtgt key was used and find out the right key to decrypt the user
ticket to verify its TGT is valid before it can proceed.
We also have a worse reverse problem, where a user obtained a krbtgt for
the ipa masters but then tries to get a service ticket from the branch
office TGS, which does not have access to the main krbtgt at all
(otherwise it could impersonate any user in the realm).
So for all these cases we need a mechanism in the KDC to i) recognize the
situation ii) find out if it can decrypt the user tgt iii) either
redirect the user to a TGS server that can decrypt the tgt or 'proxy'
the request to a KDC that can do that.
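
The decision in steps i)-iii) can be sketched as a toy model (the key
ids and return values below are made up for illustration):

```python
# Toy model of the branch KDC decision from steps i)-iii): recognize
# which krbtgt key the client's TGT was encrypted with, then either
# validate it locally or proxy the request to a KDC that holds the key.

LOCAL_KRBTGT_KEYS = {"krbtgt-branch1"}   # keys this branch KDC holds

def handle_tgs_request(tgt_key_id, local_keys=LOCAL_KRBTGT_KEYS):
    # i) recognize which krbtgt key was used for the presented TGT
    if tgt_key_id in local_keys:
        # ii) we can decrypt the TGT: validate it and serve locally
        return "serve-locally"
    # iii) we cannot decrypt it: proxy the request to a KDC that can,
    # rather than issuing a referral most clients would not follow
    return "proxy-to-main-kdc"
```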

The problem in (b) is of the same nature, but stems from the fact that
the branch office KDC will not have the keys for a lot (or all) of
the service principals of the various servers in the organization.
So after having solved problem (a), we again need to either redirect
the user to a TGS that has access to those keys so it can provide a
ticket, or we need to 'proxy' the ticket request to a server that can.


In general we should try to avoid 'redirects', because referrals
are not really well supported (and not really well standardized yet) by
many clients, and I am not sure they could actually convey the
information we need anyway, so trying to redirect would end up being
an inferior solution, as it would work only for the tiny subset of
clients that have 'enhanced' libraries.
OTOH proxying has a couple of technical problems we need to solve.
The first is that we need a protocol to handle that, but using something
derived from the s4u2 extensions may work out. The second problem is
that it requires network operations in the KDC, and our KDC is not
currently multithreaded. However, thanks to the recent changes to make
the KDC more async friendly, this may not be a problem after all: the
KDC could shelve the request until a timeout occurs or the remote KDC
replies with the answer we need.
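
The shelving could look roughly like this (an asyncio sketch under
assumed names, not the MIT KDC API):

```python
import asyncio

# Sketch of a single-threaded KDC "shelving" a request while it is
# proxied to a remote KDC: the event loop keeps serving other clients,
# and this request completes when the remote reply arrives or the
# timeout expires. Function names are illustrative.

async def proxy_request(forward, request, timeout=5.0):
    try:
        # park this request; other requests keep being processed
        return await asyncio.wait_for(forward(request), timeout)
    except asyncio.TimeoutError:
        return None   # remote KDC did not answer in time
```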


> So we get security and convenience at the same time.

I am all for it :)

> For the password changes, the remote server's kpasswd should be updated
> to proxy the password change to the real server.

We do not need this; kerberos differentiates between KDCs and kpasswd
servers, so all we need is to not start any kadmin/kpasswd server on the
branch replica and to set SRV records for kpasswd that point exclusively
to the masters.

>  So change will happen on
> the real master and then replicated back to the remote server using the
> logic described above.

Yes, but w/o the proxying :)

>  To avoid the latency we can generate the new
> password hash right away and save it locally without waiting for the
> full replication cycle to get back to the remote server.

Replication latency is another issue I was not going to address now, but
we will definitely need to deal with it somehow. The simplest way, when
SSSD is involved, is to point the krb libs temporarily to the master
where we just performed the password change, and revert back to the local
branch office only later. But I would defer thinking about this problem
for now.

> Remote server would not have a CA.

Agreed.

> It might have DNS.

Agreed.

> There will be one
> way replication agreement from the central server to that remote server
> and the ACIs would be configured in such a way that no change can be
> done locally except probably local failure counts and password update by
> KDC.

We may need some other changes, but this is a technicality; we will
start as strict as possible and relax when needed.

>  The data replicated will be controlled by the flexible set of
> filters that can be controlled by the admin when the replication
> agreement is defined as we discussed earlier in the thread.

Yeah, we should make it simple at the start; I would probably default to
replicate almost everything except krb keys. If people have a valid need
for replicating less we will hear soon enough. But I would avoid getting
too fancy with fractional replication because LDAP clients are generally
not very smart and we do not want to have them broken by default because
they were not able to chase referrals.
Also a branch office is effectively a cache for most purposes so we
should copy locally as much as possible.

> If the remote office needs more than one server for redundancy purpose
> another remote server can be created in the same way. Since there are no
> data updates, there is no need to replicate anything between the two
> remote servers.

True, but to keep bandwidth consumption down it may be desirable to have
one replicate from the central masters and the other from the first
server. The krbtgt and master keys will not be a real problem, as we can
define these keys per location instead of per server, so multiple
servers in one location would use the same set of keys (which is what
you want anyway, so clients can use either server w/o issues caused by
different krbtgt keys per server).

> Seems like a decent approach.
> In future there might also be an option to use one way domains trusts
> and subdomains but I am not exactly sure how that might work.

I wasn't going to mention it, but using a "subdomain" would make the
kerberos part easier to manage to some degree, at least from the pov of
providing tickets for services not in the branch replica, as it would
basically give a way to 'redirect' clients by pretending the masters
that have the keys are another realm. However, this introduces a
different set of issues I am not going to delve into deeply now, but
that I think would make the problem more irksome to solve.

As for 'real' subdomains, I am not sure I see much value in
differentiating between a 'subdomain' and a separate normal trusted
realm that just happens to live in a DNS subdomain.

Simo.

-- 
Simo Sorce * Red Hat, Inc * New York

_______________________________________________
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel
