On Fri, 30 May 2014, Sumit Bose wrote:
On Fri, May 30, 2014 at 12:23:53AM -0400, Dmitri Pal wrote:
On 05/29/2014 01:31 PM, Simo Sorce wrote:
>On Thu, 2014-05-29 at 18:50 +0200, Petr Spacek wrote:
>>On 29.5.2014 13:48, Sumit Bose wrote:
>>>== slapi-nis plugin/compat tree ==
>>>The compat tree offers a simplified LDAP tree with user and group data
>>>for legacy clients. No data for this tree is stored on disk but it is
>>>always created on the fly. It has to be noted that legacy clients might
>>>be one of the major users of the user-views because chances are that
>>>they were attached to the legacy systems with legacy ID management which
>>>should be replaced by IPA.
>>>
>>>In contrast to the extdom plugin it is not possible to determine the
>>>client based on the DN because the connection might be anonymous. The
>>>Slapi_PBlock contains the IP address of the client in
>>>SLAPI_CONN_CLIENTNETADDR. Finding the matching client object in the IPA
>>>tree requires a reverse-DNS lookup which might be unreliable. If the
>>>reverse-DNS lookup is successful, the slapi-nis plugin can follow the
>>>same steps as the extdom plugin to look up and apply the view.
>>Do we really want to base security decisions on reverse DNS resolution?
>No we do not want to play these games.
>
>>That
>>would be insecure. An attacker could tamper with reverse DNS to change
>>UID/GID mappings ... Maybe we can store an IP->view mapping in the LDAP
>>database. That should be reliable if we assume that only TCP is used for
>>connections to the LDAP database.
>It is not just about it being insecure, it is about it being wrong.
>As soon as you have a bunch of clients behind a NAT this plan goes belly
>up.
>
>>>As an alternative, slapi-nis can provide one tree for each view.
>This is the only alternative, if we decide to pursue it.
>
>Simo.
>
Can we at least do something like CoS and use the base compat tree,
overwriting just uid/gid on the fly, instead of building a whole separate
view? That would reduce the size of the additional views significantly and
would save the cycles spent keeping each view in sync with the underlying
DB. In that case there would still be one view, with a dynamic overwrite
applied to the search results.
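A rough sketch of what such a CoS-style on-the-fly override could look like. This is hypothetical Python for illustration only (the real plugin would be C inside 389-ds), and all names here (BASE_ENTRY, VIEW_OVERRIDES, apply_view) are invented, not FreeIPA APIs:

```python
# Hypothetical sketch: merge per-view uid/gid overrides into a base
# compat-tree entry at search time, instead of materializing a full
# copy of the tree for every view.

BASE_ENTRY = {
    "uid": "foo",
    "uidNumber": "12344556",
    "gidNumber": "12344556",
}

# Only the attributes that differ per view are stored.
VIEW_OVERRIDES = {
    "legacy_client1": {"uidNumber": "110000", "gidNumber": "110001"},
}

def apply_view(entry, view):
    """Return the entry with the view's overrides merged in;
    an unknown view falls back to the default values."""
    merged = dict(entry)
    merged.update(VIEW_OVERRIDES.get(view, {}))
    return merged

print(apply_view(BASE_ENTRY, "legacy_client1")["uidNumber"])  # → 110000
```

The point of the sketch is that only the delta per view needs storage; the base entry is shared.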


If we do not want to support all configured views, what about making it
configurable which views are delivered as separate trees by slapi-nis?
I do not know much about the slapi-nis internals, but I could imagine that
the memory requirements of a two-layer cache (one global cache for the data
from SSSD (the default view) and one per view with the override data)
might be an issue. It would be nice if Alexander or Nalin could explain
some of the current bottlenecks in slapi-nis, to see where things would
get worse when supporting user-views in multiple trees.
Pretty bad.

slapi-nis only serves the entries that were not found via the normal
search path. This implies there is always the cost of a search over the
main tree. If a base DN is too generic, slapi-nis will happily return
everything that is cached and fits the filter.

Now, we cannot rely on client connection properties to segregate the
connections into different cached sub-trees. Reverse DNS is bad, and IP
address handling is unreliable. The only way to differentiate would be
to have a different base DN supplied by each client, maybe in the form of
a multi-valued RDN (cn=compat+view=viewname,$SUFFIX). However, that would
add complexity to the slapi-nis map cache: data is evaluated once and
then inserted into the map cache, while in this case a different view
would mean the need to re-evaluate part of the entry or partially modify
the returned object on the fly.
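Extracting the view name from such a multi-valued RDN is straightforward. A minimal sketch in Python (the function name and the naive string-based DN parsing are mine, for illustration; real plugin code would use the slapi DN helpers):

```python
def view_from_base_dn(base_dn):
    """Extract the view name from a base DN whose first RDN is
    multi-valued, e.g. cn=compat+view=legacy_client1,dc=example,dc=com.
    Returns None when no view= component is present.

    Naive split-based parsing; assumes no escaped ',' or '+' in values.
    """
    rdn = base_dn.split(",", 1)[0]            # first RDN only
    for ava in rdn.split("+"):                # attribute=value pairs
        attr, _, value = ava.partition("=")
        if attr.strip().lower() == "view":
            return value
    return None

print(view_from_base_dn("cn=compat+view=legacy_client1,dc=example,dc=com"))
```

With this, each legacy client opts into a view simply by the search base it is configured with, with no reliance on connection properties.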

Technically, the latter is possible. Suppose that on the first hit to
slapi-nis for uid=foo we insert its entry into the map cache, where each
uidNumber/gidNumber has the view name appended:

uidNumber: 12344556
gidNumber: 12344556
uidNumber_legacy_client1: 110000
gidNumber_legacy_client1: 110001
uidNumber_legacy_client2: 210000
gidNumber_legacy_client2: 210001
uidNumber_legacy_client3: 310000
gidNumber_legacy_client3: 310001

We would then filter the resulting object fetched from the map cache
before pushing it out to the client.
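That filtering step could look roughly like this. Again a hypothetical Python sketch (the cache layout mirrors the example entry above; the function and attribute-naming convention are assumptions, not slapi-nis code):

```python
# Hypothetical sketch: a map-cache entry stores per-view overrides as
# view-suffixed attributes; before returning it to a client, strip all
# suffixed attributes and let the requested view's values win.

CACHED = {
    "uid": "foo",
    "uidNumber": "12344556",
    "gidNumber": "12344556",
    "uidNumber_legacy_client1": "110000",
    "gidNumber_legacy_client1": "110001",
    "uidNumber_legacy_client2": "210000",
    "gidNumber_legacy_client2": "210001",
}

def filter_for_view(cached, view):
    """Produce the object a client in the given view should see."""
    out = {}
    for attr, value in cached.items():
        base, _, suffix = attr.partition("_")
        if not suffix:
            out.setdefault(attr, value)   # default value, unless already overridden
        elif suffix == view:
            out[base] = value             # this view's override wins
    return out

print(filter_for_view(CACHED, "legacy_client2"))
```

A client in legacy_client2 would see uidNumber 210000 / gidNumber 210001, while the suffixed attributes never leave the server.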

The main trouble here is that we have to post-process the data in the map
cache, and also that we'll have more triggers to invalidate cached
objects, including ones coming from NSS requests, which officially have
no DN in the main tree to attach a trigger to.


Maybe running a separate 389ds instance for the user-views might be an
alternative here as well? Alexander's work for the Global Catalog might
come in handy here because it sets up and configures a separate instance
which reads data from the main one.
I don't think that makes the problem any easier, as we really have to
solve the issue of distinguishing client connections first. Whether the
data is served by a separate instance or not is irrelevant here.
--
/ Alexander Bokovoy

_______________________________________________
Freeipa-devel mailing list
Freeipa-devel@redhat.com
https://www.redhat.com/mailman/listinfo/freeipa-devel
