Re: [Freeipa-devel] [PATCH] 0481 permission-find: Cache the root entry for legacy permissions
On 03/10/2014 12:05 PM, Petr Viktorin wrote: On 03/07/2014 04:45 PM, Martin Kosek wrote: On 02/28/2014 03:51 PM, Petr Viktorin wrote: Hello, this reduces LDAP searches in permission-find when there are legacy permissions. The root entry (which contains all legacy permission ACIs) is only looked up once. There is a conflict on one line, but when I manually resolved it, the patch worked for me. We went from 176 ops per ipa permission-find to ~96. This should be OK for now. Martin. I don't see the conflict. Perhaps I mistakenly based this patch on something that's now pushed (though this applies cleanly to master from a week ago, too...). Could you check again? Ok, I probably simply applied your permission fixes in the wrong order. ACK. Pushed to master: 34c3d309d99d0ebe5eb0b935d356e30d8866c139 Martin ___ Freeipa-devel mailing list Freeipa-devel@redhat.com https://www.redhat.com/mailman/listinfo/freeipa-devel
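The fix discussed above amounts to memoizing one expensive LDAP lookup across the whole permission-find run. A minimal sketch of that pattern (class and callable names here are illustrative, not FreeIPA's actual API):

```python
# Hypothetical sketch: look up the root entry (which holds all legacy
# permission ACIs) only once, instead of once per legacy permission.
class LegacyPermissionFinder:
    def __init__(self, fetch_root):
        self._fetch_root = fetch_root   # callable performing the LDAP search
        self._root_entry = None         # cached result

    def root_entry(self):
        # First call hits LDAP; subsequent calls reuse the cached entry.
        if self._root_entry is None:
            self._root_entry = self._fetch_root()
        return self._root_entry
```

Every legacy permission processed by the find command then calls `root_entry()` and pays the search cost only once, which is where the drop from 176 to ~96 operations comes from.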
Re: [Freeipa-devel] DNSSEC: upgrade path to Vault
On 10.3.2014 12:08, Martin Kosek wrote: On 03/10/2014 11:49 AM, Petr Spacek wrote: On 7.3.2014 17:33, Dmitri Pal wrote: I do not think it is the right architectural approach to try to fix a specific use case with a one-off solution while we already know that we need a key storage. I would rather do things right and reusable than jam them into the currently proposed release boundaries. I want to make clear that I'm not opposed to Vault in general. I'm questioning the necessity of Vault from the beginning because it will delay DNSSEC significantly. +1, I also now see a number of scenarios where Vault will be needed. One of the proposals in this thread is to use something simple for DNSSEC keys (with a few lines of Python code) and migrate DNSSEC to Vault when Vault is available and stable enough (in some later release). I understand that Vault brings a lot of work to the table. But let us do it right, and if it does not fit into 4.0, let us do it in 4.1. We will have one huge release with DNSSEC + Vault at once if we were to postpone DNSSEC to the same release as Vault. As a result, it would be harder to debug, because we will have to find out whether something is broken because of:
- DNSSEC-IPA integration
- Vault-IPA integration
- DNSSEC-Vault integration.
I don't think it is a good idea to make such a huge release. Release early, release often. I must say I tend to agree with Petr. If the poor man's solution of DNSSEC without Vault does not cost us too much time, and it would seem that Vault is not going to squeeze into the 4.0 deadlines, I would rather release the poor man's solution in 4.0 and switch to Vault when it's available in 4.1. This would let our users test the non-Vault parts of our DNSSEC solution instead of waiting to test a perfect solution. Yesterday we agreed that DNSSEC support is not going to depend on Vault from the beginning and that we can migrate to Vault later. Here I'm proposing a safe upgrade path from the non-Vault to the Vault solution. 
After all, it seems relatively easy.

TL;DR version: use information in cn=masters to detect if there are old replicas, and temporarily convert new keys from Vault to LDAP storage (until all old replicas are deleted).

Full version: IPA 4.0 is going to have the OpenDNSSEC key generator on a single IPA server and a separate key import/export daemon (i.e. a script called from cron or something like that) on all IPA servers. In 4.0, we can add new LDAP objects for DNSSEC-related IPA services (please propose better names :-):
- key generator: cn=OpenDNSSECv1,cn=vm.example.com,cn=masters,cn=ipa,cn=etc,dc=ipa,dc=example
- key importer/exporter: cn=DNSSECKeyImporterv1,cn=vm.example.com,cn=masters,cn=ipa,cn=etc,dc=ipa,dc=example

Initial state before upgrade:
- N IPA 4.0 replicas
- N DNSSECKeyImporterv1 service instances (i.e. key distribution for IPA 4.0)
- 1 OpenDNSSECv1 service instance (key generator)

Now we want to upgrade the first replica to IPA 4.1. For simplicity, let's add a *requirement* to upgrade the replica with OpenDNSSECv1 first. (We can generalize the procedure if this requirement is not acceptable.)

Upgrade procedure:
- Stop the OpenDNSSECv1 service.
- Stop the DNSSECKeyImporterv1 service.
- Convert the OpenDNSSECv1 database to OpenDNSSECv2. This step is not related to Vault; we need to convert the local SQLite database from single-node OpenDNSSEC to LDAP-backed distributed OpenDNSSEC.
- Convert private keys from LDAP to Vault, *but leave them in LDAP for a while*.
- Walk through cn=masters,cn=ipa,cn=etc,dc=ipa,dc=example and check if there are any other replicas with the DNSSECKeyImporterv1 service:
  a) No such replica exists: delete old-fashioned keys from LDAP.
  b) Another replica with the DNSSECKeyImporterv1 service exists: *temporarily* run DNSSECKeyImporterv2, which will do one-way key conversion from Vault to LDAP. DNSSECKeyImporterv2 can check, e.g. daily, whether all DNSSECKeyImporterv1 instances have been deleted. 
Then it can delete old-fashioned keys from LDAP and also stop itself when all old replicas have been deleted (and compatibility mode is not needed anymore). This approach removes time constraints from the upgrade procedure and prevents DNS servers from failing when the update is delayed, etc. As a result, an admin can upgrade replica-by-replica at will. -- Petr^2 Spacek
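The cn=masters check at the heart of the upgrade procedure reduces to one decision: keep compatibility mode (the Vault-to-LDAP one-way conversion) running while any replica still advertises the old importer service. A sketch of that decision, with the entry layout simplified to a plain mapping (illustrative, not FreeIPA's actual data model):

```python
# Hypothetical sketch of the compatibility-mode check described above.
# master_services maps a master's FQDN to the set of service cn values
# found under cn=<fqdn>,cn=masters,cn=ipa,cn=etc,<suffix>.
def compat_mode_needed(master_services):
    """Return True while any replica still runs DNSSECKeyImporterv1."""
    return any('DNSSECKeyImporterv1' in services
               for services in master_services.values())
```

DNSSECKeyImporterv2's daily check would evaluate this and, once it returns False, delete the old-fashioned keys from LDAP and stop itself.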
Re: [Freeipa-devel] DNSSEC: upgrade path to Vault
On 03/11/2014 11:33 AM, Petr Spacek wrote: On 10.3.2014 12:08, Martin Kosek wrote: On 03/10/2014 11:49 AM, Petr Spacek wrote: On 7.3.2014 17:33, Dmitri Pal wrote: I do not think it is the right architectural approach to try to fix a specific use case with a one-off solution while we already know that we need a key storage. I would rather do things right and reusable than jam them into the currently proposed release boundaries. I want to make clear that I'm not opposed to Vault in general. I'm questioning the necessity of Vault from the beginning because it will delay DNSSEC significantly. +1, I also now see a number of scenarios where Vault will be needed. One of the proposals in this thread is to use something simple for DNSSEC keys (with a few lines of Python code) and migrate DNSSEC to Vault when Vault is available and stable enough (in some later release). I understand that Vault brings a lot of work to the table. But let us do it right, and if it does not fit into 4.0, let us do it in 4.1. We will have one huge release with DNSSEC + Vault at once if we were to postpone DNSSEC to the same release as Vault. As a result, it would be harder to debug, because we will have to find out whether something is broken because of:
- DNSSEC-IPA integration
- Vault-IPA integration
- DNSSEC-Vault integration.
I don't think it is a good idea to make such a huge release. Release early, release often. I must say I tend to agree with Petr. If the poor man's solution of DNSSEC without Vault does not cost us too much time, and it would seem that Vault is not going to squeeze into the 4.0 deadlines, I would rather release the poor man's solution in 4.0 and switch to Vault when it's available in 4.1. This would let our users test the non-Vault parts of our DNSSEC solution instead of waiting to test a perfect solution. Yesterday we agreed that DNSSEC support is not going to depend on Vault from the beginning and that we can migrate to Vault later. 
Here I'm proposing a safe upgrade path from the non-Vault to the Vault solution. After all, it seems relatively easy. TL;DR version: use information in cn=masters to detect if there are old replicas, and temporarily convert new keys from Vault to LDAP storage (until all old replicas are deleted). Full version: IPA 4.0 is going to have the OpenDNSSEC key generator on a single IPA server and a separate key import/export daemon (i.e. a script called from cron or something like that) on all IPA servers. In 4.0, we can add new LDAP objects for DNSSEC-related IPA services (please propose better names :-):
- key generator: cn=OpenDNSSECv1,cn=vm.example.com,cn=masters,cn=ipa,cn=etc,dc=ipa,dc=example

cn=DNSSEC,cn=vm.example.com,cn=masters,cn=ipa,cn=etc,dc=ipa,dc=example -- DNSSEC will be translated by FreeIPA to the appropriate service name. This can vary between platforms. v1 can be an attribute of the entry; I would not add it to the name.

- key importer/exporter: cn=DNSSECKeyImporterv1,cn=vm.example.com,cn=masters,cn=ipa,cn=etc,dc=ipa,dc=example

I am thinking it may be sufficient to have just: cn=DNSSEC,cn=vm.example.com,cn=masters,cn=ipa,cn=etc,dc=ipa,dc=example for all DNSSEC-empowered masters and then just have: ipaConfigString: keygenerator ... in the master VM. I am just trying to be future-agnostic and avoid hardcoding service names and implementation details in cn=masters. FreeIPA on the master would know which services to run depending on whether it is a key generator or not.

Initial state before upgrade:
- N IPA 4.0 replicas
- N DNSSECKeyImporterv1 service instances (i.e. key distribution for IPA 4.0)
- 1 OpenDNSSECv1 service instance (key generator)

Now we want to upgrade the first replica to IPA 4.1. For simplicity, let's add a *requirement* to upgrade the replica with OpenDNSSECv1 first. (We can generalize the procedure if this requirement is not acceptable.) 
Upgrade procedure:
- Stop the OpenDNSSECv1 service.
- Stop the DNSSECKeyImporterv1 service.
- Convert the OpenDNSSECv1 database to OpenDNSSECv2. This step is not related to Vault; we need to convert the local SQLite database from single-node OpenDNSSEC to LDAP-backed distributed OpenDNSSEC.
- Convert private keys from LDAP to Vault, *but leave them in LDAP for a while*.
- Walk through cn=masters,cn=ipa,cn=etc,dc=ipa,dc=example and check if there are any other replicas with the DNSSECKeyImporterv1 service:

In my proposal, one would just search for cn=DNSSEC,cn=*,cn=masters... with the filter (ipaConfigString=version 1).

a) No such replica exists: delete old-fashioned keys from LDAP.
b) Another replica with the DNSSECKeyImporterv1 service exists: *temporarily* run DNSSECKeyImporterv2, which will do one-way key conversion from Vault to LDAP. DNSSECKeyImporterv2 can check, e.g. daily, whether all DNSSECKeyImporterv1 instances have been deleted. Then it can delete old-fashioned keys from LDAP and also stop itself when all old replicas have been deleted (and compatibility mode is not
Re: [Freeipa-devel] DNSSEC: upgrade path to Vault
On 11.3.2014 12:21, Martin Kosek wrote: On 03/11/2014 11:33 AM, Petr Spacek wrote: On 10.3.2014 12:08, Martin Kosek wrote: On 03/10/2014 11:49 AM, Petr Spacek wrote: On 7.3.2014 17:33, Dmitri Pal wrote: I do not think it is the right architectural approach to try to fix a specific use case with a one-off solution while we already know that we need a key storage. I would rather do things right and reusable than jam them into the currently proposed release boundaries. I want to make clear that I'm not opposed to Vault in general. I'm questioning the necessity of Vault from the beginning because it will delay DNSSEC significantly. +1, I also now see a number of scenarios where Vault will be needed. One of the proposals in this thread is to use something simple for DNSSEC keys (with a few lines of Python code) and migrate DNSSEC to Vault when Vault is available and stable enough (in some later release). I understand that Vault brings a lot of work to the table. But let us do it right, and if it does not fit into 4.0, let us do it in 4.1. We will have one huge release with DNSSEC + Vault at once if we were to postpone DNSSEC to the same release as Vault. As a result, it would be harder to debug, because we will have to find out whether something is broken because of:
- DNSSEC-IPA integration
- Vault-IPA integration
- DNSSEC-Vault integration.
I don't think it is a good idea to make such a huge release. Release early, release often. I must say I tend to agree with Petr. If the poor man's solution of DNSSEC without Vault does not cost us too much time, and it would seem that Vault is not going to squeeze into the 4.0 deadlines, I would rather release the poor man's solution in 4.0 and switch to Vault when it's available in 4.1. This would let our users test the non-Vault parts of our DNSSEC solution instead of waiting to test a perfect solution. Yesterday we agreed that DNSSEC support is not going to depend on Vault from the beginning and that we can migrate to Vault later. 
Here I'm proposing a safe upgrade path from the non-Vault to the Vault solution. After all, it seems relatively easy. TL;DR version: use information in cn=masters to detect if there are old replicas, and temporarily convert new keys from Vault to LDAP storage (until all old replicas are deleted). Full version: IPA 4.0 is going to have the OpenDNSSEC key generator on a single IPA server and a separate key import/export daemon (i.e. a script called from cron or something like that) on all IPA servers. In 4.0, we can add new LDAP objects for DNSSEC-related IPA services (please propose better names :-):
- key generator: cn=OpenDNSSECv1,cn=vm.example.com,cn=masters,cn=ipa,cn=etc,dc=ipa,dc=example

cn=DNSSEC,cn=vm.example.com,cn=masters,cn=ipa,cn=etc,dc=ipa,dc=example -- DNSSEC will be translated by FreeIPA to the appropriate service name. This can vary between platforms. v1 can be an attribute of the entry; I would not add it to the name.

- key importer/exporter: cn=DNSSECKeyImporterv1,cn=vm.example.com,cn=masters,cn=ipa,cn=etc,dc=ipa,dc=example

I am thinking it may be sufficient to have just: cn=DNSSEC,cn=vm.example.com,cn=masters,cn=ipa,cn=etc,dc=ipa,dc=example for all DNSSEC-empowered masters and then just have: ipaConfigString: keygenerator ... in the master VM. I am just trying to be future-agnostic and avoid hardcoding service names and implementation details in cn=masters. FreeIPA on the master would know which services to run depending on whether it is a key generator or not.

Initial state before upgrade:
- N IPA 4.0 replicas
- N DNSSECKeyImporterv1 service instances (i.e. key distribution for IPA 4.0)
- 1 OpenDNSSECv1 service instance (key generator)

Now we want to upgrade the first replica to IPA 4.1. For simplicity, let's add a *requirement* to upgrade the replica with OpenDNSSECv1 first. (We can generalize the procedure if this requirement is not acceptable.) 
Upgrade procedure:
- Stop the OpenDNSSECv1 service.
- Stop the DNSSECKeyImporterv1 service.
- Convert the OpenDNSSECv1 database to OpenDNSSECv2. This step is not related to Vault; we need to convert the local SQLite database from single-node OpenDNSSEC to LDAP-backed distributed OpenDNSSEC.
- Convert private keys from LDAP to Vault, *but leave them in LDAP for a while*.
- Walk through cn=masters,cn=ipa,cn=etc,dc=ipa,dc=example and check if there are any other replicas with the DNSSECKeyImporterv1 service:

In my proposal, one would just search for cn=DNSSEC,cn=*,cn=masters... with the filter (ipaConfigString=version 1).

Why not :-) I do not care as long as it is unambiguous.

a) No such replica exists: delete old-fashioned keys from LDAP.
b) Another replica with the DNSSECKeyImporterv1 service exists: *temporarily* run DNSSECKeyImporterv2, which will do one-way key conversion from Vault to LDAP. DNSSECKeyImporterv2 can check, e.g. daily, whether all DNSSECKeyImporterv1 instances have been deleted. Then it can delete old-fashioned keys from LDAP and also stop itself when all old replicas have been deleted (and
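Martin's variant of the check can be expressed as a single LDAP search under cn=masters. A sketch of how the search base and filter would be built (the suffix and the exact `ipaConfigString` value follow the thread's examples; the helper itself is illustrative):

```python
# Sketch of the search Martin proposes: one cn=DNSSEC service entry per
# master, with the protocol version kept in ipaConfigString instead of
# being encoded in the entry's name.
def dnssec_v1_search(suffix='dc=ipa,dc=example'):
    """Return (base, filter) finding masters still on the v1 importer."""
    base = 'cn=masters,cn=ipa,cn=etc,' + suffix
    # Matches cn=DNSSEC entries that still advertise "version 1".
    filt = '(&(cn=DNSSEC)(ipaConfigString=version 1))'
    return base, filt
```

An empty result set for this search would mean no old replicas remain and the compatibility keys can be deleted from LDAP.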
Re: [Freeipa-devel] [PATCH 0044] Periodically refresh global ipa-kdb configuration
On Mon, Feb 24, 2014 at 02:26:27PM -0500, Nathaniel McCallum wrote: Before this patch, ipa-kdb would load global configuration on startup and never update it. This means that if the global configuration is changed, the KDC never receives the new configuration until it is restarted. This patch enables caching of the global configuration with a timeout of 60 seconds. https://fedorahosted.org/freeipa/ticket/4153

From 7daeae56671d7b3049b0341aad66c96877431bbe Mon Sep 17 00:00:00 2001
From: Nathaniel McCallum npmccal...@redhat.com
Date: Mon, 24 Feb 2014 14:19:13 -0500
Subject: [PATCH] Periodically refresh global ipa-kdb configuration

Before this patch, ipa-kdb would load global configuration on startup and never update it. This means that if the global configuration is changed, the KDC never receives the new configuration until it is restarted. This patch enables caching of the global configuration with a timeout of 60 seconds. https://fedorahosted.org/freeipa/ticket/4153

I have only read the code and it looks sane, so depending on how much you consider my word about code-reading worth, ack. However, my gut feeling is that my preferred way of handling the issue (without knowing much about the background of the ticket) would probably be a HUP signal handler or something similar, rather than polling for current values with a timeout. This patch introduces a small nondeterminism into the behaviour, in that the use of the new values cannot really be enforced by the admin (without a daemon restart). -- Jan Pazdziora, Principal Software Engineer, Identity Management Engineering, Red Hat
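The 60-second refresh being discussed is a plain time-based cache. A sketch of the control flow (in Python for brevity; the real ipa-kdb code is C, and all names here are illustrative):

```python
import time

# Hypothetical sketch of the timeout-based refresh: serve the cached
# global configuration until 60 seconds have elapsed, then re-read it.
class GlobalConfigCache:
    def __init__(self, load, timeout=60, clock=time.monotonic):
        self._load = load          # callable that fetches config from LDAP
        self._timeout = timeout    # seconds before a re-read is forced
        self._clock = clock        # injectable for testing
        self._value = None
        self._stamp = None

    def get(self):
        now = self._clock()
        if self._stamp is None or now - self._stamp >= self._timeout:
            self._value = self._load()   # refresh from the backend
            self._stamp = now
        return self._value
```

This also makes Jan's objection concrete: between refreshes the KDC may serve up to 60 seconds of stale configuration, and an admin cannot force an earlier reload short of restarting the daemon.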
Re: [Freeipa-devel] [PATCH 0044] Periodically refresh global ipa-kdb configuration
On Tue, 11 Mar 2014, Jan Pazdziora wrote: On Mon, Feb 24, 2014 at 02:26:27PM -0500, Nathaniel McCallum wrote: Before this patch, ipa-kdb would load global configuration on startup and never update it. This means that if the global configuration is changed, the KDC never receives the new configuration until it is restarted. This patch enables caching of the global configuration with a timeout of 60 seconds. https://fedorahosted.org/freeipa/ticket/4153 From 7daeae56671d7b3049b0341aad66c96877431bbe Mon Sep 17 00:00:00 2001 From: Nathaniel McCallum npmccal...@redhat.com Date: Mon, 24 Feb 2014 14:19:13 -0500 Subject: [PATCH] Periodically refresh global ipa-kdb configuration Before this patch, ipa-kdb would load global configuration on startup and never update it. This means that if the global configuration is changed, the KDC never receives the new configuration until it is restarted. This patch enables caching of the global configuration with a timeout of 60 seconds. https://fedorahosted.org/freeipa/ticket/4153 I have only read the code and it looks sane, so depending on how much you consider my word about code-reading worth, ack. However, my gut feeling is that my preferred way of handling the issue (without knowing much about the background of the ticket) would probably be a HUP signal handler or something similar, rather than polling for current values with a timeout. This patch introduces a small nondeterminism into the behaviour, in that the use of the new values cannot really be enforced by the admin (without a daemon restart).

Thing is, we need the update to happen when another, non-root process makes the changes -- in our case, when the IPA server running under httpd privileges performs a series of MS-RPC calls towards smbd which lead to creating certain LDAP objects. httpd couldn't send SIGHUP to a process not owned by httpd's effective user (non-root). 
-- / Alexander Bokovoy
Re: [Freeipa-devel] [PATCH] 0471 permission_add: Remove permission entry if adding the ACI fails
On Fri, Feb 21, 2014 at 03:30:22PM +0100, Petr Viktorin wrote: Hello, a permission object was not removed in permission-add when adding the ACI failed. Here is a fix. https://fedorahosted.org/freeipa/ticket/4187 Earlier we agreed that patch authors should bug the reviewer. I guess now this means I should set Patch-review-by in the ticket, right? So: Martin, you reviewed the other ACI patches, so I think you should continue. If you don't agree, unset the field in the ticket. -- Petr³

From 5ad2066b71b09248d348a5c4c85ef2ace0c553a4 Mon Sep 17 00:00:00 2001
From: Petr Viktorin pvikt...@redhat.com
Date: Fri, 21 Feb 2014 13:58:15 +0100
Subject: [PATCH] permission_add: Remove permission entry if adding the ACI fails

https://fedorahosted.org/freeipa/ticket/4187
---
 ipalib/plugins/permission.py                   | 15 ++-
 ipatests/test_xmlrpc/test_permission_plugin.py | 25 +
 2 files changed, 39 insertions(+), 1 deletion(-)

diff --git a/ipalib/plugins/permission.py b/ipalib/plugins/permission.py
index 64deb99ef98583daf0419a240aa8852b0262874d..cb6f18b478735920bbf6cef4febc91481631c560 100644
--- a/ipalib/plugins/permission.py
+++ b/ipalib/plugins/permission.py
@@ -812,7 +812,20 @@ def pre_callback(self, ldap, dn, entry, attrs_list, *keys, **options):
         return dn

     def post_callback(self, ldap, dn, entry, *keys, **options):
-        self.obj.add_aci(entry)
+        try:
+            self.obj.add_aci(entry)
+        except Exception:
+            # Adding the ACI failed.
+            # We want to be 100% sure the ACI is not there, so try to
+            # remove it. (This is a no-op if the ACI was not added.)
+            self.obj.remove_aci(entry)
+            # Remove the entry
+            try:
+                self.api.Backend['ldap2'].delete_entry(entry)
+            except errors.NotFound:
+                pass
+            # Re-raise original exception
+            raise
         self.obj.postprocess_result(entry, options)
         return dn

I'm not totally happy about this patch. What happens when the ACI is already in LDAP and some part of that self.obj.add_aci(entry) operation fails? Won't you go and, instead of doing a no-op, remove the ACI instead? 
-- Jan Pazdziora, Principal Software Engineer, Identity Management Engineering, Red Hat
[Freeipa-devel] [PATCH] 0148: ipa-sam: when deleting subtree, deal with possible LDAP errors
Hi, after discussing with Petr Spacek, the following patch fixes ticket 4224. -- / Alexander Bokovoy

From 83803494757e078c3a2850ddbb5eb886fd067dd1 Mon Sep 17 00:00:00 2001
From: Alexander Bokovoy aboko...@redhat.com
Date: Tue, 11 Mar 2014 16:28:12 +0200
Subject: [PATCH 3/3] ipa-sam: when deleting subtree make sure to deal with LDAP failures

https://fedorahosted.org/freeipa/ticket/4224
---
 daemons/ipa-sam/ipa_sam.c | 13 +++-
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/daemons/ipa-sam/ipa_sam.c b/daemons/ipa-sam/ipa_sam.c
index 1ca504d..7a8eeb4 100644
--- a/daemons/ipa-sam/ipa_sam.c
+++ b/daemons/ipa-sam/ipa_sam.c
@@ -2456,10 +2456,16 @@ static int delete_subtree(struct ldapsam_privates *ldap_state, char* dn)
 	rc = smbldap_search(ldap_state->smbldap_state, dn, scope, filter, NULL, 0, &result);
 	TALLOC_FREE(filter);
-	if (result != NULL) {
-		smbldap_talloc_autofree_ldapmsg(dn, result);
+	if (rc != LDAP_SUCCESS) {
+		return rc;
 	}
+	if (result == NULL) {
+		return LDAP_NO_MEMORY;
+	}
+
+	smbldap_talloc_autofree_ldapmsg(dn, result);
+
 	for (entry = ldap_first_entry(state, result);
 	     entry != NULL;
 	     entry = ldap_next_entry(state, entry)) {
@@ -2467,6 +2473,9 @@ static int delete_subtree(struct ldapsam_privates *ldap_state, char* dn)
 		/* remove child entries */
 		if ((entry_dn != NULL) && (strcmp(entry_dn, dn) != 0)) {
 			rc = smbldap_delete(ldap_state->smbldap_state, entry_dn);
+			if (rc != LDAP_SUCCESS) {
+				return rc;
+			}
 		}
 	}
 	rc = smbldap_delete(ldap_state->smbldap_state, dn);
--
1.8.3.1
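The patch's intent is a control-flow change: stop ignoring LDAP failures and propagate the first error instead of blindly continuing to delete. A language-neutral sketch of the corrected flow (in Python; the real code is C in ipa_sam.c, and the callable names here are illustrative):

```python
# Sketch of the corrected delete_subtree() control flow: any failure in
# the search or in a child delete now propagates to the caller instead
# of being silently ignored.
def delete_subtree(search, delete, dn):
    entries = search(dn)             # may raise on LDAP search failure
    for entry_dn in entries:
        if entry_dn != dn:
            delete(entry_dn)         # child delete errors propagate now
    delete(dn)                       # finally remove the subtree root itself
```

Children are removed before the root, and the first failing delete aborts the whole operation, matching the `if (rc != LDAP_SUCCESS) return rc;` additions in the patch.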
Re: [Freeipa-devel] [PATCH] 0148: ipa-sam: when deleting subtree, deal with possible LDAP errors
On 11.3.2014 15:32, Alexander Bokovoy wrote: after discussing with Petr Spacek, the following patch fixes ticket 4224. Code seems okay, but I didn't do a functional test. -- Petr^2 Spacek
Re: [Freeipa-devel] [PATCH] 0471 permission_add: Remove permission entry if adding the ACI fails
On 03/11/2014 03:08 PM, Jan Pazdziora wrote: On Fri, Feb 21, 2014 at 03:30:22PM +0100, Petr Viktorin wrote: Hello, a permission object was not removed in permission-add when adding the ACI failed. Here is a fix. https://fedorahosted.org/freeipa/ticket/4187 Earlier we agreed that patch authors should bug the reviewer. I guess now this means I should set Patch-review-by in the ticket, right? So: Martin, you reviewed the other ACI patches, so I think you should continue. If you don't agree, unset the field in the ticket. -- Petr³

From 5ad2066b71b09248d348a5c4c85ef2ace0c553a4 Mon Sep 17 00:00:00 2001
From: Petr Viktorin pvikt...@redhat.com
Date: Fri, 21 Feb 2014 13:58:15 +0100
Subject: [PATCH] permission_add: Remove permission entry if adding the ACI fails

https://fedorahosted.org/freeipa/ticket/4187
---
 ipalib/plugins/permission.py                   | 15 ++-
 ipatests/test_xmlrpc/test_permission_plugin.py | 25 +
 2 files changed, 39 insertions(+), 1 deletion(-)

diff --git a/ipalib/plugins/permission.py b/ipalib/plugins/permission.py
index 64deb99ef98583daf0419a240aa8852b0262874d..cb6f18b478735920bbf6cef4febc91481631c560 100644
--- a/ipalib/plugins/permission.py
+++ b/ipalib/plugins/permission.py
@@ -812,7 +812,20 @@ def pre_callback(self, ldap, dn, entry, attrs_list, *keys, **options):
         return dn

     def post_callback(self, ldap, dn, entry, *keys, **options):
-        self.obj.add_aci(entry)
+        try:
+            self.obj.add_aci(entry)
+        except Exception:
+            # Adding the ACI failed.
+            # We want to be 100% sure the ACI is not there, so try to
+            # remove it. (This is a no-op if the ACI was not added.)
+            self.obj.remove_aci(entry)
+            # Remove the entry
+            try:
+                self.api.Backend['ldap2'].delete_entry(entry)
+            except errors.NotFound:
+                pass
+            # Re-raise original exception
+            raise
         self.obj.postprocess_result(entry, options)
         return dn

I'm not totally happy about this patch. What happens when the ACI is already in LDAP and some part of that self.obj.add_aci(entry) operation fails? 
Won't you go and, instead of doing a no-op, remove the ACI instead?

Unfortunately, yes, these operations are racy. When something fails, or when doing two operations simultaneously, it is possible that the objects are not both added. If that happens, it is the ACI that should be missing. The permission is added first, and the ACI is deleted first. This means that when things fail, access is denied, which is both more secure and easier to spot than having a stray ACI floating around. (In the long term, I'd really like to see a DS plugin for permission/ACI sync, so we can leverage transactions -- IPA is really the wrong layer to re-implement transactions in.)

To answer your question: if the permission+ACI is already in LDAP, the call will fail with a DuplicateEntry error and post_callback won't get called. For the case that another permission_add command is called to add a permission of the same name, the existence of the permission entry acts as a lock: while it's there, the other permission_add will fail, and removing it (releasing the lock) is the last thing done in the error handler. I guess it would be good to add a comment saying this. -- Petr³
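The add-with-rollback ordering Petr describes (entry first as a lock, ACI second, undo in reverse on failure, re-raise the original error) can be sketched in isolation. The backend names below are illustrative stand-ins, not ipalib's actual API:

```python
# Sketch of the rollback pattern from the patch discussion: the
# permission entry doubles as a lock, and a failed ACI add triggers
# cleanup in reverse order before re-raising the original exception.
def add_permission(backend, entry):
    backend.add_entry(entry)           # 1. permission entry = the "lock"
    try:
        backend.add_aci(entry)         # 2. the ACI grants actual access
    except Exception:
        backend.remove_aci(entry)      # no-op if the ACI never landed
        try:
            backend.delete_entry(entry)   # release the lock
        except KeyError:               # stand-in for errors.NotFound
            pass
        raise                          # surface the original error
```

The invariant this preserves is the one Petr states: if anything fails, it is the ACI that ends up missing, so the failure mode is denied access rather than a stray grant.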
Re: [Freeipa-devel] [PATCH 0044] Periodically refresh global ipa-kdb configuration
On Tue, 2014-03-11 at 16:05 +0200, Alexander Bokovoy wrote: On Tue, 11 Mar 2014, Jan Pazdziora wrote: On Mon, Feb 24, 2014 at 02:26:27PM -0500, Nathaniel McCallum wrote: Before this patch, ipa-kdb would load global configuration on startup and never update it. This means that if the global configuration is changed, the KDC never receives the new configuration until it is restarted. This patch enables caching of the global configuration with a timeout of 60 seconds. https://fedorahosted.org/freeipa/ticket/4153 From 7daeae56671d7b3049b0341aad66c96877431bbe Mon Sep 17 00:00:00 2001 From: Nathaniel McCallum npmccal...@redhat.com Date: Mon, 24 Feb 2014 14:19:13 -0500 Subject: [PATCH] Periodically refresh global ipa-kdb configuration Before this patch, ipa-kdb would load global configuration on startup and never update it. This means that if the global configuration is changed, the KDC never receives the new configuration until it is restarted. This patch enables caching of the global configuration with a timeout of 60 seconds. https://fedorahosted.org/freeipa/ticket/4153 I have only read the code and it looks sane, so depending on how much you consider my word about code-reading worth, ack. However, my gut feeling is that my preferred way of handling the issue (without knowing much about the background of the ticket) would probably be a HUP signal handler or something similar, rather than polling for current values with a timeout. This patch introduces a small nondeterminism into the behaviour, in that the use of the new values cannot really be enforced by the admin (without a daemon restart). Thing is, we need the update to happen when another, non-root process makes the changes -- in our case, when the IPA server running under httpd privileges performs a series of MS-RPC calls towards smbd which lead to creating certain LDAP objects. httpd couldn't send SIGHUP to a process not owned by httpd's effective user (non-root). Yes, but this is not really the way to go. 
The proper fix is to use syncrepl/persistent search to get a notification of when we need to reload the configuration. On the patch itself I have a NACK due to this syntax used in various places: function()->attribute. Don't. Do. That. Ever. Assign whatever comes from the function to a local variable, *check* that it is not NULL, *then* use it. Simo. -- Simo Sorce * Red Hat, Inc * New York
Re: [Freeipa-devel] FreeIPA ConnId connector for usage with Apache Syncope
Hi guys, I hope to explain in a few words what we are doing with ConnID and IPA. Comments in-line. On 03/10/2014 10:57 PM, Dmitri Pal wrote: On 03/10/2014 03:14 PM, Petr Viktorin wrote: On 03/10/2014 07:17 PM, Dmitri Pal wrote: On 03/10/2014 08:24 AM, Petr Viktorin wrote: On 03/07/2014 04:39 PM, Marco Di Sabatino Di Diodoro wrote: Hi all, Il giorno 03/feb/2014, alle ore 11:41, Francesco Chicchiriccò ilgro...@apache.org ha scritto: On 31/01/2014 18:57, Dmitri Pal wrote: On 01/31/2014 08:17 AM, Francesco Chicchiriccò wrote: [...] I am actually not sure that a lightweight connector would actually be better than a loaded connector (e.g. without proxy), from a deployment point of view, unless you are saying either that (a) a smart proxy is already available that can be reused. The idea can be reused as a starting point. IMO the easiest would be to look at the patches and use the same machinery but implement different commands. Or that (b) incorporating the smart proxy that we are going to develop into FreeIPA will easily happen. ^ quote left here deliberately [...] We started implementing a FreeIPA ConnId connector for Apache Syncope. We have to implement all identity operations defined by the ConnId framework. I would like to know the implementation status of the Smart Proxy and whether we can use it for all the identity operations. I'm reviewing the Foreman Smart Proxy patches now. They're not in the FreeIPA repository yet. However, the remaining issues were with packaging, code organization, and naming. The Smart Proxy is now specific to Foreman provisioning; it is not a full REST interface, so it will probably not support all operations you need. For a full REST interface, patches are welcome, but the core FreeIPA team has other priorities at the moment. The RFE ticket is here: https://fedorahosted.org/freeipa/ticket/4168. For user provisioning you do not need a full REST API. You need a similar proxy, but just for user-related operations. 
So the Smart Proxy can be used as a model for what you need to implement for Syncope integration. You'd be building two bridges (IPA->REST and REST->ConnID) when you could build just one. Unless you already have a suitable generic REST connector, I don't think it's your best option. From this thread it seems to me that JSON-RPC->ConnID would not require significantly more work than just the REST->ConnID part. What are the operations you need to implement? Can you list them? They were listed earlier in the thread, and [5]. It is usually easy to take something that is already working, like the Smart Proxy, and change the entry points to the ones that you need. I am not familiar with the architecture of the connectors. Are they separate processes? Are they daemons? Are they forked per request? The connection to IPA needs to be authenticated. If the connection to IPA happens from a single process like the Smart Proxy, you do not need to worry about the machinery related to authentication and session management. It is already implemented. This is why I was suggesting to use the Smart Proxy. IMO REST vs. JSON is not that big a deal. They are similar. Doing things right from the authentication POV and session management are much harder. But if we do not see value in using the Smart Proxy even as a reference point for ConnID, I would not insist. Basically, a ConnID bundle (ConnID is the framework used by Apache Syncope to connect to external resources) is a Java library developed to invoke the following operations from Apache Syncope on the target resource:
AUTHENTICATE
CREATE
UPDATE
UPDATE_ATTRIBUTE_VALUES
DELETE
RESOLVE_USERNAME
SCHEMA
SEARCH
SYNC
TEST
For example, ConnID already has an Active Directory bundle [9] and an LDAP bundle [10]. As you already know, our goal is to develop a new bundle to invoke the provisioning operations on an IPA server installation. 
From ConnID development point of view, the first thing is to choose a right way (to read protocol/interfaces) to communicate with the server. Briefly the right way needs: *) a long term support interfaces; *) an interfaces that allows all user / group provisioning operations; *) a way which leaves ConnID developers totally independent from (in this case) the FreeIPA development. Starting from this introduction we think that the right way is to use JSON-RPC interfaces, with particular attention to authentication and session management, as suggested by you. Do we have to consider other critical factors before starting to work? Massi Otherwise, we will instead specialize the CMD connector [12] to feature the FreeIPA command-line interface (as suggested at the beginning of this thread). There will be potentially need, in this case, to include the ConnId connector server into the Syncope deployment architecture, but this is a supported pattern. Have you looked at JSON-RPC interface mentioned earlier in this thread, and [6]? It might be
[Freeipa-devel] [PATCH] 460 ipa-replica-install never checks for 7389 port
When creating a replica from a Dogtag 9 based IPA server, port 7389, which is required for the installation, is never checked by ipa-replica-conncheck even though it knows that it is being installed from a Dogtag 9 based FreeIPA. If port 7389 were blocked by a firewall, the installation would get stuck with no hint to the user. Make sure that the port configuration parsed from the replica info file is used consistently in the installers. https://fedorahosted.org/freeipa/ticket/4240 -- Martin Kosek mko...@redhat.com Supervisor, Software Engineering - Identity Management Team Red Hat Inc. From e7273b69f21db44bda38f5ffbc84eabbaae2a943 Mon Sep 17 00:00:00 2001 From: Martin Kosek mko...@redhat.com Date: Tue, 11 Mar 2014 16:28:19 +0100 Subject: [PATCH] ipa-replica-install never checks for 7389 port When creating a replica from a Dogtag 9 based IPA server, port 7389, which is required for the installation, is never checked by ipa-replica-conncheck even though it knows that it is being installed from a Dogtag 9 based FreeIPA. If port 7389 were blocked by a firewall, the installation would get stuck with no hint to the user. Make sure that the port configuration parsed from the replica info file is used consistently in the installers. 
https://fedorahosted.org/freeipa/ticket/4240 --- install/tools/ipa-ca-install | 17 + install/tools/ipa-replica-install | 18 ++ ipaserver/install/cainstance.py | 12 +--- ipaserver/install/installutils.py | 16 4 files changed, 32 insertions(+), 31 deletions(-) diff --git a/install/tools/ipa-ca-install b/install/tools/ipa-ca-install index 4edd26d337a50eebe686daae539c257f706e0158..bb3e595a3df47f00b3929f546db7b04dd7eda32a 100755 --- a/install/tools/ipa-ca-install +++ b/install/tools/ipa-ca-install @@ -30,7 +30,7 @@ from ipaserver.install import installutils, service from ipaserver.install import certs from ipaserver.install.installutils import (HostnameLocalhost, ReplicaConfig, expand_replica_info, read_replica_info, get_host_name, BadHostError, -private_ccache) +private_ccache, read_replica_info_dogtag_port) from ipaserver.install import dsinstance, cainstance, bindinstance from ipaserver.install.replication import replica_conn_check from ipapython import version @@ -159,31 +159,24 @@ def main(): sys.exit(0) config.dir = dir config.setup_ca = True +config.ca_ds_port = read_replica_info_dogtag_port(config.dir) if not ipautil.file_exists(config.dir + /cacert.p12): print 'CA cannot be installed in CA-less setup.' 
sys.exit(1) -portfile = config.dir + "/dogtag_directory_port.txt" -if not ipautil.file_exists(portfile): -dogtag_master_ds_port = str(dogtag.Dogtag9Constants.DS_PORT) -else: -with open(portfile) as fd: -dogtag_master_ds_port = fd.read() - if not options.skip_conncheck: replica_conn_check( config.master_host_name, config.host_name, config.realm_name, True, -dogtag_master_ds_port, options.admin_password) +config.ca_ds_port, options.admin_password) if options.skip_schema_check: root_logger.info("Skipping CA DS schema check") else: -cainstance.replica_ca_install_check(config, dogtag_master_ds_port) +cainstance.replica_ca_install_check(config) # Configure the CA if necessary -CA = cainstance.install_replica_ca( -config, dogtag_master_ds_port, postinstall=True) +CA = cainstance.install_replica_ca(config, postinstall=True) # We need to ldap_enable the CA now that DS is up and running CA.ldap_enable('CA', config.host_name, config.dirman_password, diff --git a/install/tools/ipa-replica-install b/install/tools/ipa-replica-install index 0e7aefef48d47fefa290607e0604c014d9469fdd..e039fd1e7cb213b3269d0a5d2305a96f68e36e29 100755 --- a/install/tools/ipa-replica-install +++ b/install/tools/ipa-replica-install @@ -37,8 +37,8 @@ from ipaserver.install import memcacheinstance from ipaserver.install import otpdinstance from ipaserver.install.replication import replica_conn_check, ReplicationManager from ipaserver.install.installutils import (ReplicaConfig, expand_replica_info, -read_replica_info ,get_host_name, -BadHostError, private_ccache) +read_replica_info, get_host_name, BadHostError, private_ccache, +read_replica_info_dogtag_port) from ipaserver.plugins.ldap2 import ldap2 from ipaserver.install import cainstance from ipalib import api, errors, util @@ -534,6 +534,7 @@ def main(): sys.exit(0) config.dir = dir config.setup_ca = options.setup_ca +config.ca_ds_port = read_replica_info_dogtag_port(config.dir) if config.setup_ca and not ipautil.file_exists(config.dir + "/cacert.p12"): print 
'CA cannot be installed in CA-less setup.' @@ -541,18 +542,11 @@
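The `read_replica_info_dogtag_port` helper the patch factors out is simple enough to sketch in a few lines. This is a hypothetical re-implementation, not the exact FreeIPA code: it reads `dogtag_directory_port.txt` from the expanded replica info directory and falls back to the Dogtag 9 default DS port (7389) when the file is absent, matching the inline block the patch removes from ipa-ca-install.

```python
import os

# Default internal DS port of a Dogtag 9 instance
# (dogtag.Dogtag9Constants.DS_PORT in FreeIPA); hardcoded here for illustration.
DOGTAG9_DS_PORT = 7389

def read_replica_info_dogtag_port(config_dir):
    """Return the Dogtag DS port recorded in a replica info directory.

    Replica info files created by older Dogtag 9 based masters may lack
    the port file entirely; in that case assume the Dogtag 9 default.
    """
    portfile = os.path.join(config_dir, "dogtag_directory_port.txt")
    if not os.path.exists(portfile):
        return DOGTAG9_DS_PORT
    with open(portfile) as fd:
        return int(fd.read().strip())
```

With the port read once into `config.ca_ds_port`, both `replica_conn_check` and the CA installer see the same value, which is the point of the fix.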
Re: [Freeipa-devel] [PATCH] 0471 permission_add: Remove permission entry if adding the ACI fails
On 11.3.2014 16:09, Petr Viktorin wrote: On 03/11/2014 03:08 PM, Jan Pazdziora wrote: On Fri, Feb 21, 2014 at 03:30:22PM +0100, Petr Viktorin wrote: Hello, A permission object was not removed in permission-add when adding the ACI failed. Here is a fix. https://fedorahosted.org/freeipa/ticket/4187 Earlier we agreed that patch authors should bug the reviewer. I guess now this means I should set Patch-review-by in the ticket, right? So: Martin, you reviewed the other ACI patches so I think you should continue. If you don't agree, unset the field in the ticket. -- Petr³ From 5ad2066b71b09248d348a5c4c85ef2ace0c553a4 Mon Sep 17 00:00:00 2001 From: Petr Viktorin pvikt...@redhat.com Date: Fri, 21 Feb 2014 13:58:15 +0100 Subject: [PATCH] permission_add: Remove permission entry if adding the ACI fails https://fedorahosted.org/freeipa/ticket/4187 --- ipalib/plugins/permission.py | 15 ++- ipatests/test_xmlrpc/test_permission_plugin.py | 25 + 2 files changed, 39 insertions(+), 1 deletion(-) diff --git a/ipalib/plugins/permission.py b/ipalib/plugins/permission.py index 64deb99ef98583daf0419a240aa8852b0262874d..cb6f18b478735920bbf6cef4febc91481631c560 100644 --- a/ipalib/plugins/permission.py +++ b/ipalib/plugins/permission.py @@ -812,7 +812,20 @@ def pre_callback(self, ldap, dn, entry, attrs_list, *keys, **options): return dn def post_callback(self, ldap, dn, entry, *keys, **options): -self.obj.add_aci(entry) +try: +self.obj.add_aci(entry) +except Exception: +# Adding the ACI failed. +# We want to be 100% sure the ACI is not there, so try to +# remove it. (This is a no-op if the ACI was not added.) +self.obj.remove_aci(entry) +# Remove the entry +try: +self.api.Backend['ldap2'].delete_entry(entry) +except errors.NotFound: +pass +# Re-raise original exception +raise self.obj.postprocess_result(entry, options) return dn I'm not totally happy about this patch. What happens when the ACI is already in LDAP and some part of that self.obj.add_aci(entry) operation fails? 
Won't you then, instead of doing a no-op, go and remove the ACI? Unfortunately, yes, these operations are racy. When something fails, or when doing two operations simultaneously, it is possible that the objects are not both added. If that happens, it is the ACI that should be missing. The permission is added first, and the ACI is deleted first. This means that when things fail, access is denied, which is both more secure and easier to spot than having a stray ACI floating around. (In the long term, I'd really like to see a DS plugin for permission/ACI sync, so we can leverage transactions -- IPA is really the wrong layer to re-implement transactions in.) This calls for https://fedorahosted.org/389/ticket/581 [RFE] Support LDAP transactions :-) Maybe we should always add a comment about each particular use case to give it the right priority ... Petr^2 Spacek ___ Freeipa-devel mailing list Freeipa-devel@redhat.com https://www.redhat.com/mailman/listinfo/freeipa-devel
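The ordering argument above (entry first on add, ACI cleanup first on failure) can be illustrated with a small self-contained sketch. The backend object and exception names here are stand-ins, not FreeIPA's actual plugin API:

```python
class FakeBackend:
    """In-memory stand-in for the LDAP backend and the ACI store."""
    def __init__(self, aci_fails=False):
        self.entries, self.acis = set(), set()
        self.aci_fails = aci_fails

    def add_entry(self, name):
        self.entries.add(name)

    def delete_entry(self, name):
        if name not in self.entries:
            raise KeyError(name)      # stand-in for errors.NotFound
        self.entries.remove(name)

    def add_aci(self, name):
        if self.aci_fails:
            raise RuntimeError("ACI add failed")
        self.acis.add(name)

    def remove_aci(self, name):
        self.acis.discard(name)       # no-op if the ACI never made it in

def permission_add(backend, name):
    backend.add_entry(name)           # permission entry goes in first
    try:
        backend.add_aci(name)
    except Exception:
        # Be 100% sure no ACI is left behind, then roll back the entry
        # and re-raise, mirroring the patch's post_callback logic.
        backend.remove_aci(name)
        try:
            backend.delete_entry(name)
        except KeyError:
            pass                      # already gone: nothing to roll back
        raise
```

On failure the permission entry disappears again, so the worst a race can leave behind is a permission without an ACI (access denied), never a stray ACI.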
Re: [Freeipa-devel] [PATCH] 459 Avoid passing non-terminated string to is_master_host
On 03/07/2014 10:21 AM, Alexander Bokovoy wrote: On Fri, 07 Mar 2014, Martin Kosek wrote: When the string is not terminated, queries with a corrupted base may be sent to LDAP: ... cn=ipa1.example.comgarbage,cn=masters... https://fedorahosted.org/freeipa/ticket/4214 -- Martin Kosek mko...@redhat.com Supervisor, Software Engineering - Identity Management Team Red Hat Inc. From 74bb082c7c286e9911f1a376ed9ce25845857672 Mon Sep 17 00:00:00 2001 From: Martin Kosek mko...@redhat.com Date: Fri, 7 Mar 2014 10:06:52 +0100 Subject: [PATCH] Avoid passing non-terminated string to is_master_host When the string is not terminated, queries with a corrupted base may be sent to LDAP: ... cn=ipa1.example.comgarbage,cn=masters... https://fedorahosted.org/freeipa/ticket/4214 --- daemons/ipa-kdb/ipa_kdb_mspac.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/daemons/ipa-kdb/ipa_kdb_mspac.c b/daemons/ipa-kdb/ipa_kdb_mspac.c index 9137cd5ad1e6166fd5d6e765fab2c8178ca0587c..c1b018cc80402c2c3488487aee1d9709b902c5b4 100644 --- a/daemons/ipa-kdb/ipa_kdb_mspac.c +++ b/daemons/ipa-kdb/ipa_kdb_mspac.c @@ -488,13 +488,14 @@ static krb5_error_code ipadb_fill_info3(struct ipadb_context *ipactx, } data = krb5_princ_component(ipactx->context, princ, 1); -strres = malloc(data->length); +strres = malloc(data->length+1); if (strres == NULL) { krb5_free_principal(ipactx->kcontext, princ); return ENOENT; } memcpy(strres, data->data, data->length); +strres[data->length] = '\0'; krb5_free_principal(ipactx->kcontext, princ); /* Only add PAC to TGT to services on IPA masters to allow querying Obvious ACK. Pushed to: master: 740298d1208e92c264ef5752ac3fe6adf1240790 ipa-3-3: 0430d0eb2b605290e34b9392a902ef2114a2d743 Martin
Re: [Freeipa-devel] [PATCH] 460 ipa-replica-install never checks for 7389 port
On 03/11/2014 04:33 PM, Martin Kosek wrote: When creating a replica from a Dogtag 9 based IPA server, port 7389, which is required for the installation, is never checked by ipa-replica-conncheck even though it knows that it is being installed from a Dogtag 9 based FreeIPA. If port 7389 were blocked by a firewall, the installation would get stuck with no hint to the user. Make sure that the port configuration parsed from the replica info file is used consistently in the installers. https://fedorahosted.org/freeipa/ticket/4240 ACK -- Petr³
Re: [Freeipa-devel] [PATCH] 459 Avoid passing non-terminated string to is_master_host
On Tuesday, March 11, 2014 04:55:52 PM Martin Kosek wrote: On 03/07/2014 10:21 AM, Alexander Bokovoy wrote: On Fri, 07 Mar 2014, Martin Kosek wrote: When the string is not terminated, queries with a corrupted base may be sent to LDAP: ... cn=ipa1.example.comgarbage,cn=masters... https://fedorahosted.org/freeipa/ticket/4214 [...] Obvious ACK. Pushed to: master: 740298d1208e92c264ef5752ac3fe6adf1240790 ipa-3-3: 0430d0eb2b605290e34b9392a902ef2114a2d743 Martin Thank you guys. -A -- Anthony - http://messinet.com - http://messinet.com/~amessina/gallery
Re: [Freeipa-devel] [PATCH] 460 ipa-replica-install never checks for 7389 port
On 03/11/2014 04:59 PM, Petr Viktorin wrote: On 03/11/2014 04:33 PM, Martin Kosek wrote: [...] https://fedorahosted.org/freeipa/ticket/4240 ACK Pushed to: master: 0be66e9a67e433d36b9e4c00a17b45393d51a888 ipa-3-3: 892daaa79dba5473b30816d97e15b27d7d4b9d58
[Freeipa-devel] [PATCH] 0149: ipa-sam: cache gid to sid and uid to sid requests in idmap cache
Hi, Add idmap_cache calls to ipa-sam to prevent huge numbers of LDAP calls to the directory service for gid/uid->SID resolution. Additionally, this patch further reduces the number of queries by: - fast fail on uidNumber=0, which doesn't exist in FreeIPA, - return the fallback group correctly when looking up a user's primary group, as is done during init, - checking for the group objectclass in a case-insensitive way Based on the patch by Jason Woods de...@jasonwoods.me.uk https://fedorahosted.org/freeipa/ticket/4234 and https://bugzilla.redhat.com/show_bug.cgi?id=1073829 https://bugzilla.redhat.com/show_bug.cgi?id=1074314 -- / Alexander Bokovoy From de5e03f7f7bf707c00b11569998b68b5c87744ed Mon Sep 17 00:00:00 2001 From: Alexander Bokovoy aboko...@redhat.com Date: Fri, 7 Mar 2014 16:38:24 + Subject: [PATCH 2/2] ipa-sam: cache gid to sid and uid to sid requests in idmap cache Add idmap_cache calls to ipa-sam to prevent huge numbers of LDAP calls to the directory service for gid/uid->SID resolution. Additionally, this patch further reduces the number of queries by: - fast fail on uidNumber=0, which doesn't exist in FreeIPA, - return the fallback group correctly when looking up a user's primary group, as is done during init, - checking for the group objectclass in a case-insensitive way Based on the patch by Jason Woods de...@jasonwoods.me.uk https://fedorahosted.org/freeipa/ticket/4234 and https://bugzilla.redhat.com/show_bug.cgi?id=1073829 https://bugzilla.redhat.com/show_bug.cgi?id=1074314 --- daemons/ipa-sam/ipa_sam.c | 125 +- 1 file changed, 113 insertions(+), 12 deletions(-) diff --git a/daemons/ipa-sam/ipa_sam.c b/daemons/ipa-sam/ipa_sam.c index 7a8eeb4..4eee3a6 100644 --- a/daemons/ipa-sam/ipa_sam.c +++ b/daemons/ipa-sam/ipa_sam.c @@ -82,6 +82,28 @@ struct trustAuthInOutBlob { struct AuthenticationInformationArray previous;/* [subcontext(0),flag(LIBNDR_FLAG_REMAINING)] */ }/* [gensize,public,nopush] */; +/* from generated idmap.h - hopefully OK */ +enum id_type +#ifndef USE_UINT_ENUMS + { + 
ID_TYPE_NOT_SPECIFIED, + ID_TYPE_UID, + ID_TYPE_GID, + ID_TYPE_BOTH +} +#else + { __donnot_use_enum_id_type=0x7FFFFFFF} +#define ID_TYPE_NOT_SPECIFIED ( 0 ) +#define ID_TYPE_UID ( 1 ) +#define ID_TYPE_GID ( 2 ) +#define ID_TYPE_BOTH ( 3 ) +#endif +; + +struct unixid { + uint32_t id; + enum id_type type; +}/* [public] */; enum ndr_err_code ndr_pull_trustAuthInOutBlob(struct ndr_pull *ndr, int ndr_flags, struct trustAuthInOutBlob *r); /* available in libndr-samba.so */ bool sid_check_is_builtin(const struct dom_sid *sid); /* available in libpdb.so */ @@ -91,6 +113,7 @@ char *sid_string_talloc(TALLOC_CTX *mem_ctx, const struct dom_sid *sid); /* available in libsmbconf.so */ char *sid_string_dbg(const struct dom_sid *sid); /* available in libsmbconf.so */ char *escape_ldap_string(TALLOC_CTX *mem_ctx, const char *s); /* available in libsmbconf.so */ bool secrets_store(const char *key, const void *data, size_t size); /* available in libpdb.so */ +void idmap_cache_set_sid2unixid(const struct dom_sid *sid, struct unixid *unix_id); /* available in libsmbconf.so */ #define LDAP_PAGE_SIZE 1024 #define LDAP_OBJ_SAMBASAMACCOUNT "ipaNTUserAttrs" @@ -750,8 +773,8 @@ static bool ldapsam_sid_to_id(struct pdb_methods *methods, } for (c = 0; values[c] != NULL; c++) { - if (strncmp(LDAP_OBJ_GROUPMAP, values[c]->bv_val, - values[c]->bv_len) == 0) { + if (strncasecmp(LDAP_OBJ_GROUPMAP, values[c]->bv_val, + values[c]->bv_len) == 0) { break; } } @@ -769,6 +792,9 @@ static bool ldapsam_sid_to_id(struct pdb_methods *methods, } unixid_from_gid(id, strtoul(gid_str, NULL, 10)); + + idmap_cache_set_sid2unixid(sid, id); + ret = true; goto done; } @@ -785,8 +811,11 @@ unixid_from_uid(id, strtoul(value, NULL, 10)); + idmap_cache_set_sid2unixid(sid, id); + ret = true; done: + TALLOC_FREE(mem_ctx); return ret; } @@ -806,6 +835,18 @@ static bool ldapsam_uid_to_sid(struct pdb_methods *methods, uid_t uid, int rc; enum idmap_error_code err; TALLOC_CTX *tmp_ctx = 
talloc_stackframe(); + struct unixid id; + + /* Fast fail if we get a request for uidNumber=0 because it currently + * will never exist in the directory. + * Saves an expensive LDAP call whose failure will never be cached. + */ + if (uid == 0) { + DEBUG(3, ("ERROR: Received request for uid %u, " + "fast failing as it will never exist\n", + (unsigned int)uid)); + goto done; + } filter =
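The caching strategy itself is independent of Samba's idmap_cache API and easy to model: memoize SID-to-id answers and short-circuit the one query (uid 0) that is known never to resolve. A rough Python model of the logic follows; the names are illustrative, not the ipa-sam C code:

```python
def make_resolver(ldap_lookup):
    """Wrap an expensive SID -> unix id lookup with a positive-result cache.

    `ldap_lookup(sid)` stands in for the LDAP search. Failed lookups are
    not cached, matching the patch's observation that a failing query
    (like uidNumber=0) would be repeated forever -- hence the explicit
    fast-fail below instead.
    """
    cache = {}

    def sid_to_id(sid):
        if sid in cache:
            return cache[sid]                 # no LDAP round trip
        result = ldap_lookup(sid)
        if result is not None:
            cache[sid] = result               # cache only successes
        return result

    return sid_to_id

def uid_to_sid(uid, ldap_lookup):
    # uidNumber=0 never exists in FreeIPA: fail fast instead of issuing
    # an LDAP query whose negative result would never be cached.
    if uid == 0:
        return None
    return ldap_lookup(uid)
```

This is the shape of the saving the patch measures: repeated resolutions of the same SID cost one LDAP call instead of one per request.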
Re: [Freeipa-devel] DNSSEC: upgrade path to Vault
On 03/11/2014 07:53 AM, Petr Spacek wrote: On 11.3.2014 12:21, Martin Kosek wrote: On 03/11/2014 11:33 AM, Petr Spacek wrote: On 10.3.2014 12:08, Martin Kosek wrote: On 03/10/2014 11:49 AM, Petr Spacek wrote: On 7.3.2014 17:33, Dmitri Pal wrote: I do not think it is the right architectural approach to try to fix a specific use case with a one-off solution while we already know that we need a key storage. I would rather do things right and reusable than jam them into the currently proposed release boundaries. I want to make clear that I'm not opposed to Vault in general. I'm questioning the necessity of Vault from the beginning because it will delay DNSSEC significantly. +1, I also now see a number of scenarios where Vault will be needed. One of the proposals in this thread is to use something simple for DNSSEC keys (with a few lines of Python code) and migrate DNSSEC to Vault when Vault is available and stable enough (in some later release). I understand that Vault brings a lot of work to the table. But let us do it right and if it does not fit into 4.0 let us do it in 4.1. We will have one huge release with DNSSEC + Vault at once if we postpone DNSSEC to the same release as Vault. As a result, it would be harder to debug because we will have to find out if something is broken because of: - DNSSEC-IPA integration - Vault-IPA integration - DNSSEC-Vault integration. I don't think it is a good idea to make such a huge release. Release early, release often I must say I tend to agree with Petr. If the poor man's solution of DNSSEC without Vault does not cost us too much time and it would seem that the Vault is not going to squeeze into the 4.0 deadlines, I would rather release the poor man's solution in 4.0 and switch to Vault when it's available in 4.1. This would let our users test the non-Vault parts of our DNSSEC solution instead of waiting to test a perfect solution. 
Yesterday we agreed that DNSSEC support is not going to depend on Vault from the beginning and that we can migrate to Vault later. Here I'm proposing a safe upgrade path from the non-Vault to the Vault solution. After all, it seems relatively easy. TL;DR version = Use information in cn=masters to detect if there are old replicas and temporarily convert new keys from Vault to LDAP storage (until all old replicas are deleted). Full version IPA 4.0 is going to have the OpenDNSSEC key generator on a single IPA server and a separate key import/export daemon (i.e. a script called from cron or something like that) on all IPA servers. In 4.0, we can add new LDAP objects for DNSSEC-related IPA services (please propose better names :-): - key generator: cn=OpenDNSSECv1,cn=vm.example.com,cn=masters,cn=ipa,cn=etc,dc=ipa,dc=example cn=DNSSEC,cn=vm.example.com,cn=masters,cn=ipa,cn=etc,dc=ipa,dc=example DNSSEC will be translated by FreeIPA to the appropriate service name. This can vary between platforms. v1 can be an attribute of the entry; I would not add it to the name. - key importer/exporter: cn=DNSSECKeyImporterv1,cn=vm.example.com,cn=masters,cn=ipa,cn=etc,dc=ipa,dc=example I am thinking it may be sufficient to have just: cn=DNSSEC,cn=vm.example.com,cn=masters,cn=ipa,cn=etc,dc=ipa,dc=example for all DNSSEC-empowered masters and then just have: ipaConfigString: keygenerator ... in the master VM. I am just trying to be future agnostic and avoid hardcoding service names and implementation details in cn=masters. FreeIPA on a master would know what services to run whether it is a keygenerator or not. Initial state before upgrade: - N IPA 4.0 replicas - N DNSSECKeyImporterv1 service instances (i.e. key distribution for IPA 4.0) - 1 OpenDNSSECv1 service instance (key generator) Now we want to upgrade the first replica to IPA 4.1. For simplicity, let's add a *requirement* to upgrade the replica with OpenDNSSECv1 first. (We can generalize the procedure if this requirement is not acceptable.) 
Upgrade procedure: - stop OpenDNSSECv1 service - stop DNSSECKeyImporterv1 service - convert OpenDNSSECv1 database to OpenDNSSECv2 This step is not related to Vault. We need to convert the local SQLite database from single-node OpenDNSSEC to LDAP-backed distributed OpenDNSSEC. - convert private keys from LDAP to Vault *but leave them in LDAP for a while*. - walk through cn=masters,cn=ipa,cn=etc,dc=ipa,dc=example and check if there are any other replicas with the DNSSECKeyImporterv1 service: In my proposal, one would just search for cn=DNSSEC,cn=*,cn=masters... with filter (ipaConfigString=version 1). Why not :-) I do not care as long as it is unambiguous. a) No such replica exists - delete old-fashioned keys from LDAP. b) Another replica with the DNSSECKeyImporterv1 service exists: - *Temporarily* run DNSSECKeyImporterv2 which will do one-way key conversion from Vault to LDAP. - DNSSECKeyImporterv2 can check e.g. daily if all DNSSECKeyImporterv1 instances were deleted or not. Then it can delete
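The decision step sketched above -- scan cn=masters and keep mirroring keys into LDAP only while v1 importers remain -- boils down to a few lines. Here is a sketch under the proposed (not yet final) layout, where each master's cn=DNSSEC entry carries ipaConfigString values; the "version 1" marker and entry layout are assumptions from this thread:

```python
def decide_key_storage(masters):
    """Decide whether new keys must be mirrored from Vault back to LDAP.

    `masters` maps a master hostname to the set of ipaConfigString
    values on its hypothetical cn=DNSSEC,cn=<host>,cn=masters,... entry.
    Masters still advertising "version 1" run the old key importer and
    can only read keys from LDAP.
    """
    legacy = sorted(h for h, cfg in masters.items() if "version 1" in cfg)
    mode = "mirror-to-ldap" if legacy else "vault-only"
    return mode, legacy
```

A daily job (the proposed DNSSECKeyImporterv2) could re-run this check and drop the LDAP copies once the legacy list comes back empty.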
Re: [Freeipa-devel] FreeIPA ConnId connector for usage with Apache Syncope
On 03/11/2014 11:29 AM, Massimiliano Perrone wrote: Hi guys, I hope to explain in a few words what we are doing with ConnID and IPA. Comments in-line. On 03/10/2014 10:57 PM, Dmitri Pal wrote: On 03/10/2014 03:14 PM, Petr Viktorin wrote: On 03/10/2014 07:17 PM, Dmitri Pal wrote: On 03/10/2014 08:24 AM, Petr Viktorin wrote: On 03/07/2014 04:39 PM, Marco Di Sabatino Di Diodoro wrote: Hi all, On 03/feb/2014, at 11:41, Francesco Chicchiriccò ilgro...@apache.org wrote: On 31/01/2014 18:57, Dmitri Pal wrote: On 01/31/2014 08:17 AM, Francesco Chicchiriccò wrote: [...] I am actually not sure whether a lightweight connector could actually be better than a loaded connector (e.g. without a proxy) from a deployment point of view, unless you are saying either that (a) a smart proxy is already available that can be reused The idea can be reused as a starting point. IMO the easiest would be to look at the patches and use the same machinery but implement different commands. or that (b) incorporating the smart proxy that we are going to develop into FreeIPA will easily happen. ^ quote left here deliberately [...] We have started implementing a FreeIPA ConnId connector for Apache Syncope. We have to implement all identity operations defined by the ConnId framework. I would like to know the implementation status of the Smart Proxy and if we can use it for all the identity operations. I'm reviewing the Foreman Smart proxy patches now. They're not in the FreeIPA repository yet. However, the remaining issues were with packaging, code organization, naming. The Smart Proxy is now specific to Foreman provisioning; it is not a full REST interface so it will probably not support all operations you need. For a full REST interface, patches are welcome but the core FreeIPA team has other priorities at the moment. The RFE ticket is here: https://fedorahosted.org/freeipa/ticket/4168. For user provisioning you do not need a full REST API. 
You need to have a similar proxy but just for user-related operations. So the smart proxy can be used as a model to do what you need to implement for Syncope integration. You'd be building two bridges (IPA<->REST and REST<->ConnID) when you could build just one. Unless you already have a suitable generic REST connector, I don't think it's your best option. From this thread it seems to me that JSON-RPC<->ConnID would not require significantly more work than just the REST<->ConnID part. What are the operations you need to implement? Can you list them? They were listed earlier in the thread, and [5]. It is usually easy to take something that is already working, like the smart proxy, and change the entry points to the ones that you need. I am not familiar with the architecture of the connectors. Are they separate processes? Are they daemons? Are they forked per request? Connection to IPA needs to be authenticated. If the connection to IPA happens from a single process like the smart proxy, you do not need to worry about machinery related to authentication and session management. It is already implemented. This is why I was suggesting to use the smart proxy. IMO REST vs. JSON is not that big a deal. They are similar. Doing things right from the authentication and session management POV is much harder. But if we do not see a value in using the smart proxy even as a reference point for ConnID, I would not insist. Basically a ConnID bundle (ConnID is the framework used by Apache Syncope to connect to external resources) is a Java library developed to invoke the following operations from Apache Syncope on the target resource: AUTHENTICATE CREATE UPDATE UPDATE_ATTRIBUTE_VALUES DELETE RESOLVE_USERNAME SCHEMA SEARCH SYNC TEST For example, ConnID already has an Active Directory bundle [9] and an LDAP bundle [10]. As you already know, our goal is to develop a new bundle to invoke the provisioning operations on an IPA server installation. 
From the ConnID development point of view, the first thing is to choose the right way (read: protocol/interfaces) to communicate with the server. Briefly, the right way needs: *) long-term supported interfaces; *) interfaces that allow all user/group provisioning operations; *) a way which leaves ConnID developers totally independent from (in this case) the FreeIPA development. Starting from this introduction, we think that the right way is to use the JSON-RPC interfaces, with particular attention to authentication and session management, as suggested by you. Do we have to consider other critical factors before starting to work? This seems reasonable. Here are some other questions that you might want to ask yourself when starting the work. http://www.freeipa.org/page/General_considerations (there is no intent to scare you :-) ) HTH Dmitri Massi Otherwise, we will instead specialize the CMD connector [12] to feature the FreeIPA command-line interface (as suggested at the beginning of
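For orientation, a FreeIPA JSON-RPC call is a POST of a small envelope to the server's /ipa/session/json endpoint (after Kerberos or form-based session login). A minimal sketch of building that envelope follows; the API version string is an assumption and should be taken from the target server:

```python
import json

def make_ipa_request(method, args=(), options=None, api_version="2.65"):
    """Build the JSON-RPC body FreeIPA expects: `params` is a pair of
    [positional args, options dict], with the API version carried
    inside the options dict.
    """
    opts = dict(options or {})
    opts.setdefault("version", api_version)
    return {"method": method, "params": [list(args), opts]}

# Example: the CREATE operation of a ConnId bundle could map to user_add.
body = make_ipa_request("user_add", ["jdoe"],
                        {"givenname": "John", "sn": "Doe"})
payload = json.dumps(body)  # POST this to https://<server>/ipa/session/json
```

The same envelope covers most of the listed operations (SEARCH roughly maps to user_find, DELETE to user_del, and so on); as noted above, the authentication and session-cookie handling is the part that needs the most care.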
Re: [Freeipa-devel] DNSSEC: upgrade path to Vault
On Tue, 2014-03-11 at 11:33 +0100, Petr Spacek wrote: Yesterday we agreed that DNSSEC support is not going to depend on Vault from the beginning and that we can migrate to Vault later. Here I'm proposing a safe upgrade path from the non-Vault to the Vault solution. After all, it seems relatively easy. Ok let me put my 2c in. TL;DR version = Use information in cn=masters to detect if there are old replicas and temporarily convert new keys from Vault to LDAP storage (until all old replicas are deleted). I think this is not necessary; we do not support running an infrastructure for long times with mixed major IPA versions. So we should not need to do this. Full version IPA 4.0 is going to have the OpenDNSSEC key generator on a single IPA server and a separate key import/export daemon (i.e. a script called from cron or something like that) on all IPA servers. In 4.0, we can add new LDAP objects for DNSSEC-related IPA services (please propose better names :-): - key generator: cn=OpenDNSSECv1,cn=vm.example.com,cn=masters,cn=ipa,cn=etc,dc=ipa,dc=example As Martin said - DNSSEC, the version is irrelevant here unless you are proposing to be able to run v1 or v2 conditionally with the same code base; if not, then on upgrade the new OpenDNSSEC code will simply be upgraded like all other FreeIPA components. We do not capture version numbers for any of them. - key importer/exporter: cn=DNSSECKeyImporterv1,cn=vm.example.com,cn=masters,cn=ipa,cn=etc,dc=ipa,dc=example This is probably not needed as a separate key; if we are using systemd then we simply make one unit file depend on another so both start together on the master, while on a replica we only have 1 service to start anyway. If we are on sysv we will deploy our own init script that starts both components at the same time on the master, only one on replicas. 
The difference about what to start will be that only one master is configured as key generator; we do not even need to represent this in LDAP in theory, because you cannot move the role by simply changing an LDAP entry anyway, and the difference can simply be a configuration file on the keygenerator master. Initial state before upgrade: - N IPA 4.0 replicas - N DNSSECKeyImporterv1 service instances (i.e. key distribution for IPA 4.0) - 1 OpenDNSSECv1 service instance (key generator) Now we want to upgrade the first replica to IPA 4.1. For simplicity, let's add a *requirement* to upgrade the replica with OpenDNSSECv1 first. (We can generalize the procedure if this requirement is not acceptable.) I think we can proceed with this restriction for now; either that or the admin is required to stop and unconfigure the key generator service anyway. Upgrade procedure: - stop OpenDNSSECv1 service - stop DNSSECKeyImporterv1 service stop DNSSEC (will stop both) - convert OpenDNSSECv1 database to OpenDNSSECv2 This step is not related to Vault. We need to convert the local SQLite database from single-node OpenDNSSEC to LDAP-backed distributed OpenDNSSEC. - convert private keys from LDAP to Vault *but leave them in LDAP for a while*. This is only true *if* we decide to move storage to the Vault; we may not want to, or it may happen in a separate release. - walk through cn=masters,cn=ipa,cn=etc,dc=ipa,dc=example and check if there are any other replicas with the DNSSECKeyImporterv1 service: a) No such replica exists - delete old-fashioned keys from LDAP. I say we do this step unconditionally if we move to the Vault; all other DNS servers have functional keys for a while anyway (they should always have at least 1 month of autonomy), and we clearly state people must upgrade the infrastructure in a week, not in months. So we should never need to keep old keys. 
b) Another replica with the DNSSECKeyImporterv1 service exists:
- *Temporarily* run DNSSECKeyImporterv2, which will do one-way key conversion from Vault to LDAP.

We do not need this; you must update all replicas to a version that knows what to do, and then they'll find the new keys where they are.

- DNSSECKeyImporterv2 can check e.g. daily whether all DNSSECKeyImporterv1 instances have been deleted. Then it can delete old-fashioned keys from LDAP and also stop itself once all old replicas have been deleted (and the compatibility mode is not needed anymore).

We also avoid this. The *only* thing we really need to do IMO is that if a DNS server finds out its keys for a zone are expired, then it shuts itself down and makes itself unavailable, so clients will start failing over to another DNS server and the admin will have to troubleshoot and work out why the keys were not accessible. If the reason is that they forgot to update a replica, then they should just proceed and update, and the DNS server will restart after that (we may want to make sure we have a way to pull the latest keys at upgrade, or we have a chicken-and-egg issue where the replica update fails because DNS does not start). This approach removes time constraints from the upgrade procedure and prevents DNS servers from failing when the update is delayed, etc.
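The "walk through cn=masters" decision in the upgrade procedure above can be sketched in a few lines of Python. The data shape (a mapping from master FQDN to its enabled service names) is a simplified stand-in for the real LDAP entries, and the function names are hypothetical:

```python
# Sketch of the "walk through cn=masters" check from the upgrade
# procedure. `masters` maps each master FQDN to the set of enabled
# service names -- a simplified stand-in for the LDAP service entries.

def old_importers(masters):
    """Return masters still running the 4.0-style key importer."""
    return sorted(fqdn for fqdn, services in masters.items()
                  if "DNSSECKeyImporterv1" in services)

def can_delete_legacy_keys(masters):
    """Case a) from the procedure: no old replica exists."""
    return not old_importers(masters)

masters = {
    "vm.example.com": {"OpenDNSSECv2"},               # already upgraded
    "replica1.example.com": {"DNSSECKeyImporterv1"},  # still on 4.0
}
print(old_importers(masters))            # ['replica1.example.com']
print(can_delete_legacy_keys(masters))   # False -> keep legacy keys
```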
Re: [Freeipa-devel] DNSSEC: upgrade path to Vault
On Tue, 2014-03-11 at 14:40 -0400, Simo Sorce wrote: The *only* thing we really need to do IMO is that if a DNS server finds out its keys for a zone are expired, then it shuts itself down and makes itself unavailable, so clients will start failing over to another DNS server and the admin will have to troubleshoot and work out why the keys were not accessible. If the reason is that they forgot to update a replica, then they should just proceed and update, and the DNS server will restart after that (we may want to make sure we have a way to pull the latest keys at upgrade, or we have a chicken-and-egg issue where the replica update fails because DNS does not start).

I am thinking that in case we have some zones protected with DNSSEC and some that are not (do we handle this case?), then what we could do is simply stop serving the secured zone. Is there an error code we can return that will make clients try another DNS server if they have multiple configured?

Simo.

-- Simo Sorce * Red Hat, Inc * New York
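On the error-code question: returning SERVFAIL (RCODE 2, per RFC 1035) typically makes stub resolvers retry the query against the next configured nameserver. A minimal sketch of a per-zone decision, with illustrative names rather than BIND internals:

```python
# Sketch of a per-zone refusal: answer queries for healthy zones
# normally and return SERVFAIL only for zones whose keys expired,
# so resolvers retry against another configured server.
# Names and data shapes are illustrative, not BIND/bind-dyndb-ldap code.

NOERROR, SERVFAIL = 0, 2  # DNS RCODE values per RFC 1035

def rcode_for(zone, expired_zones):
    """Per-zone decision instead of shutting the whole server down."""
    return SERVFAIL if zone in expired_zones else NOERROR

expired = {"secured.example.test."}
print(rcode_for("secured.example.test.", expired))    # 2 (SERVFAIL)
print(rcode_for("plain.example.test.", expired))      # 0 (NOERROR)
```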
Re: [Freeipa-devel] DNSSEC: upgrade path to Vault
On 03/11/2014 07:40 PM, Simo Sorce wrote: On Tue, 2014-03-11 at 11:33 +0100, Petr Spacek wrote: Yesterday we agreed that DNSSEC support is not going to depend on Vault ...

- walk through cn=masters,cn=ipa,cn=etc,dc=ipa,dc=example and check if there are any other replicas with the DNSSECKeyImporterv1 service:

a) No such replica exists - delete old-fashioned keys from LDAP.

I say we do this step unconditionally if we move to the Vault: all other DNS servers have functional keys for a while anyway (they should always have at least 1 month of autonomy), and we clearly state that people must upgrade the infrastructure in a week, not in months. So we should never need to keep old keys.

b) Another replica with the DNSSECKeyImporterv1 service exists:
- *Temporarily* run DNSSECKeyImporterv2, which will do one-way key conversion from Vault to LDAP.

We do not need this; you must update all replicas to a version that knows what to do, and then they'll find the new keys where they are.

- DNSSECKeyImporterv2 can check e.g. daily whether all DNSSECKeyImporterv1 instances have been deleted. Then it can delete old-fashioned keys from LDAP and also stop itself once all old replicas have been deleted (and the compatibility mode is not needed anymore).

We also avoid this. The *only* thing we really need to do IMO is that if a DNS server finds out its keys for a zone are expired, then it shuts itself down and makes itself unavailable, so clients will start failing over to another DNS server and the admin will have to troubleshoot and work out why the keys were not accessible. If the reason is that they forgot to update a replica, then they should just proceed and update, and the DNS server will restart after that (we may want to make sure we have a way to pull the latest keys at upgrade, or we have a chicken-and-egg issue where the replica update fails because DNS does not start). This approach removes time constraints from the upgrade procedure and prevents DNS servers from failing when the update is delayed, etc.
As a result, the admin can upgrade replica-by-replica at will.

I want time constraints and I want the DNS server to fail fast.

Constraints are on the order of 1 month though, not a few days.

I think 1 month is sufficient. Do we need to do this unconditionally for the whole DNS service? Let's say the admin has 2 zones, my-dnssec-testing-zone.test and my-production-zone.test, and the keys for my-dnssec-testing-zone.test expire. Is this a reason to shut down the whole DNS service? I do not think so. Could we return NotAuth or ServFail instead? Would a DNS client fail over to another DNS server for that broken zone?

Martin
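The two positions in this sub-thread (whole-server fail-fast vs. refusing only the broken zone) differ mainly in scope. A sketch, with hypothetical names and data shapes, of how a periodic key check could implement either policy:

```python
# Sketch of the fail-fast check debated above. `zone_keys` maps each
# zone to the expiry timestamps of its signing keys; both the names
# and the data shape are hypothetical, not actual FreeIPA/BIND code.
import time

def expired_zones(zone_keys, now=None):
    """Zones for which every signing key is past its expiry time."""
    now = time.time() if now is None else now
    return {zone for zone, expiries in zone_keys.items()
            if expiries and all(exp <= now for exp in expiries)}

def plan_action(zone_keys, per_zone=True, now=None):
    """Fail fast: refuse only the broken zones, or stop the whole server."""
    broken = expired_zones(zone_keys, now)
    if not broken:
        return ("serve-all", set())
    if per_zone:
        return ("refuse-zones", broken)  # e.g. answer SERVFAIL for these
    return ("shutdown", broken)          # the whole-server variant

now = 1_000_000
zone_keys = {
    "my-production-zone.test.": [now + 30 * 86400],  # ~1 month autonomy
    "my-dnssec-testing-zone.test.": [now - 1],       # keys expired
}
print(plan_action(zone_keys, per_zone=True, now=now))
```

With `per_zone=False` the same check yields the whole-server shutdown; the 1-month key autonomy mentioned earlier is what gives admins the window to upgrade before either policy triggers.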