Re: [SSSD] IFP: use default limit if provided is 0
On 08/13/2015 12:48 PM, Pavel Březina wrote:
> From eef083f774988fe8e6b6a5a8513a163fd7558b55 Mon Sep 17 00:00:00 2001
> From: Pavel Březina <pbrez...@redhat.com>
> Date: Thu, 13 Aug 2015 12:46:59 +0200
> Subject: [PATCH] IFP: use default limit if provided is 0
>
> Hi,
> CI: http://sssd-ci.duckdns.org/logs/job/21/49/summary.html

I compiled it, ran it and it worked.

ACK

Petr

___
sssd-devel mailing list
sssd-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/sssd-devel
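For readers skimming the thread, the change in the subject line amounts to a fallback guard like the following. This is a minimal sketch in Python with hypothetical names and a hypothetical default value, not sssd's actual C code:

```python
# Hypothetical default; the real value lives in sssd's InfoPipe configuration.
DEFAULT_WILDCARD_LIMIT = 1000

def effective_limit(requested):
    """Use the caller-provided limit unless it is 0, which means 'use the default'."""
    return requested if requested != 0 else DEFAULT_WILDCARD_LIMIT

print(effective_limit(0))   # → 1000 (caller passed 0, default wins)
print(effective_limit(25))  # → 25   (caller's value wins)
```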
Re: [SSSD] [PATCH] [HBAC]: Better libhbac debugging
ping :-)
Re: [SSSD] [PATCH] sudo: use higher value wins when ordering rules
On Thu, Aug 13, 2015 at 05:17:32PM +0200, Jakub Hrozek wrote:
> ACK
>
> I'll just wait for CI results before pushing.

* master: 52e3ee5c5ff2c5a4341041826a803ad42d2b2de7
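The ordering rule in the patch subject ("higher value wins") can be sketched like this. The field name `sudoOrder` is the standard LDAP sudoers attribute, but the dict representation and function are only an illustration, not sssd's implementation:

```python
def order_rules(rules):
    """Return rules sorted so the highest sudoOrder is applied first
    ('higher value wins'); rules without an order attribute sort last."""
    return sorted(rules, key=lambda r: r.get("sudoOrder", 0), reverse=True)

# Hypothetical rules as plain dicts, standing in for sssd's internal structures.
rules = [
    {"cn": "allow_all", "sudoOrder": 1},
    {"cn": "deny_tmp", "sudoOrder": 10},
    {"cn": "legacy"},  # no sudoOrder attribute
]
print([r["cn"] for r in order_rules(rules)])  # → ['deny_tmp', 'allow_all', 'legacy']
```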
Re: [SSSD] [PATCHES] DYNDNS: update quality of input for nsupdate
On Fri, Aug 14, 2015 at 04:12:05PM +0200, Jakub Hrozek wrote:
> But I'm going to push the acked patches..

The first patches were pushed:
6fd5306145d98ea3bab7f32aa66475f610f388ce
b42bf6c0c01db08208fb81d8295a2909d307284a
76604931b11594394a05df10f8370a1b8bb3e54b
4f2a07c422fa357ef6651bca8c48b8005280fa1d
e4d6e9ccac14044d6bcd5a0dce7f45fdfab6bf3d
7c3cc1ee2914bc7b38a992c1af254fc76af5a1ad
8145ab51b05aa86b2f1a21b49383f55e50b0a2e3

Please respin the last one.
Re: [SSSD] [PATCH] DEBUG: Add new debug category for fail over
On Thu, Aug 13, 2015 at 07:18:27AM +0200, Lukas Slebodnik wrote:
> From 0aec877fb0d773a98a9d628aa1d9a89062ab0b9e Mon Sep 17 00:00:00 2001
> From: Michal Židek <mzi...@redhat.com>
> Date: Mon, 10 Aug 2015 18:35:16 +0200
> Subject: [PATCH] DEBUG: Add new debug category for fail over.
> ---
>  src/providers/data_provider_fo.c | 30 ++
>  src/providers/dp_backend.h | 15 +++
>  src/tests/debug-tests.c | 2 +-
>  src/util/debug.c | 2 +-
>  src/util/util.h | 1 +
>  5 files changed, 40 insertions(+), 10 deletions(-)
>
> ACK
> LS

* master: c4fb8f55f2894de431478ccfec63f9a97e090d0e
Re: [SSSD] IFP: use default limit if provided is 0
On Fri, Aug 14, 2015 at 01:07:34PM +0200, Petr Cech wrote:
> On 08/13/2015 12:48 PM, Pavel Březina wrote:
>> From eef083f774988fe8e6b6a5a8513a163fd7558b55 Mon Sep 17 00:00:00 2001
>> From: Pavel Březina <pbrez...@redhat.com>
>> Date: Thu, 13 Aug 2015 12:46:59 +0200
>> Subject: [PATCH] IFP: use default limit if provided is 0
>>
>> Hi,
>> CI: http://sssd-ci.duckdns.org/logs/job/21/49/summary.html
>
> I compiled it, ran it and it worked.
> ACK
> Petr

* master: ef7de95fc4827a660254a942fa394f34ed9694a9
Re: [SSSD] [PATCH] Switch ldap_user_certificate default to userCertificate;binary
On Thu, Aug 13, 2015 at 12:43:35PM +0200, Pavel Březina wrote:
> On 08/10/2015 12:59 PM, Jakub Hrozek wrote:
>> Hi,
>> the attached patches fix #2742. The first one makes sure we can print
>> the certificate (or any binary attribute, really) safely. We only need
>> to make sure to escape the attribute values before saving them to
>> sysdb, because then ldb guarantees terminating them.
>>
>> The second just switches the attribute value. I tested using this howto:
>> http://www.freeipa.org/page/V4/User_Certificates#How_to_Test
>>
>> You'll also want to use a recent enough IPA version, one that fixes:
>> https://fedorahosted.org/freeipa/ticket/5173
>>
>> Then, on the client, call:
>> dbus-send --print-reply \
>>     --system \
>>     --dest=org.freedesktop.sssd.infopipe \
>>     /org/freedesktop/sssd/infopipe/Users \
>>     org.freedesktop.sssd.infopipe.Users.FindByCertificate \
>>     string:"$( openssl x509 cert.pem )"
>>
>> The result will be an object path.
>
> Ack. Thanks for the patience during the tmate.io review :-)

Pushed to master:
32445affe3612428eddde043cdc672a01c189714
619e21ed9c7a71e35e53f38867b53ed974f1d36a
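The first patch's concern above, printing a binary attribute such as userCertificate;binary safely, can be illustrated with base64: encode the raw DER bytes instead of printing them directly. This is only a sketch of the general idea, not sssd's actual escaping code:

```python
import base64

def printable_attr(raw):
    """One safe representation for logging a binary attribute value:
    base64-encode it instead of emitting raw, possibly unterminated bytes."""
    return base64.b64encode(raw).decode("ascii")

# First bytes of a typical DER certificate (SEQUENCE header), for illustration.
print(printable_attr(b"\x30\x82\x01\x0a"))  # → MIIBCg==
```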
Re: [SSSD] [PATCH] Fetch one-way trust keytabs on sssd restart again
On Thu, Aug 13, 2015 at 09:52:40AM +0200, Pavel Březina wrote:
> On 08/12/2015 02:20 PM, Jakub Hrozek wrote:
>> On Fri, Aug 07, 2015 at 12:22:39PM +0200, Pavel Březina wrote:
>>> On 07/30/2015 09:52 PM, Jakub Hrozek wrote:
>>>> On Thu, Jul 30, 2015 at 09:46:11PM +0200, Jakub Hrozek wrote:
>>>>> Hi,
>>>>> the attached patches implement fetching the keytab for one-way
>>>>> trusts on each sssd restart. This is so that an admin is able to
>>>>> call service sssd restart and have fresh keytabs in case the trust
>>>>> was re-established in the meantime. Even though retrieving the
>>>>> keytabs is quite an expensive operation, restarting the sssd
>>>>> instance on the IPA server should be quite rare.
>>>> Sorry, I shouldn't be sending patches before Coverity results
>>>> arrive. The attached version fixes error handling in the first patch
>>>> and fixes an unused variable in the second one.
>>> Hi,
>>> the code looks good. I just have an idea: move the talloc destructor
>>> that ensures the temporary file gets unlinked into sss_unique_file.
>>> We can provide a talloc context there and set up a destructor if
>>> requested. Something like:
>>>     sss_unique_file(owner, file)
>>>         if owner != NULL
>>>             talloc_set_destructor
>> Hi,
>> please see the attached patches. Since the unique file code is not
>> totally trivial (even though tested), I will move the rest of the sssd
>> code to the sss_unique_file() interface in a different patchset -- I
>> would like to apply these patches downstream, and changing the
>> mkstemp() calls might be too risky there.
> Ack.

Thank you; pushed to master:
d95bcfe23c574de7b6b7b44b52a0d4db5cc8529a
db5f9ab3feb85aa444eab20428ca2b98801b6783
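The review suggestion in the thread above (tie the temporary file's cleanup to an owner context inside sss_unique_file, via a talloc destructor) has a natural analogue in garbage-collected languages. A hedged Python sketch, with illustrative names only, using a finalizer where C would use talloc_set_destructor:

```python
import gc
import os
import tempfile
import weakref

class UniqueFile:
    """Sketch of the sss_unique_file idea: create a unique temp file and,
    when an owner object is supplied, unlink the file automatically once
    the owner goes away (the talloc-destructor analogue)."""
    def __init__(self, owner=None):
        fd, self.path = tempfile.mkstemp()
        os.close(fd)
        if owner is not None:
            # Like talloc_set_destructor: cleanup runs when owner is freed.
            weakref.finalize(owner, os.unlink, self.path)

class Owner:
    pass

owner = Owner()
f = UniqueFile(owner=owner)
print(os.path.exists(f.path))  # → True
del owner
gc.collect()
print(os.path.exists(f.path))  # → False (destructor unlinked it)
```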
[SSSD] [PATCH] pam: Increase p11 child timeout
Hi,
this patch is a hotfix for the failing pam-srv tests. Increasing the timeout to 30 seconds seems to be enough. I do not want to make it too big because the timeout is currently not configurable. I'd like to talk to Sumit about what he thinks the proper solution should be. I am not sure if it times out because we do something unnecessary in the p11 child.

See the simple patch.

Michal

From c737210b8ba34b3426b1f82326cdcdb765fc40ad Mon Sep 17 00:00:00 2001
From: Michal Židek <mzi...@redhat.com>
Date: Thu, 13 Aug 2015 14:03:24 +0200
Subject: [PATCH] pam: Increase p11 child timeout

Ticket: https://fedorahosted.org/sssd/ticket/2746

It was timing out often in CI machines.
---
 src/responder/pam/pamsrv_cmd.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/src/responder/pam/pamsrv_cmd.c b/src/responder/pam/pamsrv_cmd.c
index 3b84fb8..53099a8 100644
--- a/src/responder/pam/pamsrv_cmd.c
+++ b/src/responder/pam/pamsrv_cmd.c
@@ -43,6 +43,9 @@ enum pam_verbosity {

 #define DEFAULT_PAM_VERBOSITY PAM_VERBOSITY_IMPORTANT

+/* TODO: Should we make this configurable? */
+#define SSS_P11_CHILD_TIMEOUT 30
+
 static errno_t
 pam_null_last_online_auth_with_curr_token(struct sss_domain_info *domain,
                                           const char *username);
@@ -1122,7 +1125,7 @@ static int pam_forwarder(struct cli_ctx *cctx, int pam_cmd)
     if (may_do_cert_auth(pctx, pd)) {
         req = pam_check_cert_send(cctx, cctx->ev, pctx->p11_child_debug_fd,
-                                  pctx->nss_db, 10, pd);
+                                  pctx->nss_db, SSS_P11_CHILD_TIMEOUT, pd);
         if (req == NULL) {
             DEBUG(SSSDBG_OP_FAILURE, "pam_check_cert_send failed.\n");
             ret = ENOMEM;
@@ -1338,7 +1341,7 @@ static void pam_forwarder_cb(struct tevent_req *req)
     if (may_do_cert_auth(pctx, pd)) {
         req = pam_check_cert_send(cctx, cctx->ev, pctx->p11_child_debug_fd,
-                                  pctx->nss_db, 10, pd);
+                                  pctx->nss_db, SSS_P11_CHILD_TIMEOUT, pd);
         if (req == NULL) {
             DEBUG(SSSDBG_OP_FAILURE, "pam_check_cert_send failed.\n");
             ret = ENOMEM;
-- 
2.1.0
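The TODO in the patch asks whether the timeout should be configurable. One common pattern is a config lookup that falls back to the compiled-in default; this is only a sketch of that pattern, and the option name `p11_child_timeout` is hypothetical, not an sssd option at the time of this thread:

```python
DEFAULT_P11_CHILD_TIMEOUT = 30  # seconds; the value the patch hardcodes

def p11_child_timeout(config):
    """Hypothetical answer to the patch's TODO: read the timeout from a
    config mapping, falling back to the built-in default when unset."""
    return int(config.get("p11_child_timeout", DEFAULT_P11_CHILD_TIMEOUT))

print(p11_child_timeout({}))                           # → 30
print(p11_child_timeout({"p11_child_timeout": "60"}))  # → 60
```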
Re: [SSSD] [PATCH] pam: Increase p11 child timeout
On 08/14/2015 02:40 PM, Michal Židek wrote:
> Hi,
> this patch is a hotfix for the failing pam-srv tests. Increasing the
> timeout to 30 seconds seems to be enough. I do not want to make it too
> big because the timeout is currently not configurable. I'd like to talk
> to Sumit about what he thinks the proper solution should be. I am not sure

When he returns from PTO, that is.

> if it times out because we do something unnecessary in the p11 child.
> See the simple patch.
>
> Michal
Re: [SSSD] [PATCH] pam: Increase p11 child timeout
On 14 August 2015 at 14:40, Michal Židek <mzi...@redhat.com> wrote:
> Hi,
> this patch is a hotfix for the failing pam-srv tests. Increasing the
> timeout to 30 seconds seems to be enough. I do not want to make it too
> big because the timeout is currently not configurable. I'd like to talk
> to Sumit about what he thinks the proper solution should be. I am not
> sure if it times out because we do something unnecessary in the p11
> child.

One simple fix for this kind of failure is to use dbx/gdb/pstack to wait for a specific checkpoint in the code. For example, http://svn.nrubsig.org/svn/people/gisburn/code/kdctest/test3.sh demonstrates this by waiting for the krb5kdc to enter the poll event loop:

-- snip --
function run_kdc
{
    typeset -x KRB5_KDC_PROFILE=${_.krb5_kdc_profile}
    typeset -x KRB5_CONFIG='/dev/null'

    (( _.kdc_pid != -1 )) && return 1

    krb5kdc -n -r "${_.realmname}" &
    (( _.kdc_pid=$! ))

    # Wait until the KDC becomes ready
    # (we probe the KDC process itself because a simple
    # $ sleep 10 #
    # is not reliable when the system is paging/swapping or simply
    # too slow (e.g. embedded system))
    typeset pout
    integer i pres
    for (( i=100 ; i > 0 ; i-- )) ; do
        sleep 0.25
        pout="${ /usr/bin/pstack ${_.kdc_pid} 2>'/dev/null' ; (( pres=$? )) ; }"
        if (( pres != 0 )) || \
            [[ ${pout} == ~(E)[[:space:]]+_*(epoll|poll) ]] ; then
            break
        fi
    done

    # KDC process still running ?
    kill -0 ${_.kdc_pid} 2>'/dev/null' || \
        { print -u2 -f $"KDC failed.\n" ; return 1 ; }

    return 0
}
-- snip --

Bye,
Roland

-- 
Roland
rma...@redhat.com
IPA/Identity Management/Kerberos 5
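Roland's snippet is ksh93-specific, but the underlying technique, polling a readiness probe against a deadline instead of sleeping a fixed time, is generic. A minimal sketch of that technique (an illustration, not code from the thread); the probe could just as well shell out to pstack as in the script above:

```python
import time

def wait_until(probe, timeout=25.0, interval=0.25):
    """Poll `probe` until it returns True or the deadline passes.
    A readiness probe like this replaces a fixed 'sleep 10', which is
    unreliable on slow or heavily loaded CI machines."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval)
    return False

print(wait_until(lambda: True))                # → True (ready immediately)
print(wait_until(lambda: False, timeout=0.3))  # → False (deadline expired)
```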
Re: [SSSD] [WIP] [TEST]: Observation patch
On 08/13/2015 07:49 AM, Lukas Slebodnik wrote:
> On (12/08/15 17:57), Petr Cech wrote:
>> Hi,
>> I have explored in detail why the test responder_cache_req-tests failed
>> so often. I created a new VM with RHEL 6.7.
>>
>> OBSERVATION:
>> As we know, CI machines are under pressure, so I wrote a simple
>> cpu_braker, see [1]. I ran the tests 50 times with cpu_braker (average
>> load 2.60, only 1 CPU). Results:
>>
>> [ RUN ] test_users_by_filter_multiple_domains_valid
>> 0x2 != 0
>> src/tests/cmocka/test_responder_cache_req.c:1875: error: Failure!
>> [ RUN ] test_users_by_filter_multiple_domains_valid
>> 0x1 != 0x2
>> src/tests/cmocka/test_responder_cache_req.c:1879: error: Failure!
>> [ RUN ] test_groups_by_filter_valid
>> 0x1 != 0x2
>> src/tests/cmocka/test_responder_cache_req.c:1972: error: Failure!
>> [ RUN ] test_groups_by_filter_multiple_domains_valid
>> 0x2 != 0
>> src/tests/cmocka/test_responder_cache_req.c:2051: error: Failure!
>> [ RUN ] test_groups_by_filter_multiple_domains_valid
>> 0x1 != 0x2
>> src/tests/cmocka/test_responder_cache_req.c:2055: error: Failure!
>>
>> These errors say the tests failed to retrieve data from the cache. The
>> tests insert two test values into the cache at the beginning of their
>> run and then try to pull them back. And sometimes, when the machine is
>> under pressure, they fail.
>>
>> For a more detailed explanation, I added some printf(). I ran the test
>> 25 times. The results:
>>
>> [ RUN ] test_users_by_filter_valid
>> ... sysdb_store_user at [1439384336] (src/db/sysdb_ops.c:1882)
>> ... cache_req_input_create at [1439384337] (src/responder/common/responder_cache_req.c:122)
>> ... recent_filter = [(lastUpdate=1439384337)] (src/responder/common/responder_cache_req.c:44)
>> ... sysdb_store_user at [1439384337] (src/db/sysdb_ops.c:1882)
>> ... recent_filter = [(lastUpdate=1439384337)] (src/responder/common/responder_cache_req.c:44)
>> 0x1 != 0x2
>> src/tests/cmocka/test_responder_cache_req.c:1748: error: Failure!
>> [ RUN ] test_users_by_filter_multiple_domains_valid
>> ... sysdb_store_user at [1439384174] (src/db/sysdb_ops.c:1882)
>> ... sysdb_store_user at [1439384174] (src/db/sysdb_ops.c:1882)
>> ... cache_req_input_create at [1439384175] (src/responder/common/responder_cache_req.c:122)
>> ... recent_filter = [(lastUpdate=1439384175)] (src/responder/common/responder_cache_req.c:44)
>> ... recent_filter = [(lastUpdate=1439384175)] (src/responder/common/responder_cache_req.c:44)
>> 0x2 != 0
>> src/tests/cmocka/test_responder_cache_req.c:1874: error: Failure!
>> [ RUN ] test_groups_by_filter_valid
>> ... sysdb_store_group at [1439385276] (src/db/sysdb_ops.c:2042)
>> ... cache_req_input_create at [1439385277] (src/responder/common/responder_cache_req.c:122)
>> ... recent_filter = [(lastUpdate=1439385277)] (src/responder/common/responder_cache_req.c:67)
>> ... sysdb_store_group at [1439385277] (src/db/sysdb_ops.c:2042)
>> ... recent_filter = [(lastUpdate=1439385277)] (src/responder/common/responder_cache_req.c:67)
>> 0x1 != 0x2
>> src/tests/cmocka/test_responder_cache_req.c:1971: error: Failure!
>> [ RUN ] test_groups_by_filter_multiple_domains_valid
>> ... sysdb_store_group at [1439385286] (src/db/sysdb_ops.c:2042)
>> ... sysdb_store_group at [1439385287] (src/db/sysdb_ops.c:2042)
>> ... cache_req_input_create at [1439385287] (src/responder/common/responder_cache_req.c:122)
>> ... recent_filter = [(lastUpdate=1439385287)] (src/responder/common/responder_cache_req.c:67)
>> ... recent_filter = [(lastUpdate=1439385287)] (src/responder/common/responder_cache_req.c:67)
>> 0x1 != 0x2
>> src/tests/cmocka/test_responder_cache_req.c:2054: error: Failure!
>>
>> As we can see, we have discovered a newly failing test,
>> test_users_by_filter_valid.
>>
>> REPRODUCER:
>> Use cpu_braker [1] and the observation patch [2] and try some iterations:
>> # for i in {1..50} ; do ./responder_cache_req-tests ; done
>>
>> SOLUTION?
>> The problem is caused by trying to retrieve records from the cache with
>> the time filter set. The time filter is set to the time of the request
>> creation. However, in our tests we create the request after inserting
>> records into the cache, so the record timestamps and the time filter
>> can disagree.
>>
>> So the solution is either to create the request first and then insert
>> the records, or to create the request and set:
>> # req.req_start = req.req_start - 1
>>
>> Please, can you help me? For example, see the function
>> test_users_by_filter_multiple_domains_valid() in
>> src/tests/cmocka/test_responder_cache_req.c:1834
>>
>> Regards
>> Petr
>>
>> ATTACHMENTS:
>> [1] cpu_braker.c
>> [2] 0001-TEST-Observation-patch.patch

From b58608eaadca863b28b0cc80b0588fa536d508b8 Mon Sep 17 00:00:00 2001
From: Petr Cech <pc...@redhat.com>
Date: Wed, 12 Aug 2015 15:41:03 +0200
Subject: [PATCH] [TEST]: Observation patch

This patch is part of the reproducer, nothing more.

Resolves:
https://fedorahosted.org/sssd/ticket/2730
---
 src/db/sysdb_ops.c | 6 ++
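The race Petr describes can be reduced to a few lines. The `>=` comparison below is an assumption about the recent filter's semantics inferred from the mail (records refreshed at or after the request's start time pass); the function is a sketch, not sssd's code:

```python
def recent_filter_matches(last_update, req_start):
    """Sketch of the cache_req 'recent' filter: accept only records
    refreshed at or after the request's start time (assumed semantics)."""
    return last_update >= req_start

# The race: the record is stored in second T, the request object is created
# in second T+1, so the filter (lastUpdate from the logs) rejects the record.
store_time = 1439384336
req_start = store_time + 1  # the clock ticked between store and request
print(recent_filter_matches(store_time, req_start))      # → False

# The fix suggested in the mail ("req.req_start = req.req_start - 1")
# closes the one-second window:
print(recent_filter_matches(store_time, req_start - 1))  # → True
```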