URL: https://github.com/SSSD/sssd/pull/259
Author: fidencio
Title: #259: RESPONDER: Also populate cr_domains when initializing the responders
Action: synchronized
To pull the PR as a Git branch:
git remote add ghsssd https://github.com/SSSD/sssd
git fetch ghsssd pull/259/head:pr259
git checkout pr259
From 30d9638273576edbc9f9f169efd27e3b77207603 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Fabiano=20Fid=C3=AAncio?= <[email protected]>
Date: Wed, 3 May 2017 13:24:40 +0200
Subject: [PATCH] CACHE_REQ: Ensure the domains are updated for "filter" calls
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

When contacting the InfoPipe responder with a "filter"-related call, we
may hit a situation where the cr_domains list is not populated yet,
because the domains and subdomains list from the data provider has not
been processed yet.

This can happen when a D-Bus request is sent to the InfoPipe as soon as
it initializes (so schedule_get_domains_task() has not finished yet).
I am not sure whether there is a better way to avoid this "race" for
now than explicitly calling sss_dp_get_domains() when cr_domains is not
populated by the time cache_req_send() is called, and resuming the
processing after sss_dp_get_domains() has finished.

Resolves:
https://pagure.io/SSSD/sssd/issue/3387

Signed-off-by: Fabiano Fidêncio <[email protected]>
---
 src/responder/common/cache_req/cache_req.c | 52 +++++++++++++++++++++++++++++-
 1 file changed, 51 insertions(+), 1 deletion(-)

diff --git a/src/responder/common/cache_req/cache_req.c b/src/responder/common/cache_req/cache_req.c
index 797325a..0727f38 100644
--- a/src/responder/common/cache_req/cache_req.c
+++ b/src/responder/common/cache_req/cache_req.c
@@ -698,6 +698,8 @@ static errno_t cache_req_process_input(TALLOC_CTX *mem_ctx,
                                        struct cache_req *cr,
                                        const char *domain);
 
+static void cache_req_get_domains_done(struct tevent_req *subreq);
+
 static void cache_req_input_parsed(struct tevent_req *subreq);
 
 static errno_t cache_req_select_domains(struct tevent_req *req,
@@ -753,12 +755,12 @@ struct tevent_req *cache_req_send(TALLOC_CTX *mem_ctx,
         goto done;
     }
 
+    state->domain_name = domain;
     ret = cache_req_process_input(state, req, cr, domain);
     if (ret != EOK) {
         goto done;
     }
 
-    state->domain_name = domain;
     ret = cache_req_select_domains(req, domain);
 
 done:
@@ -787,6 +789,22 @@ static errno_t cache_req_process_input(TALLOC_CTX *mem_ctx,
     }
 
     if (cr->plugin->parse_name == false || domain != NULL) {
+        /* When reaching this point we may end up in a situation where
+         * the domains and subdomains from the data provider were not
+         * processed yet.
+         *
+         * In this case we just want to call sss_dp_get_domains() and
+         * do whatever is needed when it's done. */
+        if (cr->rctx->cr_domains == NULL) {
+            subreq = sss_dp_get_domains_send(mem_ctx, cr->rctx, false, domain);
+            if (subreq == NULL) {
+                return ENOMEM;
+            }
+
+            tevent_req_set_callback(subreq, cache_req_get_domains_done, req);
+            return EAGAIN;
+        }
+
         /* We do not want to parse the name. 
 */
         return cache_req_set_name(cr, cr->data->name.input);
     }
@@ -812,6 +830,38 @@ static errno_t cache_req_process_input(TALLOC_CTX *mem_ctx,
     return EAGAIN;
 }
 
+static void cache_req_get_domains_done(struct tevent_req *subreq)
+{
+    struct tevent_req *req;
+    struct cache_req_state *state;
+    errno_t ret;
+
+    req = tevent_req_callback_data(subreq, struct tevent_req);
+    state = tevent_req_data(req, struct cache_req_state);
+
+    ret = sss_dp_get_domains_recv(subreq);
+    talloc_free(subreq);
+    if (ret != EOK) {
+        goto done;
+    }
+
+    ret = cache_req_set_name(state->cr, state->cr->data->name.input);
+    if (ret != EOK) {
+        goto done;
+    }
+
+    ret = cache_req_select_domains(req, state->domain_name);
+
+done:
+    if (ret == EOK) {
+        tevent_req_done(req);
+        tevent_req_post(req, state->ev);
+    } else if (ret != EAGAIN) {
+        tevent_req_error(req, ret);
+        tevent_req_post(req, state->ev);
+    }
+}
+
 static void cache_req_input_parsed(struct tevent_req *subreq)
 {
     struct tevent_req *req;
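For context on the tevent idioms the patch relies on: cache_req_process_input()
returns EAGAIN to tell cache_req_send() that the request will finish
asynchronously, the subrequest's callback resumes the work, and
tevent_req_done()/tevent_req_error() followed by tevent_req_post() guarantee
that the caller's callback always runs from the event loop. Below is a
minimal, self-contained sketch of that pattern, assuming only public
tevent/talloc APIs; it is not SSSD code, the demo_* names are hypothetical,
and tevent_wakeup_send() merely plays the role of an asynchronous subrequest
such as sss_dp_get_domains_send().

/* A minimal sketch (not SSSD code) of the tevent request pattern used
 * in the patch above; tevent_wakeup_send() stands in for an async
 * subrequest such as sss_dp_get_domains_send().
 * Build: cc demo.c $(pkg-config --cflags --libs tevent talloc) */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <talloc.h>
#include <tevent.h>

struct demo_state {
    struct tevent_context *ev;
};

static void demo_subreq_done(struct tevent_req *subreq);

static struct tevent_req *demo_send(TALLOC_CTX *mem_ctx,
                                    struct tevent_context *ev,
                                    bool data_ready)
{
    struct tevent_req *req;
    struct tevent_req *subreq;
    struct demo_state *state;

    req = tevent_req_create(mem_ctx, &state, struct demo_state);
    if (req == NULL) {
        return NULL;
    }
    state->ev = ev;

    if (!data_ready) {
        /* Like cache_req_process_input() when cr_domains is NULL:
         * issue a subrequest and resume from its callback. */
        subreq = tevent_wakeup_send(state, ev,
                                    tevent_timeval_current_ofs(0, 1000));
        if (tevent_req_nomem(subreq, req)) {
            return tevent_req_post(req, ev);
        }
        tevent_req_set_callback(subreq, demo_subreq_done, req);
        return req;
    }

    /* Finished synchronously: tevent_req_post() defers the caller's
     * callback to the event loop instead of invoking it directly. */
    tevent_req_done(req);
    return tevent_req_post(req, ev);
}

static void demo_subreq_done(struct tevent_req *subreq)
{
    /* Recover the outer request, as cache_req_get_domains_done() does. */
    struct tevent_req *req = tevent_req_callback_data(subreq,
                                                      struct tevent_req);
    bool ok = tevent_wakeup_recv(subreq);

    talloc_free(subreq);
    if (!ok) {
        tevent_req_error(req, EIO);
        return;
    }

    /* The missing data would now be available; finish the request. */
    tevent_req_done(req);
}

static int demo_recv(struct tevent_req *req)
{
    enum tevent_req_state state;
    uint64_t error;

    if (tevent_req_is_error(req, &state, &error)) {
        return (int)error;
    }
    return 0;
}

static void demo_done(struct tevent_req *req)
{
    int ret = demo_recv(req);

    printf("request finished: %s\n", ret == 0 ? "EOK" : strerror(ret));
    talloc_free(req);
}

int main(void)
{
    TALLOC_CTX *mem_ctx = talloc_new(NULL);
    struct tevent_context *ev = tevent_context_init(mem_ctx);
    struct tevent_req *req = ev != NULL ? demo_send(mem_ctx, ev, false)
                                        : NULL;

    if (req == NULL) {
        return 1;
    }
    tevent_req_set_callback(req, demo_done, NULL);
    tevent_loop_wait(ev);
    talloc_free(mem_ctx);
    return 0;
}

Posting the result rather than invoking the callback directly is the point of
tevent_req_post(): a request that completes synchronously inside _send() would
otherwise fire its callback before the caller had a chance to call
tevent_req_set_callback(), which is also why cache_req_get_domains_done()
pairs tevent_req_done()/tevent_req_error() with tevent_req_post().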
