Thank you for getting back to me. Yeah, I agree with what you're saying. The
problem I'm really trying to solve is the "how to get them requested
reasonably often" part. A good use case illustrating my problem is basically this:
1) Somebody starts an interactive job on a compute node (this is somewhat
unusual in and of itself). There's a decent chance that nobody has done this
for weeks or months in the first place. Since a large number of our
1000 or so users aren't compute users, there's a high probability that we have a
substantial number of expired cached entries, possibly 500 or more for users in
/home.
2) They are navigating around on the filesystem, cd into /home, and type 'ls -l'.
This command will actually take upwards of an hour to execute (although it will
complete eventually). If an 'ls -l' on a Linux system takes more than a few
seconds, people will think there's a problem with the system.
Based on my experience, even 'entry_cache_nowait_percentage' has a difficult
time with a large number of records past the nowait threshold. For example, if
there are 500 records past the expiration percentage threshold, the data
provider gets 'busy', which effectively appears to block the NSS responder,
instead of returning all 500 of those records from the cache and then queueing
500 data provider requests in the background to refresh the cache.
Right now the only ways I can seem to get around this are to do a regular 'ls
-l' to refresh the cache on our nodes, or just defer the problem by setting a
really high entry cache timeout. The cron approach is a little challenging
because we need to randomize invocation times; bulk cache refreshes
across the environment are going to cause high load on our domain controllers
(I know this because a single cache refresh causes ns-slapd to hit and sustain
100% CPU utilization for the duration of the enumeration).
Is there anything crazy about setting the entry cache timeout on the client to
something arbitrarily high, like 5 years (other than knowing the cache is not
accurate)? To my knowledge a user's groups are evaluated at login, so
this should be a non-issue from a security standpoint.
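Concretely, what I'm asking about would look something like this in sssd.conf (the domain name is a placeholder; 157680000 seconds is roughly 5 years):

```ini
[domain/example.com]
# ~5 years in seconds; entries would effectively never expire from the cache
entry_cache_timeout = 157680000
```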
> On Feb 1, 2017, at 1:55 AM, Jakub Hrozek <jhro...@redhat.com> wrote:
> On Tue, Jan 31, 2017 at 08:05:18PM +0000, Sullivan, Daniel [CRI] wrote:
>> I figured out what was going on with this issue. Basically cache timeouts
>> were causing a large number of uid numbers in an arbitrarily-timed directory
>> listing to have expired cache records, which causes those records to be
>> looked up again by the data provider (and thus blocking ‘ls -l’). To work
>> around this issue we are currently setting entry_cache_timeout to
>> something arbitrarily high, i.e. 999999, though I’m questioning whether or
>> not this is the best approach. I’d like to use something like
>> refresh_expired_interval, although based on my testing it appears that this
>> does not update records for a trusted AD domain. I’ve also tried using
>> enumeration, and that doesn’t seem to work either.
>> I suppose my question is this; is there a preferred method to keep cache
>> records up-to-date for a trusted AD domain? Right now I am thinking about
>> cron-tabbing an ‘ls -l’ of /home and allowing entry_cache_nowait_percentage
>> to fill this function, although that seems hacky to me.
>> Any advisement that could be provided would be greatly appreciated.
> If the entries are requested reasonably often (typically at least once
> per cache lifetime), then maybe just lowering the
> 'entry_cache_nowait_percentage' value so that the background check is
> performed more often might help.
> Manage your subscription for the Freeipa-users mailing list:
> Go to http://freeipa.org for more info on the project