We only have the cache configured at the Keystone level; I am not sure that will help, since validating tokens still requires retrieving the revoke tree from the cache…
Changing the cache_time values does not reduce the number of requests hitting the cache; it only increases the load on the database behind it…
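What we have is more or less the following in keystone.conf (a rough sketch only; the option names are the standard [cache]/[revoke]/[token] ones, and the addresses and times are placeholders rather than our production values):

    [cache]
    # dogpile.cache backend shared by the keystone subsystems
    enabled = true
    backend = dogpile.cache.memcached
    backend_argument = url:127.0.0.1:11211

    [revoke]
    # the revocation tree is cached for this many seconds
    caching = true
    cache_time = 10

    [token]
    caching = true
    cache_time = 300
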
I am looking into other backend possibilities to reduce the “hot key” issue; I was just wondering what you are all using, and from the replies it seems everyone is on memcache…
Cheers,
Jose
Jose Castro Leon
CERN IT-CM-RPS tel: +41.22.76.74272
mob: +41.75.41.19222
fax: +41.22.76.67955
Office: 31-1-026 CH-1211 Geneve 23
email: [email protected]
From: [email protected] [mailto:[email protected]] On Behalf Of Matt Fischer
Sent: Wednesday, June 22, 2016 5:07 AM
To: Sam Morrison <[email protected]>
Cc: Jose Castro Leon <[email protected]>;
[email protected]
Subject: Re: [Openstack-operators] [Openstack-Operators] Keystone cache
strategies
On Tue, Jun 21, 2016 at 7:04 PM, Sam Morrison <[email protected]> wrote:
On 22 Jun 2016, at 10:58 AM, Matt Fischer <[email protected]> wrote:
Have you set up token caching at the service level? Meaning a memcache cluster that Glance, Nova, etc. would talk to directly? That will really cut down the traffic.
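Something like the following in each service's config (just a sketch; the option names are from keystonemiddleware's auth_token section and the addresses are placeholders):

    [keystone_authtoken]
    # validated tokens are cached here instead of asking keystone every time
    memcached_servers = 192.0.2.10:11211,192.0.2.11:11211
    token_cache_time = 300
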
Yeah, we have that, although the default cache time is 10 seconds for revocation lists. I might just set that to some large number to limit this traffic a bit.
Sam
We have ours set to 60 seconds. I've fiddled around with it some, but I've
found that revocation events are damaging to perf no matter how much magic you
try to apply.
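In config terms that is just something like this (a sketch; the option is the keystonemiddleware revocation cache setting, 60 being the value we landed on):

    [keystone_authtoken]
    # default is 10 seconds
    revocation_cache_time = 60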
_______________________________________________
OpenStack-operators mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators