Reviewed:  https://review.opendev.org/c/openstack/oslo.cache/+/932301
Committed: https://opendev.org/openstack/oslo.cache/commit/58b06c40d227af7ea5f70d61b485ba7392c343d1
Submitter: "Zuul (22348)"
Branch:    master
commit 58b06c40d227af7ea5f70d61b485ba7392c343d1
Author: Takashi Kajinami <[email protected]>
Date:   Sun Oct 13 23:49:07 2024 +0900

    Support expiration time in backend

    The current implementation of expiration time relies on the generation
    time stored in the actual cache data, so expired cache records are not
    removed from backends automatically. Add a new option to additionally
    set the expiration time supported by the cache backend, so that
    operators can limit the amount of space (especially memory) used for
    cache data.

    Closes-Bug: #1578401
    Change-Id: If61871f030560079482ecbbefeb940d8d3c18968

** Changed in: oslo.cache
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1578401

Title:
  tokens in memcache have no/improper expiration

Status in OpenStack Identity (keystone):
  Invalid
Status in oslo.cache:
  Fix Released

Bug description:
  Tokens stored in memcache have no (or improper) expiration data when
  set. Found on stable/mitaka and stable/liberty using the cachepool
  backend and the non-pooled backend.

  When you store a value in memcache you can optionally pass in a time
  at which the value is no longer valid, a TTL. Keystone should be doing
  this for its local token cache, but it doesn't look like it is.
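The distinction the fix draws is between application-level expiry (dogpile stores the creation time inside the cached value and ignores stale entries on read) and backend-level expiry (the server itself drops the entry and reclaims the memory). The toy cache below is a minimal sketch of that backend-side behavior; it is a hypothetical illustration, not oslo.cache's or memcached's actual API.

```python
import time


class BackendTTLCache:
    """Toy cache illustrating backend-side expiration (hypothetical example)."""

    def __init__(self):
        self._store = {}  # key -> (value, deadline or None)

    def set(self, key, value, ttl=None):
        # With a ttl, the backend records a hard deadline for the entry,
        # mirroring the TTL argument of a memcached `set` command.
        deadline = time.monotonic() + ttl if ttl else None
        self._store[key] = (value, deadline)

    def get(self, key):
        value, deadline = self._store.get(key, (None, None))
        if deadline is not None and time.monotonic() >= deadline:
            del self._store[key]  # the backend reclaims the slot itself
            return None
        return value

    def __len__(self):
        # Expunge dead entries first so the count reflects live data only.
        now = time.monotonic()
        for k in [k for k, (_, d) in self._store.items() if d and now >= d]:
            del self._store[k]
        return len(self._store)
```

Without the `ttl` argument (the pre-fix behavior the bug describes), entries linger in the store indefinitely and only pressure-driven eviction removes them.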
  Here is a dump of some slabs that have tokens in them; the last field
  is the expiration time:

  stats cachedump 11 10
  ITEM 8302fc81f2ffb552d5ba8d3e5f0e182ee285786a [724 b; 1460583765 s]
  ITEM 2ffe5d0821302a8501068a8411ce1749cea0645b [776 b; 1460583765 s]
  ITEM eb6e6f7e9118133a9a98944da874ac1b59c5675b [724 b; 1460583765 s]
  ITEM ee076b853dd5e5956366854abf6c49dbdf5ee4c2 [723 b; 1460583765 s]

  Let's see if these are really tokens:

  get 8302fc81f2ffb552d5ba8d3e5f0e182ee285786a
  VALUE 8302fc81f2ffb552d5ba8d3e5f0e182ee285786a 1 724
  cdogpile.cache.api
  CachedValue
  p0
  ((dp1
  S'access'
  p2
  (dp3
  S'token'
  p4
  (dp5
  S'issued_at'
  p6
  S'2016-05-04T21:20:27.000000Z'
  p7
  sS'expires'
  p8
  S'2016-05-04T23:20:27.146911Z'
  p9
  sS'id'
  p10
  V<fernet token value here>

  Yep, that's a Fernet token. Dumping older and older stuff, I can find
  cached tokens that are 24 hours old in here, 22 hours past our valid
  token deadline.

  So let's compare that to some tokens that keystone_authtoken
  middleware is caching for control services:

  stats cachedump 21 100
  ITEM tokens/418b2c5a0e67d022b0578fbc4c96abf4a4509e94aca4ca1595167f8f01448957 [8463 b; 1462397763 s]
  ITEM tokens/2b5a26e3bdf4dec0caae141353297f0316b55daf683b4bc0fcd1ab7bf4ba8f9b [8312 b; 1462397539 s]
  ITEM tokens/778329eb53545cbd664fa67e6429f48692679f428077b48baa4991f13cc1817c [8312 b; 1462397538 s]
  ITEM tokens/b80b06cf688c37f8688c368a983c2fd533c662b7b3063c6a2665c59def958cdd [8312 b; 1462397537 s]
  ITEM tokens/61cd52b0654641a21c62831f6e5be9f0328898d05026d6bb91c787d79cb8b460 [8463 b; 1462397536 s]

  All have valid and different expiration times, so it's respecting my
  settings. So what's that timestamp in the earlier list? Well, it's
  4/13/2016, 3:42:45 PM GMT-6:00 DST. That happens to be the last time
  memcache restarted, so I assume it's just filler.

  What's the impact? I've not figured out if there is one yet for sure.
  I have a token valid time of 2 hours and I had set the cache time to
  the same.
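The `ITEM <key> [<size> b; <expiry> s]` lines above have a fixed shape, so the raw epoch timestamps can be converted to readable dates programmatically, which makes the "filler" value (the memcached restart time) easy to spot next to real per-token expirations. A small hypothetical parser (`parse_cachedump` is not part of memcached or oslo.cache):

```python
import re
from datetime import datetime, timezone

# Matches one line of `stats cachedump` output: ITEM <key> [<size> b; <expiry> s]
ITEM_RE = re.compile(r"ITEM (\S+) \[(\d+) b; (\d+) s\]")


def parse_cachedump(lines):
    """Parse cachedump lines into (key, size_bytes, expiry_datetime) tuples."""
    items = []
    for line in lines:
        m = ITEM_RE.match(line.strip())
        if m:
            key, size, exp = m.group(1), int(m.group(2)), int(m.group(3))
            items.append((key, size, datetime.fromtimestamp(exp, tz=timezone.utc)))
    return items
```

Running this over the first dump would show every entry "expiring" at 2016-04-13 21:42:45 UTC, the restart timestamp, while the `keystone_authtoken` entries decode to distinct future times.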
  I did try to dig out an old token but it would not validate, so I
  don't think there's a security issue. I suspect the main issue is
  that my keystone memcache always runs completely 100% full. We max
  memcache at 20% of RAM on a box, and that's a lot (20% of 256G). I
  suspect that with no valid expiration time, memcache is lazily
  evicting old tokens when it runs out of RAM rather than replacing
  expired ones, instead of avoiding allocating more RAM in the first
  place.

  [PROD] mfisch@east-keystone-001:~$ cat /proc/3937/status
  Name:   memcached
  State:  S (sleeping)
  Tgid:   3937
  Ngid:   3937
  Pid:    3937
  PPid:   1
  TracerPid:      0
  Uid:    65534   65534   65534   65534
  Gid:    65534   65534   65534   65534
  FDSize: 1024
  Groups: 0
  VmPeak: 54500552 kB
  VmSize: 54500552 kB   <-- that's a lot of twinkies

  I feel this merits deeper investigation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1578401/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : [email protected]
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
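The `/proc/<pid>/status` figures quoted in the report can also be pulled out programmatically when monitoring a memcached process for this kind of unbounded growth. A minimal hypothetical helper (`vm_sizes` is not a standard utility; the field format follows the proc(5) man page, `<name>:\t<value> kB`):

```python
def vm_sizes(status_text):
    """Extract VmPeak/VmSize values (in kB) from /proc/<pid>/status content."""
    sizes = {}
    for line in status_text.splitlines():
        if line.startswith(("VmPeak:", "VmSize:")):
            field, rest = line.split(":", 1)
            sizes[field] = int(rest.split()[0])  # value is '<n> kB'; keep the number
    return sizes
```

Fed the dump above, it reports roughly 52 GiB of virtual memory for the memcached process, which matches the "20% of 256G" cap described in the report.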

