Reviewed:  https://review.openstack.org/611117
Committed: 
https://git.openstack.org/cgit/openstack/oslo.cache/commit/?id=685a05edfef24cb2300df24aa3417ad0ecd2c2c2
Submitter: Zuul
Branch:    master

commit 685a05edfef24cb2300df24aa3417ad0ecd2c2c2
Author: Morgan Fainberg <morgan.fainb...@gmail.com>
Date:   Tue Oct 16 10:50:33 2018 -0700

    Config dict arguments.url must be a list
    
    For memcache, arguments.url is assumed to be a list; the underlying
    memcache library cannot handle a comma-separated string being passed
    in instead. The fix forces a split on ',' in the special case of
    backend_argument=url:... so that a list is passed to the backend.
    
    Change-Id: I65afb46455423d7f96e17c471330c8d74dc17cac
    Closes-Bug: #1743036
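
For illustration, a minimal sketch of the approach the commit describes:
when parsing backend_argument entries, split the url value on ',' so the
backend receives a list. The function name and surrounding code here are
a simplified paraphrase, not the verbatim oslo.cache source:

    def _parse_backend_arguments(backend_arguments):
        # Turn 'name:value' option strings into a dict of backend
        # arguments (simplified sketch, not the actual library code).
        arguments = {}
        for argument in backend_arguments:
            name, value = argument.split(':', 1)
            if name == 'url':
                # The memcache backends expect a list of servers, so
                # force a split on ',' instead of passing the raw
                # string through.
                value = value.split(',')
            arguments[name] = value
        return arguments

    # ['url:10.0.0.1:11211,10.0.0.2:11211'] parses to
    # {'url': ['10.0.0.1:11211', '10.0.0.2:11211']}
    print(_parse_backend_arguments(['url:10.0.0.1:11211,10.0.0.2:11211']))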


** Changed in: oslo.cache
       Status: In Progress => Fix Released

https://bugs.launchpad.net/bugs/1743036

Title:
  Multiple memcached back-end instances breaks caching

Status in OpenStack Identity (keystone):
  Invalid
Status in oslo.cache:
  Fix Released

Bug description:
  Environment
  ~~~~~~~~~~~
  * Keystone deployed in containers, running on uWSGI (per openstack-ansible defaults)
  * Keystone baseline (as provided by OSA 16.0.2): 6a67918f9d5f39564af8eacc57b80cba98242683 # HEAD of "stable/pike" as of 28.09.2017
  * openstack-ansible version: 16.0.2
  * Target OS: Ubuntu 16.04 Xenial
  * keystone.conf: http://paste.openstack.org/show/643865/ ([cache] backend = dogpile.cache.memcached)

  Symptom
  ~~~~~~~
  Running keystone against multiple memcached backends (as per the OSA
  standard deployment pattern) results in caching being completely
  defeated, meaning Keystone performs as if caching were disabled.

  Switching `backend_argument =
  url:<cache1ip>,<cache2ip>,<cache3ip>:11211` to `url:<cache1ip>:11211`
  restores caching (and performance).
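
  For context, the dogpile.cache.memcached backend wraps the
  python-memcached library, whose client takes a list of 'host:port'
  strings, one entry per server. A minimal sketch (the server addresses
  are illustrative):

      import memcache  # python-memcached, used by dogpile.cache.memcached

      # The client expects one 'host:port' string per server:
      client = memcache.Client(['10.0.0.1:11211',
                                '10.0.0.2:11211',
                                '10.0.0.3:11211'])

      # Before the oslo.cache fix above, the backend_argument value was
      # passed through verbatim as the single string
      # '10.0.0.1,10.0.0.2,10.0.0.3:11211' rather than a list of servers.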

  
  Analysis
  ~~~~~~~~
  Having turned on oslo.cache debugging with the following settings:

  `
  [DEFAULT]
  debug = True
  default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=DEBUG,dogpile.core.dogpile=INFO

  [cache]
  debug_cache_backend = True
  `
  It becomes obvious that every attempt to retrieve a cached value
  fails. This appears to be because a different key is generated
  (through hashing?) despite the payload (e.g. a token) being identical.
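
  The hashing in question is almost certainly SHA-1 key mangling: the
  40-character hex keys in the logs are SHA-1 digests, and dogpile.cache
  provides sha1_mangle_key for exactly this purpose. SHA-1 is
  deterministic, so a quick check (the token value is illustrative)
  shows that two different digests can only come from two different
  pre-mangled keys:

      from dogpile.cache.util import sha1_mangle_key

      # Identical inputs always mangle to the identical SHA-1 digest...
      assert sha1_mangle_key(b'token/abc123') == sha1_mangle_key(b'token/abc123')

      # ...so differing digests for the same token imply the pre-mangled
      # key itself differs between the two requests.
      print(sha1_mangle_key(b'token/abc123'))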

  Evidence
  ~~~~~~~~
  The following log excerpts demonstrate the issue:
  * http://paste.openstack.org/show/643886/
  * http://paste.openstack.org/show/643887/

  This is generated through two successive attempts to validate the same
  token (the payload shown on the last line of both logs is the same), 3
  seconds apart, within the same keystone uWSGI worker process, using
  the same client invocation:

  curl -X GET -H "X-Auth-Token: $ADMIN_TOKEN" -H "X-Subject-Token:
  $SUBJECT_TOKEN" http://${HOST}:35357/v3/auth/tokens  -w
  "\n\n%{time_total}\n\n"

  Yet the cache keys for the two requests differ
  ('5bd08aa07bf8b4bcafeac88469769a4554297df6' vs
  '27eb98ba5a77f3701547be6ed06f3301e124e853'). Because the keys differ,
  the second request (for the same token) misses the cache, and the
  same token is stored a second time under the second key.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1743036/+subscriptions
