Hello community,

here is the log from the commit of package python-Flask-Caching for 
openSUSE:Factory checked in at 2020-06-03 20:34:14
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-Flask-Caching (Old)
 and      /work/SRC/openSUSE:Factory/.python-Flask-Caching.new.3606 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-Flask-Caching"

Wed Jun  3 20:34:14 2020 rev:2 rq:810985 version:1.9.0

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-Flask-Caching/python-Flask-Caching.changes       2020-05-05 18:55:34.301436128 +0200
+++ /work/SRC/openSUSE:Factory/.python-Flask-Caching.new.3606/python-Flask-Caching.changes     2020-06-03 20:34:37.673568568 +0200
@@ -1,0 +2,24 @@
+Wed Jun  3 02:41:23 UTC 2020 - Arun Persaud <[email protected]>
+
+- specfile:
+  * be more specific in %files section
+
+- update to version 1.9.0:
+  * Add an option to include the function's source code when generating
+    the cache key. PR `#156
+    <https://github.com/sh4nks/flask-caching/pull/156>`_.
+  * Add a feature that allows one to completely control the way cache
+    keys are generated. For example, one can now implement a function
+    that generates cache keys based on POST requests.  PR `#159
+    <https://github.com/sh4nks/flask-caching/pull/159>`_.
+  * Fix the cache backend naming collisions by renaming them from
+    "simple" to "simplecache", "null" to "nullcache" and "filesystem"
+    to "filesystemcache".
+  * Explicitly pass the "default_timeout" to "RedisCache" from
+    "RedisSentinelCache".
+  * Use "os.replace" instead of werkzeug's "rename" due to Windows
+    raising an "OSError" if the dst file already exists.
+  * Documentation updates and fixes.
+
+-------------------------------------------------------------------
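
For reference, the two new key-generation features can be used from
application code roughly as follows. This is a minimal sketch against the
1.9.0 API shown in the diffs below; the helper name "key_from_post_body"
and the key format are illustrative, not part of the package:

    from flask import request
    from flask_caching import Cache

    cache = Cache()  # cache.init_app(app) happens elsewhere

    # source_check=True folds the function's source code into the cache
    # key, so editing the body invalidates previously cached values.
    @cache.cached(key_prefix="MyBits", source_check=True)
    def get_bits():
        return [0, 1, 0, 1]

    # make_cache_key hands full control of key generation to a callable,
    # e.g. keying on the POST body instead of the request path.
    def key_from_post_body(*args, **kwargs):
        return "body/" + request.get_data(as_text=True)

    @cache.cached(timeout=60, make_cache_key=key_from_post_body)
    def handle_post():
        return request.get_data(as_text=True)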

Old:
----
  Flask-Caching-1.8.0.tar.gz

New:
----
  Flask-Caching-1.9.0.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-Flask-Caching.spec ++++++
--- /var/tmp/diff_new_pack.T4BWqt/_old  2020-06-03 20:34:39.497574286 +0200
+++ /var/tmp/diff_new_pack.T4BWqt/_new  2020-06-03 20:34:39.501574298 +0200
@@ -18,7 +18,7 @@
 
 %{?!python_module:%define python_module() python-%{**} python3-%{**}}
 Name:           python-Flask-Caching
-Version:        1.8.0
+Version:        1.9.0
 Release:        0
 Summary:        Adds caching support to your Flask application
 License:        BSD-3-Clause
@@ -26,9 +26,9 @@
 URL:            https://github.com/sh4nks/flask-caching
 Source:         https://files.pythonhosted.org/packages/source/F/Flask-Caching/Flask-Caching-%{version}.tar.gz
 BuildRequires:  %{python_module Flask}
-BuildRequires:  %{python_module setuptools}
-BuildRequires:  %{python_module pytest}
 BuildRequires:  %{python_module pytest-cov}
+BuildRequires:  %{python_module pytest}
+BuildRequires:  %{python_module setuptools}
 BuildRequires:  fdupes
 BuildRequires:  python-rpm-macros
 Requires:       python-Flask
@@ -55,6 +55,7 @@
 %files %{python_files}
 %doc CHANGES README.md
 %license LICENSE
-%{python_sitelib}/*
+%{python_sitelib}/flask_caching
+%{python_sitelib}/Flask_Caching-%{version}-py*.egg-info
 
 %changelog

++++++ Flask-Caching-1.8.0.tar.gz -> Flask-Caching-1.9.0.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Flask-Caching-1.8.0/CHANGES new/Flask-Caching-1.9.0/CHANGES
--- old/Flask-Caching-1.8.0/CHANGES     2019-11-24 18:39:43.000000000 +0100
+++ new/Flask-Caching-1.9.0/CHANGES     2020-06-02 17:59:46.000000000 +0200
@@ -1,6 +1,27 @@
 Changelog
 =========
 
+Version 1.9.0
+-------------
+
+Released 2020-06-02
+
+- Add an option to include the function's source code when generating the cache
+  key. PR `#156 <https://github.com/sh4nks/flask-caching/pull/156>`_.
+- Add a feature that allows one to completely control the way cache keys
+  are generated. For example, one can now implement a function that generates
+  cache keys based on POST requests.
+  PR `#159 <https://github.com/sh4nks/flask-caching/pull/159>`_.
+- Fix the cache backend naming collisions by renaming them from ``simple`` to
+  ``simplecache``, ``null`` to ``nullcache`` and ``filesystem`` to
+  ``filesystemcache``.
+- Explicitly pass the ``default_timeout`` to ``RedisCache`` from
+  ``RedisSentinelCache``.
+- Use ``os.replace`` instead of werkzeug's ``rename`` due to Windows raising an
+  ``OSError`` if the dst file already exists.
+- Documentation updates and fixes.
+
+
 Version 1.8.0
 -------------
 
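Note that the backend renames also change import paths; code importing the
modules directly needs the new names (taken from the SOURCES.txt and
backends/__init__.py hunks below):

    # Flask-Caching <= 1.8.0
    from flask_caching.backends.simple import SimpleCache
    from flask_caching.backends.null import NullCache
    from flask_caching.backends.filesystem import FileSystemCache

    # Flask-Caching >= 1.9.0
    from flask_caching.backends.simplecache import SimpleCache
    from flask_caching.backends.nullcache import NullCache
    from flask_caching.backends.filesystemcache import FileSystemCache
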
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Flask-Caching-1.8.0/Flask_Caching.egg-info/PKG-INFO new/Flask-Caching-1.9.0/Flask_Caching.egg-info/PKG-INFO
--- old/Flask-Caching-1.8.0/Flask_Caching.egg-info/PKG-INFO     2019-11-24 18:51:33.000000000 +0100
+++ new/Flask-Caching-1.9.0/Flask_Caching.egg-info/PKG-INFO     2020-06-02 18:01:31.000000000 +0200
@@ -1,6 +1,6 @@
 Metadata-Version: 1.1
 Name: Flask-Caching
-Version: 1.8.0
+Version: 1.9.0
 Summary: Adds caching support to your Flask application
 Home-page: https://github.com/sh4nks/flask-caching
 Author: Peter Justin
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Flask-Caching-1.8.0/Flask_Caching.egg-info/SOURCES.txt new/Flask-Caching-1.9.0/Flask_Caching.egg-info/SOURCES.txt
--- old/Flask-Caching-1.8.0/Flask_Caching.egg-info/SOURCES.txt  2019-11-24 18:51:33.000000000 +0100
+++ new/Flask-Caching-1.9.0/Flask_Caching.egg-info/SOURCES.txt  2020-06-02 18:01:31.000000000 +0200
@@ -25,11 +25,11 @@
 flask_caching/jinja2ext.py
 flask_caching/backends/__init__.py
 flask_caching/backends/base.py
-flask_caching/backends/filesystem.py
+flask_caching/backends/filesystemcache.py
 flask_caching/backends/memcache.py
-flask_caching/backends/null.py
+flask_caching/backends/nullcache.py
 flask_caching/backends/rediscache.py
-flask_caching/backends/simple.py
+flask_caching/backends/simplecache.py
 flask_caching/backends/uwsgicache.py
 tests/conftest.py
 tests/test_backend_cache.py
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Flask-Caching-1.8.0/PKG-INFO new/Flask-Caching-1.9.0/PKG-INFO
--- old/Flask-Caching-1.8.0/PKG-INFO    2019-11-24 18:51:33.433720800 +0100
+++ new/Flask-Caching-1.9.0/PKG-INFO    2020-06-02 18:01:31.701421500 +0200
@@ -1,6 +1,6 @@
 Metadata-Version: 1.1
 Name: Flask-Caching
-Version: 1.8.0
+Version: 1.9.0
 Summary: Adds caching support to your Flask application
 Home-page: https://github.com/sh4nks/flask-caching
 Author: Peter Justin
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Flask-Caching-1.8.0/docs/index.rst new/Flask-Caching-1.9.0/docs/index.rst
--- old/Flask-Caching-1.8.0/docs/index.rst      2019-11-24 16:40:45.000000000 +0100
+++ new/Flask-Caching-1.9.0/docs/index.rst      2020-05-31 14:10:24.000000000 +0200
@@ -2,6 +2,7 @@
 =============
 
 .. module:: flask_caching
+   :noindex:
 
 Flask-Caching is an extension to `Flask`_ that adds caching support for
 various backends to any Flask application. Besides providing support for all
@@ -163,8 +164,8 @@
 
 .. versionadded:: 0.2
 
-You might need to delete the cache on a per-function bases. Using the above
-example, lets say you change the users permissions and assign them to a role,
+You might need to delete the cache on a per-function basis. Using the above
+example, let's say you change the user's permissions and assign them to a role,
 but now you need to re-calculate if they have certain memberships or not.
 You can do this with the :meth:`~Cache.delete_memoized` function::
 
@@ -184,6 +185,21 @@
 
      cache.delete_memoized(user_has_membership, 'demo', 'user')
 
+.. warning::
+
+  If a classmethod is memoized, you must provide the ``class`` as the first
+  ``*args`` argument.
+
+  .. code-block:: python
+
+    class Foobar(object):
+        @classmethod
+        @cache.memoize(5)
+        def big_foo(cls, a, b):
+            return a + b + random.randrange(0, 100000)
+
+    cache.delete_memoized(Foobar.big_foo, Foobar, 5, 2)
+
 
 Caching Jinja2 Snippets
 -----------------------
@@ -328,7 +344,7 @@
 ``CACHE_DEFAULT_TIMEOUT``       The default timeout that is used if no
                                 timeout is specified. Unit of time is
                                 seconds.
-``CACHE_IGNORE_ERRORS``         If set to any errors that occured during the
+``CACHE_IGNORE_ERRORS``         If set to ``True``, any errors that occurred during the
                                 deletion process will be ignored. However, if
                                 it is set to ``False`` it will stop on the
                                 first error. This option is only relevant for
@@ -342,6 +358,14 @@
                                 This makes it possible to use the same
                                 memcached server for different apps.
                                 Used only for RedisCache and MemcachedCache
+``CACHE_SOURCE_CHECK``          The default condition applied to function
+                                decorators which controls if the source code of
+                                the function should be included when forming the
+                                hash which is used as the cache key. This
+                                ensures that if the source code changes, the
+                                cached value will not be returned when the new
+                                function is called even if the arguments are the
+                                same. Defaults to ``False``.
 ``CACHE_UWSGI_NAME``            The name of the uwsgi caching instance to
                                connect to, for example: mycache@localhost:3031,
                                 defaults to an empty string, which means uWSGI
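
A minimal sketch of enabling the new option application-wide (the setting
name comes from the table above; the app and backend choice are
illustrative):

    from flask import Flask
    from flask_caching import Cache

    app = Flask(__name__)
    # Include each decorated function's source code in its cache key by
    # default; individual decorators can still pass source_check=False.
    app.config["CACHE_SOURCE_CHECK"] = True
    cache = Cache(app, config={"CACHE_TYPE": "simple"})
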
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Flask-Caching-1.8.0/flask_caching/__init__.py new/Flask-Caching-1.9.0/flask_caching/__init__.py
--- old/Flask-Caching-1.8.0/flask_caching/__init__.py   2019-11-24 18:42:35.000000000 +0100
+++ new/Flask-Caching-1.9.0/flask_caching/__init__.py   2020-06-02 18:01:04.000000000 +0200
@@ -21,7 +21,7 @@
 from flask import current_app, request, url_for
 from werkzeug.utils import import_string
 
-__version__ = "1.8.0"
+__version__ = "1.9.0"
 
 logger = logging.getLogger(__name__)
 
@@ -149,6 +149,8 @@
         self.with_jinja2_ext = with_jinja2_ext
         self.config = config
 
+        self.source_check = None
+
         if app is not None:
             self.init_app(app, config)
 
@@ -180,6 +182,7 @@
         config.setdefault("CACHE_ARGS", [])
         config.setdefault("CACHE_TYPE", "null")
         config.setdefault("CACHE_NO_NULL_WARNING", False)
+        config.setdefault("CACHE_SOURCE_CHECK", False)
 
         if (
             config["CACHE_TYPE"] == "null"
@@ -196,6 +199,8 @@
                 "CACHE_DIR is set."
             )
 
+        self.source_check = config["CACHE_SOURCE_CHECK"]
+
         if self.with_jinja2_ext:
             from .jinja2ext import CacheExtension, JINJA_CACHE_ATTR_NAME
 
@@ -293,6 +298,8 @@
         query_string=False,
         hash_method=hashlib.md5,
         cache_none=False,
+        make_cache_key=None,
+        source_check=None
     ):
         """Decorator. Use this to cache a function. By default the cache key
         is `view/request.path`. You are able to use this decorator with any
@@ -333,6 +340,8 @@
                 **make_cache_key**
                     A function used in generating the cache_key used.
 
+                    readable and writable
+
         :param timeout: Default None. If set to an integer, will cache for that
                         amount of time. Unit of time is in seconds.
 
@@ -379,7 +388,18 @@
                            check when cache.get returns None. This will likely
                            lead to wrongly returned None values in concurrent
                            situations and is not recommended to use.
+        :param make_cache_key: Default None. If set to a callable object,
+                           it will be called to generate the cache key.
 
+        :param source_check: Default None. If None will use the value set by
+                             CACHE_SOURCE_CHECK.
+                             If True, include the function's source code in the
+                             hash to avoid using cached values when the source
+                             code has changed and the input values remain the
+                             same. This ensures that the cache_key will be
+                             formed with the function's source code hash in
+                             addition to other parameters that may be included
+                             in the formation of the key.
         """
 
         def decorator(f):
@@ -389,13 +409,16 @@
                 if self._bypass_cache(unless, f, *args, **kwargs):
                     return f(*args, **kwargs)
 
+                nonlocal source_check
+                if source_check is None:
+                    source_check = self.source_check
+
                 try:
-                    if query_string:
-                        cache_key = _make_cache_key_query_string()
+                    if make_cache_key is not None and callable(make_cache_key):
+                        cache_key = make_cache_key(*args, **kwargs)
                     else:
-                        cache_key = _make_cache_key(
-                            args, kwargs, use_request=True
-                        )
+                        cache_key = _make_cache_key(args, kwargs, use_request=True)
+
 
                     if (
                         callable(forced_update)
@@ -450,7 +473,7 @@
                             )
                 return rv
 
-            def make_cache_key(*args, **kwargs):
+            def default_make_cache_key(*args, **kwargs):
                 # Convert non-keyword arguments (which is the way
                 # `make_cache_key` expects them) to keyword arguments
                 # (the way `url_for` expects them)
@@ -467,6 +490,10 @@
                 Produces the same cache key regardless of argument order, e.g.,
                 both `?limit=10&offset=20` and `?offset=20&limit=10` will
                 always produce the same exact cache key.
+
+                If func is provided and is callable it will be used to hash
+                the function's source code and include it in the cache key.
+                This will only be done if source_check is True.
                 """
 
                 # Create a tuple of (key, value) pairs, where the key is the
@@ -475,6 +502,7 @@
                 # is always the same for query string args whose keys/values
                 # are the same, regardless of the order in which they are
                 # provided.
+
                 args_as_sorted_tuple = tuple(
                     sorted((pair for pair in request.args.items(multi=True)))
                 )
@@ -482,26 +510,48 @@
                 # used as a key for cache. Turn them into bytes so that the
                 # hash function will accept them
                 args_as_bytes = str(args_as_sorted_tuple).encode()
-                hashed_args = str(hash_method(args_as_bytes).hexdigest())
-                cache_key = request.path + hashed_args
+                cache_hash = hash_method(args_as_bytes)
+
+                # Use the source code if source_check is True and update
+                # cache_hash with it before generating the hex digest used
+                # in cache_key
+                if source_check and callable(f):
+                    func_source_code = inspect.getsource(f)
+                    cache_hash.update(func_source_code.encode("utf-8"))
+
+                cache_hash = str(cache_hash.hexdigest())
+
+                cache_key = request.path + cache_hash
+
                 return cache_key
 
             def _make_cache_key(args, kwargs, use_request):
-                if callable(key_prefix):
-                    cache_key = key_prefix()
-                elif "%s" in key_prefix:
-                    if use_request:
-                        cache_key = key_prefix % request.path
-                    else:
-                        cache_key = key_prefix % url_for(f.__name__, **kwargs)
+                if query_string:
+                    return _make_cache_key_query_string()
                 else:
-                    cache_key = key_prefix
+                    if callable(key_prefix):
+                        cache_key = key_prefix()
+                    elif "%s" in key_prefix:
+                        if use_request:
+                            cache_key = key_prefix % request.path
+                        else:
+                            cache_key = key_prefix % url_for(f.__name__, **kwargs)
+                    else:
+                        cache_key = key_prefix
+
+                if source_check and callable(f):
+                    func_source_code = inspect.getsource(f)
+                    func_source_hash = hash_method(
+                        func_source_code.encode("utf-8"))
+                    func_source_hash = str(func_source_hash.hexdigest())
+
+                    cache_key += func_source_hash
 
                 return cache_key
 
             decorated_function.uncached = f
             decorated_function.cache_timeout = timeout
-            decorated_function.make_cache_key = make_cache_key
+            decorated_function.make_cache_key = default_make_cache_key
 
             return decorated_function
 
@@ -583,6 +633,7 @@
         timeout=None,
         forced_update=False,
         hash_method=hashlib.md5,
+        source_check=False
     ):
         """Function used to create the cache_key for memoized functions."""
 
@@ -607,6 +658,13 @@
 
             cache_key = hash_method()
             cache_key.update(updated.encode("utf-8"))
+
+            # Use the source code if source_check is True and update the
+            # cache_key with the function's source.
+            if source_check and callable(f):
+                func_source_code = inspect.getsource(f)
+                cache_key.update(func_source_code.encode("utf-8"))
+
             cache_key = base64.b64encode(cache_key.digest())[:16]
             cache_key = cache_key.decode("utf-8")
             cache_key += version_data
@@ -711,6 +769,7 @@
         response_filter=None,
         hash_method=hashlib.md5,
         cache_none=False,
+        source_check=None
     ):
         """Use this to cache the result of a function, taking its arguments
         into account in the cache key.
@@ -779,6 +838,16 @@
                            lead to wrongly returned None values in concurrent
                            situations and is not recommended to use.
 
+        :param source_check: Default None. If None will use the value set by
+                             CACHE_SOURCE_CHECK.
+                             If True, include the function's source code in the
+                             hash to avoid using cached values when the source
+                             code has changed and the input values remain the
+                             same. This ensures that the cache_key will be
+                             formed with the function's source code hash in
+                             addition to other parameters that may be included
+                             in the formation of the key.
+
         .. versionadded:: 0.5
             params ``make_name``, ``unless``
         """
@@ -790,6 +859,10 @@
                 if self._bypass_cache(unless, f, *args, **kwargs):
                     return f(*args, **kwargs)
 
+                nonlocal source_check
+                if source_check is None:
+                    source_check = self.source_check
+
                 try:
                     cache_key = decorated_function.make_cache_key(
                         f, *args, **kwargs
@@ -855,6 +928,7 @@
                 timeout=decorated_function,
                 forced_update=forced_update,
                 hash_method=hash_method,
+                source_check=source_check
             )
             decorated_function.delete_memoized = lambda: self.delete_memoized(f)
 
@@ -935,9 +1009,9 @@
             3.72341788
 
         :param fname: The memoized function.
-        :param \*args: A list of positional parameters used with
+        :param \\*args: A list of positional parameters used with
                        memoized function.
-        :param \**kwargs: A dict of named parameters used with
+        :param \\**kwargs: A dict of named parameters used with
                           memoized function.
 
         .. note::
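
The effect of the source_check plumbing added above is easiest to see with
memoize. A sketch patterned on the new tests in test_memoize.py further
down (assumes an existing "cache" object and "import time"):

    @cache.memoize(source_check=True)
    def big_foo(a, b):
        return str(time.time())

    first = big_foo(5, 2)
    assert big_foo(5, 2) == first  # same args, same source: cache hit

    # Redefining the function with different source changes the key, so
    # the stale cached value is not reused.
    @cache.memoize(source_check=True)
    def big_foo(a, b):
        return (str(time.time()))

    assert big_foo(5, 2) != first
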
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Flask-Caching-1.8.0/flask_caching/backends/__init__.py new/Flask-Caching-1.9.0/flask_caching/backends/__init__.py
--- old/Flask-Caching-1.8.0/flask_caching/backends/__init__.py  2019-10-25 22:28:20.000000000 +0200
+++ new/Flask-Caching-1.9.0/flask_caching/backends/__init__.py  2020-05-31 12:56:38.000000000 +0200
@@ -9,17 +9,17 @@
     :copyright: (c) 2010 by Thadeus Burgess.
     :license: BSD, see LICENSE for more details.
 """
-from flask_caching.backends.filesystem import FileSystemCache
+from flask_caching.backends.filesystemcache import FileSystemCache
 from flask_caching.backends.memcache import (
     MemcachedCache,
     SASLMemcachedCache,
     SpreadSASLMemcachedCache,
 )
-from flask_caching.backends.null import NullCache
+from flask_caching.backends.nullcache import NullCache
 
 # TODO: Rename to "redis" when python2 support is removed
 from flask_caching.backends.rediscache import RedisCache, RedisSentinelCache
-from flask_caching.backends.simple import SimpleCache
+from flask_caching.backends.simplecache import SimpleCache
 
 try:
     from flask_caching.backends.uwsgicache import UWSGICache
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Flask-Caching-1.8.0/flask_caching/backends/base.py new/Flask-Caching-1.9.0/flask_caching/backends/base.py
--- old/Flask-Caching-1.8.0/flask_caching/backends/base.py      2019-10-25 22:28:20.000000000 +0200
+++ new/Flask-Caching-1.9.0/flask_caching/backends/base.py      2020-05-31 11:26:46.000000000 +0200
@@ -96,7 +96,7 @@
         :param value: the value for the key
         :param timeout: the cache timeout for the key in seconds (if not
                         specified, it uses the default timeout). A timeout of
-                        0 idicates that the cache never expires.
+                        0 indicates that the cache never expires.
         :returns: ``True`` if key has been updated, ``False`` for backend
                   errors. Pickling errors, however, will raise a subclass of
                   ``pickle.PickleError``.
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Flask-Caching-1.8.0/flask_caching/backends/filesystem.py new/Flask-Caching-1.9.0/flask_caching/backends/filesystem.py
--- old/Flask-Caching-1.8.0/flask_caching/backends/filesystem.py        2019-10-25 22:28:20.000000000 +0200
+++ new/Flask-Caching-1.9.0/flask_caching/backends/filesystem.py        1970-01-01 01:00:00.000000000 +0100
@@ -1,220 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-    flask_caching.backends.filesystem
-    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-    The filesystem caching backend.
-
-    :copyright: (c) 2018 by Peter Justin.
-    :copyright: (c) 2010 by Thadeus Burgess.
-    :license: BSD, see LICENSE for more details.
-"""
-import errno
-import hashlib
-import os
-import tempfile
-from time import time
-
-from werkzeug.posixemulation import rename
-
-from flask_caching.backends.base import BaseCache
-
-try:
-    import cPickle as pickle
-except ImportError:  # pragma: no cover
-    import pickle
-
-
-class FileSystemCache(BaseCache):
-
-    """A cache that stores the items on the file system.  This cache depends
-    on being the only user of the `cache_dir`.  Make absolutely sure that
-    nobody but this cache stores files there or otherwise the cache will
-    randomly delete files therein.
-
-    :param cache_dir: the directory where cache files are stored.
-    :param threshold: the maximum number of items the cache stores before
-                      it starts deleting some. A threshold value of 0
-                      indicates no threshold.
-    :param default_timeout: the default timeout that is used if no timeout is
-                            specified on :meth:`~BaseCache.set`. A timeout of
-                            0 indicates that the cache never expires.
-    :param mode: the file mode wanted for the cache files, default 0600
-    :param hash_method: Default hashlib.md5. The hash method used to
-                        generate the filename for cached results.
-    :param ignore_errors: If set to ``True`` the :meth:`~BaseCache.delete_many`
-                          method will ignore any errors that occured during the
-                          deletion process. However, if it is set to ``False``
-                          it will stop on the first error. Defaults to
-                          ``False``.
-    """
-
-    #: used for temporary files by the FileSystemCache
-    _fs_transaction_suffix = ".__wz_cache"
-    #: keep amount of files in a cache element
-    _fs_count_file = "__wz_cache_count"
-
-    def __init__(
-        self,
-        cache_dir,
-        threshold=500,
-        default_timeout=300,
-        mode=0o600,
-        hash_method=hashlib.md5,
-        ignore_errors=False,
-    ):
-        super(FileSystemCache, self).__init__(default_timeout)
-        self._path = cache_dir
-        self._threshold = threshold
-        self._mode = mode
-        self._hash_method = hash_method
-        self.ignore_errors = ignore_errors
-
-        try:
-            os.makedirs(self._path)
-        except OSError as ex:
-            if ex.errno != errno.EEXIST:
-                raise
-
-        self._update_count(value=len(self._list_dir()))
-
-    @property
-    def _file_count(self):
-        return self.get(self._fs_count_file) or 0
-
-    def _update_count(self, delta=None, value=None):
-        # If we have no threshold, don't count files
-        if self._threshold == 0:
-            return
-
-        if delta:
-            new_count = self._file_count + delta
-        else:
-            new_count = value or 0
-        self.set(self._fs_count_file, new_count, mgmt_element=True)
-
-    def _normalize_timeout(self, timeout):
-        timeout = BaseCache._normalize_timeout(self, timeout)
-        if timeout != 0:
-            timeout = time() + timeout
-        return int(timeout)
-
-    def _list_dir(self):
-        """return a list of (fully qualified) cache filenames
-        """
-        mgmt_files = [
-            self._get_filename(name).split("/")[-1]
-            for name in (self._fs_count_file,)
-        ]
-        return [
-            os.path.join(self._path, fn)
-            for fn in os.listdir(self._path)
-            if not fn.endswith(self._fs_transaction_suffix)
-            and fn not in mgmt_files
-        ]
-
-    def _prune(self):
-        if self._threshold == 0 or not self._file_count > self._threshold:
-            return
-
-        entries = self._list_dir()
-        now = time()
-        for idx, fname in enumerate(entries):
-            try:
-                remove = False
-                with open(fname, "rb") as f:
-                    expires = pickle.load(f)
-                remove = (expires != 0 and expires <= now) or idx % 3 == 0
-
-                if remove:
-                    os.remove(fname)
-            except (IOError, OSError):
-                pass
-        self._update_count(value=len(self._list_dir()))
-
-    def clear(self):
-        for fname in self._list_dir():
-            try:
-                os.remove(fname)
-            except (IOError, OSError):
-                self._update_count(value=len(self._list_dir()))
-                return False
-        self._update_count(value=0)
-        return True
-
-    def _get_filename(self, key):
-        if isinstance(key, str):
-            key = key.encode("utf-8")  # XXX unicode review
-        hash = self._hash_method(key).hexdigest()
-        return os.path.join(self._path, hash)
-
-    def get(self, key):
-        filename = self._get_filename(key)
-        try:
-            with open(filename, "rb") as f:
-                pickle_time = pickle.load(f)
-                if pickle_time == 0 or pickle_time >= time():
-                    return pickle.load(f)
-                else:
-                    os.remove(filename)
-                    return None
-        except (IOError, OSError, pickle.PickleError):
-            return None
-
-    def add(self, key, value, timeout=None):
-        filename = self._get_filename(key)
-        if not os.path.exists(filename):
-            return self.set(key, value, timeout)
-        return False
-
-    def set(self, key, value, timeout=None, mgmt_element=False):
-        # Management elements have no timeout
-        if mgmt_element:
-            timeout = 0
-
-        # Don't prune on management element update, to avoid loop
-        else:
-            self._prune()
-
-        timeout = self._normalize_timeout(timeout)
-        filename = self._get_filename(key)
-        try:
-            fd, tmp = tempfile.mkstemp(
-                suffix=self._fs_transaction_suffix, dir=self._path
-            )
-            with os.fdopen(fd, "wb") as f:
-                pickle.dump(timeout, f, 1)
-                pickle.dump(value, f, pickle.HIGHEST_PROTOCOL)
-            rename(tmp, filename)
-            os.chmod(filename, self._mode)
-        except (IOError, OSError):
-            return False
-        else:
-            # Management elements should not count towards threshold
-            if not mgmt_element:
-                self._update_count(delta=1)
-            return True
-
-    def delete(self, key, mgmt_element=False):
-        try:
-            os.remove(self._get_filename(key))
-        except (IOError, OSError):
-            return False
-        else:
-            # Management elements should not count towards threshold
-            if not mgmt_element:
-                self._update_count(delta=-1)
-            return True
-
-    def has(self, key):
-        filename = self._get_filename(key)
-        try:
-            with open(filename, "rb") as f:
-                pickle_time = pickle.load(f)
-                if pickle_time == 0 or pickle_time >= time():
-                    return True
-                else:
-                    os.remove(filename)
-                    return False
-        except (IOError, OSError, pickle.PickleError):
-            return False
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Flask-Caching-1.8.0/flask_caching/backends/filesystemcache.py new/Flask-Caching-1.9.0/flask_caching/backends/filesystemcache.py
--- old/Flask-Caching-1.8.0/flask_caching/backends/filesystemcache.py   1970-01-01 01:00:00.000000000 +0100
+++ new/Flask-Caching-1.9.0/flask_caching/backends/filesystemcache.py   2020-05-31 12:16:05.000000000 +0200
@@ -0,0 +1,221 @@
+# -*- coding: utf-8 -*-
+"""
+    flask_caching.backends.filesystem
+    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+    The filesystem caching backend.
+
+    :copyright: (c) 2018 by Peter Justin.
+    :copyright: (c) 2010 by Thadeus Burgess.
+    :license: BSD, see LICENSE for more details.
+"""
+import errno
+import hashlib
+import os
+import tempfile
+from time import time
+
+from flask_caching.backends.base import BaseCache
+
+try:
+    import cPickle as pickle
+except ImportError:  # pragma: no cover
+    import pickle
+
+
+class FileSystemCache(BaseCache):
+
+    """A cache that stores the items on the file system.  This cache depends
+    on being the only user of the `cache_dir`.  Make absolutely sure that
+    nobody but this cache stores files there or otherwise the cache will
+    randomly delete files therein.
+
+    :param cache_dir: the directory where cache files are stored.
+    :param threshold: the maximum number of items the cache stores before
+                      it starts deleting some. A threshold value of 0
+                      indicates no threshold.
+    :param default_timeout: the default timeout that is used if no timeout is
+                            specified on :meth:`~BaseCache.set`. A timeout of
+                            0 indicates that the cache never expires.
+    :param mode: the file mode wanted for the cache files, default 0600
+    :param hash_method: Default hashlib.md5. The hash method used to
+                        generate the filename for cached results.
+    :param ignore_errors: If set to ``True`` the :meth:`~BaseCache.delete_many`
+                          method will ignore any errors that occurred during the
+                          deletion process. However, if it is set to ``False``
+                          it will stop on the first error. Defaults to
+                          ``False``.
+    """
+
+    #: used for temporary files by the FileSystemCache
+    _fs_transaction_suffix = ".__wz_cache"
+    #: keep amount of files in a cache element
+    _fs_count_file = "__wz_cache_count"
+
+    def __init__(
+        self,
+        cache_dir,
+        threshold=500,
+        default_timeout=300,
+        mode=0o600,
+        hash_method=hashlib.md5,
+        ignore_errors=False,
+    ):
+        super(FileSystemCache, self).__init__(default_timeout)
+        self._path = cache_dir
+        self._threshold = threshold
+        self._mode = mode
+        self._hash_method = hash_method
+        self.ignore_errors = ignore_errors
+
+        try:
+            os.makedirs(self._path)
+        except OSError as ex:
+            if ex.errno != errno.EEXIST:
+                raise
+
+        # If there are many files and a zero threshold,
+        # the list_dir can slow initialisation massively
+        if self._threshold != 0:
+            self._update_count(value=len(self._list_dir()))
+
+    @property
+    def _file_count(self):
+        return self.get(self._fs_count_file) or 0
+
+    def _update_count(self, delta=None, value=None):
+        # If we have no threshold, don't count files
+        if self._threshold == 0:
+            return
+
+        if delta:
+            new_count = self._file_count + delta
+        else:
+            new_count = value or 0
+        self.set(self._fs_count_file, new_count, mgmt_element=True)
+
+    def _normalize_timeout(self, timeout):
+        timeout = BaseCache._normalize_timeout(self, timeout)
+        if timeout != 0:
+            timeout = time() + timeout
+        return int(timeout)
+
+    def _list_dir(self):
+        """return a list of (fully qualified) cache filenames
+        """
+        mgmt_files = [
+            self._get_filename(name).split("/")[-1]
+            for name in (self._fs_count_file,)
+        ]
+        return [
+            os.path.join(self._path, fn)
+            for fn in os.listdir(self._path)
+            if not fn.endswith(self._fs_transaction_suffix)
+            and fn not in mgmt_files
+        ]
+
+    def _prune(self):
+        if self._threshold == 0 or not self._file_count > self._threshold:
+            return
+
+        entries = self._list_dir()
+        now = time()
+        for idx, fname in enumerate(entries):
+            try:
+                remove = False
+                with open(fname, "rb") as f:
+                    expires = pickle.load(f)
+                remove = (expires != 0 and expires <= now) or idx % 3 == 0
+
+                if remove:
+                    os.remove(fname)
+            except (IOError, OSError):
+                pass
+        self._update_count(value=len(self._list_dir()))
+
+    def clear(self):
+        for fname in self._list_dir():
+            try:
+                os.remove(fname)
+            except (IOError, OSError):
+                self._update_count(value=len(self._list_dir()))
+                return False
+        self._update_count(value=0)
+        return True
+
+    def _get_filename(self, key):
+        if isinstance(key, str):
+            key = key.encode("utf-8")  # XXX unicode review
+        hash = self._hash_method(key).hexdigest()
+        return os.path.join(self._path, hash)
+
+    def get(self, key):
+        filename = self._get_filename(key)
+        try:
+            with open(filename, "rb") as f:
+                pickle_time = pickle.load(f)
+                if pickle_time == 0 or pickle_time >= time():
+                    return pickle.load(f)
+                else:
+                    os.remove(filename)
+                    return None
+        except (IOError, OSError, pickle.PickleError):
+            return None
+
+    def add(self, key, value, timeout=None):
+        filename = self._get_filename(key)
+        if not os.path.exists(filename):
+            return self.set(key, value, timeout)
+        return False
+
+    def set(self, key, value, timeout=None, mgmt_element=False):
+        # Management elements have no timeout
+        if mgmt_element:
+            timeout = 0
+
+        # Don't prune on management element update, to avoid loop
+        else:
+            self._prune()
+
+        timeout = self._normalize_timeout(timeout)
+        filename = self._get_filename(key)
+        try:
+            fd, tmp = tempfile.mkstemp(
+                suffix=self._fs_transaction_suffix, dir=self._path
+            )
+            with os.fdopen(fd, "wb") as f:
+                pickle.dump(timeout, f, 1)
+                pickle.dump(value, f, pickle.HIGHEST_PROTOCOL)
+            os.replace(tmp, filename)
+            os.chmod(filename, self._mode)
+        except (IOError, OSError):
+            return False
+        else:
+            # Management elements should not count towards threshold
+            if not mgmt_element:
+                self._update_count(delta=1)
+            return True
+
+    def delete(self, key, mgmt_element=False):
+        try:
+            os.remove(self._get_filename(key))
+        except (IOError, OSError):
+            return False
+        else:
+            # Management elements should not count towards threshold
+            if not mgmt_element:
+                self._update_count(delta=-1)
+            return True
+
+    def has(self, key):
+        filename = self._get_filename(key)
+        try:
+            with open(filename, "rb") as f:
+                pickle_time = pickle.load(f)
+                if pickle_time == 0 or pickle_time >= time():
+                    return True
+                else:
+                    os.remove(filename)
+                    return False
+        except (IOError, OSError, pickle.PickleError):
+            return False
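
Besides the new zero-threshold guard in __init__, the notable change
relative to the deleted filesystem.py is the switch from werkzeug's
rename() to os.replace() in set() above: os.replace() atomically
overwrites an existing destination on every platform, whereas a plain
rename raises an OSError (FileExistsError) on Windows when the target
already exists. A minimal illustration:

    import os

    with open("dst.txt", "w") as f:
        f.write("old")
    with open("src.txt", "w") as f:
        f.write("new")

    # Succeeds everywhere and replaces dst.txt atomically; os.rename()
    # would raise FileExistsError on Windows because dst.txt exists.
    os.replace("src.txt", "dst.txt")
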
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Flask-Caching-1.8.0/flask_caching/backends/memcache.py new/Flask-Caching-1.9.0/flask_caching/backends/memcache.py
--- old/Flask-Caching-1.8.0/flask_caching/backends/memcache.py  2019-11-24 18:23:44.000000000 +0100
+++ new/Flask-Caching-1.9.0/flask_caching/backends/memcache.py  2020-05-31 11:30:58.000000000 +0200
@@ -84,7 +84,21 @@
     def _normalize_timeout(self, timeout):
         timeout = BaseCache._normalize_timeout(self, timeout)
         if timeout > 0:
-            timeout = int(time()) + timeout
+            # NOTE: pylibmc expects the timeout as delta time up to
+            # 2592000 seconds (30 days)
+            if not hasattr(self, 'mc_library'):
+                try:
+                    import pylibmc
+                except ImportError:
+                    self.mc_library = None
+                else:
+                    self.mc_library = 'pylibmc'
+
+            if self.mc_library != 'pylibmc':
+                timeout = int(time()) + timeout
+            elif timeout > 2592000:
+                timeout = 0
+
         return timeout
 
     def get(self, key):
@@ -181,6 +195,7 @@
         except ImportError:
             pass
         else:
+            self.mc_library = 'pylibmc'
             return pylibmc.Client(servers)
 
         try:
@@ -188,6 +203,7 @@
         except ImportError:
             pass
         else:
+            self.mc_library = 'google.appengine.api'
             return memcache.Client()
 
         try:
@@ -195,6 +211,7 @@
         except ImportError:
             pass
         else:
+            self.mc_library = 'memcache'
             return memcache.Client(servers)
 
         try:
@@ -202,6 +219,7 @@
         except ImportError:
             pass
         else:
+            self.mc_library = 'libmc'
             return libmc.Client(servers)
 
 
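The branching above reflects how memcached expiry values work: values up to
2592000 seconds (30 days) are interpreted as relative offsets, anything
larger as an absolute Unix timestamp. Under that assumption the
normalization behaves roughly as follows:

    # With pylibmc as the detected client library:
    #   _normalize_timeout(300)     -> 300  (relative delta, passed through)
    #   _normalize_timeout(2592001) -> 0    (too large for a delta; 0 means
    #                                        the entry never expires)
    #
    # With python-memcache, libmc or the App Engine client:
    #   _normalize_timeout(300)     -> int(time()) + 300  (absolute timestamp)
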
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Flask-Caching-1.8.0/flask_caching/backends/null.py new/Flask-Caching-1.9.0/flask_caching/backends/null.py
--- old/Flask-Caching-1.8.0/flask_caching/backends/null.py      2019-10-25 21:23:57.000000000 +0200
+++ new/Flask-Caching-1.9.0/flask_caching/backends/null.py      1970-01-01 01:00:00.000000000 +0100
@@ -1,23 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-    flask_caching.backends.null
-    ~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-    The null cache backend. A caching backend that doesn't cache.
-
-    :copyright: (c) 2018 by Peter Justin.
-    :copyright: (c) 2010 by Thadeus Burgess.
-    :license: BSD, see LICENSE for more details.
-"""
-from flask_caching.backends.base import BaseCache
-
-
-class NullCache(BaseCache):
-    """A cache that doesn't cache.  This can be useful for unit testing.
-
-    :param default_timeout: a dummy parameter that is ignored but exists
-                            for API compatibility with other caches.
-    """
-
-    def has(self, key):
-        return False
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Flask-Caching-1.8.0/flask_caching/backends/nullcache.py new/Flask-Caching-1.9.0/flask_caching/backends/nullcache.py
--- old/Flask-Caching-1.8.0/flask_caching/backends/nullcache.py 1970-01-01 01:00:00.000000000 +0100
+++ new/Flask-Caching-1.9.0/flask_caching/backends/nullcache.py 2019-10-25 21:23:57.000000000 +0200
@@ -0,0 +1,23 @@
+# -*- coding: utf-8 -*-
+"""
+    flask_caching.backends.null
+    ~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+    The null cache backend. A caching backend that doesn't cache.
+
+    :copyright: (c) 2018 by Peter Justin.
+    :copyright: (c) 2010 by Thadeus Burgess.
+    :license: BSD, see LICENSE for more details.
+"""
+from flask_caching.backends.base import BaseCache
+
+
+class NullCache(BaseCache):
+    """A cache that doesn't cache.  This can be useful for unit testing.
+
+    :param default_timeout: a dummy parameter that is ignored but exists
+                            for API compatibility with other caches.
+    """
+
+    def has(self, key):
+        return False
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Flask-Caching-1.8.0/flask_caching/backends/rediscache.py new/Flask-Caching-1.9.0/flask_caching/backends/rediscache.py
--- old/Flask-Caching-1.8.0/flask_caching/backends/rediscache.py        2019-11-24 16:38:36.000000000 +0100
+++ new/Flask-Caching-1.9.0/flask_caching/backends/rediscache.py        2020-06-02 18:00:11.000000000 +0200
@@ -49,7 +49,7 @@
         key_prefix=None,
         **kwargs
     ):
-        super(RedisCache, self).__init__(default_timeout)
+        super().__init__(default_timeout)
         if host is None:
             raise ValueError("RedisCache host parameter may not be None")
         if isinstance(host, str):
@@ -237,7 +237,7 @@
         key_prefix=None,
         **kwargs
     ):
-        super(RedisSentinelCache, self).__init__(default_timeout)
+        super().__init__(default_timeout=default_timeout)
 
         try:
             import redis.sentinel
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Flask-Caching-1.8.0/flask_caching/backends/simple.py new/Flask-Caching-1.9.0/flask_caching/backends/simple.py
--- old/Flask-Caching-1.8.0/flask_caching/backends/simple.py    2019-10-25 21:25:09.000000000 +0200
+++ new/Flask-Caching-1.9.0/flask_caching/backends/simple.py    1970-01-01 01:00:00.000000000 +0100
@@ -1,97 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-    flask_caching.backends.simple
-    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-    The simple cache backend.
-
-    :copyright: (c) 2018 by Peter Justin.
-    :copyright: (c) 2010 by Thadeus Burgess.
-    :license: BSD, see LICENSE for more details.
-"""
-from time import time
-
-from flask_caching.backends.base import BaseCache
-
-try:
-    import cPickle as pickle
-except ImportError:  # pragma: no cover
-    import pickle
-
-
-class SimpleCache(BaseCache):
-    """Simple memory cache for single process environments.  This class exists
-    mainly for the development server and is not 100% thread safe.  It tries
-    to use as many atomic operations as possible and no locks for simplicity
-    but it could happen under heavy load that keys are added multiple times.
-
-    :param threshold: the maximum number of items the cache stores before
-                      it starts deleting some.
-    :param default_timeout: the default timeout that is used if no timeout is
-                            specified on :meth:`~BaseCache.set`. A timeout of
-                            0 indicates that the cache never expires.
-    :param ignore_errors: If set to ``True`` the :meth:`~BaseCache.delete_many`
-                          method will ignore any errors that occured during the
-                          deletion process. However, if it is set to ``False``
-                          it will stop on the first error. Defaults to
-                          ``False``.
-    """
-
-    def __init__(self, threshold=500, default_timeout=300, ignore_errors=False):
-        super(SimpleCache, self).__init__(default_timeout)
-        self._cache = {}
-        self.clear = self._cache.clear
-        self._threshold = threshold
-        self.ignore_errors = ignore_errors
-
-    def _prune(self):
-        if len(self._cache) > self._threshold:
-            now = time()
-            toremove = []
-            for idx, (key, (expires, _)) in enumerate(self._cache.items()):
-                if (expires != 0 and expires <= now) or idx % 3 == 0:
-                    toremove.append(key)
-            for key in toremove:
-                self._cache.pop(key, None)
-
-    def _normalize_timeout(self, timeout):
-        timeout = BaseCache._normalize_timeout(self, timeout)
-        if timeout > 0:
-            timeout = time() + timeout
-        return timeout
-
-    def get(self, key):
-        try:
-            expires, value = self._cache[key]
-            if expires == 0 or expires > time():
-                return pickle.loads(value)
-        except (KeyError, pickle.PickleError):
-            return None
-
-    def set(self, key, value, timeout=None):
-        expires = self._normalize_timeout(timeout)
-        self._prune()
-        self._cache[key] = (
-            expires,
-            pickle.dumps(value, pickle.HIGHEST_PROTOCOL),
-        )
-        return True
-
-    def add(self, key, value, timeout=None):
-        expires = self._normalize_timeout(timeout)
-        self._prune()
-        item = (expires, pickle.dumps(value, pickle.HIGHEST_PROTOCOL))
-        if key in self._cache:
-            return False
-        self._cache.setdefault(key, item)
-        return True
-
-    def delete(self, key):
-        return self._cache.pop(key, None) is not None
-
-    def has(self, key):
-        try:
-            expires, value = self._cache[key]
-            return expires == 0 or expires > time()
-        except KeyError:
-            return False
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Flask-Caching-1.8.0/flask_caching/backends/simplecache.py new/Flask-Caching-1.9.0/flask_caching/backends/simplecache.py
--- old/Flask-Caching-1.8.0/flask_caching/backends/simplecache.py       1970-01-01 01:00:00.000000000 +0100
+++ new/Flask-Caching-1.9.0/flask_caching/backends/simplecache.py       2020-05-31 11:26:46.000000000 +0200
@@ -0,0 +1,97 @@
+# -*- coding: utf-8 -*-
+"""
+    flask_caching.backends.simple
+    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+    The simple cache backend.
+
+    :copyright: (c) 2018 by Peter Justin.
+    :copyright: (c) 2010 by Thadeus Burgess.
+    :license: BSD, see LICENSE for more details.
+"""
+from time import time
+
+from flask_caching.backends.base import BaseCache
+
+try:
+    import cPickle as pickle
+except ImportError:  # pragma: no cover
+    import pickle
+
+
+class SimpleCache(BaseCache):
+    """Simple memory cache for single process environments.  This class exists
+    mainly for the development server and is not 100% thread safe.  It tries
+    to use as many atomic operations as possible and no locks for simplicity
+    but it could happen under heavy load that keys are added multiple times.
+
+    :param threshold: the maximum number of items the cache stores before
+                      it starts deleting some.
+    :param default_timeout: the default timeout that is used if no timeout is
+                            specified on :meth:`~BaseCache.set`. A timeout of
+                            0 indicates that the cache never expires.
+    :param ignore_errors: If set to ``True`` the :meth:`~BaseCache.delete_many`
+                          method will ignore any errors that occurred during the
+                          deletion process. However, if it is set to ``False``
+                          it will stop on the first error. Defaults to
+                          ``False``.
+    """
+
+    def __init__(self, threshold=500, default_timeout=300, ignore_errors=False):
+        super(SimpleCache, self).__init__(default_timeout)
+        self._cache = {}
+        self.clear = self._cache.clear
+        self._threshold = threshold
+        self.ignore_errors = ignore_errors
+
+    def _prune(self):
+        if len(self._cache) > self._threshold:
+            now = time()
+            toremove = []
+            for idx, (key, (expires, _)) in enumerate(self._cache.items()):
+                if (expires != 0 and expires <= now) or idx % 3 == 0:
+                    toremove.append(key)
+            for key in toremove:
+                self._cache.pop(key, None)
+
+    def _normalize_timeout(self, timeout):
+        timeout = BaseCache._normalize_timeout(self, timeout)
+        if timeout > 0:
+            timeout = time() + timeout
+        return timeout
+
+    def get(self, key):
+        try:
+            expires, value = self._cache[key]
+            if expires == 0 or expires > time():
+                return pickle.loads(value)
+        except (KeyError, pickle.PickleError):
+            return None
+
+    def set(self, key, value, timeout=None):
+        expires = self._normalize_timeout(timeout)
+        self._prune()
+        self._cache[key] = (
+            expires,
+            pickle.dumps(value, pickle.HIGHEST_PROTOCOL),
+        )
+        return True
+
+    def add(self, key, value, timeout=None):
+        expires = self._normalize_timeout(timeout)
+        self._prune()
+        item = (expires, pickle.dumps(value, pickle.HIGHEST_PROTOCOL))
+        if key in self._cache:
+            return False
+        self._cache.setdefault(key, item)
+        return True
+
+    def delete(self, key):
+        return self._cache.pop(key, None) is not None
+
+    def has(self, key):
+        try:
+            expires, value = self._cache[key]
+            return expires == 0 or expires > time()
+        except KeyError:
+            return False
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Flask-Caching-1.8.0/tests/conftest.py new/Flask-Caching-1.9.0/tests/conftest.py
--- old/Flask-Caching-1.8.0/tests/conftest.py   2019-10-25 19:55:57.000000000 +0200
+++ new/Flask-Caching-1.9.0/tests/conftest.py   2020-06-02 17:48:36.000000000 +0200
@@ -105,7 +105,7 @@
 
     class Starter(ProcessStarter):
         pattern = ""
-        args = ["memcached"]
+        args = ["memcached", "-vv"]
 
     try:
         xprocess.ensure("memcached", Starter)
@@ -118,3 +118,4 @@
 
     yield
     xprocess.getinfo("memcached").terminate()
+
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Flask-Caching-1.8.0/tests/test_backend_cache.py new/Flask-Caching-1.9.0/tests/test_backend_cache.py
--- old/Flask-Caching-1.8.0/tests/test_backend_cache.py 2019-10-25 20:59:33.000000000 +0200
+++ new/Flask-Caching-1.9.0/tests/test_backend_cache.py 2020-05-31 11:30:58.000000000 +0200
@@ -276,6 +276,12 @@
         c.set("foo", "bar", epoch + 100)
         assert c.get("foo") == "bar"
 
+    def test_timeouts(self, c):
+        c.set("foo", "bar", 1)
+        assert c.get("foo") == "bar"
+        time.sleep(1)
+        assert c.has("foo") is False
+
 
 class TestUWSGICache(GenericCacheTests):
     _can_use_fast_sleep = False
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Flask-Caching-1.8.0/tests/test_basic_app.py new/Flask-Caching-1.9.0/tests/test_basic_app.py
--- old/Flask-Caching-1.8.0/tests/test_basic_app.py     2019-02-23 23:28:48.000000000 +0100
+++ new/Flask-Caching-1.9.0/tests/test_basic_app.py     2020-05-31 13:01:34.000000000 +0200
@@ -3,7 +3,7 @@
 from flask import Flask
 
 from flask_caching import Cache
-from flask_caching.backends.simple import SimpleCache
+from flask_caching.backends.simplecache import SimpleCache
 
 try:
     import redis
@@ -30,7 +30,7 @@
 def test_dict_config_initapp(app):
     cache = Cache()
     cache.init_app(app, config={"CACHE_TYPE": "simple"})
-    from flask_caching.backends.simple import SimpleCache
+    from flask_caching.backends.simplecache import SimpleCache
 
     assert isinstance(app.extensions["cache"][cache], SimpleCache)
 
@@ -38,7 +38,7 @@
 def test_dict_config_both(app):
     cache = Cache(config={"CACHE_TYPE": "null"})
     cache.init_app(app, config={"CACHE_TYPE": "simple"})
-    from flask_caching.backends.simple import SimpleCache
+    from flask_caching.backends.simplecache import SimpleCache
 
     assert isinstance(app.extensions["cache"][cache], SimpleCache)
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Flask-Caching-1.8.0/tests/test_cache.py new/Flask-Caching-1.9.0/tests/test_cache.py
--- old/Flask-Caching-1.8.0/tests/test_cache.py 2019-11-24 16:38:36.000000000 +0100
+++ new/Flask-Caching-1.9.0/tests/test_cache.py 2020-05-31 14:10:24.000000000 +0200
@@ -88,6 +88,62 @@
         assert my_list != his_list
 
 
+def test_cache_cached_function_with_source_check_enabled(app, cache):
+    with app.test_request_context():
+
+        @cache.cached(key_prefix="MyBits", source_check=True)
+        def get_random_bits():
+            return [random.randrange(0, 2) for i in range(50)]
+
+        first_attempt = get_random_bits()
+        second_attempt = get_random_bits()
+
+        assert second_attempt == first_attempt
+
+        # ... change the source to see if the return value changes when called
+        @cache.cached(key_prefix="MyBits", source_check=True)
+        def get_random_bits():
+            return {"val": [random.randrange(0, 2) for i in range(50)]}
+
+        third_attempt = get_random_bits()
+
+        assert third_attempt != first_attempt
+        # We changed the return data type so we do a check to be sure
+        assert isinstance(third_attempt, dict)
+
+        # ... change the source back to what it was originally and the data should
+        # be the same
+        @cache.cached(key_prefix="MyBits", source_check=True)
+        def get_random_bits():
+            return [random.randrange(0, 2) for i in range(50)]
+
+        forth_attempt = get_random_bits()
+
+        assert forth_attempt == first_attempt
+
+
+def test_cache_cached_function_with_source_check_disabled(app, cache):
+    with app.test_request_context():
+
+        @cache.cached(key_prefix="MyBits", source_check=False)
+        def get_random_bits():
+            return [random.randrange(0, 2) for i in range(50)]
+
+        first_attempt = get_random_bits()
+        second_attempt = get_random_bits()
+
+        assert second_attempt == first_attempt
+
+        # ... change the source to see if the return value changes when called
+        @cache.cached(key_prefix="MyBits", source_check=False)
+        def get_random_bits():
+            return {"val": [random.randrange(0, 2) for i in range(50)]}
+
+        third_attempt = get_random_bits()
+
+        assert third_attempt == first_attempt
+
+
 def test_cache_accepts_multiple_ciphers(app, cache, hash_method):
     with app.test_request_context():
 
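
Note: the two tests above exercise the new source_check flag on
@cache.cached. A brief standalone sketch of the idea (app setup assumed to
match the test fixtures):

    import random
    from flask import Flask
    from flask_caching import Cache

    app = Flask(__name__)
    cache = Cache(app, config={"CACHE_TYPE": "simple"})

    with app.test_request_context():

        @cache.cached(key_prefix="MyBits", source_check=True)
        def get_random_bits():
            return [random.randrange(0, 2) for _ in range(50)]

        first = get_random_bits()
        # With source_check=True the function's source is hashed into the
        # cache key, so redefining the body creates a new key and a cache
        # miss; with source_check=False the stale value would be served.
        assert get_random_bits() == first
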
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Flask-Caching-1.8.0/tests/test_memoize.py new/Flask-Caching-1.9.0/tests/test_memoize.py
--- old/Flask-Caching-1.8.0/tests/test_memoize.py       2019-11-24 16:38:36.000000000 +0100
+++ new/Flask-Caching-1.9.0/tests/test_memoize.py       2020-05-31 14:10:24.000000000 +0200
@@ -710,3 +710,53 @@
 
         memoize_none(1)
         assert call_counter[1] == 3
+
+
+def test_memoize_with_source_check_enabled(app, cache):
+    with app.test_request_context():
+        @cache.memoize(source_check=True)
+        def big_foo(a, b):
+            return str(time.time())
+
+        first_try = big_foo(5, 2)
+
+        second_try = big_foo(5, 2)
+
+        assert second_try == first_try
+
+        @cache.memoize(source_check=True)
+        def big_foo(a, b):
+            return (str(time.time()))
+
+        third_try = big_foo(5, 2)
+
+        assert third_try != first_try
+
+        @cache.memoize(source_check=True)
+        def big_foo(a, b):
+            return str(time.time())
+
+        fourth_try = big_foo(5, 2)
+
+        assert fourth_try == first_try
+
+
+def test_memoize_with_source_check_disabled(app, cache):
+    with app.test_request_context():
+        @cache.memoize(source_check=False)
+        def big_foo(a, b):
+            return str(time.time())
+
+        first_try = big_foo(5, 2)
+
+        second_try = big_foo(5, 2)
+
+        assert second_try == first_try
+
+        @cache.memoize(source_check=False)
+        def big_foo(a, b):
+            return (time.time())
+
+        third_try = big_foo(5, 2)
+
+        assert third_try == first_try
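
Note: the same flag applies to @cache.memoize, as the two tests above show.
A short sketch under the same assumptions as before:

    import time
    from flask import Flask
    from flask_caching import Cache

    app = Flask(__name__)
    cache = Cache(app, config={"CACHE_TYPE": "simple"})

    with app.test_request_context():

        @cache.memoize(source_check=True)
        def big_foo(a, b):
            # Keyed on the function, its arguments and, with
            # source_check=True, a hash of the function's source code.
            return str(time.time())

        assert big_foo(5, 2) == big_foo(5, 2)  # same args hit the cache
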
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/Flask-Caching-1.8.0/tests/test_view.py new/Flask-Caching-1.9.0/tests/test_view.py
--- old/Flask-Caching-1.8.0/tests/test_view.py  2019-11-24 17:30:32.000000000 +0100
+++ new/Flask-Caching-1.9.0/tests/test_view.py  2020-05-31 14:11:37.000000000 +0200
@@ -1,4 +1,5 @@
 # -*- coding: utf-8 -*-
+import hashlib
 import time
 
 from flask import request
@@ -332,3 +333,179 @@
     # ... making sure that different query parameter values
     # don't yield the same cache!
     assert not third_time == second_time
+
+
+def test_generate_cache_key_from_request_body(app, cache):
+    """Test a user-supplied cache key maker.
+    Create three requests to verify that the same request body
+    always references the same cache entry.
+    Also test to make sure that the same cache entry isn't being used for
+    any/all request bodies.
+    Caching functionality is verified by a `@cached` route `/works` which
+    produces a time in its response. The time in the response can verify that
+    two requests with the same request body produce responses with the same time.
+    """
+
+    def _make_cache_key_request_body(argument):
+        """Create keys based on request body."""
+        # Hash the request body so it can be
+        # used as a cache key.
+        request_body = request.get_data(as_text=False)
+        hashed_body = str(hashlib.md5(request_body).hexdigest())
+        cache_key = request.path + hashed_body
+        return cache_key
+
+    @app.route('/works/<argument>', methods=['POST'])
+    @cache.cached(make_cache_key=_make_cache_key_request_body)
+    def view_works(argument):
+        return str(time.time()) + request.get_data().decode()
+
+    tc = app.test_client()
+
+    # Make our request...
+    first_response = tc.post(
+        '/works/arg', data=dict(mock=True, value=1, test=2)
+    )
+    first_time = first_response.get_data(as_text=True)
+
+    # Make the same request again...
+    second_response = tc.post(
+        '/works/arg', data=dict(mock=True, value=1, test=2)
+    )
+    second_time = second_response.get_data(as_text=True)
+
+    # Now make sure the times for the first and second
+    # requests are the same!
+    assert second_time == first_time
+
+    # Last/third request with different body should
+    # produce a different time.
+    third_response = tc.post(
+        '/works/arg', data=dict(mock=True, value=2, test=3)
+    )
+    third_time = third_response.get_data(as_text=True)
+
+    # ... making sure that different request bodies
+    # don't yield the same cache!
+    assert third_time != second_time
+
+
+def test_cache_with_query_string_and_source_check_enabled(app, cache):
+    """Test the _make_cache_key_query_string() cache key maker with
+    source_check set to True to include the view function's source code as
+    part of the cache hash key.
+    """
+
+    @cache.cached(query_string=True, source_check=True)
+    def view_works():
+        return str(time.time())
+
+    app.add_url_rule('/works', 'works', view_works)
+
+    tc = app.test_client()
+
+    # Make our first query...
+    first_response = tc.get(
+        '/works?mock=true&offset=20&limit=15'
+    )
+    first_time = first_response.get_data(as_text=True)
+
+    # Make our second query...
+    second_response = tc.get(
+        '/works?mock=true&offset=20&limit=15'
+    )
+    second_time = second_response.get_data(as_text=True)
+
+    # The cache should yield the same data first and second time
+    assert first_time == second_time
+
+    # Change the source of the function attached to the view
+    @cache.cached(query_string=True, source_check=True)
+    def view_works():
+        return (str(time.time()))
+
+    # ... and override the function attached to the view
+    app.view_functions['works'] = view_works
+
+    tc = app.test_client()
+
+    # Make the third query...
+    third_response = tc.get(
+        '/works?mock=true&offset=20&limit=15'
+    )
+    third_time = third_response.get_data(as_text=True)
+
+    # Now make sure the times for the first and third
+    # responses are not the same, i.e. the cache is not used!
+    assert third_time != first_time
+
+    # Change the source of the function to what it was originally
+    @cache.cached(query_string=True, source_check=True)
+    def view_works():
+        return str(time.time())
+
+    app.view_functions['works'] = view_works
+
+    tc = app.test_client()
+
+    # The last/fourth query, with the original source restored, should
+    # produce the same time as the first.
+    fourth_response = tc.get(
+        '/works?mock=true&offset=20&limit=15'
+    )
+    fourth_time = fourth_response.get_data(as_text=True)
+
+    # ... making sure that the first value and the fourth value are the same
+    # since the source is the same
+    assert fourth_time == first_time
+
+
+def test_cache_with_query_string_and_source_check_disabled(app, cache):
+    """Test the _make_cache_key_query_string() cache key maker with
+    source_check set to False to exclude the view function's source code from
+    the cache hash key, and to verify that changing the source does not change
+    the data.
+    """
+
+    @cache.cached(query_string=True, source_check=False)
+    def view_works():
+        return str(time.time())
+
+    app.add_url_rule('/works', 'works', view_works)
+
+    tc = app.test_client()
+
+    # Make our first query...
+    first_response = tc.get(
+        '/works?mock=true&offset=20&limit=15'
+    )
+    first_time = first_response.get_data(as_text=True)
+
+    # Make our second query...
+    second_response = tc.get(
+        '/works?mock=true&offset=20&limit=15'
+    )
+    second_time = second_response.get_data(as_text=True)
+
+    # The cache should yield the same data first and second time
+    assert first_time == second_time
+
+    # Change the source of the function attached to the view
+    @cache.cached(query_string=True, source_check=False)
+    def view_works():
+        return (str(time.time()))
+
+    # ... and override the function attached to the view
+    app.view_functions['works'] = view_works
+
+    tc = app.test_client()
+
+    # Make the third query...
+    third_response = tc.get(
+        '/works?mock=true&offset=20&limit=15'
+    )
+    third_time = third_response.get_data(as_text=True)
+
+    # Now make sure the times for the first and third responses are the same,
+    # i.e. the cache is used since it will not check for source changes!
+    assert third_time == first_time
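
Note: the test above demonstrates the new user-controlled key generation.
A minimal sketch of wiring the same hook into an application (the
/expensive route name is illustrative only):

    import hashlib
    import time
    from flask import Flask, request
    from flask_caching import Cache

    app = Flask(__name__)
    cache = Cache(app, config={"CACHE_TYPE": "simple"})

    def make_key_from_body(*args, **kwargs):
        # Key on the request path plus a hash of the raw POST body,
        # mirroring _make_cache_key_request_body in the test above.
        body = request.get_data(as_text=False)
        return request.path + hashlib.md5(body).hexdigest()

    @app.route("/expensive", methods=["POST"])
    @cache.cached(make_cache_key=make_key_from_body)
    def expensive():
        return str(time.time())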

