Ravi-

It took me about 10 minutes to migrate from Beaker to dogpile.cache in a
Pyramid app.  Most of the work was done by find/replace, and then I
just checked for oversights.  In a few places I had specifically named the
cache keys, so those were a bit more work -- but still within 10
minutes.

From what I went through, it should be very easy to migrate Pylons as
well.

I ended up deciding against an environment.ini cache setup, and just
hardcoded the settings in.



My code generally looks like this:

    from dogpile.cache import make_region

    # define regions
    # `customized` is a dict supplying app-specific values like `app_dir`
    cache_opts = {
        'cache.name2id.backend': 'dogpile.cache.dbm',
        'cache.name2id.expiration_time': '86400',
        'cache.name2id.arguments.filename': '%(app_dir)s/data/dogpile/name2id' % customized,
        # ... repeat for the other regions ...
    }
    region_name2id = make_region(key_mangler=str)
    region_name2id.configure_from_config(cache_opts, 'cache.name2id.')

Note how I passed str as the key_mangler -- you'll get an error
otherwise, because Pyramid (and I think Pylons) hands you unicode
objects instead of plain str objects for the keys.
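If coercing with str isn't enough (very long or non-ASCII keys, say), dogpile also ships a sha1-based mangler, dogpile.cache.util.sha1_mangle_key. Here's a stdlib-only sketch of what that style of mangler does, so it runs without dogpile installed:

```python
import hashlib

def sha1_mangle_key(key):
    # stand-in sketch of a sha1-style key mangler: hash the (possibly
    # unicode) key down to a fixed-length ASCII hex string, so any
    # backend can store it regardless of key length or encoding
    return hashlib.sha1(key.encode("utf-8")).hexdigest()

print(sha1_mangle_key(u"caf\xe9"))  # 40-character hex digest
```

The tradeoff is that hashed keys are opaque, so debugging the cache by eyeballing keys gets harder.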


To query against the region, I have functions like this:

def tag_name_websafe_to_id(name_websafe, dbSession=None, request=None, use_cache=True):
    """gets the id for a websafe tag name"""

    def _query(name_websafe):
        _dbSession = ensure_db_session(dbSession, request)
        # filter_by takes keyword arguments, not positional ones
        tag = _dbSession.query(model.core.Tag)\
            .filter_by(name_websafe=name_websafe.lower())\
            .first()
        if tag:
            return tag.id
        return None

    # cache_on_arguments() is a decorator, so it needs the `@`
    @region_name2id.cache_on_arguments()
    def _sharedcache(name_websafe):
        return _query(name_websafe)

    if use_cache:
        return _sharedcache(name_websafe)
    return _query(name_websafe)
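The shape above -- an inner _query plus a decorated _sharedcache wrapper -- is what lets use_cache=False bypass the region entirely. A toy, stdlib-only stand-in for cache_on_arguments() that shows the same flow (the names here are hypothetical, not dogpile's API), tracking simulated database hits in a list:

```python
calls = []   # records each simulated "database hit"
_cache = {}  # toy region storage

def cache_on_arguments(fn):
    # toy stand-in for region.cache_on_arguments(): memoize on the argument
    def wrapper(arg):
        if arg not in _cache:
            _cache[arg] = fn(arg)
        return _cache[arg]
    return wrapper

def tag_name_to_id(name, use_cache=True):
    def _query(name):
        calls.append(name)            # stands in for the real SQLAlchemy query
        return name.lower() + "-id"   # fake id

    @cache_on_arguments
    def _sharedcache(name):
        return _query(name)

    return _sharedcache(name) if use_cache else _query(name)

print(tag_name_to_id("Foo"))   # hits the "db"
print(tag_name_to_id("Foo"))   # served from the cache
print(len(calls))              # 1
tag_name_to_id("Foo", use_cache=False)
print(len(calls))              # 2
```

With real dogpile, the decorated function also grows .invalidate(), .refresh(), and .set() methods, which is handy for cache busting.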

I centralized all the cache regions into a module,
"app.lib.helpers.caching.core", and then just import the regions from
it wherever needed.  I have some centralized caching tools in modules
like "app.lib.helpers.caching.abc" and "app.lib.helpers.caching.xyz".


In production I actually use a different SQLAlchemy query there,
optimized against a lower() index, but this version is just to
illustrate.  Also, ensure_db_session is just a routine that grabs the
db session from the request object if I didn't pass one in.
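For completeness, ensure_db_session could look something like this -- the original helper isn't shown, so the request.dbSession attribute name is an assumption:

```python
def ensure_db_session(dbSession=None, request=None):
    # sketch of the helper described above; the real one isn't shown.
    # prefer an explicitly passed session, else fall back to the one
    # hung off the request object (attribute name is assumed here)
    if dbSession is not None:
        return dbSession
    if request is not None:
        return request.dbSession
    raise ValueError("no database session available")
```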

-- 
You received this message because you are subscribed to the Google Groups 
"pylons-discuss" group.