“if you embed version in a key and it is possible that two different versions
of a service (experimental and prod) use different caches.”

  I don’t know why this would happen (using a configuration center to share the
version info between experimental and prod is a good idea), but putting the
version in the value seems more difficult. If some “hot” keys are replicated
to more than one cache instance, that could cause cache inconsistency.

   And sometimes people want to invalidate keys matching a constraint, such as
a filter; in that case we store the keys in the database and delete them after
running a SQL query.
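The key-tracking approach above can be sketched roughly as follows. This is a
minimal illustration, not the poster's actual code: sqlite3 stands in for the
database, a plain dict stands in for the memcached client, and all names
(`cache_set`, `invalidate_where`, the `cache_keys` table) are hypothetical.

```python
import sqlite3

# Record every cache key in a small table alongside attributes you may later
# filter on, then select the matching keys with SQL and delete each one from
# the cache. The dict `cache` stands in for a real memcached client.
cache = {}

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE cache_keys (key TEXT PRIMARY KEY, service TEXT, version INTEGER)"
)

def cache_set(key, value, service, version):
    cache[key] = value
    db.execute(
        "INSERT OR REPLACE INTO cache_keys (key, service, version) VALUES (?, ?, ?)",
        (key, service, version),
    )

def invalidate_where(service, max_version):
    """Delete all tracked keys for `service` older than `max_version`."""
    rows = db.execute(
        "SELECT key FROM cache_keys WHERE service = ? AND version < ?",
        (service, max_version),
    ).fetchall()
    for (key,) in rows:
        cache.pop(key, None)  # would be a memcached delete in practice
    db.execute(
        "DELETE FROM cache_keys WHERE service = ? AND version < ?",
        (service, max_version),
    )
    return len(rows)

cache_set("user:1", {"name": "a"}, service="users", version=1)
cache_set("user:2", {"name": "b"}, service="users", version=2)
invalidate_where("users", max_version=2)  # removes only the version-1 key
```

The SQL filter can be made as elaborate as needed, which is the appeal over
key-prefix schemes: any queryable attribute becomes an invalidation handle.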

  Best regards,

  

  Jason CHAN

 

From: [email protected] [mailto:[email protected]] On Behalf 
Of Denis Samoylov
Sent: Monday, July 14, 2014 1:25 PM
To: [email protected]
Subject: Re: Managing flushing of a specific set of keys from an application

 

we (box) do not put version information as part of the key. instead we store it
as part of the value:
{
  cache_version : int,
  db_version : int,
  value : object
}

this allows end systems to make decisions with different versions in flight.
e.g. if you embed the version in the key, it is possible that two different
versions of a service (experimental and prod) use different caches. That is
cache inconsistency. With our approach it will never happen - only one(*)
version is cached. This can cause cache thrashing of course, but with
additional handling even that can be managed.
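a read path for the version-in-value scheme above might look like the sketch
below. this is an assumption-laden illustration, not box's implementation: a
dict stands in for memcached, and `CACHE_VERSION`, `cache_set`, and
`cache_get` are hypothetical names.

```python
# On read, the caller compares the stored cache_version against the version it
# expects; a mismatch is treated as a miss, so old and new service versions can
# run side by side against the same key without splitting the cache.
CACHE_VERSION = 2  # the schema version this service instance understands

cache = {}  # stands in for a memcached client

def cache_set(key, value, db_version):
    cache[key] = {
        "cache_version": CACHE_VERSION,
        "db_version": db_version,
        "value": value,
    }

def cache_get(key, expected_cache_version=CACHE_VERSION):
    entry = cache.get(key)
    if entry is None or entry["cache_version"] != expected_cache_version:
        return None  # stale-schema entries look like misses
    return entry["value"]

cache_set("user:1", {"name": "a"}, db_version=7)
cache_get("user:1")                            # hit for the current version
cache_get("user:1", expected_cache_version=1)  # an older service sees a miss
```

the thrashing mentioned above comes from both versions rewriting the same key
after their respective misses; that is the part needing "additional handling".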

again, it depends on your needs. it is still not clear what kind of
invalidation you mean: a different "schema" or a stale object? My comment
above is about the first. For the second we simply use deletes (our ORM is
responsible for the links and we can detect what to delete). I'm still playing
with using versioning for the second, but do not want to use the version as
part of the key (due to the cache consistency issue).

Redis has a "hash" structure that allows implementing "tags" (i saw many
attempts at tagging in memcached, but none robust in a distributed
configuration). But after conversations with the Instagram people and our own
experiments we are not convinced to move the main cache to Redis (we do 200+K
ops per second per server). We are now evaluating moving sessions to it, due
to its more convenient replication and lower load.
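the Redis-hash tagging idea can be sketched as below. this is a hedged
illustration only - a dict of dicts mimics Redis HSET/HGET/DEL semantics so no
server is needed, and the helper names are hypothetical.

```python
# Each tag maps to one Redis-style hash whose fields are the tagged cache
# entries, so deleting the hash (DEL <tag>) invalidates every key under that
# tag in a single atomic operation - the property that is hard to get from
# memcached in a distributed configuration.
store = {}  # tag -> {field: value}, mimicking Redis hashes

def hset(tag, field, value):
    store.setdefault(tag, {})[field] = value

def hget(tag, field):
    return store.get(tag, {}).get(field)

def invalidate_tag(tag):
    """Equivalent of DEL <tag>: drops every entry carrying this tag."""
    store.pop(tag, None)

hset("release:v12", "user:1", "alice")
hset("release:v12", "user:2", "bob")
invalidate_tag("release:v12")
hget("release:v12", "user:1")  # None - the whole tag is gone at once
```

the trade-off is that all entries sharing a tag live in one hash, hence on one
Redis node, which is exactly why memcached tagging schemes struggle to be
robust when keys are spread across a cluster.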

hope this is interesting



On Sunday, July 13, 2014 3:03:53 PM UTC-7, John Anderson wrote:

We have a bunch of microservices that manage their own cache keys, and
sometimes when they release they need to invalidate a bunch of their old cache
keys at once.  Prior to using memcached we used Redis as a cache, running a
single Redis server per service, so we could just flush all.  Now that we are
on a memcached cluster this is no longer acceptable.

 

I'm wondering what the best practice for this is?  Do you prefix cache keys
with <service>.<version> so that when a new version comes out the old keys
automatically get ignored and LRU'd?  Or is there a way to scan through the
servers/slabs and find the keys we want to kick out?
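The <service>.<version> prefix idea from the question can be sketched as
follows - a minimal illustration with a dict standing in for memcached, where
`make_key` and the other helper names are hypothetical.

```python
# The current version is baked into every key, so bumping the version makes
# all old keys unreachable; memcached's LRU then evicts them naturally, with
# no explicit flush required.
cache = {}  # stands in for a memcached client

def make_key(service, version, key):
    return f"{service}.{version}.{key}"

def cache_set(service, version, key, value):
    cache[make_key(service, version, key)] = value

def cache_get(service, version, key):
    return cache.get(make_key(service, version, key))

cache_set("users", 1, "user:1", "alice")
cache_get("users", 1, "user:1")  # "alice"
cache_get("users", 2, "user:1")  # None - v2 never sees v1 keys
```

Note that memcached itself offers no way to scan for or delete keys by prefix,
which is why the stale entries are simply left to age out.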

 

Thanks,

John

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
