To my understanding, at the server level, Memcached is implemented as a fully 
associative cache -- most likely using an LRU list to decide which entries to 
overwrite.  Would it be theoretically beneficial if Memcached used a 2- or 
4-way set-associative cache?  Of course, some changes would be needed, e.g. it 
would have to statically allocate RAM so it could partition its blocks.
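
To make the idea concrete, here is a minimal toy sketch (my own illustration, not Memcached's actual implementation) of an N-way set-associative cache: each key hashes to a fixed set, and LRU eviction happens only within that set, so the eviction search is bounded by the set size rather than the whole cache.

```python
from collections import OrderedDict

class SetAssociativeCache:
    """Toy N-way set-associative cache: a key hashes to exactly one set,
    and LRU eviction is performed only within that set."""

    def __init__(self, num_sets=256, ways=2):
        self.num_sets = num_sets
        self.ways = ways
        # One small OrderedDict (acting as a per-set LRU list) per set.
        self.sets = [OrderedDict() for _ in range(num_sets)]

    def _set_for(self, key):
        return self.sets[hash(key) % self.num_sets]

    def get(self, key):
        s = self._set_for(key)
        if key not in s:
            return None
        s.move_to_end(key)          # mark as most recently used
        return s[key]

    def put(self, key, value):
        s = self._set_for(key)
        if key in s:
            s.move_to_end(key)
        elif len(s) >= self.ways:
            s.popitem(last=False)   # evict the LRU entry in this set only
        s[key] = value
```

The trade-off shows up directly: a hot key can be evicted while colder keys survive in other sets, which is exactly the "faster eviction decisions, lower hit rate" exchange being discussed.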

But this would definitely help apps that cache for speed rather than for 
cache-hit reliability.

The only way I can see to implement any sort of direct-mapped / set-
associative cache organization would be to specify the blocks/partitions you 
write to in your application (i.e., spec out the direct-mapped design in the 
application itself).  I could see how this would give you more control over 
the cache, and could probably result in faster caching performance (both 
reads and writes), but lower hit rates.
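
As a sketch of that application-level approach (hypothetical; the backend here is just a dict standing in for any cache client), a client can enforce a direct-mapped design itself: each key hashes to exactly one slot key, a write to an occupied slot overwrites it, and a read checks the stored key to detect collisions.

```python
class DirectMappedClient:
    """Hypothetical client-side direct mapping: each key hashes to one
    slot, and writing to an occupied slot simply overwrites it (no LRU)."""

    def __init__(self, backend, num_slots=1024):
        self.backend = backend        # any dict-like cache client
        self.num_slots = num_slots

    def _slot(self, key):
        return "slot:%d" % (hash(key) % self.num_slots)

    def set(self, key, value):
        # Store the real key alongside the value so collisions are detectable.
        self.backend[self._slot(key)] = (key, value)

    def get(self, key):
        entry = self.backend.get(self._slot(key))
        if entry is None or entry[0] != key:
            return None               # empty slot or collision: treat as a miss
        return entry[1]
```

Reads and writes are a single bounded lookup with no eviction bookkeeping, which illustrates the speed-for-hit-rate trade: any two keys sharing a slot evict each other unconditionally.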

Any thoughts?

Brian Brooks
http://csel.cs.colorado.edu/~brooksbp/
Cell: (303)319-8663
