I find that coding with a cache in mind results in different code - for
example where you would normally do a 'select *' to get a whole bunch of
complete records from a DB, you might instead do a 'select id' followed
by a string of sub-requests in order to play to the strengths of the
cache (i.e. to get cache hits for those individual records as parts of
unrelated queries on the same tables). That would normally be a really
bad way of talking to the DB, but when you have a cache in the picture
it changes things. The downside is that if the cache disappears, you end
up hitting the DB harder than you otherwise would.
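Roughly, the idea is something like this (a throwaway sketch only; the
table name, key prefix, and the $memcache object are placeholders, not
from any real codebase):

    // "select id" first, then per-record lookups that can share cache
    // entries with other queries on the same table.
    $res = mysql_query("select id from items where category = 5");
    $items = array();
    while ($row = mysql_fetch_assoc($res)) {
        $id = $row["id"];
        $item = $memcache->get("item_" . $id);
        if ($item === false) {
            // cache miss: fetch the full record and prime the cache
            $item_res = mysql_query("select * from items where id = " . intval($id));
            $item = mysql_fetch_assoc($item_res);
            $memcache->set("item_" . $id, $item);
        }
        $items[$id] = $item;
    }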
We used to use this type of thing a lot. Here is some code I just
hacked up; it may not even parse. It could be improved by using a
multi-get, though that would need an additional loop (a multi-get
variant is sketched after the function below). Also note that record
order is not preserved by this function.
function get_stuff($id_array){
    global $memcache;
    $data_array  = array();
    $fetch_array = array();
    // First pass: try the cache for each id.
    foreach($id_array as $id){
        $data_array[$id] = $memcache->get("some_prefix_".$id);
        if($data_array[$id] === false){
            $fetch_array[] = intval($id);
        }
    }
    // Second pass: fetch the misses from the DB in one query
    // and prime the cache with the results.
    if(!empty($fetch_array)){
        $sql = "select * from table where id in (".implode(",", $fetch_array).")";
        $res = mysql_query($sql);
        while($rec = mysql_fetch_assoc($res)){
            $data_array[$rec["id"]] = $rec;
            $memcache->set("some_prefix_".$rec["id"], $rec);
        }
    }
    return $data_array;
}
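For what it's worth, the multi-get version might look something like
this. This is only a sketch, assuming the pecl/memcache extension where
get() accepts an array of keys and returns the hits as an associative
array; the function and key names are made up:

    function get_stuff_multi($id_array){
        global $memcache;
        $data_array = array();
        // Build one cache key per id and fetch them all in a single round trip.
        $keys = array();
        foreach($id_array as $id){
            $keys["some_prefix_".$id] = intval($id);
        }
        $cached = $memcache->get(array_keys($keys));
        // The extra loop: map cache hits back to ids and collect the misses.
        $fetch_array = array();
        foreach($keys as $key => $id){
            if(isset($cached[$key])){
                $data_array[$id] = $cached[$key];
            } else {
                $fetch_array[] = $id;
            }
        }
        if(!empty($fetch_array)){
            $sql = "select * from table where id in (".implode(",", $fetch_array).")";
            $res = mysql_query($sql);
            while($rec = mysql_fetch_assoc($res)){
                $data_array[$rec["id"]] = $rec;
                $memcache->set("some_prefix_".$rec["id"], $rec);
            }
        }
        return $data_array;
    }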
Or rather, we used to, back when we used memcached on the
forward-facing servers to cache code/queries. We don't do that anymore
because of the cache stampedes you mention. We now pregenerate our
cache into replicated MySQL servers.
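To give a rough idea of the shape of that approach (purely
illustrative, with made-up table and key names, not our actual setup):
a cron job runs the expensive query once and writes the serialized
result into a cache table, which then replicates out to the slaves.

    $res = mysql_query("select * from some_table where expires > now()");
    $rows = array();
    while($rec = mysql_fetch_assoc($res)){
        $rows[] = $rec;
    }
    // Front-end code only ever reads query_cache on a replica, so the
    // expensive query runs once per cron interval instead of per request.
    mysql_query(sprintf(
        "replace into query_cache (cache_key, cache_value, generated_at)
         values ('%s', '%s', now())",
        "front_page_list",
        mysql_real_escape_string(serialize($rows))
    ));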
--
Brian Moon
Senior Developer
------------------------------
http://dealnews.com/
It's good to be cheap =)