No takers on this? I assume people manage their server lists somehow -- or do you just assume that all servers are always up?
On Sat, Dec 06, 2008 at 05:08:07PM -0800, Bill Moseley wrote:
> In a web application I'm using a pool of servers with Ketama hashing.
> I'm using Memcached::libmemcached, and I'm persisting the "memc"
> handle between requests, per process.
>
> If a get() or set() call fails due to a memcached server failure, I
> want to pull that server out of the server list so that new keys will
> map to the remaining active servers.
>
> The Perl module only provides a "walk_stats()" call, which uses
> memcached_stat to return stats for every server in the current list.
> If a memcached server fails, then "walk_stats()" returns nothing -- so
> it doesn't tell me which server failed.
>
> So, here's the approach I'm considering, but I'd like to hear other
> suggestions (or reasons why this approach is insane):
>
> I create a new handle (memcached_create) and then add the servers.
> As I call memcached_server_add() for each server, I also call
> walk_stats() to make sure that I have stats for that newly added
> server. If that stat fails, I flag the server as bad and start over.
> In the end I have a list of only the servers that are alive (well,
> that returned stats).
>
> Then, in my get() and set() calls, if I get an error I destroy my
> "memc" handle, which triggers reloading the server list as above.
>
> Is there a better way to do this?
>
> I can see one issue: if a server is intermittent, it might be
> possible to end up with the same key in multiple servers -- so as the
> server is toggled in and out of the pool, the app could get different
> values for the same key at different times. But that's a problem not
> really related to the method above.
>
> Thanks,
>
> --
> Bill Moseley
> [EMAIL PROTECTED]
> Sent from my iMutt

--
Bill Moseley
[EMAIL PROTECTED]
Sent from my iMutt
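
For what it's worth, here's a minimal sketch of the rebuild-and-probe approach described above, using the calls named in the message (memcached_create, memcached_server_add, walk_stats). The server list and the walk_stats callback signature (key, value, "host:port") are assumptions -- check them against your version of Memcached::libmemcached -- and this obviously only runs against a live pool:

```perl
use strict;
use warnings;
use Memcached::libmemcached qw(memcached_create memcached_server_add);

# Hypothetical pool; replace with your real server list.
my @servers = ( [ '10.0.0.1', 11211 ], [ '10.0.0.2', 11211 ] );

sub build_memc {
    my $memc = memcached_create();

    for my $server (@servers) {
        my ( $host, $port ) = @$server;
        memcached_server_add( $memc, $host, $port );

        # Probe the newly added server: walk_stats returns stats per
        # live server, so if we never see this host:port in the
        # callback, treat the server as dead.
        my %seen;
        $memc->walk_stats( sub {
            my ( $key, $value, $hostport ) = @_;
            $seen{$hostport}++;
            return;
        } );

        unless ( $seen{"$host:$port"} ) {
            warn "memcached $host:$port not responding; rebuilding pool\n";
            @servers = grep { $_ != $server } @servers;
            return build_memc();    # start over without the dead server
        }
    }
    return $memc;
}

my $memc = build_memc();
# On a get()/set() error, undef $memc and call build_memc() again to
# re-derive the live server list, per the scheme above.
```

One thing to watch with this sketch: because the list shrinks on each rebuild, a server that comes back never rejoins the pool until the process restarts (or you reset @servers to the full list before rebuilding), which also interacts with the duplicate-key concern mentioned above.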
