Hi, I was recently running some experiments with memcached and taking some 
basic benchmarks of its performance (latency and throughput). We are 
using one of the Python clients 
<http://www.tummy.com/Community/software/python-memcached/> for our 
experiment. Everything performed as expected up to a 1024-node cluster, 
but when we went to 2048 nodes we saw a huge hit to request time: 
requests became about 5 times slower. We tried to keep everything as 
simple as possible, so we ran single-threaded memcached on the nodes 
and enabled as few optimization options on the client as possible.
We have been trying to figure out why this happened, and one of the only 
explanations we could come up with is that memcached might be caching 
connections: at 2048 nodes we may simply have hit the point in the 
experiment where that cache is no longer effective, and memcached has to 
tear down connections before it can handle new ones. I am not too 
familiar with the implementation details and was wondering if anyone 
here might have some ideas about it.
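
One way to check the connection hypothesis would be to poll the servers' connection counters while the benchmark runs. This is a minimal sketch using memcached's plain-text `stats` command over a raw socket (assuming the default port 11211; the host addresses here are hypothetical):

```python
import socket

def parse_stat(text, field):
    """Pull one field out of a memcached `stats` response.

    Lines look like: "STAT curr_connections 2042".
    """
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[0] == "STAT" and parts[1] == field:
            return parts[2]
    return None

def memcached_stat(host, port, field):
    """Send the `stats` command to one memcached node and return one field."""
    with socket.create_connection((host, port), timeout=2) as s:
        s.sendall(b"stats\r\n")
        data = b""
        # The stats output is terminated by an "END\r\n" line.
        while not data.endswith(b"END\r\n"):
            chunk = s.recv(4096)
            if not chunk:
                break
            data += chunk
    return parse_stat(data.decode(), field)

# Example (hypothetical host):
#   memcached_stat("10.0.0.1", 11211, "curr_connections")
```

Watching `curr_connections` (and `total_connections`) across the cluster as it grows from 1024 to 2048 nodes should show whether connections are being churned rather than reused.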
Our experiment setup was one client instance running on every 
server, making random inserts into the cluster with fixed-size 
random alphanumeric strings for both key and value.
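
For reference, the insert loop is roughly equivalent to the following sketch. The key/value lengths and server addresses are placeholders, not our actual parameters; note that python-memcached keeps one persistent socket per server in the list, so with 2048 nodes each client holds up to 2048 open connections:

```python
import random
import string

KEY_LEN = 16    # placeholder fixed key size
VALUE_LEN = 64  # placeholder fixed value size

def rand_alnum(n):
    """Fixed-size random alphanumeric string, used for both keys and values."""
    return "".join(random.choices(string.ascii_letters + string.digits, k=n))

def run_inserts(servers, count):
    """Issue `count` random set() calls against the cluster."""
    import memcache  # python-memcached
    mc = memcache.Client(servers)
    for _ in range(count):
        mc.set(rand_alnum(KEY_LEN), rand_alnum(VALUE_LEN))

if __name__ == "__main__":
    # Hypothetical addresses; replace with the real node list.
    run_inserts(["10.0.0.%d:11211" % i for i in range(1, 5)], 1000)
```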

Thanks
