On Friday, March 2, 2012 10:14:39 PM UTC-8, Antonio Martinez wrote:
>
> Hi, I was recently running some experiments using memcached and taking
> some basic benchmarks of its performance (latency and throughput). We are
> using one of the python clients
> <http://www.tummy.com/Community/software/python-memcached/> for our
> experiment. We were fine and performing as expected up to a 1024-node
> cluster, but when we went to 2048 nodes we noticed a huge hit to request
> time: requests were about 5 times slower. We tried to keep it all as
> simple as possible, so we were running single-threaded memcached on the
> nodes with as few optimization options on the client as possible. We have
> been trying to figure out why this happened, and one of the few
> explanations we could come up with is that memcached could be caching
> connections and we simply hit a point in the experiment where the cache
> is no longer useful, so memcached then started having to tear down
> connections before it could handle new ones. I am not too familiar with
> the implementation details and was wondering if anyone here might have
> some ideas about it. Our experiment setup was to have one client instance
> running on every server, randomly making inserts into the cluster with
> fixed-size random alphanumeric string keys and values.
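If I'm reading the setup right, each node is running a loop along these lines. This is just a sketch; the server list, key/value sizes, and request count below are placeholders, not your actual numbers:

```python
import random
import string
import time

import memcache  # python-memcached client

# Placeholder host list -- substitute your real 2048 node addresses.
SERVERS = ["node%04d:11211" % i for i in range(2048)]
mc = memcache.Client(SERVERS)

def random_string(length):
    """Fixed-size random alphanumeric string, as described."""
    return "".join(random.choice(string.ascii_letters + string.digits)
                   for _ in range(length))

N_REQUESTS = 10000
start = time.time()
for _ in range(N_REQUESTS):
    # Each set picks a server by hashing the key client-side.
    mc.set(random_string(16), random_string(64))
elapsed = time.time() - start
print("%d sets in %.2fs (%.0f sets/s)" % (N_REQUESTS, elapsed, N_REQUESTS / elapsed))
```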
memcached doesn't know how many servers you have and doesn't care how long you're connected to them. Have you tried any other clients? Perhaps one of the libmemcached-based ones?
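pylibmc, for instance, wraps libmemcached; dropping it into the same kind of loop looks roughly like this (the behaviors are just the ones I'd try first, and the server list is again a placeholder):

```python
import pylibmc  # libmemcached-based Python client

# Same kind of placeholder server list as in the sketch above.
SERVERS = ["node%04d:11211" % i for i in range(2048)]

# ketama enables consistent hashing; tcp_nodelay avoids Nagle delays
# on lots of small set requests.
mc = pylibmc.Client(SERVERS, binary=True,
                    behaviors={"ketama": True, "tcp_nodelay": True})
mc.set("probe-key", "probe-value")
```

If the per-request time stays flat as you grow the server list with a different client, the slowdown is probably in the client's server selection or connection handling rather than in memcached itself.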
