Hi Antonio,
On 03/03/2012 07:14 AM, Antonio Martinez wrote:
> Hi, I was recently running some experiments using memcached and taking
> some basic benchmarks of its performance (latency and throughput). We
> are using one of the Python clients
> <http://www.tummy.com/Community/software/python-memcached/> for our
> experiment. We were fine and performing as expected up to a 1024-node
> cluster, but when we went to 2048 nodes we noticed a huge hit to request
> time: requests were about 5 times slower.
Memcached has an option to specify the maximum number of simultaneous
connections. If you have more than that, new connections have to wait
until the others finish, which creates a backlog.
The default limit is 1024, so if you expect more connections than that,
you should set it higher.
How to is written here:
http://code.google.com/p/memcached/wiki/NewConfiguringServer
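For example, the limit is set with the -c flag when starting the server (4096 here is just an illustrative value; pick something above your expected peak connection count):

```shell
# Start memcached with a higher simultaneous-connection limit.
# -c defaults to 1024; 4096 is an arbitrary example value.
# -d daemonize, -m memory in MB, -p TCP port.
memcached -d -m 1024 -c 4096 -p 11211
```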
> We tried to keep it all as simple
> as possible, so we were running single-threaded memcached on the nodes
> and with as few optimization options as possible on the client. We have
> been trying to figure out why this happened, and one of the only reasons
> we could come up with is that memcached could be caching connections,
> and we simply hit a point in the experiment where the cache is no longer
> useful, so memcached then had to tear down connections before it
> could handle new ones. I am not too familiar with the implementation
> details and was wondering if anyone here might have some ideas about it.
> Our experiment setup was to have one client instance running on every
> server, randomly making inserts into the cluster with a fixed-size
> random alphanumeric string key and value.
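For reference, that workload (fixed-size random alphanumeric keys and values inserted via python-memcached) can be sketched roughly as below; the node addresses and string lengths are assumptions, and the client calls are left as comments since they require a running cluster:

```python
import random
import string

def random_alnum(n):
    # Fixed-size random alphanumeric string, as in the described workload.
    chars = string.ascii_letters + string.digits
    return ''.join(random.choice(chars) for _ in range(n))

# Hypothetical use with python-memcached (node addresses assumed):
# import memcache
# mc = memcache.Client(['node001:11211', 'node002:11211'])
# mc.set(random_alnum(16), random_alnum(64))
```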
> Thanks