On Fri, Feb 10, 2012 at 09:21, Yiftach Shoolman <[email protected]> wrote:
> If you put the Memcached on a dedicated server, each webserver only deals
> with the network I/O associated with its traffic, leaving the dedicated
> Memcached server to deal with all cached traffic.

You are still not making any sense. Let's say you have two machines, that you need to handle 1000 web requests, and that each web request results in a memcached request.

If you split it like you suggest and put a webserver on one machine and memcached on the other machine, then the webserver will need to handle 1000 web requests, and the memcached server will need to handle 1000 memcached requests.

But if you don't split it, like I suggest, and you put a webserver on each machine and memcached on each machine, then each webserver will need to handle 500 web requests, but only 250 of those need to talk to memcached over the network; the other half talk to the local memcached and generate no network traffic.

So if you don't split them, there will be 500 fewer memcached requests over the network, which means you scale better. Not to mention the fact that you're using RAM you probably wouldn't use otherwise, and that you lose less of your cache if one server goes down.

> To clear it more, if you have N servers, each deployed with a webserver and
> a memcached server, and memcached is distributed across all servers, each
> webserver needs to deal with Memcached network I/O associated with N-1
> webservers --> we found it architecturally wrong, it actually slows down
> the entire application

So what? The number of servers you send requests to matters very little; it's the total number of requests that's interesting. One webserver sending 1000 requests to one memcached server is the same as one webserver sending 100 requests each to 10 different servers. It's still 1000 requests.

/Henrik
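The arithmetic in the argument above can be sketched in a few lines of Python. This is a toy model, not anything from the thread: the function name is made up, and it assumes one memcached request per web request with keys hashed uniformly across all memcached instances.

```python
def network_memcached_requests(total_requests, machines, colocated):
    """Return how many memcached requests cross the network.

    Assumes one memcached request per web request and keys hashed
    uniformly across all memcached instances (a simplification).
    """
    if colocated:
        # Each machine runs a webserver plus a memcached instance.
        # A given key lands on the local instance with probability
        # 1/machines, so (machines - 1)/machines of requests go remote.
        return total_requests * (machines - 1) // machines
    # Dedicated memcached box: every memcached request crosses the network.
    return total_requests

# The two-machine, 1000-request example from the mail:
print(network_memcached_requests(1000, 2, colocated=False))  # 1000
print(network_memcached_requests(1000, 2, colocated=True))   # 500
```

With two machines the colocated layout halves the memcached traffic on the wire, which is the point being made; as N grows the saving shrinks toward 1/N of the traffic.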
