Henrik, I tend to agree with you on the network I/O point, but only in the case where the webserver and memcached are deployed on all nodes.
In all the other cases I disagree. When the webserver and memcached are on the same node, you have scaling dependencies: if you want to scale the webserver, you must also scale memcached, unless you accept a strange configuration where only part of your nodes run both. That creates an unbalanced web tier, and the nodes running both a webserver and memcached also become network I/O bound. The same applies in the opposite direction: you don't want to add more webservers when all you need is more memory for memcached. Last but not least, as Dieter said, when the memcached load increases it can easily reach the point where it degrades your webserver performance, which works against the whole concept of memcached, namely improving performance.

Yiftach

On Sat, Feb 11, 2012 at 8:16 AM, Dieter Schmidt <[email protected]> wrote:

> The point is your example. 1000 requests can in practice cause 10000
> memcached requests. So if you want to scale 5x, the number of ports on
> your (one/one) machine setup is not the only thing that will hit a hard
> limit.
>
> Both services on the same machine is also difficult.
>
>
> Henrik Schröder <[email protected]> schrieb:
>
> >On Fri, Feb 10, 2012 at 09:21, Yiftach Shoolman
> ><[email protected]> wrote:
> >
> >> If you put Memcached on a dedicated server, each webserver only deals
> >> with the network I/O associated with its own traffic, leaving the
> >> dedicated Memcached server to deal with all cached traffic.
> >
> >You are still not making any sense. Let's say you have two machines, that
> >you need to handle 1000 web requests, and that each web request results
> >in a memcached request.
> >
> >If you split it like you suggest and put a webserver on one machine, and
> >memcached on the other machine, then the webserver will need to handle
> >1000 web requests, and the memcached server will need to handle 1000
> >memcached requests.
> >
> >But if you don't split it, like I suggest, and you put a webserver on
> >each machine and memcached on each machine, then each webserver will
> >need to handle 500 web requests, but only 250 of those need to talk to
> >memcached over the network; the other half talk to the local memcached
> >and generate no network traffic.
> >
> >So if you don't split them, there will be 500 fewer memcached requests
> >over the network, which means you scale better. Not to mention the fact
> >that you're using RAM you probably wouldn't use otherwise, and that you
> >lose less of your cache if one server goes down.
> >
> >
> >> To make it clearer: if you have N servers, each deployed with a
> >> webserver and a memcached server, and memcached is distributed across
> >> all servers, each webserver needs to deal with the memcached network
> >> I/O associated with N-1 webservers --> we found this architecturally
> >> wrong; it actually slows down the entire application.
> >
> >So what? The number of servers you send requests to matters very little;
> >it's the total number of requests that's interesting. One webserver
> >sending 1000 requests to one memcached server is the same as one
> >webserver sending 100 requests to each of 10 different servers. It's
> >still 1000 requests.
> >
> >
> >/Henrik

--
Yiftach Shoolman
+972-54-7634621
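For what it's worth, the traffic arithmetic both sides are arguing about can be written down directly. This is a minimal sketch of my own (not code from anyone in the thread), assuming each web request triggers exactly one memcached request and keys hash uniformly across the memcached instances:

```python
def network_memcached_requests(total_requests: int, n_machines: int,
                               colocated: bool) -> int:
    """Count the memcached requests that must cross the network.

    Assumes one memcached request per web request and a uniform key
    distribution across memcached instances.
    """
    if not colocated:
        # Dedicated memcached tier: every cache request crosses the network.
        return total_requests
    # Colocated: each of the n machines serves total/n web requests, and
    # a fraction 1/n of its cache lookups land on its own local memcached.
    per_server = total_requests // n_machines
    local = per_server // n_machines
    return n_machines * (per_server - local)

# Henrik's two-machine example with 1000 web requests:
print(network_memcached_requests(1000, 2, colocated=False))  # 1000
print(network_memcached_requests(1000, 2, colocated=True))   # 500
```

This reproduces Henrik's numbers (1000 network requests when split, 500 when colocated), though note the local-hit fraction 1/n shrinks as N grows, which is the scaling concern raised above.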
