Hi everyone,

Thanks a lot for your replies. I'll try to address your feedback point by point.

* Running several instances of Memcached sounds a bit silly indeed. I was
testing a lot of different things and somehow it worked better than having
just one Memcached per server. That was a long time ago and I haven't tried
rolling back to a single instance, nor do I know what impact (if any) that
would have on performance.

* The "issue" is that there is data that I can't cache into APC, I need it
to be in one place and allow all the web servers to fetch it there. And the
bottleneck of my application seems to be Memcache. At peak it can be about
20,000 HTTP requests per second. Each of them generating one Memcache call.
If that call hangs for just a few milliseconds too much, then NGINX is very
quickly overloaded as the workers are waiting for PHP to finish and Nginx
simply maxout its workers.
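
To give a clearer picture, each request does something roughly like this
(very simplified; the host, key name and the miss handling are placeholders,
not the real code):

<?php
// Very simplified sketch of the per-request lookup.
$mc = new Memcache();
$mc->connect('memcache-host', 11211);    // the shared cache box on the LAN

$data = $mc->get('shared_data');         // the one GET every request makes
if ($data === false) {
    // On a miss the data is rebuilt and pushed back so all servers see it.
    $data = rebuild_shared_data();       // placeholder for the real rebuild
    $mc->set('shared_data', $data, 0, 60);   // cache for 60 seconds
}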

The LAN is 1 Gbps.
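
Doing the back-of-envelope math on those numbers (assuming all 20,000 GETs
actually hit the wire at peak):

  1 Gbps / 20,000 GETs per second = 50 kbit per GET ~= 6 KB per value

so anything much bigger than a few KB per value would saturate the LAN on
its own, before Memcached itself is even a factor.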

You are right about serializing: it makes things quite a bit slower. I have
also tried gzcompress to use less network bandwidth, but that eats a lot of
CPU as well.
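
For reference, this is roughly how I looked at the cost (a simplified
sketch; the array here is just a stand-in for our real data):

<?php
// Rough timing of serialize vs serialize + gzcompress, illustrative only.
$value = array_fill(0, 5000, md5(uniqid('', true)));    // fake payload

$t0 = microtime(true);
$plain = serialize($value);
$t1 = microtime(true);
$compressed = gzcompress($plain, 6);                    // default-ish level
$t2 = microtime(true);

printf("serialize: %.2f ms, gzcompress: %.2f ms, %d -> %d bytes\n",
       ($t1 - $t0) * 1000, ($t2 - $t1) * 1000,
       strlen($plain), strlen($compressed));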

In summary, there might not be any issue at all, but I expected Memcache to
respond faster. When I log all the Memcache GETs my boxes are doing, I see
times ranging from 1 ms up to 200 ms or more.
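
The timing/logging is roughly along these lines (simplified; the threshold,
key and log path are placeholders):

<?php
// Simplified version of the GET timing/logging.
function timed_get(Memcache $mc, $key) {
    $t0 = microtime(true);
    $value = $mc->get($key);
    $ms = (microtime(true) - $t0) * 1000;

    if ($ms > 10) {   // only log the slow ones
        error_log(sprintf("%s memcache GET %s took %.1f ms\n",
                          date('c'), $key, $ms),
                  3, '/tmp/memcache-slow.log');
    }
    return $value;
}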

My sysadmin skills are limited; I'm a programmer. I'll have someone look at
our hardware and see whether this could be a network issue or something
similar.

Benja.



On Mon, Jul 25, 2011 at 8:19 PM, Dustin <[email protected]> wrote:

>
> On Jul 25, 4:57 am, benjabcn <[email protected]> wrote:
>
> > Testing 1000 GETs.
>
> > Memcache Testing...
> > Value size : 149780 Bytes
> > Time: 13.78 seconds.
>
>   What kind of network are you using here?  That's a bit over 80Mbps.
