Hi,

We have a scenario where, for each request, we call memcache about
250 times. Our memcached data set is not large and totals around
1 GB. Multiget is not a straightforward option, as our cache calls
are scattered across the code and depend on the business scenario.
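To illustrate why batching is hard for us, here is a rough sketch of
the access pattern (pymemcache assumed; the key names and functions
are made up):

    from pymemcache.client.base import Client

    client = Client(("127.0.0.1", 11211))

    def get_user_profile(user_id):
        # one of ~250 scattered get calls per request
        return client.get(f"user:{user_id}:profile")

    def get_pricing_rule(region):
        # another call site, deep inside different business logic
        return client.get(f"pricing:{region}")

Each call site fetches its own key as the business logic unfolds, so
there is no single place to collect keys into one multiget.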

Each server serves nearly 10 million requests per day, and the
cluster combined serves nearly 1 billion requests a day, over gigabit
ethernet. Our server-side response time for each request is, and must
stay, under 85 ms. With ~250 cache calls per request, mostly
sequential, that leaves a budget of well under 0.34 ms per get.

Which approach would be ideal for this scenario?

1. Run memcached on each server over a unix socket.
This removes the network latency but increases the cache miss ratio
a bit, and adds data redundancy, as nearly the same data is present
on each node of the cluster (see the client sketch after option 2).
We have a similar setup today and see around an 85% cache hit rate.
Our expiry times vary for different keys.

or

2. Have a distributed memcached layer. This would probably raise the
cache hit ratio to more than 99%.
But this adds network latency, and each cache node becomes a point of
failure for its share of the keys.
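For reference, a rough sketch of how the two options differ at the
client level (pymemcache assumed; the hostnames and socket path are
placeholders):

    from pymemcache.client.base import Client
    from pymemcache.client.hash import HashClient

    # Option 1: local memcached over a unix domain socket, no network hop
    local_client = Client("/var/run/memcached/memcached.sock")

    # Option 2: one shared pool; keys are spread across nodes by
    # consistent hashing, so each key lives on exactly one node
    pool_client = HashClient([
        ("cache1.internal", 11211),
        ("cache2.internal", 11211),
        ("cache3.internal", 11211),
    ])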

I did some very basic get benchmarking on a low-end machine:
1. 100,000 gets, 1 thread, memcached and benchmarking script on the
same machine: 1.233 s (~12 µs per get)
2. 100,000 gets, 1 thread, memcached and benchmarking script on
different machines: 9.538 s (~95 µs per get)
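The benchmark was roughly along these lines (a reconstruction, not
the exact script; the payload size is a guess):

    import time
    from pymemcache.client.base import Client

    client = Client(("127.0.0.1", 11211))  # or a remote host for test 2
    client.set("bench_key", b"x" * 100)    # payload size is an assumption

    start = time.time()
    for _ in range(100_000):
        client.get("bench_key")
    print(f"elapsed: {time.time() - start:.3f}s")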

From these numbers, approach 1, the one we are doing right now, seems
to be the better one.
Please let me know your opinion on which approach seems better, or
whether there is a different suggestion.

Thanks.
