Hello Wei,

A 90th-percentile value of, for example, 2ms means that 90% of the requests 
experience a latency of up to 2ms. The same applies to the 95th percentile. The 
target latency you want to achieve depends on your frontend application, but 
it's typically 5-10ms.
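
To make the numbers concrete, here is a small Python sketch of how those 
percentiles relate to a set of measured request latencies. The latency values 
and the nearest-rank percentile helper are only illustrative; they are not part 
of the benchmark client.

    import math

    # Illustrative latency samples, in milliseconds (made-up numbers).
    latencies_ms = sorted([0.4, 0.7, 1.1, 1.3, 1.6, 1.8, 2.0, 2.4, 3.1, 7.5])

    def percentile(sorted_values, p):
        # Nearest-rank percentile: the smallest sample such that at least
        # p% of all samples are less than or equal to it.
        rank = math.ceil(p / 100.0 * len(sorted_values))
        return sorted_values[rank - 1]

    print("90th percentile:", percentile(latencies_ms, 90), "ms")  # 3.1 ms
    print("95th percentile:", percentile(latencies_ms, 95), "ms")  # 7.5 ms

So if the report shows a 90th percentile of 3.1ms, 90% of the requests in that 
run completed in 3.1ms or less.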

Regarding the throughput, you first need to run the workload several times, 
gradually increasing the throughput target (rps) each time, to figure out the 
maximum throughput that doesn't violate the QoS. After that you can do your 
final run. If you are co-locating another workload with memcached, make sure 
that the other workload is in its steady state while you are tuning 
memcached.
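
Roughly, the tuning loop looks like the Python sketch below. Here 
run_workload(), the starting rps, the step size, and the 10ms QoS limit are 
placeholders chosen for illustration, not the actual client's interface or our 
recommended values.

    # Sketch of the tuning described above: raise the rps target run by
    # run until the QoS limit on 95th-percentile latency is violated,
    # then keep the last target that still met the QoS.
    QOS_LIMIT_MS = 10.0     # example QoS target on the 95th percentile
    STEP_RPS = 5000         # how much to raise the target each run

    def find_max_qos_throughput(run_workload, start_rps=10000):
        # run_workload(rps) is a hypothetical helper: it performs one full
        # client run at the given rps target and returns the measured
        # 95th-percentile latency in milliseconds.
        rps = start_rps
        best = None
        while True:
            p95_ms = run_workload(rps)
            if p95_ms > QOS_LIMIT_MS:
                return best            # the previous target was the maximum
            best = rps
            rps += STEP_RPS

Once you know that maximum, you use it as the fixed target for the final 
measurement run.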

I don’t remember the exact throughput we achieved, but I believe it should be 
around 60-70K rps per core on Xeon-based machines. This, of course, depends on 
your hardware. The scalability of memcached is known not to be great, so the 
per-core throughput is expected to drop as you scale it.


Regards,
Djordje
________________________________________
From: Wei Kuang [[email protected]]
Sent: Sunday, July 06, 2014 4:04 PM
To: [email protected]
Subject: questions about the cloudsuite-data caching

Hi,
I am doing a co-run experiment, so basically I have to run CloudSuite and 
another program together on a machine, and they share some resources, such as 
cache and memory bandwidth.

Currently, I need to look at the performance degradation of data caching when 
it co-runs with other programs. The data caching benchmark has been set up on a 
Core 2 Duo machine. I use taskset so that these two programs each have their 
own dedicated core.

My question is: what is a normal output of the client-side report, say, the rps? 
I noticed that sometimes the rps goes up to 50k and sometimes it is only 1-3k; I 
never see it reach a stable state. Could you also explain more about the 90th 
and 95th percentiles? I know they are about QoS and the numbers are in 
milliseconds, but I am still not sure what those numbers mean.
