Dear Hamza,

Thanks for your interest in CloudSuite. 

Unfortunately, the client does not currently support multi-get requests. We do 
plan to implement that functionality (which is used in Facebook-like settings), 
most likely in the next release of CloudSuite, but we cannot say precisely when.

Meanwhile, if you want to experiment and try implementing it yourself, please 
share your experience with us; there is a lot of room for collaboration.
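For context: in the memcached text protocol, a multi-get is simply one `get` 
command carrying several space-separated keys, answered by one VALUE block per 
hit and a final END line. A minimal sketch in Python (assuming a memcached 
server on 127.0.0.1:11211; the function names here are illustrative, not part 
of the CloudSuite client):

```python
import socket

def build_multiget(keys):
    # One text-protocol request batching several gets:
    # "get key1 key2 key3\r\n"
    return ("get " + " ".join(keys) + "\r\n").encode()

def multiget(keys, host="127.0.0.1", port=11211):
    # Send the batched request and read until the END terminator;
    # the server replies with one VALUE block per key it holds.
    with socket.create_connection((host, port)) as s:
        s.sendall(build_multiget(keys))
        buf = b""
        while not buf.endswith(b"END\r\n"):
            buf += s.recv(4096)
    return buf
```

The point of the batching is that many gets share one round trip, which is 
exactly the optimization described in the Facebook paper cited below.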

Regards,
Djordje

________________________________________
From: Hamza Bin Sohail [[email protected]]
Sent: Thursday, August 28, 2014 4:34 AM
To: cloudsuite
Cc: Falsafi Babak
Subject: Is the memcached client capable of batching ?

Hi all,

This email concerns the Data Caching benchmark setup, which uses memcached and 
a client that emulates Twitter-like get/set requests. My questions are at the 
bottom.

For the Data Caching benchmark, the client arguments given on the CloudSuite 
website describe running the workload in single request/reply mode. I was 
trying to use the multiget option in the memcached client (the -m and -l flags 
are provided for this purpose), but could not get it to work. Either I am not 
running it correctly or it simply does not work; I suspect the latter. I was 
trying multiget mode to emulate get-request batching, since memcached is 
inherently network-bound (see 'Scaling Memcache at Facebook', NSDI'13).


I performed the measurements as described on the CloudSuite website. After 
warming up memcached by issuing the exact commands mentioned there, I added 
the -l and -m flags and issued:

./loader -a ../twitter_dataset/twitter_dataset_30x -s servers.txt -g 0.8 -T 1 -c 200 -w 8 -l 10 -m 0.5

Output:

  timeDiff, rps, requests, gets, sets, hits, misses, avg_lat, 90th, 95th, 99th, std, min, max, avgGetSize
  1.000001, 0.0, 0, 0, 0, 0, 0, -nan, 0.000100, 0.000100, 0.000100, -nan, 1000000000.000000, 0.000000, -nan
Outstanding requests per worker:
42 33 28 41 46 29 61 41
  timeDiff, rps, requests, gets, sets, hits, misses, avg_lat, 90th, 95th, 99th, std, min, max, avgGetSize
  1.000001, 0.0, 0, 0, 0, 0, 0, -nan, 0.000100, 0.000100, 0.000100, -nan, 1000000000.000000, 0.000000, -nan
Outstanding requests per worker:
42 33 28 41 46 29 61 41
  timeDiff, rps, requests, gets, sets, hits, misses, avg_lat, 90th, 95th, 99th, std, min, max, avgGetSize
  1.000001, 0.0, 0, 0, 0, 0, 0, -nan, 0.000100, 0.000100, 0.000100, -nan, 1000000000.000000, 0.000000, -nan

*********************************************************

The above command adds -l and -m to the default command from the website, and 
says each multiget request should contain 10 gets. Interestingly, -l 2 does 
seem to work.

Output:

  timeDiff, rps, requests, gets, sets, hits, misses, avg_lat, 90th, 95th, 99th, std, min, max, avgGetSize
  1.000001, 200778.8, 200779, 80352, 39985, 228243, 12994, 14.945080, 18.700000, 19.700000, 21.100000, 2.284085, 11.553000, 22.359001, 797.238002
Outstanding requests per worker:
371 363 389 370 383 360 390 370
  timeDiff, rps, requests, gets, sets, hits, misses, avg_lat, 90th, 95th, 99th, std, min, max, avgGetSize
  1.000001, 203925.8, 203926, 81746, 40590, 231764, 13161, 14.704381, 18.400000, 19.200000, 20.200000, 2.105561, 11.369000, 21.534000, 796.015771
Outstanding requests per worker:
372 363 389 370 383 360 390 370
  timeDiff, rps, requests, gets, sets, hits, misses, avg_lat, 90th, 95th, 99th, std, min, max, avgGetSize
  1.000001, 203434.8, 203435, 81533, 40460, 231803, 12615, 14.766729, 18.500000, 19.400000, 20.500000, 2.244194, 11.412000, 21.562999, 797.958042
Outstanding requests per worker:
373 363 389 370 383 363 391 370

***********************************************************

As a sanity check, the default arguments (no -m and -l) from the website give 
the following output:

Output:

  timeDiff, rps, requests, gets, sets, hits, misses, avg_lat, 90th, 95th, 99th, std, min, max, avgGetSize
  1.000001, 202935.8, 202936, 162486, 40450, 158344, 4142, 9.997106, 12.700000, 13.200000, 14.100000, 1.722175, 7.115000, 15.413000, 816.615358
Outstanding requests per worker:
255 249 258 252 253 241 255 266
  timeDiff, rps, requests, gets, sets, hits, misses, avg_lat, 90th, 95th, 99th, std, min, max, avgGetSize
  1.000001, 201379.8, 201380, 161349, 40031, 157243, 4106, 10.108395, 12.900000, 13.400000, 14.200000, 1.803369, 7.326000, 15.984001, 815.152155
Outstanding requests per worker:
257 249 258 252 254 243 256 266


**************************************************************

Questions:

1) Can you confirm my claim that multigets beyond 2 do not work?
2) If 1) is true, is there an easy fix?
3) If 1) is not true, can you let me know what I'm doing wrong?


Batching seems to be the way memcached is run in practice, so it would be 
better if this benchmark could be run the way it is run in practical settings.

Thanks a lot.

Hamza
