hi Dormando,

launch params: 

 memcached -d -p 11211 -u nobody -m 1024 -c 128000 -P 
/var/run/memcached/memcached.pid -r -t 8 -v


(yes, I know about the 128K connection limit :) )
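For reference, roughly how the descriptor limit gets raised so -c 128000 actually takes effect. This is just a sketch; whether it goes through a ulimit wrapper or systemd's LimitNOFILE is an assumption about how the daemon is launched on a given box:

# check the effective open-file limit in the shell that launches memcached
ulimit -n

# raise it before starting the daemon (needs root, or a high enough hard limit)
ulimit -n 128000
memcached -d -p 11211 -u nobody -m 1024 -c 128000 \
    -P /var/run/memcached/memcached.pid -r -t 8 -v

# or, if the daemon is managed by systemd, set LimitNOFILE=128000
# in the unit's [Service] section instead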


Multi-connection performance is a bit better in .21; see below. I am still 
going to build .24 (the release notes suggest a big improvement there, but as 
I understand it mostly for the multi-connection case, due to removing the 
global lock).
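Side note: the runs below still report "<= 1 concurrent connections", i.e. the 1000 connections end up effectively serial. To actually exercise the multi-connection path that the .24 lock changes target, I'll probably have to generate parallel load; a rough sketch, with the instance count N just a placeholder:

N=8   # number of parallel mcperf instances (placeholder)
for i in $(seq 1 $N); do
  src/mcperf --linger=0 --call-rate=0 --num-calls=100 --conn-rate=0 \
    --num-conns=1000 --sizes=d1 &
done
wait  # each instance prints its own summary; aggregate req/s across them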


.13:

[dsamoy...@dsamoylov.dev twemperf (master)]$ src/mcperf --linger=0 
--call-rate=0 --num-calls=100 --conn-rate=0 --num-conns=1000 --sizes=d1

Total: connections 1000 requests 100000 responses 100000 test-duration 
12.138 s

Connection rate: 82.4 conn/s (12.1 ms/conn <= 1 concurrent connections)
Connection time [ms]: avg 12.1 min 5.9 max 108.5 stddev 7.06
Connect time [ms]: avg 0.0 min 0.0 max 0.2 stddev 0.01

Request rate: 8238.5 req/s (0.1 ms/req)
Request size [B]: avg 28.0 min 28.0 max 28.0 stddev 0.00

Response rate: 8238.5 rsp/s (0.1 ms/rsp)
Response size [B]: avg 8.0 min 8.0 max 8.0 stddev 0.00
Response time [ms]: avg 0.1 min 0.0 max 23.8 stddev 0.00
Response time [ms]: p25 1.0 p50 1.0 p75 1.0
Response time [ms]: p95 1.0 p99 1.0 p999 5.0
Response type: stored 100000 not_stored 0 exists 0 not_found 0
Response type: num 0 deleted 0 end 0 value 0
Response type: error 0 client_error 0 server_error 0

Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 ftab-full 0 addrunavail 0 other 0

CPU time [s]: user 2.73 system 6.62 (user 22.5% system 54.5% total 77.0%)
Net I/O: bytes 3.4 MB rate 289.6 KB/s (2.4*10^6 bps)


.21:

[dsamoy...@dsamoylov.dev twemperf (master)]$ src/mcperf --linger=0 
--call-rate=0 --num-calls=100 --conn-rate=0 --num-conns=1000 --sizes=d1

Total: connections 1000 requests 100000 responses 100000 test-duration 
11.458 s (11.722 s, 11.494 s)

Connection rate: 87.3 conn/s (11.5 ms/conn <= 1 concurrent connections)
Connection time [ms]: avg 11.5 min 5.8 max 65.8 stddev 5.32
Connect time [ms]: avg 0.0 min 0.0 max 0.3 stddev 0.01

Request rate: 8727.5 req/s (0.1 ms/req)
Request size [B]: avg 28.0 min 28.0 max 28.0 stddev 0.00

Response rate: 8727.5 rsp/s (0.1 ms/rsp)
Response size [B]: avg 8.0 min 8.0 max 8.0 stddev 0.00
Response time [ms]: avg 0.1 min 0.0 max 31.3 stddev 0.00
Response time [ms]: p25 1.0 p50 1.0 p75 1.0
Response time [ms]: p95 1.0 p99 1.0 p999 4.0
Response type: stored 100000 not_stored 0 exists 0 not_found 0
Response type: num 0 deleted 0 end 0 value 0
Response type: error 0 client_error 0 server_error 0

Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 ftab-full 0 addrunavail 0 other 0

CPU time [s]: user 2.78 system 6.54 (user 24.3% system 57.1% total 81.4%)
Net I/O: bytes 3.4 MB rate 306.8 KB/s (2.5*10^6 bps)
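And to rule out mixing up builds while switching versions, a quick sanity check of what is actually listening before each run (the loopback address here is just the dev box; 'version' and 'stats settings' are standard text-protocol commands):

# confirm the running build
printf 'version\r\nquit\r\n' | nc 127.0.0.1 11211

# confirm runtime settings (num_threads, maxconns, ...) of the daemon under test
printf 'stats settings\r\nquit\r\n' | nc 127.0.0.1 11211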


On Tuesday, July 7, 2015 at 9:40:29 PM UTC-7, Dormando wrote:
>
> What happens with more than one connection? A lot of things changed to 
> increase the scalability of it but single thread might be meh. 
>
> What're the memcached start args? What exactly is that test doing? 
> gets/sets? Is .24 any better? 
>
> On Tue, 7 Jul 2015, Denis Samoylov wrote: 
>
> > hi, 
> > Does anybody know what could have changed between .13 (STAT version 1.4.13, 
> STAT libevent 1.4.13-stable) and .21 (STAT version 1.4.21, STAT libevent 
> 2.0.21-stable)? 
> > 
> > memcached became about 15-20% slower. We discovered this in production, 
> but there are too many moving parts there, so I used twemperf (
> https://github.com/twitter/twemperf) to measure on a dev server. 
> > 
> > 
> > result for .13 (repeated many times) 
> > 
> > [dsamoylov.dev twemperf (master)]$ src/mcperf --linger=0 --call-rate=0 
> --num-calls=100000 --conn-rate=0 --num-conns=1 --sizes=d1 
> > 
> > Total: connections 1 requests 100000 responses 100000 test-duration 
> 9.731 s (10.160 s, 9.981 s) 
> > 
> > Connection rate: 0.1 conn/s (9731.1 ms/conn <= 1 concurrent connections) 
> > Connection time [ms]: avg 9731.1 min 9731.1 max 9731.1 stddev 0.00 
> > Connect time [ms]: avg 0.2 min 0.2 max 0.2 stddev 0.00 
> > 
> > Request rate: 10276.3 req/s (0.1 ms/req) 
> > Request size [B]: avg 28.0 min 28.0 max 28.0 stddev 0.00 
> > 
> > Response rate: 10276.3 rsp/s (0.1 ms/rsp) 
> > Response size [B]: avg 8.0 min 8.0 max 8.0 stddev 0.00 
> > Response time [ms]: avg 0.1 min 0.0 max 18.0 stddev 0.00 
> > Response time [ms]: p25 1.0 p50 1.0 p75 1.0 
> > Response time [ms]: p95 1.0 p99 1.0 p999 4.0 
> > Response type: stored 100000 not_stored 0 exists 0 not_found 0 
> > Response type: num 0 deleted 0 end 0 value 0 
> > Response type: error 0 client_error 0 server_error 0 
> > 
> > Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0 
> > Errors: fd-unavail 0 ftab-full 0 addrunavail 0 other 0 
> > 
> > CPU time [s]: user 1.18 system 3.50 (user 12.2% system 36.0% total 
> 48.2%) 
> > Net I/O: bytes 3.4 MB rate 361.3 KB/s (3.0*10^6 bps) 
> > 
> > 
> > result for .21 (repeated many times) 
> > [dsamoylov.dev twemperf (master)]$ src/mcperf --linger=0 --call-rate=0 
> --num-calls=100000 --conn-rate=0 --num-conns=1 --sizes=d1 
> > 
> > Total: connections 1 requests 100000 responses 100000 test-duration 
> 12.328 s (11.230 s, 12.713 s) 
> > 
> > Connection rate: 0.1 conn/s (12328.4 ms/conn <= 1 concurrent 
> connections) 
> > Connection time [ms]: avg 12328.4 min 12328.4 max 12328.4 stddev 0.00 
> > Connect time [ms]: avg 0.3 min 0.3 max 0.3 stddev 0.00 
> > 
> > Request rate: 8111.4 req/s (0.1 ms/req) 
> > Request size [B]: avg 28.0 min 28.0 max 28.0 stddev 0.00 
> > 
> > Response rate: 8111.4 rsp/s (0.1 ms/rsp) 
> > Response size [B]: avg 8.0 min 8.0 max 8.0 stddev 0.00 
> > Response time [ms]: avg 0.1 min 0.0 max 28.3 stddev 0.00 
> > Response time [ms]: p25 1.0 p50 1.0 p75 1.0 
> > Response time [ms]: p95 1.0 p99 1.0 p999 5.0 
> > Response type: stored 100000 not_stored 0 exists 0 not_found 0 
> > Response type: num 0 deleted 0 end 0 value 0 
> > Response type: error 0 client_error 0 server_error 0 
> > 
> > Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0 
> > Errors: fd-unavail 0 ftab-full 0 addrunavail 0 other 0 
> > 
> > CPU time [s]: user 3.38 system 7.44 (user 27.4% system 60.3% total 
> 87.7%) 
> > Net I/O: bytes 3.4 MB rate 285.2 KB/s (2.3*10^6 bps) 
> > 
> > The same server was used, with a freshly restarted memcached daemon. 
> > 
