These results are from an RBD SSD pool with 3x replication (SATA SSD drives; 2.2 GHz
CPUs running the balanced rather than the optimized profile; Nautilus):
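(For anyone who wants to compare reads against the same pool: rados bench deletes
its objects by default, so the write pass has to keep them with --no-cleanup
first. A sketch, reusing the pool name from the run below:)

```shell
# Write 4 MiB objects and keep them so the read phases have data to fetch.
rados bench -p rbd.ssd 60 write --no-cleanup

# Sequential and random reads over the objects left behind.
rados bench -p rbd.ssd 60 seq
rados bench -p rbd.ssd 60 rand

# Delete the benchmark objects afterwards.
rados -p rbd.ssd cleanup
```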
[@~]# rados bench -p rbd.ssd 60 write
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304
for up to 60 seconds or 0 objects
Object prefix: benchmark_data_c01_1487351
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
0 0 0 0 0 0 - 0
1 16 145 129 515.975 516 0.0720097 0.117034
2 16 250 234 467.954 420 0.0332527 0.122599
3 16 353 337 449.286 412 0.0254232 0.13569
4 16 447 431 430.95 376 0.0308228 0.140396
5 16 550 534 427.149 412 0.0489281 0.147189
6 16 656 640 426.616 424 0.0365499 0.147231
7 16 739 723 413.094 332 0.0261517 0.149667
8 16 837 821 410.451 392 0.0242721 0.150902
9 16 934 918 407.952 388 0.0700446 0.155572
10 16 1040 1024 409.55 424 0.0546874 0.155168
11 16 1135 1119 406.86 380 0.0246981 0.154823
12 16 1228 1212 403.951 372 0.0281447 0.156045
13 16 1329 1313 403.951 404 0.0577759 0.157578
14 16 1424 1408 402.237 380 0.15696 0.157949
15 16 1528 1512 403.152 416 0.0756755 0.158072
16 16 1618 1602 400.451 360 0.0596802 0.158514
17 16 1710 1694 398.539 368 0.0515091 0.15869
18 16 1795 1779 395.284 340 0.218853 0.161703
19 16 1890 1874 394.477 380 0.0435594 0.161365
2022-07-06 22:19:01.255264 min lat: 0.0219673 max lat: 0.49047 avg lat: 0.159777
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
20 16 1998 1982 396.35 432 0.0638213 0.159777
21 16 2083 2067 393.663 340 0.0627098 0.161907
22 16 2191 2175 395.403 432 0.19454 0.161806
23 16 2273 2257 392.47 328 0.138672 0.162491
24 16 2367 2351 391.782 376 0.0281001 0.161778
25 16 2464 2448 391.629 388 0.0220856 0.161989
26 16 2562 2546 391.642 392 0.0309591 0.16242
27 16 2657 2641 391.209 380 0.343609 0.163424
28 16 2743 2727 389.521 344 0.145933 0.163728
29 16 2844 2828 390.019 404 0.0241817 0.163001
30 16 2935 2919 389.15 364 0.427961 0.164419
31 16 3014 2998 386.788 316 0.0307399 0.16469
32 16 3101 3085 385.574 348 0.0506665 0.165257
33 16 3174 3158 382.738 292 0.0260261 0.165897
34 16 3275 3259 383.361 404 0.0596022 0.166677
35 16 3399 3383 386.578 496 0.0445334 0.165292
36 16 3495 3479 386.504 384 0.0253431 0.164562
37 16 3604 3588 387.841 436 0.102069 0.164639
38 16 3689 3673 386.58 340 0.0282991 0.16442
39 16 3765 3749 384.461 304 0.0988234 0.166138
2022-07-06 22:19:21.258166 min lat: 0.0208629 max lat: 0.49047 avg lat: 0.165966
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
40 16 3849 3833 383.248 336 0.025483 0.165966
41 16 3936 3920 382.386 348 0.445226 0.167291
42 16 4021 4005 381.376 340 0.0527353 0.167655
43 16 4090 4074 378.924 276 0.101183 0.168101
44 16 4179 4163 378.402 356 0.0787725 0.168783
45 16 4282 4266 379.147 412 0.0804984 0.168064
46 16 4369 4353 378.469 348 0.0977855 0.168786
47 16 4469 4453 378.926 400 0.0804289 0.168483
48 16 4541 4525 377.031 288 0.195228 0.169554
49 16 4627 4611 376.356 344 0.19558 0.169988
50 16 4722 4706 376.428 380 0.0712425 0.169813
51 16 4801 4785 375.242 316 0.203578 0.169939
52 16 4900 4884 375.64 396 0.344731 0.170263
53 16 4993 4977 375.571 372 0.0691528 0.169995
54 16 5096 5080 376.244 412 0.0515177 0.169545
55 15 5183 5168 375.803 352 0.0509726 0.170167
56 16 5262 5246 374.663 312 0.0427843 0.170383
57 16 5355 5339 374.615 372 0.060925 0.170397
58 16 5445 5429 374.362 360 0.0669999 0.170316
59 16 5533 5517 373.983 352 0.0881168 0.170904
2022-07-06 22:19:41.260960 min lat: 0.0208629 max lat: 0.538331 avg lat: 0.170964
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
60 16 5623 5607 373.749 360 0.10889 0.170964
Total time run: 60.1307
Total writes made: 5623
Write size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 374.052
Stddev Bandwidth: 45.1854
Max bandwidth (MB/sec): 516
Min bandwidth (MB/sec): 276
Average IOPS: 93
Stddev IOPS: 11.2964
Max IOPS: 129
Min IOPS: 69
Average Latency(s): 0.171096
Stddev Latency(s): 0.131978
Max latency(s): 0.538331
Min latency(s): 0.0208629
Cleaning up (deleting benchmark objects)
Removed 5623 objects
Clean up completed and total clean up time :0.645015
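As a quick sanity check on the summary above, bandwidth is just IOPS times the
4 MiB object size; recomputing from the reported totals reproduces the reported
374 MB/s:

```shell
# Cross-check the summary: bandwidth = writes x object size / runtime.
# Values copied from the rados bench output above.
writes=5623
obj_mb=4          # Object size: 4194304 bytes = 4 MiB
runtime=60.1307

awk -v w="$writes" -v s="$obj_mb" -v t="$runtime" \
    'BEGIN { printf "%.1f IOPS, %.3f MB/s\n", w / t, w * s / t }'
# prints: 93.5 IOPS, 374.052 MB/s
```

(The reported "Average IOPS: 93" is the same figure truncated to an integer.)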
_______________________________________________
ceph-users mailing list -- [email protected]