This is from an rbd HDD pool with 3x replication (not particularly fast drives; the 2.2 GHz CPUs run the balanced rather than the optimized power profile; Nautilus).
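For anyone reproducing this, the pool layout described above can be double-checked with the standard Ceph CLI before benchmarking (a sketch; the pool name rbd is simply taken from the run below):

ceph osd pool get rbd size          # replication factor, expect size: 3
ceph osd pool get rbd crush_rule    # which CRUSH rule the pool maps to
ceph osd crush rule dump            # verify the rule targets the hdd device class
ceph osd tree                       # OSDs per host, with their device class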
[@~]# rados bench -p rbd 60 write
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304
for up to 60 seconds or 0 objects
Object prefix: benchmark_data_c01_1485523
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
0 0 0 0 0 0 - 0
1 16 53 37 147.956 148 0.694179 0.360261
2 16 114 98 195.958 244 0.142756 0.300362
3 16 173 157 209.294 236 0.168231 0.276335
4 16 240 224 223.961 268 0.23223 0.278013
5 16 304 288 230.362 256 0.110597 0.26753
6 16 369 353 235.295 260 0.199911 0.264693
7 16 435 419 239.388 264 0.352416 0.261829
8 16 497 481 240.46 248 0.118362 0.262026
9 16 560 544 241.739 252 0.144724 0.261927
10 16 630 614 245.561 280 0.291625 0.257464
11 16 697 681 247.596 268 0.347944 0.254762
12 16 768 752 250.625 284 0.116253 0.249815
13 16 824 808 248.575 224 1.03832 0.251677
14 16 883 867 247.674 236 0.113183 0.253491
15 16 935 919 245.028 208 0.111092 0.25738
16 16 997 981 245.21 248 0.122768 0.258497
17 16 1066 1050 247.018 276 0.180591 0.256102
18 16 1130 1114 247.515 256 0.141751 0.256467
19 16 1202 1186 249.644 288 0.421549 0.254576
2022-07-06 22:13:18.368288 min lat: 0.0636032 max lat: 1.3537 avg lat: 0.253728
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
20 16 1270 1254 250.759 272 0.212722 0.253728
21 16 1328 1312 249.864 232 0.184818 0.253499
22 16 1397 1381 251.05 276 0.106165 0.253063
23 16 1454 1438 250.046 228 0.343127 0.253154
24 16 1520 1504 250.625 264 0.0995738 0.253349
25 16 1591 1575 251.959 284 0.0769136 0.252103
26 16 1666 1650 253.805 300 0.270013 0.251467
27 16 1730 1714 253.885 256 0.0954993 0.250414
28 16 1808 1792 255.959 312 0.150573 0.249494
29 16 1873 1857 256.097 260 0.149082 0.248357
30 16 1936 1920 255.959 252 0.11005 0.247761
31 16 2002 1986 256.217 264 0.173061 0.248957
32 16 2078 2062 257.709 304 0.270084 0.2476
33 16 2137 2121 257.05 236 0.365813 0.247614
34 16 2213 2197 258.429 304 0.163908 0.246585
35 16 2284 2268 259.159 284 0.234659 0.245991
36 16 2351 2335 259.403 268 0.209601 0.245882
37 16 2404 2388 258.121 212 0.515811 0.246185
38 16 2467 2451 257.959 252 0.702526 0.246905
39 16 2532 2516 258.01 260 0.164722 0.246673
2022-07-06 22:13:38.371458 min lat: 0.0495056 max lat: 1.3537 avg lat: 0.247608
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
40 16 2587 2571 257.059 220 0.197857 0.247608
41 16 2663 2647 258.202 304 0.128436 0.247191
42 16 2732 2716 258.625 276 0.0921473 0.246647
43 16 2803 2787 259.214 284 0.541748 0.246049
44 16 2871 2855 259.504 272 0.265166 0.245746
45 16 2935 2919 259.425 256 0.0962818 0.245762
46 16 3005 2989 259.872 280 0.16439 0.245382
47 16 3072 3056 260.044 268 0.44283 0.245568
48 16 3136 3120 259.959 256 0.549679 0.245257
49 16 3206 3190 260.367 280 0.139847 0.245142
50 16 3274 3258 260.599 272 0.147884 0.244678
51 16 3336 3320 260.351 248 0.101974 0.24523
52 16 3401 3385 260.343 260 0.434156 0.245356
53 16 3466 3450 260.336 260 0.339548 0.245154
54 16 3529 3513 260.181 252 0.223261 0.245232
55 16 3598 3582 260.468 276 0.195226 0.245167
56 16 3656 3640 259.959 232 0.0710267 0.24526
57 16 3716 3700 259.608 240 0.279714 0.245701
58 16 3772 3756 258.993 224 0.547684 0.246007
59 16 3834 3818 258.806 248 0.19286 0.246633
2022-07-06 22:13:58.374569 min lat: 0.0495056 max lat: 1.3537 avg lat: 0.247281
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
60 16 3891 3875 258.292 228 0.138051 0.247281
Total time run: 60.1833
Total writes made: 3891
Write size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 258.61
Stddev Bandwidth: 27.5094
Max bandwidth (MB/sec): 312
Min bandwidth (MB/sec): 148
Average IOPS: 64
Stddev IOPS: 6.87736
Max IOPS: 78
Min IOPS: 37
Average Latency(s): 0.247274
Stddev Latency(s): 0.14822
Max latency(s): 1.3537
Min latency(s): 0.0495056
Cleaning up (deleting benchmark objects)
Removed 3891 objects
Clean up completed and total clean up time :4.52299
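If you also want read numbers for comparison, the benchmark objects have to survive the write pass, so a typical sequence (a sketch, not what was run above) would be:

rados bench -p rbd 60 write --no-cleanup   # keep the benchmark objects
rados bench -p rbd 60 seq                  # sequential read against them
rados bench -p rbd 60 rand                 # optional random-read pass
rados -p rbd cleanup                       # remove the benchmark objects afterwards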