>
> I'm trying to set up a test lab with ceph (on proxmox).
> I've got 3 nodes, but I figured I'd start with 1 to test out speeds and
> to learn more about the setup of ceph. I will add the 2 extra nodes
> later.
>
> One thing that was disappointing was the writing speed.
>
> In my setup I've got 14 * 300GB SAS HDDs. When I do a write test
> directly on one of them I get around 130-140MB/sec write speed:
> fio --filename=/dev/sdp1 --name=test --direct=1 --rw=write --bs=4M --iodepth=16
> Run status group 0 (all jobs):
> WRITE: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=153GiB (164GB), run=1187061-1187061msec
You can forget about this sequential write test; Ceph's write pattern is effectively random.
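If you want a raw-disk baseline closer to what the OSDs actually see, a small-block random-write run is more telling. A sketch (device, block size and runtime are assumptions for your lab, and it overwrites data on the device):

fio --filename=/dev/sdp1 --name=randwrite --direct=1 --rw=randwrite --bs=4k --iodepth=16 --runtime=60 --time_based

Compare that number, not the sequential one, with what the pool delivers.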
> When I set up ceph osd.0 through osd.13 (hdd bluestore - DB/WAL on each
> osd), create a pool that I can use, and then run a write speed test:
> fio --ioengine=rbd --name=test --direct=1 --rw=write --bs=4M --iodepth=16 --pool=ceph --rbdname=vm-118-disk-0
> I get only around 80MB/sec:
> Run status group 0 (all jobs):
> WRITE: bw=82.4MiB/s (86.4MB/s), 82.4MiB/s-82.4MiB/s (86.4MB/s-86.4MB/s), io=100GiB (107GB), run=1243230-1243230msec
>
> Am I doing something wrong, or is the write speed supposed to drop when
> clustering disks like this?
>
Probably not. You never get native disk speed: every client write is replicated and
distributed over multiple disks (and, once you add the other nodes, over multiple hosts), and that simply takes time.
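With Ceph's default replication size of 3, a write is acknowledged only after it has landed on three OSDs, so ~80 MB/s of client bandwidth already means roughly 240 MB/s of aggregate disk writes. You can check what your pool actually uses (pool name 'ceph' assumed, as in your fio command):

ceph osd pool get ceph size
ceph osd pool get ceph min_size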
If you run this rados bench, you can compare against my (very) slow HDDs:
[~]# rados bench -p rbd 10 write
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304
for up to 10 seconds or 0 objects
Object prefix: benchmark_data_c01_538834
  sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat(s)  avg lat(s)
    0       0         0         0         0         0            -           0
    1      16        77        61   243.933       244     0.273347    0.225817
    2      16       130       114   227.952       212     0.184422    0.242989
    3      16       182       166   221.287       208     0.132519    0.261817
    4      16       234       218   217.958       208     0.145279    0.272812
    5      16       302       286   228.757       272    0.0905078    0.265453
    6      16       370       354   235.957       272     0.260377    0.268706
    7      16       414       398   227.389       176       1.0019    0.271782
    8      16       474       458   228.961       240     0.120364    0.271026
    9      15       541       526   233.739       272     0.120498    0.266636
   10      16       595       579   231.561       212      0.11489    0.269708
Total time run: 10.2101
Total writes made: 596
Write size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 233.495
Stddev Bandwidth: 33.4903
Max bandwidth (MB/sec): 272
Min bandwidth (MB/sec): 176
Average IOPS: 58
Stddev IOPS: 8.37257
Max IOPS: 68
Min IOPS: 44
Average Latency(s): 0.273409
Stddev Latency(s): 0.195287
Max latency(s): 1.23514
Min latency(s): 0.0680838
Cleaning up (deleting benchmark objects)
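If you want the read side as well, rerun the write bench with --no-cleanup so the benchmark objects stay around, then do a sequential-read bench and clean up afterwards (standard rados bench options):

rados bench -p rbd 10 write --no-cleanup
rados bench -p rbd 10 seq
rados -p rbd cleanup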