Hello,
we are testing a Ceph cluster for storing small files (4 KiB - 256 KiB).
The Ceph cluster has 4 OSD servers, each with:
- 16 CPUs
- 32 GB RAM
- 3x 2 TB HDD + 4x 1 TB HDD (7200 rpm)
- 1400 GB SSD
- 1 Gbps cluster network, 1 Gbps public network
Storage backend is BlueStore, with a 100 GB partition for the BlueStore DB
per OSD. OS is Ubuntu 16.04 LTS, Ceph 12.2.2.
Backfill/recovery speed is slow: at most 200 objects/s, which works out to
about 30 MB/s (roughly 150 kB per object on average). Per-OSD network, CPU,
memory, and I/O load all look fine - I have not found the bottleneck.
Raising osd_max_backfills above 5 has no effect.
Is there a way to speed up backfill/recovery with this configuration?
Config:
bluestore_min_alloc_size = 4096
osd_recovery_max_active = 3
osd_max_backfills = 5
osd_recovery_sleep = 0
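For reference, this is roughly how the values can be changed at runtime on
12.2.x without restarting OSDs (a sketch using injectargs; osd.0 in the
check below is only an example ID, and the "ceph daemon" command has to be
run on the host where that OSD lives):

# ceph tell osd.* injectargs '--osd-max-backfills 5 --osd-recovery-max-active 3 --osd-recovery-sleep 0'
# ceph daemon osd.0 config get osd_max_backfills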
# ceph -s
  cluster:
    id:     8ab5d08c-7f03-4c37-9dd3-027bbab5a642
    health: HEALTH_WARN
            79311892/185387019 objects misplaced (42.782%)
            Degraded data redundancy: 8949358/185387019 objects degraded
            (4.827%), 622 pgs unclean, 204 pgs degraded, 201 pgs undersized

  services:
    mon: 3 daemons, quorum cep11,cep12,cep13
    mgr: rajcep12(active), standbys: cep13, cep11
    osd: 28 osds: 28 up, 28 in; 619 remapped pgs
    rgw: 3 daemons active

  data:
    pools:   6 pools, 1064 pgs
    objects: 60347k objects, 6671 GB
    usage:   24481 GB used, 15564 GB / 40045 GB avail
    pgs:     8949358/185387019 objects degraded (4.827%)
             79311892/185387019 objects misplaced (42.782%)
             442 active+clean
             403 active+remapped+backfill_wait
             154 active+undersized+degraded+remapped+backfill_wait
             47  active+undersized+degraded+remapped+backfilling
             15  active+remapped+backfilling
             3   active+recovery_wait+degraded

  io:
    recovery: 18250 kB/s, 164 objects/s
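(For what it is worth: 18250 kB/s / 164 objects/s is roughly 110 kB per
object, so the ceiling seems to be a per-object rate rather than raw
bandwidth - the 1 Gbps links could carry about 125 MB/s.)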
Thank you
Regards
Michal