Hi.
In our Ceph cluster, one OSD hit 95% full while the others in the same pool
are only at around 40% (total usage is ~55%), so I ran:
sudo ceph osd reweight-by-utilization 110 0.05 12
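(For reference, the same arguments should also work with the dry-run variant,
which only prints the proposed reweights without applying anything, at least
as far as I understand the CLI:
sudo ceph osd test-reweight-by-utilization 110 0.05 12)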
That initiated some data movement, but right after, ceph status reported:
jk@bison:~/adm-git$ sudo ceph -s
  cluster:
    id:     dbc33946-ba1f-477c-84df-c63a3c9c91a6
    health: HEALTH_WARN
            49924979/660322545 objects misplaced (7.561%)
            Degraded data redundancy: 26/660322545 objects degraded (0.000%), 2 pgs degraded

  services:
    mon: 3 daemons, quorum torsk1,torsk2,bison
    mgr: bison(active), standbys: torsk1
    mds: cephfs-1/1/2 up {0=zebra01=up:active}, 1 up:standby-replay
    osd: 78 osds: 78 up, 78 in; 255 remapped pgs
    rgw: 9 daemons active

  data:
    pools:   16 pools, 2184 pgs
    objects: 141M objects, 125 TB
    usage:   298 TB used, 340 TB / 638 TB avail
    pgs:     26/660322545 objects degraded (0.000%)
             49924979/660322545 objects misplaced (7.561%)
             1927 active+clean
             187  active+remapped+backfilling
             68   active+remapped+backfill_wait
             2    active+recovery_wait+degraded

  io:
    client:   761 kB/s rd, 1284 kB/s wr, 85 op/s rd, 79 op/s wr
    recovery: 623 MB/s, 665 objects/s
Any idea how those 26 objects got degraded in the process?
Just in-flight writes?
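To dig into it I was going to look at the two degraded PGs with something
like this (the pg ID in the last line is just a placeholder):
sudo ceph health detail      # should list the 2 degraded PGs by ID
sudo ceph pg ls degraded     # same information in tabular form
sudo ceph pg 1.2f query      # per-PG detail; 1.2f is a made-up pgid
but I'm not sure the query output will actually tell me how they became
degraded.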
Any way to prioritize recovery of those 26 objects over the 49M misplaced
objects that still need to be moved?
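In case it's relevant, what I had in mind was something along these lines,
assuming pg force-recovery is available here (Luminous 12.2 or later, I
believe):
sudo ceph pg ls degraded                    # get the IDs of the two degraded PGs
sudo ceph pg force-recovery <pgid> <pgid>   # recover these PGs ahead of the backfills
but I don't know whether that is the recommended approach.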
Thanks.