Hi Zakhar,
I'm pretty sure you wanted the number of manual compactions for an entire
day, not from wherever the log happens to start up to the current time,
which is usually not 23:59. You need to take the previous day's date and
make sure the log covers the full 00:00-23:59 window.
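For example (a minimal sketch; the log file name and the exact RocksDB
message text are assumptions and may differ on your deployment):

  # count manual compactions logged yesterday by this mon;
  # log path and grep pattern are assumptions - adjust to your setup
  Y=$(date -d yesterday +%Y-%m-%d)   # GNU date
  grep "^$Y" /var/log/ceph/ceph-mon.ceph-01.log | grep -c 'Manual compaction'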
1) iotop results:
 TID  PRIO USER  DISK READ  DISK WRITE  SWAPIN   IO      COMMAND
2256  be/4 ceph     0.00 B    17.48 M   0.00 %  0.80 %  ceph-mon --cluster ceph --setuser ceph --setgroup ceph --foreground -i ceph-01 --mon-data /var/lib/ceph/mon/ceph-ceph-01 --public-addr 192.168.32.65 [safe_timer]
2230  be/4 ceph     0.00 B  1514.19 M   0.00 %  0.37 %  ceph-mon --cluster ceph --setuser ceph --setgroup ceph --foreground -i ceph-01 --mon-data /var/lib/ceph/mon/ceph-ceph-01 --public-addr 192.168.32.65 [rocksdb:low0]
2250  be/4 ceph     0.00 B    36.23 M   0.00 %  0.15 %  ceph-mon --cluster ceph --setuser ceph --setgroup ceph --foreground -i ceph-01 --mon-data /var/lib/ceph/mon/ceph-ceph-01 --public-addr 192.168.32.65 [fn_monstore]
2231  be/4 ceph     0.00 B    50.52 M   0.00 %  0.02 %  ceph-mon --cluster ceph --setuser ceph --setgroup ceph --foreground -i ceph-01 --mon-data /var/lib/ceph/mon/ceph-ceph-01 --public-addr 192.168.32.65 [rocksdb:high0]
2225  be/4 ceph     0.00 B   120.00 K   0.00 %  0.00 %  ceph-mon --cluster ceph --setuser ceph --setgroup ceph --foreground -i ceph-01 --mon-data /var/lib/ceph/mon/ceph-ceph-01 --public-addr 192.168.32.65 [log]
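If you want the same per-thread view on your side, plain iotop can produce
it (a sketch, exact flags up to taste; -a accumulates totals since start,
-o hides idle threads):

  # accumulated per-thread I/O, showing only threads that actually did I/O
  iotop -a -o -d 5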
2) manual compactions (over a full 24h window): 1882
3) monitor store.db size: 616M
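(The same number can be pulled from the mon data dir that shows up in the
iotop output above:)

  # on-disk size of the monitor's RocksDB store
  du -sh /var/lib/ceph/mon/ceph-ceph-01/store.db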
4) cluster version and status:
ceph version 15.2.17 (8a82819d84cf884bd39c17e3236e0632ac146dc4) octopus (stable)
  cluster:
    id:     xxx
    health: HEALTH_WARN
            1 large omap objects

  services:
    mon: 5 daemons, quorum ceph-01,ceph-02,ceph-03,ceph-25,ceph-26 (age 7w)
    mgr: ceph-25(active, since 4w), standbys: ceph-26, ceph-01, ceph-03, ceph-02
    mds: con-fs2:8 4 up:standby 8 up:active
    osd: 1284 osds: 1282 up (since 27h), 1282 in (since 3w)

  task status:

  data:
    pools:   14 pools, 25065 pgs
    objects: 2.18G objects, 3.9 PiB
    usage:   4.8 PiB used, 8.3 PiB / 13 PiB avail
    pgs:     25037 active+clean
             26    active+clean+scrubbing+deep
             2     active+clean+scrubbing

  io:
    client:   1.7 GiB/s rd, 1013 MiB/s wr, 3.02k op/s rd, 1.78k op/s wr
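For comparison on your side, both come straight out of the standard commands:

  ceph --version   # local binary version
  ceph -s          # cluster status summary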
Best regards,
=================
Frank Schilder
AIT Risø Campus
Building 109, room S14