Hi,
On 12/07/2016 01:21 PM, Abhishek L wrote:
> This point release fixes several important bugs in RBD mirroring, RGW
> multi-site, CephFS, and RADOS.
>
> We recommend that all v10.2.x users upgrade. Also note the following when
> upgrading from hammer
Well... a small warning: after upgrading from 10.2.3 to 10.2.4, I see high CPU
load on the OSDs and the MDS. Something like this:
top - 18:53:40 up  2:11,  1 user,  load average: 32.14, 29.49, 27.36
Tasks: 192 total,   2 running, 190 sleeping,   0 stopped,   0 zombie
%Cpu(s): 19.4 us, 80.6 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem:  32908088 total,  1876820 used, 31031268 free,    31464 buffers
KiB Swap:  8388604 total,        0 used,  8388604 free.   412340 cached Mem

  PID USER  PR  NI    VIRT    RES   SHR S  %CPU %MEM     TIME+ COMMAND
 2174 ceph  20   0  492408  79260  8688 S 169.7  0.2 139:49.77 ceph-mds
 2318 ceph  20   0 1081428 166700 25832 S 160.4  0.5 178:32.18 ceph-osd
 2288 ceph  20   0 1256604 241796 22896 S 159.4  0.7 189:25.19 ceph-osd
 2301 ceph  20   0 1261172 261040 23664 S 156.1  0.8 197:11.24 ceph-osd
 2337 ceph  20   0 1247904 260048 19084 S 154.8  0.8 191:01.90 ceph-osd
 2171 ceph  20   0  466160  58292 10992 S   0.3  0.2   0:29.89 ceph-mon
On IRC, two other people have reported the same behavior after the upgrade.
The cluster is HEALTH_OK and I don't see any I/O on the disks. If I restart
the daemons, everything is fine, but after a few minutes the CPU load climbs
again.
I currently have no idea what the problem is.
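To help narrow down where the CPU time is going, a few standard diagnostics may
be useful (a sketch; the PID and daemon names below are examples taken from the
top output above and from a single-node layout — substitute your own):

```shell
# Sample the hottest functions inside one of the busy OSD processes
# (PID 2318 is from the top output above)
perf top -p 2318

# Dump internal performance counters via the daemon's admin socket
# (replace osd.0 / mds name with your actual daemon IDs)
ceph daemon osd.0 perf dump
ceph daemon mds.$(hostname -s) perf dump

# Check whether the load correlates with recovery, scrubbing, or slow ops
ceph -s
ceph daemon osd.0 dump_historic_ops
```

Comparing two `perf dump` snapshots taken a minute apart, or the `perf top`
symbol list, would show whether the daemons are spinning in a particular
subsystem rather than doing real work.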
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com