Hi,

After finishing the Ceph upgrade (0.72.2 to 0.80.3) I issued "ceph osd crush
tunables optimal", and only a few minutes later I added 2 more OSDs to the
Ceph cluster...

So these 2 changes happened more or less at the same time: rebalancing
because of the optimal tunables, and rebalancing because of the newly
added OSDs...
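
In hindsight I guess I should have throttled recovery/backfill before
triggering either change. Something along these lines is what I have in
mind for next time (just a sketch, the option names are the usual ones
from the docs and the values are arbitrary, not tested on 0.80.3):

    # slow rebalancing down so client I/O still gets through
    ceph tell osd.* injectargs '--osd-max-backfills 1'
    ceph tell osd.* injectargs '--osd-recovery-max-active 1'
    ceph tell osd.* injectargs '--osd-recovery-op-priority 1'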

Result: all VMs living on Ceph storage went mad; they effectively had no
disk access, blocked so to speak.

Since this rebalancing took 5-6 hours, I had a bunch of VMs down for that
long...

Did I do wrong by causing 2 rebalances to happen at the same time?
Is this behaviour normal, i.e. that the rebalancing puts such a heavy load
on all VMs that they effectively cannot access Ceph storage?
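
For next time, I assume the right way to tell whether client I/O is being
starved is to watch for slow/blocked requests while the recovery runs,
something like this (again just my assumption, not sure about 0.80.3
specifics):

    ceph -s              # overall cluster and recovery status
    ceph health detail   # should list slow requests, if any
    ceph -w              # watch recovery progress live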

Thanks for any input...
-- 

Andrija Panić
