Re: [ceph-users] OSD hanging on 12.2.12 by message worker

2019-06-10 Thread Stefan Kooman
Quoting solarflow99 (solarflo...@gmail.com):
> Can the bitmap allocator be set in ceph-ansible? I wonder why it is not
> the default in 12.2.12.
We don't use ceph-ansible. But if ceph-ansible allows you to set specific ([osd]) settings in ceph.conf I guess you can do it. I don't know what the policy
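For reference, switching the allocator is a per-OSD ceph.conf setting; a minimal sketch, assuming the bluestore_allocator/bluefs_allocator options as shipped in luminous 12.2.x (the OSDs need a restart to pick this up):

    [osd]
    bluestore_allocator = bitmap
    bluefs_allocator = bitmap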

Re: [ceph-users] OSD hanging on 12.2.12 by message worker

2019-06-10 Thread solarflow99
Can the bitmap allocator be set in ceph-ansible? I wonder why it is not the default in 12.2.12. On Thu, Jun 6, 2019 at 7:06 AM Stefan Kooman wrote:
> Quoting Max Vernimmen (vernim...@textkernel.nl):
> >
> > This is happening several times per day after we made several changes at
> > the same time:
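If ceph-ansible is in use, the usual route for a setting like this would be ceph_conf_overrides in group_vars; a sketch, assuming that variable and the [osd] section are where you want it (the values land in ceph.conf on the next playbook run):

    ceph_conf_overrides:
      osd:
        bluestore_allocator: bitmap
        bluefs_allocator: bitmap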

Re: [ceph-users] OSD hanging on 12.2.12 by message worker

2019-06-07 Thread Stefan Kooman
Quoting Max Vernimmen (vernim...@textkernel.nl):
> Thank you for the suggestion to use the bitmap allocator. I looked at the
> ceph documentation and could find no mention of this setting. This makes me
> wonder how safe and production ready this setting really is. I'm hesitant
> to apply that to

Re: [ceph-users] OSD hanging on 12.2.12 by message worker

2019-06-07 Thread Igor Fedotov
Hi Max, I don't think this is an allocator-related issue. The symptoms that triggered us to start using the bitmap allocator over the stupid one were:
- write op latency gradually increasing over time (days, not hours)
- perf showing a significant amount of time spent in allocator-related functions
-
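Both symptoms can be checked directly; a sketch of how one might do that (osd.12 and the pid are placeholders, and the perf-counter paths are the ones I'd expect in luminous):

    # Cumulative write-op / commit latency counters; sample twice over a few
    # hours and compare the averages to see whether latency is creeping up.
    ceph daemon osd.12 perf dump | jq '.osd.op_w_latency, .bluestore.commit_lat'

    # Where the OSD spends CPU time; allocator trouble tends to show up as
    # time in StupidAllocator / interval_set related functions.
    perf top -p <pid of ceph-osd>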

Re: [ceph-users] OSD hanging on 12.2.12 by message worker

2019-06-07 Thread Max Vernimmen
Thank you for the suggestion to use the bitmap allocator. I looked at the ceph documentation and could find no mention of this setting. This makes me wonder how safe and production-ready this setting really is. I'm hesitant to apply that to our production environment. If the allocator setting

Re: [ceph-users] OSD hanging on 12.2.12 by message worker

2019-06-06 Thread Stefan Kooman
Quoting Max Vernimmen (vernim...@textkernel.nl):
>
> This is happening several times per day after we made several changes at
> the same time:
>
> - add physical RAM to the ceph nodes
> - move from fixed 'bluestore cache size hdd|ssd' and 'bluestore cache kv
>   max' to 'bluestore cache
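For context, the two styles of bluestore cache sizing being contrasted here look roughly like this in ceph.conf; a sketch with placeholder byte values, assuming the cache-autotune/osd_memory_target options that were backported to later luminous releases:

    [osd]
    # fixed-size caches (the style being moved away from)
    bluestore_cache_size_hdd = 3221225472
    bluestore_cache_size_ssd = 4294967296
    bluestore_cache_kv_max = 1073741824

    # autotuned caches, sized against a per-OSD memory target
    bluestore_cache_autotune = true
    osd_memory_target = 4294967296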

[ceph-users] OSD hanging on 12.2.12 by message worker

2019-06-06 Thread Max Vernimmen
Hi, We are running VM images on Ceph using RBD. We are seeing a problem where one of our VMs gets into problems due to IO not completing. iostat on the VM shows IO remaining in the queue, and disk utilisation for Ceph-based vdisks is 100%. Upon investigation the problem seems to be with the
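A sketch of the kind of checks used to pin a stuck IO like this down to a specific OSD (the device name and OSD id are placeholders):

    # Inside the affected VM: confirm requests are sitting in the device queue.
    iostat -x 5 /dev/vdb

    # On the cluster: see whether any OSDs are reporting slow/blocked requests.
    ceph health detail

    # On the suspect OSD host: dump the ops that never complete.
    ceph daemon osd.12 dump_ops_in_flight
    ceph daemon osd.12 dump_blocked_ops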