Re: [ceph-users] ceph-backed VM drive became corrupted after unexpected VM termination

2017-11-07 Thread Дробышевский, Владимир
2017-11-07 19:06 GMT+05:00 Jason Dillaman:
> On Tue, Nov 7, 2017 at 8:55 AM, Дробышевский, Владимир wrote:
> >
> > Oh, sorry, I forgot to mention that all OSDs use bluestore, so xfs
> > mount options don't have any influence.
> >
> > VMs have

Re: [ceph-users] ceph-backed VM drive became corrupted after unexpected VM termination

2017-11-07 Thread Jason Dillaman
On Tue, Nov 7, 2017 at 8:55 AM, Дробышевский, Владимир wrote:
>
> Oh, sorry, I forgot to mention that all OSDs use bluestore, so xfs mount
> options don't have any influence.
>
> VMs have cache="none" by default, then I've tried "writethrough". No
> difference.
>
> And

Re: [ceph-users] ceph-backed VM drive became corrupted after unexpected VM termination

2017-11-07 Thread Дробышевский, Владимир
Oh, sorry, I forgot to mention that all OSDs use bluestore, so xfs mount
options don't have any influence.

VMs have cache="none" by default, then I've tried "writethrough". No
difference.

And aren't these rbd cache options enabled by default?

2017-11-07 18:45 GMT+05:00 Peter Maloney
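For context, the cache="none" / "writethrough" values are the QEMU disk cache mode; in a libvirt-managed guest with an rbd-backed disk it is set on the driver element of the disk definition, roughly like this (the pool/image and monitor names here are made up for illustration):

    <disk type='network' device='disk'>
      <!-- the cache mode under discussion: 'none', 'writethrough' or 'writeback' -->
      <driver name='qemu' type='raw' cache='none'/>
      <!-- hypothetical pool/image and monitor host -->
      <source protocol='rbd' name='rbd/vm-disk'>
        <host name='mon1' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>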

Re: [ceph-users] ceph-backed VM drive became corrupted after unexpected VM termination

2017-11-07 Thread Peter Maloney
I see nobarrier in there... try without that (unless that's just the bluestore xfs... then it probably won't change anything). And are the OSDs using bluestore? And what cache options did you set in the VM config? It's dangerous to set writeback without also setting this in the client-side ceph.conf:
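The settings he is presumably referring to are the librbd cache options; a minimal sketch of such a client-side ceph.conf fragment, assuming the usual cache-safety pair is what's meant:

    [client]
    # enable the librbd write-back cache on the hypervisor side
    rbd cache = true
    # stay in writethrough mode until the guest sends its first flush,
    # so a guest that never flushes cannot lose buffered writes
    rbd cache writethrough until flush = true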

[ceph-users] ceph-backed VM drive became corrupted after unexpected VM termination

2017-11-07 Thread Дробышевский, Владимир
Hello! I've got a weird situation with rbd drive image reliability. I found that after a hard reset, a VM with a ceph rbd drive from my new cluster became corrupted. I accidentally found it during HA tests of my new cloud cluster: after a host reset the VM was not able to boot again because of the virtual
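One way to confirm this kind of corruption from outside the guest is to map the image read-only on a client node and run a non-destructive filesystem check; a rough sketch, assuming a hypothetical image rbd/vm-disk with an ext4 filesystem on its first partition:

    # map the image read-only so the check cannot change anything
    rbd map rbd/vm-disk --read-only
    # non-destructive check of the guest's (assumed ext4) first partition
    fsck.ext4 -n /dev/rbd0p1
    # unmap when finished
    rbd unmap /dev/rbd0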