This might not be a Ceph issue at all, depending on whether you're using any
sort of caching. If you have caching on your disk controllers, then the
write might have landed in the cache but never made it to the OSD disks,
which would show up as problems on the VM RBDs. Make sure you have proper …
Thanks to all! I might have found the reason.
It looks like it is related to the bug below:
https://bugs.launchpad.net/nova/+bug/1773449
At 2018-12-04 23:42:15, "Ouyang Xu" wrote:
Hi linghucongsong:
I have run into this issue before; you can try to fix it as below:
1. use /rbd lock ls/ to get the lock for the VM
2. use /rbd lock rm/ to remove that lock for the VM
3. start the VM again
Hope that helps.
regards,
Ouyang
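For illustration, the three steps above might look like the shell sketch below. The pool/image name (vms/instance-00000001_disk) and the sample /rbd lock ls/ output line are hypothetical; substitute the values from your own cluster:

```shell
# Step 1: list locks on the VM's RBD image (hypothetical image name):
#   rbd lock ls vms/instance-00000001_disk
# A sample output line is inlined here so the parsing below is
# self-contained (columns: Locker, ID, Address):
sample='client.4127  auto 139643345791728  192.168.1.10:0/3418541'

# Pull out the locker and the two-part lock ID ("auto <cookie>")
locker=$(echo "$sample" | awk '{print $1}')
lock_id=$(echo "$sample" | awk '{print $2 " " $3}')

# Step 2: remove the stale lock (printed here rather than executed)
echo rbd lock rm vms/instance-00000001_disk "$lock_id" "$locker"

# Step 3: start the VM again, e.g. `nova start <instance-uuid>`
```

Note that the lock ID from an automatic exclusive lock contains a space ("auto <cookie>"), so it must be quoted as one argument to /rbd lock rm/.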
On 2018/12/4 4:48 PM, linghucongsong wrote:
HI all!
I would check to see if the images have an exclusive-lock still held
by a force-killed VM. librbd will generally automatically clear this
up unless it doesn't have the proper permissions to blacklist a dead
client from the Ceph cluster. Verify that your OpenStack Ceph user
caps are correct [1][2].
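For reference, the Luminous-era recommended caps for an OpenStack Ceph user look roughly like the fragment below; the user name (client.cinder) and pool names are assumptions, so adapt them to your deployment. The point is that a mon cap of "profile rbd" includes the blacklist permission librbd needs to evict a dead client and break its stale lock:

```
client.cinder
    caps mon = "profile rbd"
    caps osd = "profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images"
```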
On 04/12/2018 09:37, linghucongsong wrote:
But this is the case where all the hosts suddenly lose power!
I'm surprised you're seeing I/O errors inside the VM once they're restarted.
Is the cluster healthy? What's the output of ceph status?
Simon
On Tue, 4 Dec 2018 at 10:37, linghucongsong wrote:
> Thank you for the reply!
> But this is the case where all the hosts suddenly lose power!
> So is the best way to handle this to have snapshots of the important VMs,
> or to mirror the
> images to another Ceph cluster?
Best way is probably to do jus…
Thank you for the reply!
But this is the case where all the hosts suddenly lose power!
So is the best way to handle this to have snapshots of the important VMs, or
to mirror the
images to another Ceph cluster?
At 2018-12-04 17:30:13, "Janne Johansson" wrote:
On Tue, 4 Dec 2018 at 09:49, linghucongsong wrote:
Hi all!
I have a Ceph test environment using Ceph with OpenStack. There are some VMs
running on the OpenStack deployment. It is just a test environment.
My Ceph version is 12.2.4. Yesterday I rebooted all the Ceph hosts, and before
this I did not shut down the VMs on OpenStack.
When all the hosts boot up and the ceph