Re: [Openstack-operators] ceph rbd root disk unexpected deletion

2017-03-17 Thread Mike Lowe
This was Newton, booting from an ephemeral disk.  There were no delete events in 
the nova API database, just an unexpected stop when the kernel OOM killer got 
qemu.  
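
For anyone wanting to repeat that check, here is a minimal sketch of one way to
do it, assuming direct read access to the nova database; the host, credentials,
and database name below are placeholders rather than details from this thread:

    import pymysql  # any MySQL client library would do

    # Placeholder connection details; point these at your nova database.
    conn = pymysql.connect(host="nova-db.example.com", user="nova_ro",
                           password="secret", database="nova")
    try:
        with conn.cursor() as cur:
            # instance_actions records every API-initiated action
            # (create, stop, delete, ...) against the instance.
            cur.execute(
                "SELECT action, request_id, start_time, message"
                " FROM instance_actions"
                " WHERE instance_uuid = %s ORDER BY start_time",
                ("4367a2e4-d704-490d-b3a6-129b9465cd0d",),
            )
            for action, request_id, start_time, message in cur.fetchall():
                print(action, request_id, start_time, message)
    finally:
        conn.close()

If no delete action shows up here, nothing went through the API to remove the
disk, which matches what Mike describes.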


> On Mar 17, 2017, at 8:28 AM, Saverio Proto  wrote:
> 
> Hello Mike,
> 
> What version of OpenStack?
> Is the instance booting from an ephemeral disk or from a Cinder volume?
> 
> When you boot from a volume, that volume is the root disk of your
> instance. The user could have selected "Delete Volume on Instance
> Delete"; that option can be chosen when creating a new instance.
> 
> Saverio
> 
> 2017-03-13 15:47 GMT+01:00 Mike Lowe :
>> Over the weekend a user reported that his instance was in a stopped state 
>> and could not be started. On further examination it appears that the VM had 
>> crashed, and the strange thing is that the root disk is now gone.  Has 
>> anybody come across anything like this before?
>> 
>> And why on earth is it attempting deletion of the rbd device without 
>> deletion of the instance?
>> 
>> 2017-03-12 10:59:07.591 3010 WARNING nova.virt.libvirt.storage.rbd_utils [-] 
>> rbd remove 4367a2e4-d704-490d-b3a6-129b9465cd0d_disk in pool ephemeral-vms 
>> failed
>> 2017-03-12 10:59:17.613 3010 WARNING nova.virt.libvirt.storage.rbd_utils [-] 
>> rbd remove 4367a2e4-d704-490d-b3a6-129b9465cd0d_disk in pool ephemeral-vms 
>> failed
>> 2017-03-12 10:59:26.143 3010 WARNING nova.virt.libvirt.storage.rbd_utils [-] 
>> rbd remove 4367a2e4-d704-490d-b3a6-129b9465cd0d_disk in pool ephemeral-vms 
>> failed





Re: [Openstack-operators] ceph rbd root disk unexpected deletion

2017-03-17 Thread Saverio Proto
Hello Mike,

What version of OpenStack?
Is the instance booting from an ephemeral disk or from a Cinder volume?

When you boot from a volume, that volume is the root disk of your
instance. The user could have selected "Delete Volume on Instance
Delete"; that option can be chosen when creating a new instance.

Saverio

2017-03-13 15:47 GMT+01:00 Mike Lowe :
> Over the weekend a user reported that his instance was in a stopped state and 
> could not be started. On further examination it appears that the VM had 
> crashed, and the strange thing is that the root disk is now gone.  Has anybody 
> come across anything like this before?
>
> And why on earth is it attempting deletion of the rbd device without deletion 
> of the instance?
>
> 2017-03-12 10:59:07.591 3010 WARNING nova.virt.libvirt.storage.rbd_utils [-] 
> rbd remove 4367a2e4-d704-490d-b3a6-129b9465cd0d_disk in pool ephemeral-vms 
> failed
> 2017-03-12 10:59:17.613 3010 WARNING nova.virt.libvirt.storage.rbd_utils [-] 
> rbd remove 4367a2e4-d704-490d-b3a6-129b9465cd0d_disk in pool ephemeral-vms 
> failed
> 2017-03-12 10:59:26.143 3010 WARNING nova.virt.libvirt.storage.rbd_utils [-] 
> rbd remove 4367a2e4-d704-490d-b3a6-129b9465cd0d_disk in pool ephemeral-vms 
> failed



Re: [Openstack-operators] ceph rbd root disk unexpected deletion

2017-03-13 Thread Edmund Rhudy (BLOOMBERG/ 120 PARK)
I recall encountering something like this once when an instance termination 
failed halfway through - the root disk was removed but the instance record 
remained in the database. In my case, it didn't spontaneously happen, but was a 
requested termination that blew up at some point between removing the root disk 
and removing the instance record from the DB. If the instance action log 
doesn't indicate that anyone asked for the instance to be terminated, that's a 
bit weird.
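
As a rough illustration of checking that action log, the same records are
available through the compute API; a sketch with python-novaclient, using
placeholder credentials (the `nova instance-action-list <uuid>` CLI command
gives the same view):

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client

    # Placeholder credentials; any account that can see the instance will do.
    auth = v3.Password(auth_url="http://keystone.example.com:5000/v3",
                       username="admin", password="secret", project_name="admin",
                       user_domain_name="Default", project_domain_name="Default")
    nova = client.Client("2.1", session=session.Session(auth=auth))

    uuid = "4367a2e4-d704-490d-b3a6-129b9465cd0d"
    # One entry per API-initiated action (create, stop, delete, ...), including
    # the request id and user that triggered it; a delete here would explain the
    # rbd removal.
    for act in nova.instance_action.list(uuid):
        print(act.action, act.request_id, act.start_time, act.user_id)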

From: joml...@iu.edu 
Subject: Re: [Openstack-operators] ceph rbd root disk unexpected deletion

Over the weekend a user reported that his instance was in a stopped state and 
could not be started. On further examination it appears that the VM had crashed, 
and the strange thing is that the root disk is now gone.  Has anybody come 
across anything like this before?

And why on earth is it attempting deletion of the rbd device without deletion 
of the instance?

2017-03-12 10:59:07.591 3010 WARNING nova.virt.libvirt.storage.rbd_utils [-] 
rbd remove 4367a2e4-d704-490d-b3a6-129b9465cd0d_disk in pool ephemeral-vms 
failed
2017-03-12 10:59:17.613 3010 WARNING nova.virt.libvirt.storage.rbd_utils [-] 
rbd remove 4367a2e4-d704-490d-b3a6-129b9465cd0d_disk in pool ephemeral-vms 
failed
2017-03-12 10:59:26.143 3010 WARNING nova.virt.libvirt.storage.rbd_utils [-] 
rbd remove 4367a2e4-d704-490d-b3a6-129b9465cd0d_disk in pool ephemeral-vms 
failed





[Openstack-operators] ceph rbd root disk unexpected deletion

2017-03-13 Thread Mike Lowe
Over the weekend a user reported that his instance was in a stopped state and 
could not be started. On further examination it appears that the VM had crashed, 
and the strange thing is that the root disk is now gone.  Has anybody come 
across anything like this before?

And why on earth is it attempting deletion of the rbd device without deletion 
of the instance?

2017-03-12 10:59:07.591 3010 WARNING nova.virt.libvirt.storage.rbd_utils [-] 
rbd remove 4367a2e4-d704-490d-b3a6-129b9465cd0d_disk in pool ephemeral-vms 
failed
2017-03-12 10:59:17.613 3010 WARNING nova.virt.libvirt.storage.rbd_utils [-] 
rbd remove 4367a2e4-d704-490d-b3a6-129b9465cd0d_disk in pool ephemeral-vms 
failed
2017-03-12 10:59:26.143 3010 WARNING nova.virt.libvirt.storage.rbd_utils [-] 
rbd remove 4367a2e4-d704-490d-b3a6-129b9465cd0d_disk in pool ephemeral-vms 
failed
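
For anyone seeing the same warnings, one quick way to confirm whether the image
is actually gone is to ask Ceph directly. A minimal sketch with the rados/rbd
Python bindings, assuming the default ceph.conf and keyring paths; the pool and
image names are taken from the log lines above:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("ephemeral-vms")
        try:
            name = "4367a2e4-d704-490d-b3a6-129b9465cd0d_disk"
            if name in rbd.RBD().list(ioctx):
                # Still present: "rbd remove" failing usually means something
                # (a lingering qemu process, for example) still has the image open.
                print("image still present")
            else:
                print("image is gone")
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()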

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators