Hi all,

We have run into the following problem:

We deployed our environment with Ceph as the volume backend, booted an
instance, and attached a Ceph volume to it. If nova-compute is down when we
delete the instance, the deletion goes through the local_delete path and the
Ceph volume that was attached to the instance changes to "available" status
in Cinder. However, when we then try to delete that volume, an error occurs,
so we are left with an "available" volume that can be neither attached nor
deleted. We also tested this with iSCSI volumes and they seem fine.
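
In case it helps, below is a rough reproduction sketch using the
openstacksdk Python client. The cloud name "devstack" and the image, flavor,
and network names are placeholders for whatever exists in your environment,
and stopping nova-compute has to be done out of band; this is only meant to
illustrate the sequence of steps, not our exact setup.

import openstack

conn = openstack.connect(cloud="devstack")

# 1. Create a volume on the Ceph/RBD backend (assumes the default volume
#    type in cinder.conf maps to the RBD driver).
volume = conn.block_storage.create_volume(size=1, name="ceph-vol")
conn.block_storage.wait_for_status(volume, status="available")

# 2. Boot an instance and attach the volume to it. Image/flavor/network
#    names here are placeholders.
server = conn.compute.create_server(
    name="test-vm",
    image_id=conn.compute.find_image("cirros").id,
    flavor_id=conn.compute.find_flavor("m1.tiny").id,
    networks=[{"uuid": conn.network.find_network("private").id}],
)
server = conn.compute.wait_for_server(server)
conn.compute.create_volume_attachment(server, volume_id=volume.id)

# 3. Stop nova-compute on the host (e.g. via systemctl, outside this
#    script), then delete the instance; nova falls back to local_delete.
conn.compute.delete_server(server)

# 4. The volume goes back to "available" in cinder, but this delete fails,
#    leaving a volume that can be neither attached nor deleted.
conn.block_storage.delete_volume(volume)
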

I reported a bug about this:
https://bugs.launchpad.net/nova/+bug/1672624

Thanks,

Kevin Zheng
