On 03/17/2015 02:33 PM, Joe Gordon wrote:

Digging up this old thread because I am working on getting multi node live
migration testing working (https://review.openstack.org/#/c/165182/), and just
ran into this issue (bug 1398999).

And I am not sure I agree with this statement. I think there is a valid case for
doing a block migration with a Cinder volume attached to an instance:


* Cloud isn't using a shared filesystem for ephemeral storage
* Instance is booted from an image, and a volume is attached afterwards. An
admin wants to take the box the instance is running on offline for maintenance
with minimal impact to the instances running on it.
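If block migration did support attached volumes, the admin workflow described above would look roughly like the following sketch (server and host names are hypothetical, and this assumes the nova CLI of that era):

```shell
# Drain the host being taken down for maintenance by block-migrating the
# instance; --block-migrate copies the local ephemeral disk since the
# cloud has no shared filesystem for ephemeral storage.
nova live-migration --block-migrate my-instance target-host

# Watch status and host until the migration completes.
nova show my-instance | grep -E 'status|OS-EXT-SRV-ATTR:host'
```

The bug in question (1398999) is precisely that this path is rejected or misbehaves when a Cinder volume is attached, since the volume's disk would also be copied even though it is already reachable from the destination host.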

What is the recommended solution for that use case? If the admin disconnects and
reconnects the volume themselves, is there a risk of impacting what's running on
the instance?

Interesting bug. I think I agree with you that there isn't a good solution currently for instances that have a mix of shared and not-shared storage.

I'm curious what Daniel meant by saying that marking the disk shareable is not as reliable as we would want.
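For anyone following along: as I understand it, "marking the disk shareable" means setting the `<shareable/>` element on the volume's disk in the libvirt domain XML, which tells libvirt/QEMU not to assume exclusive access to the device (so it can be skipped during the block-migration disk copy). A hypothetical attached-volume definition might look like this (the source device path is purely illustrative):

```xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/disk/by-path/example-iscsi-lun-0'/>
  <target dev='vdb' bus='virtio'/>
  <!-- Tells libvirt the device may be accessed by multiple domains,
       relaxing exclusive-use protections -->
  <shareable/>
</disk>
```

The reliability concern, presumably, is that `<shareable/>` also relaxes safeguards (locking, cache behavior) that exist for good reason, even though in this case only one instance ever actually uses the volume.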

I think there is definitely a risk if the admin disconnects the volume--whether or not that causes problems depends on whether the application can handle that cleanly.

I suspect the "proper" cloud-aware strategy would be to just kill it and have another instance take over. But that's not very helpful for not-fully-cloud-aware applications.

Also, since you've been playing in this area...do you know if we currently properly support all variations on live/cold migration, resize, evacuate, etc. for the boot-from-volume case?

Chris

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
