Re: [ceph-users] pool/volume live migration

2019-02-11 Thread Jason Dillaman
On Mon, Feb 11, 2019 at 4:53 AM Luis Periquito wrote:
> Hi Jason,
>
> that's been very helpful, but it got me thinking and looking.
>
> The pool name is both inside the libvirt.xml (and running KVM config)
> and it's cached in the Nova database. For it to change would require a
> detach/attach …

Re: [ceph-users] pool/volume live migration

2019-02-08 Thread Jason Dillaman
On Fri, Feb 8, 2019 at 11:43 AM Luis Periquito wrote:
> This is indeed for an OpenStack cloud - it didn't require any level of
> performance (so was created on an EC pool) and now it does :(
>
> So the idea would be:
> 0 - upgrade OSDs and librbd clients to Nautilus
> 1 - create a new pool

Are …

Re: [ceph-users] pool/volume live migration

2019-02-08 Thread Luis Periquito
This is indeed for an OpenStack cloud - it didn't require any level of performance (so was created on an EC pool) and now it does :(

So the idea would be:
1 - create a new pool
2 - change cinder to use the new pool for each volume
3 - stop the usage of the volume (stop the instance?)
4 - "live …
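A rough shell sketch of the plan above, assuming Nautilus is already running everywhere (the pool name, PG count, and config values are made-up placeholders, not from the thread):

```shell
# 1 - create the new replicated pool and initialise it for RBD use
#     ("new-pool" and the PG count 128 are placeholder values)
ceph osd pool create new-pool 128
rbd pool init new-pool

# 2 - point the Cinder RBD backend at the new pool
#     (edit cinder.conf: rbd_pool = new-pool, then restart cinder-volume)

# 3 - stop all users of the volume, e.g. shut down the instance

# 4 - migrate the image between pools with "rbd migration"
```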

Re: [ceph-users] pool/volume live migration

2019-02-08 Thread Jason Dillaman
Correction: at least for the initial version of live-migration, you need to temporarily stop clients that are using the image, execute "rbd migration prepare", and then restart the clients against the new destination image. The "prepare" step will fail if it detects that the source image is …
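Since the initial implementation requires the image to be closed, it can help to confirm no clients are still attached before preparing the migration. A sketch, where the pool and image names are placeholders:

```shell
# list any clients still watching the image; "prepare" will fail
# while the image remains open
rbd status old-pool/volume-1234

# once no watchers remain, link the image to its new location
rbd migration prepare old-pool/volume-1234 new-pool/volume-1234

# clients can now be restarted against new-pool/volume-1234
```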

Re: [ceph-users] pool/volume live migration

2019-02-08 Thread Jason Dillaman
Indeed, it is forthcoming in the Nautilus release. You would initiate a "rbd migration prepare <src-image-spec> <dst-image-spec>" to transparently link the dst-image-spec to the src-image-spec. Any active Nautilus clients against the image will then re-open the dst-image-spec for all IO operations. Read requests that cannot be …
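The prepare/execute/commit flow Jason outlines might look like this in practice (a sketch; the pool and image names are hypothetical):

```shell
# create the cross-pool link; the destination image becomes usable
# immediately, with reads falling through to the source as needed
rbd migration prepare src-pool/my-image dst-pool/my-image

# deep-copy the remaining blocks in the background
rbd migration execute dst-pool/my-image

# once execute finishes, drop the link and remove the source image
rbd migration commit dst-pool/my-image
```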

Re: [ceph-users] pool/volume live migration

2019-02-08 Thread Caspar Smit
Hi Luis,

According to slide 21 of Sage's presentation at FOSDEM it is coming in Nautilus:
https://fosdem.org/2019/schedule/event/ceph_project_status_update/attachments/slides/3251/export/events/attachments/ceph_project_status_update/slides/3251/ceph_new_in_nautilus.pdf

Kind regards,
Caspar

Op …