I thought there was some discussion about this before. Something like
creating a new pool, adding your existing pool as a cache overlay in front of
the new pool, and then flushing the overlay down into the new pool. I haven't
tried it and don't know for sure that it is possible.
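
If that tiering trick works, I would expect it to look roughly like this
(untested sketch; "old-pool" and "new-pool" are placeholder names, and some
releases may want extra confirmation flags on these commands):

# add the existing, oversized pool as a cache tier in front of the new, empty pool
ceph osd tier add new-pool old-pool --force-nonempty
ceph osd tier cache-mode old-pool forward
ceph osd tier set-overlay new-pool old-pool
# flush/evict every object from the old pool down into the new pool
rados -p old-pool cache-flush-evict-all
# once the old pool is empty, detach the tiers
ceph osd tier remove-overlay new-pool
ceph osd tier remove new-pool old-pool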

The other option is to shut the VM down, snapshot the image and clone it into
the new pool, point the VM at the clone, and then flatten the RBD.
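
For a single image, that would be something along these lines (again an
untested sketch; pool and image names are placeholders):

# snapshot the source image and clone it into the new pool
rbd snap create old-pool/vm-disk@migrate
rbd snap protect old-pool/vm-disk@migrate
rbd clone old-pool/vm-disk@migrate new-pool/vm-disk
# repoint the VM at new-pool/vm-disk, then break the link to the parent
rbd flatten new-pool/vm-disk
rbd snap unprotect old-pool/vm-disk@migrate
rbd snap rm old-pool/vm-disk@migrate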

Robert LeBlanc

Sent from a mobile device, please excuse any typos.
On Mar 26, 2015 5:23 PM, "Steffen W Sørensen" <[email protected]> wrote:

>
> On 26/03/2015, at 23.13, Gregory Farnum <[email protected]> wrote:
>
> The procedure you've outlined won't copy snapshots, just the head
> objects. Preserving the proper snapshot metadata and inter-pool
> relationships on rbd images I think isn't actually possible when
> trying to change pools.
>
> This wasn’t meant for migrating an RBD pool, but pure object/Swift pools…
>
> Anyway, it seems Glance
> <http://docs.openstack.org/developer/glance/architecture.html#basic-architecture>
>  supports multiple storage back ends
> <http://docs.openstack.org/developer/glance/configuring.html#configuring-multiple-swift-accounts-stores>,
> so I assume one could use a glance client to extract/download images into a
> local file format (raw, qcow2, vmdk…) as well as to upload images to glance.
> And since glance images aren’t ‘live’ like virtual disk images, one could also
> download glance images from one glance store to a local file and upload them
> back into a different glance back-end store. Again, this is probably better
> than dealing at a lower abstraction level and having to know its internal
> storage structures, avoiding what you’re pointing out, Greg.
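>
> Roughly something like this with the standard glance CLI (the image ID and
> name below are placeholders):
>
> # pull the image out of the source glance store into a local file
> glance image-download --file ./myimage.raw <image-id>
> # upload it again through a glance pointed at the destination store
> glance image-create --name myimage --disk-format raw \
>     --container-format bare --file ./myimage.raw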
>
>
>
>
>
> On Thu, Mar 26, 2015 at 3:05 PM, Steffen W Sørensen <[email protected]> wrote:
>
>
> On 26/03/2015, at 23.01, Gregory Farnum <[email protected]> wrote:
>
> On Thu, Mar 26, 2015 at 2:53 PM, Steffen W Sørensen <[email protected]> wrote:
>
>
> On 26/03/2015, at 21.07, J-P Methot <[email protected]> wrote:
>
> That's a great idea. I know I can set up cinder (the openstack volume
> manager) as a multi-backend manager and migrate from one backend to the
> other, each backend linking to a different pool of the same ceph cluster.
> What bugs me, though, is that I'm pretty sure the image store, glance,
> wouldn't let me do that. Additionally, since the compute component also has
> its own ceph pool, I'm pretty sure it won't let me migrate the data through
> openstack.
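>
> As a rough, untested sketch (backend names and the target host are
> placeholders), the two pools could be exposed as separate cinder backends
> and volumes migrated between them:
>
> # cinder.conf
> [DEFAULT]
> enabled_backends = rbd-old,rbd-new
>
> [rbd-old]
> volume_driver = cinder.volume.drivers.rbd.RBDDriver
> volume_backend_name = rbd-old
> rbd_pool = pool-wth-too-many-pgs
>
> [rbd-new]
> volume_driver = cinder.volume.drivers.rbd.RBDDriver
> volume_backend_name = rbd-new
> rbd_pool = better-sized-pool
>
> # then, per volume:
> cinder migrate <volume-id> <cinder-host>@rbd-new#rbd-new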
>
> Hm, wouldn’t it be possible to do something similar, à la:
>
> # list objects in the source pool (filter the IDs here if needed)
> rados -p pool-wth-too-many-pgs ls | while read obj; do
>    # export $obj from the source pool to a local file
>    rados -p pool-wth-too-many-pgs get "$obj" "$obj.tmp"
>    # import $obj from the local file into the new pool
>    rados -p better-sized-pool put "$obj" "$obj.tmp" && rm -f "$obj.tmp"
> done
>
>
> You would also have issues with snapshots if you do this on an RBD
> pool. That's unfortunately not feasible.
>
> What isn’t possible, exporting/importing objects out of and into pools, or
> the snapshot issues?
>
> /Steffen
>
>
>