The procedure you've outlined won't copy snapshots, just the head
objects. I don't think it's actually possible to preserve the proper
snapshot metadata and inter-pool relationships on RBD images when
moving them between pools.
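
If losing the snapshots is acceptable, a rough per-image sketch (using
the pool names from the loop quoted below; this is not a full migration
tool, and clone/parent relationships are not preserved either) could
look something like:

```shell
#!/bin/sh
# Copy each RBD image's head data from the old pool to the new one.
# Snapshots and clone/parent relationships are NOT preserved and would
# have to be recreated by hand afterwards.
rbd ls pool-wth-too-many-pgs | while read img; do
    rbd export "pool-wth-too-many-pgs/$img" - \
        | rbd import - "better-sized-pool/$img"
done
```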

On Thu, Mar 26, 2015 at 3:05 PM, Steffen W Sørensen <ste...@me.com> wrote:
>
> On 26/03/2015, at 23.01, Gregory Farnum <g...@gregs42.com> wrote:
>
> On Thu, Mar 26, 2015 at 2:53 PM, Steffen W Sørensen <ste...@me.com> wrote:
>
>
> On 26/03/2015, at 21.07, J-P Methot <jpmet...@gtcomm.net> wrote:
>
> That's a great idea. I know I can setup cinder (the openstack volume
> manager) as a multi-backend manager and migrate from one backend to the
> other, each backend linking to different pools of the same ceph cluster.
> What bugs me though is that I'm pretty sure the image store, glance,
> wouldn't let me do that. Additionally, since the compute component also has
> its own ceph pool, I'm pretty sure it won't let me migrate the data through
> openstack.
>
> Hm, wouldn’t it be possible to do something similar, along the lines of:
>
> # list objects in the source pool
> rados -p pool-wth-too-many-pgs ls | while read obj; do
>     # export $obj to local disk
>     rados -p pool-wth-too-many-pgs get "$obj" "$obj"
>     # import $obj from local disk into the new pool
>     rados -p better-sized-pool put "$obj" "$obj"
> done
>
>
> You would also have issues with snapshots if you do this on an RBD
> pool. That's unfortunately not feasible.
>
> Which part isn’t possible: exporting and importing objects between
> pools, or is it the snapshot issues?
>
> /Steffen
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
