On 26/03/2015, at 20.38, J-P Methot jpmet...@gtcomm.net wrote:

Hi,

Lately I've been going back to work on one of my first Ceph setups, and now I see that I have created far too many placement groups for the pools on that setup (about 10,000 too many). I believe this may impact performance negatively, as performance on this Ceph cluster is abysmal.
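For context: pg_num on an existing pool can be raised but not lowered, which is why fixing this means migrating the data to a new pool at all. A quick way to see where a pool stands (a sketch, using glance-images as the example pool per the commands below):

ceph osd ls | wc -l                      # number of OSDs in the cluster
ceph osd pool get glance-images size     # replica count
ceph osd pool get glance-images pg_num   # current PG count
# rule of thumb: total PGs across all pools ~= (OSDs * 100) / replica count,
# rounded up to the next power of two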
On 26/03/2015, Steffen W Sørensen ste...@me.com wrote:

Hi Jean,

You would probably need this:

# create a new pool with a saner PG count, copy everything over,
# then swap the names and drop the old pool
ceph osd pool create glance-images-bkp 128 128
rados cppool glance-images glance-images-bkp
ceph osd pool rename glance-images glance-images-old
ceph osd pool rename glance-images-bkp glance-images
# pool deletion needs the name twice plus the safety flag
ceph osd pool delete glance-images-old glance-images-old --yes-i-really-really-mean-it
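Before renaming or deleting anything, it would be wise to check that the copy actually picked up every object; a quick sanity check might be:

rados df | grep glance-images           # object counts for old and new pool side by side
rados -p glance-images-bkp ls | wc -l   # or count objects in the new pool directly

One caveat: running clients (e.g. glance-api) resolve the pool when they connect, so they will most likely need a restart after the rename.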
On 26/03/2015, at 21.07, J-P Methot jpmet...@gtcomm.net wrote:

That's a great idea. I know I can set up Cinder (the OpenStack volume manager) as a multi-backend manager and migrate from one backend to the other, each backend linking to a different pool of the same Ceph cluster. What bugs me, though, is that I'm pretty sure the image store, Glance, wouldn't
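For the record, a rough sketch of that multi-backend setup, untested and with hypothetical pool and backend names (two RBD backends in cinder.conf, then retype volumes across with migration):

# cinder.conf (excerpt)
[DEFAULT]
enabled_backends = rbd-old,rbd-new

[rbd-old]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
volume_backend_name = rbd-old

[rbd-new]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes-new
volume_backend_name = rbd-new

# one-time: a volume type pointing at the new backend
cinder type-create rbd-new
cinder type-key rbd-new set volume_backend_name=rbd-new
# per volume: retype, letting cinder migrate the data
cinder retype --migration-policy on-demand <volume-id> rbd-new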
On 26/03/2015, at 22.53, Steffen W Sørensen ste...@me.com wrote:

I thought there was some discussion about this before. Something like creating a new pool, then taking your existing pool as a cache overlay of the new pool, and then flushing the overlay to the new pool. I haven't tried it and don't know if it is possible.

The other option is to shut the VM down,
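If that cache-tier trick works, it would presumably look something like this, with glance-images-new as a hypothetical target pool (untested, and note the snapshot caveat from Greg below):

ceph osd pool create glance-images-new 128 128
# put the old pool in front of the new, empty pool as a cache tier
ceph osd tier add glance-images-new glance-images --force-nonempty
ceph osd tier cache-mode glance-images forward
ceph osd tier set-overlay glance-images-new glance-images
# flush every object down into the new pool
rados -p glance-images cache-flush-evict-all
# detach the tiers again
ceph osd tier remove-overlay glance-images-new
ceph osd tier remove glance-images-new glance-images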
On 26/03/2015, at 23.01, Gregory Farnum g...@gregs42.com wrote:
The procedure you've outlined won't copy snapshots, just the head objects. Preserving the proper snapshot metadata and inter-pool relationships on RBD images isn't, I think, actually possible when trying to change pools.
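The per-image route that does carry snapshots along is rbd export-diff / import-diff, which replays each snapshot into an image in the new pool; parent/child clone relationships across pools still won't survive, though. A sketch for one image with a single snapshot, all names hypothetical:

# destination image must exist with the same size as the source
rbd create glance-images-new/vm-disk --size 10240   # size in MB
# replay up to the snapshot; import-diff recreates snap1 on the target
rbd export-diff glance-images/vm-disk@snap1 - | rbd import-diff - glance-images-new/vm-disk
# then everything written since that snapshot
rbd export-diff --from-snap snap1 glance-images/vm-disk - | rbd import-diff - glance-images-new/vm-disk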