Re: [Openstack-operators] Multiple Ceph pools for Nova?

2018-05-22 Thread Smith, Eric
Thanks everyone for the feedback - I have a pretty small environment (11 nodes), and I was able to find the compute / volume pool segregation settings within nova.conf / cinder.conf. I think I should be able to just export / import my existing RBDs from the spinning-disk compute pool to the SSD compute pool.
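For reference, the export / import Eric describes can be done with the rbd CLI while the instance is stopped. A rough sketch only; the pool names (vms-hdd, vms-ssd) and the <instance-uuid>_disk image name are placeholders for whatever your environment actually uses:

    # stream the disk out of the HDD-backed pool and into the SSD-backed pool
    # (vms-hdd / vms-ssd / <instance-uuid>_disk are example names, not from the thread)
    rbd export vms-hdd/<instance-uuid>_disk - | rbd import - vms-ssd/<instance-uuid>_disk

    # verify the copy, then remove the old image
    rbd info vms-ssd/<instance-uuid>_disk
    rbd rm vms-hdd/<instance-uuid>_disk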

Re: [Openstack-operators] Multiple Ceph pools for Nova?

2018-05-21 Thread Matt Riedemann
On 5/21/2018 11:51 AM, Smith, Eric wrote: I have 2 Ceph pools, one backed by SSDs and one backed by spinning disks (Separate roots within the CRUSH hierarchy). I’d like to run all instances in a single project / tenant on SSDs and the rest on spinning disks. How would I go about setting this up?

Re: [Openstack-operators] Multiple Ceph pools for Nova?

2018-05-21 Thread Guilherme Steinmuller Pimentel
2018-05-21 16:17 GMT-03:00 Erik McCormick: > Do you have enough hypervisors you can dedicate some to each purpose? You could make two availability zones, each with a different backend. I have about 20 hypervisors. Ten are using a nova pool with SAS disks and the other ten use the second pool.

Re: [Openstack-operators] Multiple Ceph pools for Nova?

2018-05-21 Thread Guilherme Steinmuller Pimentel
I usually separate things using the host aggregate feature. In my deployment, I have 2 different nova pools. So, in nova.conf, I define the *images_rbd_pool* variable to point to the desired pool, and then I create an aggregate and add those compute nodes to it. The flavor extra_spec metadata then defines which aggregate (and therefore which pool) an instance is scheduled to.
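A minimal sketch of that approach, assuming the AggregateInstanceExtraSpecsFilter scheduler filter is enabled and using made-up pool, aggregate, flavor and host names (vms-ssd, ssd-hosts, m1.ssd, compute-ssd-01):

    # /etc/nova/nova.conf on the SSD-backed compute nodes
    [libvirt]
    images_type = rbd
    images_rbd_pool = vms-ssd
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder

    # group the SSD hypervisors and tie a flavor to that aggregate
    openstack aggregate create ssd-hosts
    openstack aggregate set --property ssd=true ssd-hosts
    openstack aggregate add host ssd-hosts compute-ssd-01
    openstack flavor create --ram 4096 --disk 40 --vcpus 2 m1.ssd
    openstack flavor set m1.ssd --property aggregate_instance_extra_specs:ssd=true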

Re: [Openstack-operators] Multiple Ceph pools for Nova?

2018-05-21 Thread Erik McCormick
Do you have enough hypervisors that you can dedicate some to each purpose? You could make two availability zones, each with a different backend. On Mon, May 21, 2018, 11:52 AM Smith, Eric wrote: > I have 2 Ceph pools, one backed by SSDs and one backed by spinning disks (separate roots within the CRUSH hierarchy).
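If you go the availability zone route, the setup is roughly as follows; the zone, aggregate, host, flavor, image and network names below are placeholders, not from the thread:

    # put the SSD-backed hypervisors in their own availability zone
    openstack aggregate create --zone ssd-az ssd-hosts
    openstack aggregate add host ssd-hosts compute-ssd-01

    # users (or orchestration) then pick the zone at boot time
    openstack server create --flavor m1.small --image cirros \
      --availability-zone ssd-az --network private test-vm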

[Openstack-operators] Multiple Ceph pools for Nova?

2018-05-21 Thread Smith, Eric
I have 2 Ceph pools, one backed by SSDs and one backed by spinning disks (Separate roots within the CRUSH hierarchy). I’d like to run all instances in a single project / tenant on SSDs and the rest on spinning disks. How would I go about setting this up?