Re: [openstack-dev] [Fuel] Separating Ceph pools depending on storage type

2015-03-20 Thread Andrew Woodward
Right now, we create pools for images, compute, and volumes, and radosgw creates a bunch; all are assigned to the default CRUSH map. From the Ceph side, the way to create a pool that is separated from another pool is to create a ruleset in the CRUSH map to isolate the devices, then the
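The CRUSH-level separation Andrew describes could be sketched roughly as below. This is a hypothetical CRUSH map fragment, not taken from the thread: the bucket name `ssd`, the rule name `ssd_only`, the OSD ids, and the weights are all illustrative placeholders.

```
# Hypothetical bucket grouping only the SSD-backed OSDs.
root ssd {
    id -10                      # arbitrary unused negative bucket id
    alg straw
    hash 0                      # rjenkins1
    item osd.3 weight 1.000     # illustrative SSD OSDs
    item osd.7 weight 1.000
}

# Rule restricting placement to devices under the "ssd" root.
rule ssd_only {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take ssd
    step chooseleaf firstn 0 type osd
    step emit
}
```

A pool assigned to ruleset 1 would then only place data on the OSDs under the `ssd` root, keeping it isolated from pools using the default ruleset.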

Re: [openstack-dev] [Fuel] Separating Ceph pools depending on storage type

2015-03-20 Thread Federico Michele Facca
Hi, generally speaking, it would be nice to have the possibility to define availability zones, and this could be used to group not only computing resources but also storage ones. For this, if I am not wrong, there is already a discussion or blueprint from the Mirantis folks. Then I am

[openstack-dev] [Fuel] Separating Ceph pools depending on storage type

2015-03-19 Thread Rogon, Kamil
Hello, I want to initiate a discussion about different backend storage types for Ceph. Now all types of drives (HDD, SAS, SSD) are treated the same way, so performance can vary widely. It would be good to detect SSD drives and create a separate Ceph pool for them. From the user perspective, it
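As a rough sketch of what Kamil proposes: once an SSD-only CRUSH rule exists, a dedicated pool can be created and pointed at it with the standard ceph CLI. This is a configuration sketch, not from the thread; the pool name `volumes-ssd`, the PG count, and the ruleset number are illustrative assumptions (on Ceph releases of that era the option was `crush_ruleset`; later releases renamed it `crush_rule`).

```shell
# Create a pool intended for SSD-backed volumes
# (name and placement-group counts are illustrative).
ceph osd pool create volumes-ssd 128 128

# Bind the pool to a CRUSH ruleset that only places data on SSD OSDs.
# Ruleset 1 is assumed to be an SSD-only rule defined in the CRUSH map.
ceph osd pool set volumes-ssd crush_ruleset 1
```

Cinder (or Fuel, on the user's behalf) could then expose this pool as a separate volume backend, so users choose SSD or HDD storage explicitly.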