Hello,

We use a single Ceph cluster for Nova (and for Glance and Cinder as well), and over time
more and more data accumulates there. We can't let the cluster grow indefinitely because of
Ceph's scaling limitations, so sooner or later it will have to be closed to new
instances, images, and volumes. Not to mention that a single cluster is a big failure domain.

How do you handle this issue?
What is your strategy to divide Ceph clusters between compute nodes?
How do you then handle VM snapshot placement and migration
(snapshots would be left behind on the older Ceph cluster)?

We've been thinking about features like dynamic Ceph configuration in Nova
(instead of the static settings in nova.conf), pinning instances to a specific Ceph cluster, etc.
What do you think about that?
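
For context, the "static" configuration referred to above is the per-compute-node RBD backend in nova.conf's [libvirt] section. A minimal sketch (pool name, user, and secret UUID are illustrative placeholders, not values from this deployment):

```ini
# nova.conf on a compute node: the Ceph backend is fixed here,
# so every instance on this node lands on the same cluster.
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <libvirt-secret-uuid>
```

Because these options are set once per node, pointing a subset of compute nodes at a second cluster today effectively means partitioning the fleet (e.g. via host aggregates) rather than choosing a cluster per instance.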


_______________________________________________
OpenStack-operators mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators