On Mon, 2016-10-10 at 13:29 +0000, Adam Kijak wrote:
> Hello,
>
> We use a Ceph cluster for Nova (Glance and Cinder as well) and over
> time, more and more data is stored there. We can't keep the cluster
> so big because of Ceph's limitations. Sooner or later it needs to be
> closed for adding new instances, images and volumes. Not to mention
> it's a big failure domain.
I'm really keen to hear more about those limitations.

> How do you handle this issue?
> What is your strategy to divide Ceph clusters between compute nodes?
> How do you solve VM snapshot placement and migration issues then
> (snapshots will be left on older Ceph)?

Having played with Ceph and compute on the same hosts, I'm a big fan
of separating them and having dedicated Ceph hosts and dedicated
compute hosts. That gives me a lot more flexibility with hardware
configuration and maintenance, makes troubleshooting resource
contention easier, and lets storage and compute scale at different
rates.

> We've been thinking about features like: dynamic Ceph configuration
> (not static like in nova.conf) in Nova, pinning instances to a Ceph
> cluster etc.
> What do you think about that?
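To make the "static" part concrete: the RBD binding for ephemeral
disks lives in nova.conf under [libvirt] and points at exactly one
cluster/pool per compute host, which is why pinning individual
instances to a cluster isn't possible today. A minimal sketch (pool
name, user and secret UUID are placeholders):

    [libvirt]
    # every instance on this compute host lands on this one cluster/pool
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = <libvirt secret uuid>

Since that mapping is per compute host, the closest you can get to
"pinning" right now is pointing different groups of compute hosts at
different Ceph clusters and steering instances with host aggregates.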
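On the Cinder side you can already split volumes across clusters with
multiple RBD backends and pin them via volume types. A rough sketch of
cinder.conf, assuming two clusters with their own ceph.conf files and
keyrings (backend names and secret UUIDs are placeholders):

    [DEFAULT]
    enabled_backends = ceph-1,ceph-2

    [ceph-1]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph-1
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph-1.conf
    rbd_user = cinder
    rbd_secret_uuid = <secret uuid for ceph-1>

    [ceph-2]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph-2
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph-2.conf
    rbd_user = cinder
    rbd_secret_uuid = <secret uuid for ceph-2>

and then a volume type per cluster to pin new volumes:

    cinder type-create ceph-2
    cinder type-key ceph-2 set volume_backend_name=ceph-2

That doesn't help with Nova ephemeral disks or with where snapshots of
existing instances end up, which is exactly the gap you're describing.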
_______________________________________________
OpenStack-operators mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators