Ceilometer can be a pain in the a* if not properly configured and designed, especially once things start to grow. I've already seen the exact situation you describe on two different installations. To make things more complicated, some OpenStack distributions use MongoDB as the storage backend and don't provide dedicated infrastructure for Ceilometer, relegating this important service to live, by default, on the controller nodes... worse: they give no clear guidance on what to do when the service starts to stall, other than simply adding more controller nodes... (yes Red Hat, I'm looking at you). You might consider using Gnocchi with Ceph storage for telemetry, as was already suggested.
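In case it helps, here's a minimal sketch of what pointing Gnocchi at Ceph (and Ceilometer at Gnocchi) could look like. The pool and cephx user names are just placeholders I made up, adjust for your environment:

    # /etc/gnocchi/gnocchi.conf -- store metrics in Ceph instead of local disk
    [storage]
    driver = ceph
    ceph_pool = gnocchi            # assumed dedicated RADOS pool for metrics
    ceph_username = gnocchi        # assumed cephx user with access to that pool
    ceph_conffile = /etc/ceph/ceph.conf

    # /etc/ceilometer/ceilometer.conf -- dispatch samples/events to Gnocchi
    [DEFAULT]
    meter_dispatchers = gnocchi
    event_dispatchers = gnocchi

That way the write load from metrics lands on the Ceph cluster instead of hammering the local spindles on your controllers.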
For my 2 cents, here's a nice talk on the matter: https://www.openstack.org/videos/video/capacity-planning-saving-money-and-maximizing-efficiency-in-openstack-using-gnocchi-and-ceilometer

[]'s
Hubner

On Sat, Apr 8, 2017 at 2:00 PM, Paras pradhan <[email protected]> wrote:
> Hello
>
> What kind of storage backend do you guys use if you see disk IO
> bottlenecks when storing ceilometer events and metrics? In my current
> configuration I am using 300 GB 10K SAS (in hardware RAID 1) and the iostat
> report does not look good (up to 100% utilization), with ceilometer
> consuming high CPU and memory. Does it help to add more spindles and move
> to RAID 10?
>
> Thanks!
> Paras.
>
> _______________________________________________
> OpenStack-operators mailing list
> [email protected]
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
