On 09/25/14 17:55, Clint Byrum wrote:
> > Now I use Ceilometer's pipeline to forward events to elasticsearch via udp + logstash and do not use Ceilometer's DB or API at all.
>
> Interesting, this almost sounds like what should be the default
> configuration honestly.
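For reference, the setup quoted above boils down to a UDP publisher in ceilometer's pipeline plus a matching logstash input. This is only a sketch: the sink/source names, host and port are placeholders, not my actual config.

```yaml
# Sketch of the relevant part of ceilometer's pipeline.yaml.
# "udp_sink", the host name and the port are placeholders.
sources:
    - name: meter_source
      interval: 600
      meters:
          - "*"
      sinks:
          - udp_sink
sinks:
    - name: udp_sink
      transformers:
      publishers:
          - udp://logstash.example.com:4952
```

On the logstash side, ceilometer's udp publisher emits msgpack, so the input needs the msgpack codec before forwarding to Elasticsearch:

```
input {
  udp {
    port  => 4952
    codec => msgpack
  }
}
output {
  elasticsearch {
    host => "localhost"
  }
}
```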

Ceilometer generates a lot of data in a real OpenStack deployment. The problem is not only managing the I/O load, but also the long-term implications of storing and searching through that amount of data. The documentation did not cover this last point at all, last time I checked. It is actually very difficult to find anything about "what can I do with ceilometer data?".

I'm storing about one million messages per day generated by a cluster of six compute nodes. Assuming each message contains one sample, MySQL is not the best solution: if you need to keep them for one year, that is already over 350 million rows. And as others have said, it is not a good relational-database use case anyway.
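A quick back-of-the-envelope check on those numbers (the bytes-per-row figure is an assumption for illustration, not a measurement):

```python
# Rough one-year storage estimate for the volume described above.
messages_per_day = 1000000   # ~1M samples/day from the 6-node cluster
days = 365
bytes_per_row = 1024         # assumed average row + index overhead

rows = messages_per_day * days
size_gb = rows * bytes_per_row / 1024.0 ** 3

print(rows)               # 365000000 rows
print(round(size_gb))     # ~348 GB
```

Even with a conservative per-row size, that is hundreds of gigabytes per year in a single table.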

As for Elasticsearch, the setup is straightforward and works well, all the more so since you can reuse the same infrastructure for system logs. It is fast to deploy, reliable in the long run and easily scalable. And you can start taking a good look at the data while you think of ways to use it.

You need to add some simple mappings, otherwise ES will try to be smart when indexing fields (UUID-style IDs get tokenized at each '-' and exact matches no longer work).
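Something along these lines (against the ES 1.x mapping syntax of the time; the field names are the usual ceilometer sample fields, adjust to whatever you actually index):

```json
{
  "mappings": {
    "_default_": {
      "properties": {
        "resource_id": { "type": "string", "index": "not_analyzed" },
        "project_id":  { "type": "string", "index": "not_analyzed" },
        "user_id":     { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}
```

With "not_analyzed" the IDs are stored as single terms instead of being split on '-', so term queries on them match again.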

There are also messages from Neutron metering and Cinder that use a non-standard format for their date fields, which Elasticsearch cannot parse without help (I opened bugs for those, and use logstash to convert the fields to ISO 8601). Hopefully these will be fixed.
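The logstash side of that workaround is just a date filter; the field name and source pattern below are placeholders, since the exact format depends on which service emitted the message:

```
filter {
  date {
    # Parse the odd source format...
    match  => [ "timestamp", "yyyy-MM-dd HH:mm:ss.SSSSSS" ]
    # ...and write it back to the same field, which serializes as ISO 8601.
    target => "timestamp"
  }
}
```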

The application that consumes Ceilometer's data through Elasticsearch has to talk to the OpenStack APIs anyway, to translate the IDs into names and to make sense of Neutron metering labels.
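That translation step is easy to keep out of the hot path with a small cache around the API lookup. A minimal sketch, with the actual OpenStack call abstracted behind a callable (in a real deployment that callable would hit e.g. the Nova servers API with a Keystone token; the names here are mine, not from any library):

```python
class IdTranslator(object):
    """Cache ID -> display-name lookups so each resource is
    resolved against the OpenStack APIs only once."""

    def __init__(self, lookup):
        self._lookup = lookup   # callable: resource_id -> name
        self._cache = {}

    def name_for(self, resource_id):
        if resource_id not in self._cache:
            self._cache[resource_id] = self._lookup(resource_id)
        return self._cache[resource_id]


# Usage with a stand-in lookup (a real one would query the API):
translator = IdTranslator(lambda rid: "instance-" + rid)
print(translator.name_for("4f0aeb5d"))   # instance-4f0aeb5d
```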


OpenStack-dev mailing list
