On 14/05/14 13:24, Christian Eichelmann wrote:
> Hi Ceph User!
> 
> I had a look at the "official" collectd fork for ceph, which is quite
> outdated and not compatible with the upstream version.
> 
> Since this was not an option for us, I've written a Python plugin for
> collectd that gets all the precious information out of the admin
> socket's "perf dump" command. It runs on our production cluster right
> now and I'd like to share it with you:
> 
> https://github.com/Crapworks/collectd-ceph
> 
> Any feedback is welcome!

Fun, I'd just implemented something very similar!

I've just pushed my version upstream to:

 https://github.com/dwm/collectd-ceph

There appear to be some minor differences between our designs:

 * I don't require that a types DB be kept up to date and consistent;
   rather, I've reused the generic 'counter' and 'gauge' types.

 * My version includes some historical processing to allow for the
   calculation of per-period, rather than global, average values.

 * Your version is nicer in that it communicates with the admin socket
   directly; I was lazy and simply invoked the `ceph` command-line tool
   to do that work for me.  It's not currently a significant
   performance hit, though it ought to be improved.  (A sketch pulling
   these points together follows this list.)
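
For concreteness, here's a minimal sketch of what a plugin combining
those points might look like: it reads "perf dump" from the admin
socket directly, derives per-period averages from the cumulative
avgcount/sum pairs, and dispatches everything with the stock 'gauge'
type.  The socket path, the wire format (a NUL-terminated JSON command
answered by a 4-byte big-endian length plus a JSON body), and the
counter layout are assumptions based on observed behaviour rather than
a documented interface:

  import json
  import socket
  import struct

  import collectd  # provided by collectd's Python plugin at runtime

  ASOK = '/var/run/ceph/ceph-osd.0.asok'  # hypothetical socket path
  last = {}

  def perf_dump(path):
      # Assumed wire format: a NUL-terminated JSON command is answered
      # with a 4-byte big-endian length followed by a JSON document.
      sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
      sock.connect(path)
      try:
          sock.sendall(json.dumps({'prefix': 'perf dump'}).encode() + b'\x00')
          length, = struct.unpack('>I', sock.recv(4))
          buf = b''
          while len(buf) < length:
              chunk = sock.recv(length - len(buf))
              if not chunk:
                  break
              buf += chunk
          return json.loads(buf.decode('utf-8'))
      finally:
          sock.close()

  def dispatch(name, value):
      # Reuse the stock 'gauge' type instead of maintaining a types DB.
      val = collectd.Values(plugin='ceph', type='gauge', type_instance=name)
      val.dispatch(values=[value])

  def read_cb():
      for section, counters in perf_dump(ASOK).items():
          for name, c in counters.items():
              key = '%s.%s' % (section, name)
              if isinstance(c, dict) and 'avgcount' in c:
                  # Cumulative {avgcount, sum} pair: differencing two
                  # samples gives a per-period, not global, average.
                  prev, last[key] = last.get(key), c
                  if prev is None:
                      continue
                  dcount = c['avgcount'] - prev['avgcount']
                  dsum = c['sum'] - prev['sum']
                  dispatch(key, dsum / dcount if dcount else 0.0)
              elif isinstance(c, (int, float)):
                  dispatch(key, c)

  collectd.register_read(read_cb)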

I've been feeding all of my Ceph performance-counter data into a
Graphite cluster via collectd, using the collectd AMQP plugin with a
RabbitMQ cluster as an intermediary, and Grafana as the query/graphing
front-end.
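
In case it's useful to anyone wanting to replicate that pipeline, the
collectd side of it looks roughly like the following; the hostnames,
credentials, and exchange name are placeholders, and collectd.conf(5)
has the authoritative option list:

  LoadPlugin amqp
  <Plugin amqp>
    <Publish "graphite">
      # Placeholder broker details; substitute your own.
      Host "rabbitmq.example.org"
      Port "5672"
      VHost "/"
      User "collectd"
      Password "secret"
      Exchange "collectd-metrics"
      # Emit Graphite's plaintext format so that carbon can consume
      # the messages straight off the queue.
      Format "Graphite"
      StoreRates true
    </Publish>
  </Plugin>

On the consuming side, carbon's optional AMQP support (ENABLE_AMQP in
carbon.conf) pulls the metrics off the queue, and Grafana queries
Graphite as normal.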

Apart from causing some stress on the disk subsystem when writing out
all those metrics, this has been working out quite well...

Cheers,
David
-- 
David McBride <dw...@cam.ac.uk>
Unix Specialist, University Information Services