Re: [ceph-users] separate monitoring node

2018-06-22 Thread Stefan Kooman
Quoting Reed Dier (reed.d...@focusvq.com):
>
> > On Jun 22, 2018, at 2:14 AM, Stefan Kooman wrote:
> >
> > Just checking here: Are you using the telegraf ceph plugin on the nodes?
> > In that case you _are_ duplicating data. But the good news is that you
> > don't need to. There is a Ceph mgr telegraf plugin now (mimic) which
> > also works on luminous [...]
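
For anyone finding this thread later, a minimal sketch of enabling the
mgr-side plugin (module name and config keys as documented for the Ceph
telegraf mgr module; the UDP destination is an example placeholder):

    ceph mgr module enable telegraf
    # where to send the metrics; example placeholder address
    ceph telegraf config-set address udp://telegraf.example.com:8094
    # seconds between reports
    ceph telegraf config-set interval 10

On the receiving side, Telegraf would need a matching socket listener input.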

Re: [ceph-users] separate monitoring node

2018-06-22 Thread Reed Dier
> On Jun 22, 2018, at 2:14 AM, Stefan Kooman wrote:
>
> Just checking here: Are you using the telegraf ceph plugin on the nodes?
> In that case you _are_ duplicating data. But the good news is that you
> don't need to. There is a Ceph mgr telegraf plugin now (mimic) which
> also works on luminous [...]
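
For comparison, the per-node Telegraf input that causes the duplication looks
roughly like this (option names as in Telegraf's inputs.ceph plugin; turning
off gather_cluster_stats everywhere but one node is one way to avoid duplicate
cluster-wide series):

    [[inputs.ceph]]
      ## read the admin sockets of the daemons running on this node
      socket_dir = "/var/run/ceph"
      gather_admin_socket_stats = true
      ## cluster-wide stats are identical on every node; enable on one only
      gather_cluster_stats = false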

Re: [ceph-users] separate monitoring node

2018-06-22 Thread Lenz Grimmer
On 06/20/2018 05:42 PM, Kevin Hrpcek wrote:
> The ceph mgr dashboard is only enabled on the mgr daemons. I'm not
> familiar with the mimic dashboard yet, but it is much more advanced than
> luminous' dashboard and may have some alerting abilities built in.

Not yet - see http://docs.ceph.com/docs/ [...]

Re: [ceph-users] separate monitoring node

2018-06-22 Thread Stefan Kooman
Quoting Denny Fuchs (linuxm...@4lin.net):
> hi,
>
> > On 19.06.2018 at 17:17, Kevin Hrpcek wrote:
> >
> > # ceph auth get client.icinga
> > exported keyring for client.icinga
> > [client.icinga]
> > key =
> > caps mgr = "allow r"
> > caps mon = "allow r"
>
> that's the point: it's [...]
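
A keyring with exactly those read-only caps can be created in one step, for
example (the output path is an example):

    ceph auth get-or-create client.icinga mon 'allow r' mgr 'allow r' \
        -o /etc/ceph/ceph.client.icinga.keyring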

Re: [ceph-users] separate monitoring node

2018-06-20 Thread Kevin Hrpcek
Denny, I should have mentioned this as well. Any ceph cluster-wide checks I am doing with Icinga are only applied to my 3 mon/mgr nodes. They would definitely be annoying if they were on all osd nodes. Having the checks on all of the mons allows me to not lose monitoring ability should one go down [...]
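
In Icinga2 configuration terms, pinning a cluster-wide check to the mon/mgr
hosts could look like this sketch (the CheckCommand name and the host
variable are hypothetical, not taken from Kevin's setup):

    apply Service "ceph-health" {
      import "generic-service"
      check_command = "ceph_health"              // hypothetical CheckCommand
      assign where host.vars.ceph_role == "mon"  // set on the 3 mon/mgr hosts
    }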

Re: [ceph-users] separate monitoring node

2018-06-20 Thread Konstantin Shalygin
Hi, at the moment, we use Icinga2, check_ceph* and Telegraf with the Ceph plugin. I'm asking what I need to have a separate host which knows all about the Ceph cluster health. The reason is that each OSD node has mostly the exact same data, which is transmitted into our database (like InfluxDB [...]

Re: [ceph-users] separate monitoring node

2018-06-19 Thread Denny Fuchs
hi,

> On 19.06.2018 at 17:17, Kevin Hrpcek wrote:
>
> # ceph auth get client.icinga
> exported keyring for client.icinga
> [client.icinga]
> key =
> caps mgr = "allow r"
> caps mon = "allow r"

that's the point: it's OK to check whether all processes are up and running, and maybe some checks [...]

Re: [ceph-users] separate monitoring node

2018-06-19 Thread Kevin Hrpcek
I use icinga2 as well, with a check_ceph.py that I wrote a couple of years ago. The method I use is that icinga2 runs the check from the icinga2 host itself. ceph-common is installed on the icinga2 host, since the check_ceph script is a wrapper and parser for the ceph command output using python's subprocess [...]
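
For reference, a minimal sketch of such a wrapper, not Kevin's actual script,
assuming JSON health output from the ceph CLI and the read-only client.icinga
credentials discussed in this thread:

    #!/usr/bin/env python
    """Nagios-style Ceph health check: wraps the ceph CLI via subprocess."""
    import json
    import subprocess
    import sys

    def main():
        try:
            out = subprocess.check_output(
                ["ceph", "health", "--format", "json",
                 "--id", "icinga"])  # read-only client from this thread
            data = json.loads(out.decode("utf-8"))
            # luminous+ reports "status"; older releases used "overall_status"
            status = data.get("status") or data.get("overall_status")
        except Exception as exc:
            print("UNKNOWN: %s" % exc)
            sys.exit(3)
        if status == "HEALTH_OK":
            print("OK: %s" % status)
            sys.exit(0)
        if status == "HEALTH_WARN":
            print("WARNING: %s" % status)
            sys.exit(1)
        print("CRITICAL: %s" % status)
        sys.exit(2)

    if __name__ == "__main__":
        main()

The exit codes follow the usual Nagios/Icinga convention (0 = OK,
1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN).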

Re: [ceph-users] separate monitoring node

2018-06-19 Thread Stefan Kooman
Quoting John Spray (jsp...@redhat.com):
>
> The general idea with mgr plugins (Telegraf, etc.) is that because
> there's only one active mgr daemon, you don't have to worry about
> duplicate feeds going in.
>
> I haven't used the icinga2 check_ceph plugin, but it seems like it's
> intended to run on [...]
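
For completeness, which mgr is currently active (and hence where a mgr plugin
feed originates) can be seen with the stock CLI:

    ceph mgr stat   # short JSON naming the active mgr and counting standbys
    ceph -s         # the status summary also lists the active mgr and standbys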

Re: [ceph-users] separate monitoring node

2018-06-19 Thread John Spray
On Tue, Jun 19, 2018 at 1:17 PM Denny Fuchs wrote:
>
> Hi,
>
> at the moment, we use Icinga2, check_ceph* and Telegraf with the Ceph
> plugin. I'm asking what I need to have a separate host which knows all
> about the Ceph cluster health. The reason is that each OSD node has
> mostly the exact same data [...]

[ceph-users] separate monitoring node

2018-06-19 Thread Denny Fuchs
Hi, at the moment, we use Icinga2, check_ceph* and Telegraf with the Ceph plugin. I'm asking what I need to have a separate host which knows all about the Ceph cluster health. The reason is that each OSD node has mostly the exact same data, which is transmitted into our database (like InfluxDB [...]