On 07/11/2018 10:02 AM, [email protected] wrote:
> Has anyone with the mgr "zabbix" module enabled migrated from Luminous (12.2.5 or 5) and
> hit the same problem in Mimic?
> If I disable and re-enable the "zabbix" module, the status is HEALTH_OK for
> a few seconds, then changes back to HEALTH_WARN...
>
> ---------------
>
> # ceph -s
> cluster:
> id: <ID>
> health: HEALTH_WARN
> Failed to send data to Zabbix
>
> services:
> mon: 3 daemons, quorum ceph20,ceph21,ceph22
> mgr: ceph21(active), standbys: ceph20, ceph22
> osd: 18 osds: 18 up, 18 in
> rgw: 4 daemons active
>
> data:
> pools: 25 pools, 1390 pgs
> objects: 2.55 k objects, 3.4 GiB
> usage: 26 GiB used, 8.8 TiB / 8.8 TiB avail
> pgs: 1390 active+clean
>
> io:
> client: 8.6 KiB/s rd, 9 op/s rd, 0 op/s wr
>
> # ceph version
> ceph version 13.2.0 (<ID>) mimic (stable)
>
> # grep -i zabbix /var/log/ceph/ceph-mgr.ceph21.log | tail -2
> 2018-07-11 09:50:10.191 7f2223582700 0 mgr[zabbix] Exception when sending:
> /usr/bin/zabbix_sender exited non-zero: zabbix_sender [18450]: DEBUG: answer
> [{"response":"success","info":"processed: 29; failed: 3; total: 32; seconds
> spent: 0.000605"}]
> 2018-07-11 09:51:10.222 7f2223582700 0 mgr[zabbix] Exception when sending:
> /usr/bin/zabbix_sender exited non-zero: zabbix_sender [18459]: DEBUG: answer
> [{"response":"success","info":"processed: 29; failed: 3; total: 32; seconds
> spent: 0.000692"}]
>
This is the problem: the zabbix_sender process is exiting with a
non-zero status.
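The answer JSON in your log already hints at why: the server responds
"success" overall, but 3 of the 32 items failed, and zabbix_sender exits
non-zero when any item is rejected. A minimal sketch (parsing the exact
answer line quoted above) to pull out the failed count:

```python
import json

# The "answer" string from the mgr log above: overall response is
# "success", but the info field reports per-item results.
answer = ('[{"response":"success","info":"processed: 29; failed: 3; '
          'total: 32; seconds spent: 0.000605"}]')

info = json.loads(answer)[0]["info"]
# info looks like "processed: 29; failed: 3; total: 32; seconds spent: ..."
failed = int(info.split("failed:")[1].split(";")[0])
print(failed)  # 3 rejected items -> non-zero exit from zabbix_sender
```

So the next step would be finding out which 3 items the Zabbix server is
rejecting, e.g. by running zabbix_sender by hand with -vv, or checking the
Zabbix server log for unmatched item keys (a Mimic upgrade may send keys
the old Luminous template doesn't define).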
You didn't change anything? You just upgraded from Luminous to Mimic and
this came along?
Wido
> ---------------
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>