On Tue, Jun 30, 2015 at 1:31 AM, Dustin Funk <[email protected]> wrote:
> On 29.06.2015 at 07:01, Shay Rojansky wrote:
> > Just to add my experience, we're using icinga2 in command execution very
> > successfully. Deployment and configuration is ansible-managed, and apart
> > from the initial setup work (especially around SSL) it's trivial to manage
> > and install on new machines. (I can share the ansible scripts.)
>
> I'm interested in your concept. I have some questions and an example:
>
> User A has 3 hard disks: sda, sdb and sdc, and wants to monitor
> SMART values.
>
> Do you define a separate check_command for each check?

I'm not sure I understand the question exactly... My current setup centralizes all icinga2 config at the master by using ansible delegation. In many cases checks map well to ansible roles, so I create a hostgroup or servicegroup and assign machines to it based on their ansible role assignment; this way, defining a new machine in ansible and assigning it to roles there sets up its monitoring config along with the rest of its (non-monitoring) setup.

I'm pretty sure my SMART checks are set up without specifying specific hard drives (alerting on any hard drive instead), but I'm not 100% sure. In any case, most checks are derived from roles as above. Some checks are simply host-specific and aren't derived from any role (this may be what you're looking for), so there's a "host-specific valve" to deploy icinga2 config for an individual host.

> Another one:
>
> User A wants to monitor a lot of certificates:
> check_cert_expire_domain1
> check_cert_expire_domain2
> check_cert_expire_domain3
> check_cert_expire_domain4
> ....
>
> User B also wants certificate monitoring:
> check_cert_expire_domain1
> check_cert_expire_domain55
> check_cert_expire_domain56
> check_cert_expire_domain57
> ...
>
> In this case it's a lot of effort to define each check as its own
> check_command (on the client the effort is the same, but you have it
> twice because you must also define all the commands on the
> monitoring server).
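For what it's worth, the per-disk case doesn't need one check_command per disk at all: icinga2's apply-for rules can generate one service per entry of a host variable. A sketch, assuming a custom variable named "disks" and a CheckCommand named "check_smart" (both illustrative names, not from my actual config):

```icinga2
// Host object, e.g. rendered by an ansible template from inventory data
object Host "user-a-box" {
  check_command = "hostalive"
  address = "192.0.2.10"
  vars.disks = [ "sda", "sdb", "sdc" ]
}

// One service per disk, generated from the host variable --
// "smart-sda", "smart-sdb", "smart-sdc"
apply Service "smart-" for (disk in host.vars.disks) {
  check_command = "check_smart"      // hypothetical CheckCommand wrapping a SMART plugin
  vars.smart_device = "/dev/" + disk
}
```

The host object is the only per-host part; the apply rule lives once on the master.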
I think I see where this is going... If you're monitoring a totally heterogeneous network where each machine has its own arbitrary checks, and in addition the machines don't trust the centralized monitoring server and its admin, then orchestration with ansible is definitely not for you. My model is geared towards a large network of hosts which are assigned roles (e.g. hadoop node, webserver), with the icinga2 checks largely derived from those roles.

Note that, as I said above, it's still possible to manage host variations, like "a different certificate list", in a central way: you define an ansible list variable such as "certificates" that gets rendered into a host-specific set of icinga2 checks. The advantage of this approach is that the "blueprint" for your network, including the specific icinga2 checks for each host, is centralized and version-managed in, say, git.

> Another thing is when User A needs a notification 14 days before
> check_cert_expire_domain1 expires, and User B also needs a
> notification, but only 30 days before.
>
> Is that covered by your concept?

I guess I answered that above. Variations are definitely covered in my concept, but they tend to be the exception rather than the rule.

> nuts
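To make the certificate example concrete: the per-host domain list and the per-host notification window can both live in one host dictionary variable, with a single apply-for rule covering everyone. Again just a sketch; "check_cert_expire", the variable names, and the thresholds are illustrative, and in my setup the Host objects would be rendered by an ansible template from each host's "certificates" variable:

```icinga2
// User A: warn 14 days before expiry
object Host "host-a" {
  check_command = "hostalive"
  vars.certificates["domain1"] = { warn_days = 14 }
  vars.certificates["domain2"] = { warn_days = 14 }
}

// User B: also monitors domain1, but with a 30-day warning
object Host "host-b" {
  check_command = "hostalive"
  vars.certificates["domain1"] = { warn_days = 30 }
  vars.certificates["domain55"] = { warn_days = 30 }
}

// One rule generates "cert-domain1", "cert-domain2", ... per host,
// merging each domain's per-host settings (warn_days) into the service
apply Service "cert-" for (domain => config in host.vars.certificates) {
  check_command = "check_cert_expire"   // hypothetical CheckCommand
  vars += config
  vars.cert_domain = domain
}
```

So User A's 14-day and User B's 30-day windows are just data in the host definitions, not separate check_command definitions.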
_______________________________________________ icinga-users mailing list [email protected] https://lists.icinga.org/mailman/listinfo/icinga-users
