Is this the icinga notification that you get, or the output of check_mysql
from the command line?

If it is the icinga notification - could you please share your icinga
setup, as that is exactly what I am trying to achieve...

Thank you!


On Wed, 2017-05-17 at 12:33 +0000, Kasper Løvschall wrote:
> Of course - my bad :-) 
> 
> Yup - I get "Slave IO: No Slave SQL: No Seconds Behind Master: (null)" and 
> exit code 2 (CRITICAL).
> 
> /Kasper
> 
> -----Original Message-----
> From: ST [mailto:smn...@gmail.com] 
> Sent: 17 May 2017 14:27
> To: Kasper Løvschall <k...@aub.aau.dk>
> Cc: icinga-users@lists.icinga.org
> Subject: Re: SV: SV: [icinga-users] Mysql replication monitoring
> 
> Don't stop the whole MySQL server - only the replication on your slave (type 
> STOP SLAVE; in your mysql terminal as root user). Otherwise you are testing 
> MySQL server monitoring, not MySQL replication monitoring (replication can 
> stop even while the server itself is running fine).
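> 
> For example (just a sketch - adjust the credentials to your own setup), the 
> test could look like this in the mysql client:
> 
>   mysql> STOP SLAVE;     -- replication stops, the server itself keeps running
>   mysql> SHOW SLAVE STATUS\G
>   mysql> START SLAVE;    -- brings replication back once the test is done
> 
> While it is stopped, SHOW SLAVE STATUS should report Slave_IO_Running and 
> Slave_SQL_Running as "No".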
> 
> Yes, I run icinga on the slave machine.
> 
> Thank you!
> 
> On Wed, 2017-05-17 at 12:13 +0000, Kasper Løvschall wrote:
> > Hi,
> > 
> > Yes, I can get notifications from check_mysql if I stop the MySQL server 
> > itself - but only because of the "Can't connect to local MySQL server..." error.
> > 
> > Are you running the check on the correct machine (not the master server)?
> > 
> > /Kasper
> > 
> > -----Original Message-----
> > From: ST [mailto:smn...@gmail.com]
> > Sent: 17 May 2017 12:45
> > To: Kasper Løvschall <k...@aub.aau.dk>
> > Cc: Icinga User's Corner <icinga-users@lists.icinga.org>
> > Subject: Re: SV: [icinga-users] Mysql replication monitoring
> > 
> > Hello Kasper,
> > 
> > On Wed, 2017-05-17 at 09:23 +0000, Kasper Løvschall wrote:
> > > I can get the check_mysql command to work without issues: 
> > > 
> > > ./check_mysql --check-slave
> > > Uptime: 1038854  Threads: 1  Questions: 803933  Slow queries: 0  Opens: 48
> > > Flush tables: 1  Open tables: 42  Queries per second avg: 0.773
> > > Slave IO: Yes Slave SQL: Yes Seconds Behind Master: 0|Connections=258337c;;;
> > > Open_files=53;;; Open_tables=42;;; Qcache_free_memory=1031336;;;
> > > Qcache_hits=0c;;; Qcache_inserts=0c;;; Qcache_lowmem_prunes=0c;;;
> > > Qcache_not_cached=0c;;; Qcache_queries_in_cache=0;;;
> > > Queries=803933c;;; Questions=757771c;;; Table_locks_waited=0c;;;
> > > Threads_connected=1;;; Threads_running=1;;; Uptime=1038854c;;;
> > > 'seconds behind master'=0,000000s;0,000000;0,000000;
> > > 
> > > But it made me wonder if the problem is that it only displays metrics and 
> > > no service status, e.g. OK, WARNING, CRITICAL or UNKNOWN? 
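> > > 
> > > A quick way to see what state the plugin actually reports (just a sketch - 
> > > the plugin path and credentials may differ on your system) is to look at 
> > > its exit code, since Icinga maps 0/1/2/3 to OK/WARNING/CRITICAL/UNKNOWN:
> > > 
> > >   ./check_mysql --check-slave; echo "exit code: $?"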
> > 
> > Are you able to get notifications from Icinga based on check_mysql (once 
> > you stop your slave manually)?
> > 
> > > 
> > > You might need to provide some thresholds to your config:
> > > 
> > > -w, --warning
> > >     Exit with WARNING status if slave server is more than INTEGER seconds
> > >     behind master
> > >  -c, --critical
> > >     Exit with CRITICAL status if slave server is more than INTEGER seconds
> > >     behind master
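> > > 
> > > For instance (the numbers are only an example - pick whatever fits your 
> > > setup), something like this would make a lagging slave WARNING after 60 
> > > seconds and CRITICAL after 300 seconds behind the master:
> > > 
> > >   /usr/lib/nagios/plugins/check_mysql -u repl --password='my_password' --check-slave -w 60 -c 300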
> > 
> > If the slave is not running, then "Seconds Behind Master" is not an integer - 
> > it is NULL (or nil)...
> > 
> > Thank you!
> > 
> > > 
> > > -----Original Message-----
> > > From: ST [mailto:smn...@gmail.com]
> > > Sent: 17 May 2017 09:50
> > > To: Icinga User's Corner <icinga-users@lists.icinga.org>
> > > Cc: Kasper Løvschall <k...@aub.aau.dk>
> > > Subject: Re: [icinga-users] Mysql replication monitoring
> > > 
> > > Thank you very much for your response!
> > > 
> > > I did read about the Percona Monitoring Plugins, but thought that using the 
> > > standard plugins, available as a Debian package, would be better, 
> > > especially since I plan to use Ansible in the future to configure my Icinga 
> > > setup...
> > > 
> > > I'll definitely use your suggestion if I don't get it working with the 
> > > standard check_mysql. Any idea why it doesn't work? Could you test 
> > > your replication with this command, to see whether it works correctly?
> > > 
> > > /usr/lib/nagios/plugins/check_mysql -u repl --password='my_password'
> > > --check-slave
> > > 
> > > And if yes - why doesn't it work through Icinga?
> > > 
> > > Thank you!
> > > 
> > > 
> > > On Wed, 2017-05-17 at 07:02 +0000, Kasper Løvschall wrote:
> > > > Hi ST!
> > > > 
> > > > I can recommend a different approach using the Percona Monitoring 
> > > > Plugins (available for free at 
> > > > https://www.percona.com/software/database-tools/percona-monitoring-plugins).
> > > > 
> > > > It has (among others) two check commands: 
> > > > pmp-check-mysql-replication-running and 
> > > > pmp-check-mysql-replication-delay which should do the trick.
> > > > 
> > > > My configuration is the following - it will probably need some 
> > > > tweaking in your environment:
> > > > 
> > > > object CheckCommand "pmp-check-mysql-replication-running" {
> > > >   import "plugin-check-command"
> > > > 
> > > >   command = [
> > > >     PluginDir + "/pmp-check-mysql-replication-running"
> > > >   ]
> > > > }
> > > > 
> > > > object CheckCommand "pmp-check-mysql-replication-delay" {
> > > >   import "plugin-check-command"
> > > > 
> > > >   command = [
> > > >     PluginDir + "/pmp-check-mysql-replication-delay"
> > > >   ]
> > > > }
> > > > 
> > > > And the apply Service rules:
> > > > 
> > > > apply Service "pmp-check-mysql-replication-running" {
> > > >   import "generic-service"
> > > >   check_command = "pmp-check-mysql-replication-running"
> > > >   display_name = "MySQL replication running"
> > > > 
> > > >   // Run the service on remote client if needed
> > > >   if ( host.vars.remote_client ) {
> > > >     command_endpoint = host.vars.remote_client
> > > >   }
> > > > 
> > > >   assign where host.vars.percona_standard_checks
> > > > }
> > > > 
> > > > apply Service "pmp-check-mysql-replication-delay" {
> > > >   import "generic-service"
> > > >   check_command = "pmp-check-mysql-replication-delay"
> > > >   display_name = "MySQL replication delay"
> > > > 
> > > >   // Run the service on remote client if needed
> > > >   if ( host.vars.remote_client ) {
> > > >     command_endpoint = host.vars.remote_client
> > > >   }
> > > > 
> > > >   assign where host.vars.percona_standard_checks
> > > > }
> > > > 
> > > > And the host has the following assignment:
> > > > 
> > > > vars.percona_standard_checks = true
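> > > > 
> > > > In other words, the host object looks roughly like this (host name, 
> > > > address and the generic-host template are placeholders from my setup):
> > > > 
> > > > object Host "db-slave" {
> > > >   import "generic-host"
> > > >   address = "192.0.2.10"
> > > > 
> > > >   // enables both Percona replication services defined above
> > > >   vars.percona_standard_checks = true
> > > > 
> > > >   // uncomment if the checks should run via a remote client endpoint
> > > >   //vars.remote_client = "db-slave"
> > > > }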
> > > > 
> > > > 
> > > > Regards,
> > > > 
> > > > Kasper Løvschall
> > > > Senior Adviser  | The University Library
> > > > 
> > > > Phone: (+45) 99 40 73 03  |  Mobile: (+45) 28 95 91 29  |  Email: 
> > > > k...@aub.aau.dk  |  Web: http://www.en.aub.aau.dk Aalborg University 
> > > > Library  |  Langagervej 2  |  9220 Aalborg Ø  |  Denmark
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > > -----Original Message-----
> > > > From: icinga-users [mailto:icinga-users-boun...@lists.icinga.org] 
> > > > On behalf of ST
> > > > Sent: 17 May 2017 07:44
> > > > To: icinga-users@lists.icinga.org
> > > > Subject: [icinga-users] Mysql replication monitoring
> > > > 
> > > > Hello,
> > > > 
> > > > I'm new to Icinga2 and am trying to set up MySQL replication monitoring 
> > > > on the slave machine. My OS is Debian/Wheezy with Icinga2 version 
> > > > r2.1.1-1 from wheezy-backports.
> > > > 
> > > > While reading the docs I created the following
> > > > file: /etc/icinga2/conf.d/hosts/localhost/mysql-slave.conf (see below).
> > > > 
> > > > However, if I stop the slave manually, I get no notifications from icinga2.
> > > > Only if I change vars.mysql_user to something non-existent does icinga 
> > > > complain and notify me. This doesn't happen even if I change 
> > > > mysql_password to something wrong...
> > > > If I run the check manually from the command line:
> > > > 
> > > > /usr/lib/nagios/plugins/check_mysql -u repl --password='my_password'
> > > > --check-slave
> > > > 
> > > > I get the proper message on the CLI in both cases: when replication runs 
> > > > (OK) and when it doesn't (failure)...
> > > > 
> > > > 1. What is wrong with my configuration?
> > > > 2. Is there a better/standard way to monitor mysql replication?
> > > > 
> > > > Thank you!
> > > > 
> > > > ---------------------------------
> > > > 
> > > > // custom mysql replication check
> > > > object CheckCommand "my-mysql" {
> > > >   import "plugin-check-command"
> > > >   command = [ PluginDir + "/check_mysql" ] //constants.conf -> const PluginDir
> > > >   arguments = {
> > > >     "-H" = "$mysql_host$"
> > > >     "-u" = {
> > > >       required = true
> > > >       value = "$mysql_user$"
> > > >     }
> > > >     "-p" = "$mysql_password$"
> > > >     "-P" = "$mysql_port$"
> > > >     "-s" = "$mysql_socket$"
> > > >     "-a" = "$mysql_cert$"
> > > >     "-d" = "$mysql_database$"
> > > >     "-k" = "$mysql_key$"
> > > >     "-C" = "$mysql_ca_cert$"
> > > >     "-D" = "$mysql_ca_dir$"
> > > >     "-L" = "$mysql_ciphers$"
> > > >     "-f" = "$mysql_optfile$"
> > > >     "-g" = "$mysql_group$"
> > > >     "-S" = {
> > > >       set_if = "$mysql_check_slave$"
> > > >       description = "Check if the slave thread is running properly."
> > > >     }
> > > >     "-l" = {
> > > >       set_if = "$mysql_ssl$"
> > > >       description = "Use ssl encryption"
> > > >     }
> > > >   }
> > > >   vars.mysql_check_slave = true
> > > >   vars.mysql_ssl = false
> > > >   vars.mysql_host = "$address$"
> > > > }
> > > > apply Service "mysql-replication-health" {
> > > >   import "generic-service"
> > > >   check_command = "my-mysql"
> > > >   vars.mysql_user = "repl"
> > > >   vars.mysql_password = "my_password"
> > > > //  vars.mysql_database = "icinga"
> > > > //  vars.mysql_host = "192.168.33.11"
> > > >   vars.mysql_host = "localhost"
> > > >   vars.mysql_check_slave = true
> > > >   vars.sla = "24x7"
> > > > //  assign where match("icinga2*", host.name)
> > > >   assign where true
> > > > //  ignore where host.vars.no_health_check == true
> > > > }
> > > > 
> > > > ---------------------------------
> > > > 
> > > 
> > 
> 

_______________________________________________
icinga-users mailing list
icinga-users@lists.icinga.org
https://lists.icinga.org/mailman/listinfo/icinga-users
