For reference -
Here's the *DERIVATIVE* example I've configured, but it still sends alerts
even though existing and new points keep arriving:
-------------
stream
    |from()
        .measurement('delaware:last_completed')
        .where(lambda: "name" == 'exchange_hourly')
    |stats(1m)
    |derivative('emitted')
        .unit(1m)
        .nonNegative()
    |alert()
        .id('{{ .TaskName }}')
        .message('service is dead')
        .crit(lambda: "emitted" < 1.0)
        .log()
        .slack()
        .channel('#alerts')
-----------------
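(For context: as far as I understand the docs, the chain above is just a
manual expansion of the deadman node from my original message below, so I'd
expect it to behave the same as roughly this:)

-------------
stream
    |from()
        .measurement('delaware:last_completed')
        .where(lambda: "name" == 'exchange_hourly')
    // deadman(threshold, interval) is documented as shorthand for the
    // stats + derivative + alert combination expanded manually above
    |deadman(1.0, 1m)
        .slack()
        .channel('#alerts')
-----------------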
(I also tried to implement this derivative check as a batch script; the
behavior was the same. A rough sketch of what I mean is below.)
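I don't have the exact batch script handy, so this sketch uses a simple
count()-based check instead of the derivative, and the database/retention
policy names ("mydb"/"autogen") are placeholders:

-------------
batch
    // count points for the service over the last minute
    |query('''SELECT count("value") FROM "mydb"."autogen"."delaware:last_completed" WHERE "name" = 'exchange_hourly' ''')
        .period(1m)
        .every(1m)
    |alert()
        .id('{{ .TaskName }}')
        .message('service is dead')
        // crit when fewer than 1 point was counted in the window
        .crit(lambda: "count" < 1.0)
        .slack()
        .channel('#alerts')
-----------------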
Please advise, as many of our services depend on this feature :/
Thanks a lot.
On Wednesday, June 15, 2016 at 5:42:54 PM UTC+3, [email protected] wrote:
>
> Hello,
>
> We're trying to set up a *deadman* check for two measurements,
> but after long hours of QA on this function, I cannot get rid of the
> false positives it sends.
>
> Could you please take a look at this TICK script and let me know what I am
> doing wrong?
> Here's the TICK file:
>
> stream
>     |from()
>         .measurement('last_completed')
>         .where(lambda: "name" == 'data')
>     // |eval(lambda: "value").as('value')
>     |deadman(1.0, 1m)
>         .message('Service:{{ index .Tags "name"}} is {{ if eq .Level "OK" }}alive{{ else }}dead - not running for 1m {{ end }}')
>         .log()
>         .slack()
>         .channel('#alerts')
>
>
> And here's an example data set:
>
>
> 2016-06-15T14:05:47.769283954Z "data" 1
> 2016-06-15T14:05:56.17229738Z "data" 1
> 2016-06-15T14:06:04.216883312Z "data" 3
> 2016-06-15T14:06:12.028630147Z "data" 2
> 2016-06-15T14:06:20.21923461Z "data" 2
> 2016-06-15T14:06:28.37728243Z "data" 0
> 2016-06-15T14:06:36.239360137Z "data" 1
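>
> (For reference, each of these points should correspond to line protocol
> roughly like the following - "name" as a tag, since the deadman message
> indexes .Tags, and "value" as the field; this is a reconstruction, not a
> copy-paste:)
>
> last_completed,name=data value=1 <timestamp in ns>
>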
> Although new points arrive roughly every 8 seconds (about 7 points per
> minute, so the emitted count should be well above the 1.0 threshold),
> deadman alerts every minute.
> I've tried all kinds of intervals and thresholds (checks every minute,
> every hour, every two hours).
>
> What could be the reason for this?
>
> Important --
> I've also tried converting the above STREAM TICKscript to a BATCH one, and
> the behavior was the same.
> I also tried using DERIVATIVE instead of deadman - it still ignores the
> new points and keeps alerting :(
>
>
> InfluxDB 0.13.0
> Kapacitor 1.0.0-beta
> (the above was tried on both 0.13.1 and the new beta)
>
> THANKS!
> Elad
>