Hi,

According to the log, I'm using UDP subscriptions (default Kapacitor config).
Here are the details of one of the deadman tasks:

=============
ID: dead_agg_load 
Error:  
Template:  
Type: stream 
Status: disabled 
Executing: false 
Created: 15 Jun 16 12:53 UTC 
Modified: 16 Jun 16 15:09 UTC 
LastEnabled: 16 Jun 16 13:17 UTC 
Databases Retention Policies: ["metrics"."default"] 
TICKscript: 
stream 
   |from() 
       .measurement('duration') 
       .where(lambda: "name" == 'agg_load') 
   |eval(lambda: "value") 
       .as('value') 
   |deadman(1.0, 5m) 
        .message('service: Agg Load is {{ if eq .Level "OK" }}alive{{ else }}dead - not running for 2h{{ end }}') 
       .log() 
       .slack() 
       .channel('#alerts') 

DOT: 
digraph dead_agg_load { 
stream0 -> from1; 
from1 -> eval2; 
eval2 -> noop4; 
stats3 -> derivative5; 
derivative5 -> alert6; 
}
===============
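
As a sanity check on my own reading of the docs: deadman(1.0, 5m) should just be 
a helper that expands to something like the pipeline below (which would match the 
stats/derivative/alert nodes in the DOT output). This is only my sketch from the 
documentation, not the actual generated pipeline:

stream 
   |from() 
       .measurement('duration') 
       .where(lambda: "name" == 'agg_load') 
   |eval(lambda: "value") 
       .as('value') 
   |stats(5m) 
   |derivative('emitted') 
       .unit(5m) 
       .nonNegative() 
   |alert() 
       .message('service: Agg Load is {{ if eq .Level "OK" }}alive{{ else }}dead - not running for 2h{{ end }}') 
       .crit(lambda: "emitted" <= 1.0) 
       .log() 
       .slack() 
       .channel('#alerts') 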

Thanks!

BR,
Elad

On Thursday, June 16, 2016 at 9:17:56 PM UTC+3, [email protected] wrote:
>
> Some basic troubleshooting steps/questions:
>
> 1. How are you sending data to Kapacitor? Over UDP subscriptions? Is it 
> possible that while InfluxDB has the data it was dropped before it made it 
> to Kapacitor?
> 2. Can you share the `kapacitor show` output of your task?
>
>
>
> On Thursday, June 16, 2016 at 5:12:46 AM UTC-6, [email protected] wrote:
>>
>> yes? no? maybe?
>> someone?
>>
>>
>> On Wednesday, June 15, 2016 at 6:47:46 PM UTC+3, [email protected] wrote:
>>>
>>> For reference - 
>>> Here's the *DERIVATIVE* example I've configured, but it still sends 
>>> alerts even though new points keep arriving:
>>> -------------
>>> stream 
>>>    |from() 
>>>        .measurement('delaware:last_completed') 
>>>        .where(lambda: "name" == 'exchange_hourly') 
>>>    |stats(1m) 
>>>    |derivative('emitted') 
>>>        .unit(1m) 
>>>        .nonNegative() 
>>>    |alert() 
>>>        .id('{{ .TaskName }}') 
>>>        .message('service is dead') 
>>>        .crit(lambda: "emitted" < 1.0) 
>>>        .log() 
>>>        .slack() 
>>>        .channel('#alerts')
>>> -----------------
>>> (I also tried to implement this derivative within a batch script; same 
>>> behavior - a rough sketch of such a batch script is below.)
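>>> For reference, the batch variant would be roughly the sketch below (only a 
>>> sketch - the "metrics"."default" database/RP and the "value" field name are 
>>> assumptions on my side): 
>>> ------------- 
>>> batch 
>>>    |query('SELECT count("value") FROM "metrics"."default"."delaware:last_completed" WHERE "name" = \'exchange_hourly\'') 
>>>        .period(1m) 
>>>        .every(1m) 
>>>    |alert() 
>>>        .id('{{ .TaskName }}') 
>>>        .message('service is dead') 
>>>        .crit(lambda: "count" < 1) 
>>>        .log() 
>>>        .slack() 
>>>        .channel('#alerts') 
>>> ----------------- 
>>> (one caveat I'm aware of with a batch-style deadman: if the query returns no 
>>> rows for a period, there is no "count" point for the alert node to evaluate)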
>>>
>>> Please advise, as many of our services depend on this feature :/ 
>>> thanks a lot.
>>>
>>> On Wednesday, June 15, 2016 at 5:42:54 PM UTC+3, [email protected] 
>>> wrote:
>>>>
>>>> Hello,
>>>>
>>>> We're trying to set up a *deadman* check for two measurements, 
>>>> but after long hours of testing and QA I cannot get rid of the 
>>>> false positives it sends.
>>>>
>>>> Could you please take a look at this TICK and let me know what I am 
>>>> doing wrong?
>>>> Here's the TICK file:
>>>>
>>>> stream 
>>>>    |from() 
>>>>       .measurement('last_completed') 
>>>>       .where(lambda: "name" == 'data') 
>>>> //    |eval(lambda: "value").as('value') 
>>>>    |deadman(1.0, 1m) 
>>>>       .message('Service:{{ index .Tags "name"}} is {{ if eq .Level "OK" }}alive{{ else }}dead - not running for 1m {{ end }}') 
>>>>       .log() 
>>>>       .slack() 
>>>>       .channel('#alerts')
>>>>
>>>>
>>>> And here's an example data set:
>>>>
>>>>
>>>> 2016-06-15T14:05:47.769283954Z "data" 1
>>>> 2016-06-15T14:05:56.17229738Z "data" 1
>>>> 2016-06-15T14:06:04.216883312Z "data" 3
>>>> 2016-06-15T14:06:12.028630147Z "data" 2
>>>> 2016-06-15T14:06:20.21923461Z "data" 2
>>>> 2016-06-15T14:06:28.37728243Z "data" 0
>>>> 2016-06-15T14:06:36.239360137Z "data" 1
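>>>> (in line protocol each of those rows would be something like 
>>>> last_completed,name=data value=1 1465999547769283954 
>>>> - assuming "name" is a tag and "value" is the field, which is what the 
>>>> where() clause and the commented-out eval expect) 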
>>>> Although there are new points roughly every 8 seconds, deadman is 
>>>> alerting every minute. 
>>>> I've tried all kinds of intervals and thresholds (checks every minute, 
>>>> hour, and two hours).
>>>>
>>>> What could be the reason for this?
>>>>
>>>> Important -- 
>>>> I've also tried to convert the above STREAM tick to a BATCH and the 
>>>> behavior was the same.
>>>> Also tried to use the DERIVATIVE instead of the deadman - and it still 
>>>> ignores the new points and keeps alerting :( 
>>>>
>>>>
>>>> Influx 0.13.0
>>>> Kapacitor 1.0.0-beta
>>>> (the above was tried on both 0.13.1 and the new beta)
>>>>
>>>> THANKS!
>>>> Elad
>>>>
>>>
