The beta is better than 0.13.1, so if you were using 0.13.1, go ahead and 
upgrade.
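
For reference, here is a sketch of the combined script this thread converges 
on (the original alert plus the hour() guard suggested below), as it should 
behave on 1.0.0-beta1 or later. It is stitched together from the snippets in 
this thread rather than re-tested, so treat it as a starting point:

stream 
    |from() 
        .measurement('cpu_value') 
    |window() 
        .period(120m) 
        .every(5s) 
    |mean('value') 
    |alert() 
        .id('{{ .Name }}@{{ index .Tags "host"}}') 
        .message('{{ .ID }} is {{ .Level }} measure: {{ index .Fields "mean" }}') 
        // suppress alerts between 04:00 and 07:59
        .crit(lambda: "mean" > 3 AND !(hour("time") >= 4 AND hour("time") < 8)) 
        .slack() 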

On Monday, June 13, 2016 at 4:39:10 AM UTC-6, [email protected] wrote:
>
> Upgraded to the beta and it works great.
> Would you suggest installing this beta in production, or should I wait for 
> the stable release?
>
> Thx,
> Elad
>
> On Thursday, June 9, 2016 at 7:33:10 PM UTC+3, [email protected] wrote:
>>
>> Looks like I was wrong: that error is fixed in 1.0.0-beta1 and is still 
>> present in 0.13.1.
>>
>> Can you try upgrading?
>>
>> On Thursday, June 9, 2016 at 4:45:54 AM UTC-6, [email protected] wrote:
>>>
>>> Hi, 
>>> Sorry for the delay.  I'm using Kapacitor 0.13.1.
>>> Any other thoughts about it?
>>>
>>> Thanks,
>>> Elad
>>>
>>> On Tuesday, June 7, 2016 at 7:27:14 PM UTC+3, [email protected] 
>>> wrote:
>>>>
>>>> What version of Kapacitor are you using? If you are not using the 
>>>> latest, can you upgrade and try again? I believe that specific error was 
>>>> fixed in v0.13.1.
>>>>
>>>> On Tuesday, June 7, 2016 at 2:13:38 AM UTC-6, [email protected] wrote:
>>>>>
>>>>> ahh ok found it.
>>>>> Here's Kapacitor's log section from the activation of the mentioned 
>>>>> tick:
>>>>> =============
>>>>> [cpu_usage_batch:eval2] 2016/06/07 08:09:32 E! Failed to handle 1 argument: expression returned unexpected type invalid type
>>>>> [cpu_usage_batch:eval2] 2016/06/07 08:09:32 E! name "hour" is undefined. Names in scope: time 
>>>>> [cpu_usage_batch:log3] 2016/06/07 08:09:32 I!  {cpu_value  2016-06-07 08:09:27.200830333 +0000 UTC map[] [{2016-06-07 08:09:27.200830333 +0000 UTC map[mean:11.25876972443783] map[]}]}
>>>>> ==============
>>>>>
>>>>> thanks,
>>>>> Elad
>>>>>
>>>>> On Tuesday, June 7, 2016 at 12:18:44 AM UTC+3, [email protected] 
>>>>> wrote:
>>>>>>
>>>>>> That timestamp isn't exactly what we want. Using the TICKscript I 
>>>>>> provided, you should see the log message in the Kapacitor daemon logs 
>>>>>> (STDERR, or wherever you configured them to go). You should see a log 
>>>>>> message like `[taskname:log#] ...`. There should be a map that contains 
>>>>>> the value of the `hour` field.
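>>>>>>
>>>>>> As a purely illustrative sketch (your task name, node number, timestamps 
>>>>>> and values will differ), such a line looks roughly like:
>>>>>>
>>>>>> [taskname:log#] 2016/06/05 09:13:29 I!  {cpu_value  2016-06-05 09:13:29 +0000 UTC map[] [{2016-06-05 09:13:29 +0000 UTC map[hour:9 mean:10.673594281463597] map[]}]}
>>>>>>
>>>>>> The map[...] section is where the `hour` field should show up once the 
>>>>>> eval node has run.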
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Sunday, June 5, 2016 at 3:42:10 AM UTC-6, [email protected] wrote:
>>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I've updated the TICK script with the Eval and Log nodes, and this 
>>>>>>> is the log output (one result):
>>>>>>>
>>>>>>> {"id":"cpu_value@","message":"cpu_value@ is CRITICAL measure: 
>>>>>>> 10.673594281463597","details":"{\u0026#34;N
>>>>>>>
>>>>>>> ame\u0026#34;:\u0026#34;cpu_value\u0026#34;,\u0026#34;TaskName\u0026#34;:\u0026#34;cpu_usage\u0026#34;,\u
>>>>>>>
>>>>>>> 0026#34;Group\u0026#34;:\u0026#34;nil\u0026#34;,\u0026#34;Tags\u0026#34;:{},\u0026#34;ID\u0026#34;:\u0026
>>>>>>>
>>>>>>> #34;cpu_value@\u0026#34;,\u0026#34;Fields\u0026#34;:{\u0026#34;mean\u0026#34;:10.673594281463597},\u0026#
>>>>>>>
>>>>>>> 34;Level\u0026#34;:\u0026#34;CRITICAL\u0026#34;,\u0026#34;Time\u0026#34;:\u0026#34;2016-06-05T09:13:29.11
>>>>>>> 089Z\u0026#34;,\u0026#34;Message\u0026#34;:\u0026#34;cpu_value@ is 
>>>>>>> CRITICAL measure: 10.673594281463597\u
>>>>>>>
>>>>>>> 0026#34;}","time":"2016-06-05T09:13:29.11089Z","duration":0,"level":"CRITICAL","data":{"series":[{"name":
>>>>>>>
>>>>>>> "cpu_value","columns":["time","mean"],"values":[["2016-06-05T09:13:29.11089Z",10.673594281463597]]}]}}
>>>>>>>
>>>>>>> I'm not sure whether the timestamp sections I see in the log are the 
>>>>>>> relevant ones. How can I print the output from the eval node? (I 
>>>>>>> couldn't manage to print the 'hour' value using the '.message' 
>>>>>>> section.)
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Elad
>>>>>>>
>>>>>>> On Thursday, June 2, 2016 at 7:42:38 PM UTC+3, [email protected] 
>>>>>>> wrote:
>>>>>>>>
>>>>>>>> Try this to see what the value of `hour` is.
>>>>>>>>
>>>>>>>> stream 
>>>>>>>>    |from() 
>>>>>>>>        .measurement('cpu_value') 
>>>>>>>>    |window() 
>>>>>>>>        .period(120m) 
>>>>>>>>        .every(5s) 
>>>>>>>>    |mean('value') 
>>>>>>>>    |eval(lambda: hour("time"))
>>>>>>>>       .as('hour')
>>>>>>>>       .keep()
>>>>>>>>    |log()
>>>>>>>>    |alert() 
>>>>>>>>        .id('{{ .Name }}@{{ index .Tags "host"}}') 
>>>>>>>>        .message('{{ .ID }} is {{ .Level }} measure: {{ index .Fields "mean" }}') 
>>>>>>>>        .crit(lambda: "mean" > 3) 
>>>>>>>>        .slack() 
>>>>>>>>
>>>>>>>> On Thursday, June 2, 2016 at 10:21:57 AM UTC-6, [email protected] 
>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> Thanks Krishna,
>>>>>>>>> I'm always logging, but there's no helpful info there :(
>>>>>>>>>
>>>>>>>>> On Thursday, June 2, 2016 at 7:08:23 PM UTC+3, krishna T wrote:
>>>>>>>>>>
>>>>>>>>>> Maybe you have already tried this, but I usually add a "log" node 
>>>>>>>>>> to see what kind of information is flowing to the child node when 
>>>>>>>>>> debugging why some of the conditions don't trigger.
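>>>>>>>>>>
>>>>>>>>>> For example (this is essentially what the script further up this 
>>>>>>>>>> thread does), a bare |log() dropped between the mean and alert nodes:
>>>>>>>>>>
>>>>>>>>>>    |mean('value')
>>>>>>>>>>    |log()
>>>>>>>>>>    |alert()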
>>>>>>>>>>
>>>>>>>>>> hth
>>>>>>>>>> -krishna
>>>>>>>>>>
>>>>>>>>>> On Thursday, June 2, 2016 at 9:03:27 AM UTC-7, [email protected] 
>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> Hi Nathaniel,
>>>>>>>>>>>
>>>>>>>>>>> I'm trying this but it's not working: no matter what time range 
>>>>>>>>>>> I specify there, it doesn't alert.
>>>>>>>>>>> Once I omit the time-range condition, all is good, but when I 
>>>>>>>>>>> apply it back there is no alerting.
>>>>>>>>>>>
>>>>>>>>>>> what could be the reason?
>>>>>>>>>>>
>>>>>>>>>>> BR,
>>>>>>>>>>> Elad
>>>>>>>>>>>
>>>>>>>>>>> On Thursday, June 2, 2016 at 6:44:21 PM UTC+3, 
>>>>>>>>>>> [email protected] wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> Simply add the same kind of logic to your critical alert lambda 
>>>>>>>>>>>> expression:
>>>>>>>>>>>>
>>>>>>>>>>>>        .crit(lambda: "mean" > 3 AND !(hour("time") >= 4 AND hour("time") < 8))
>>>>>>>>>>>>
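>>>>>>>>>>>> To spell out the intent: the !(...) guard is false for hours 4 
>>>>>>>>>>>> through 7, so alerts are suppressed from 04:00 up to (but not 
>>>>>>>>>>>> including) 08:00. If the negation reads awkwardly, an equivalent 
>>>>>>>>>>>> form (same behavior) is:
>>>>>>>>>>>>
>>>>>>>>>>>>        .crit(lambda: "mean" > 3 AND (hour("time") < 4 OR hour("time") >= 8))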
>>>>>>>>>>>>
>>>>>>>>>>>> On Thursday, June 2, 2016 at 9:23:28 AM UTC-6, 
>>>>>>>>>>>> [email protected] wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Hello,
>>>>>>>>>>>>>
>>>>>>>>>>>>> I have a working stream TICK script that alerts on high cpu 
>>>>>>>>>>>>> usage, and I'd like to *prevent it from alerting* during a 
>>>>>>>>>>>>> specific time range (4-8 am).
>>>>>>>>>>>>> Now, I saw the deadman's lambda time condition in the 
>>>>>>>>>>>>> documentation: |deadman(100.0, 10s, lambda: hour("time") >= 8 
>>>>>>>>>>>>> AND hour("time") <= 17).
>>>>>>>>>>>>> But I don't have a deadman switch in my TICK script; I have the 
>>>>>>>>>>>>> following:
>>>>>>>>>>>>> stream 
>>>>>>>>>>>>>    |from() 
>>>>>>>>>>>>>        .measurement('cpu_value') 
>>>>>>>>>>>>>    |window() 
>>>>>>>>>>>>>        .period(120m) 
>>>>>>>>>>>>>        .every(5s) 
>>>>>>>>>>>>>    |mean('value') 
>>>>>>>>>>>>>    |alert() 
>>>>>>>>>>>>>        .id('{{ .Name }}@{{ index .Tags "host"}}') 
>>>>>>>>>>>>>        .message('{{ .ID }} is {{ .Level }} measure: {{ index .Fields "mean" }}') 
>>>>>>>>>>>>>        .crit(lambda: "mean" > 3) 
>>>>>>>>>>>>>        .slack() 
>>>>>>>>>>>>>
>>>>>>>>>>>>> I've tried putting the lambda condition (lambda: hour("time") >= 8 
>>>>>>>>>>>>> AND hour("time") <= 17) inside the window() and alert() nodes, but 
>>>>>>>>>>>>> it didn't seem to work properly.
>>>>>>>>>>>>>
>>>>>>>>>>>>> *How do I limit the alerting for this TICK to specific hours?*
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks!
>>>>>>>>>>>>> Elad
>>>>>>>>>>>>>
>>>>>>>>>>>>
