It would be better for your health manager to query Prometheus directly
whenever it needs the data, instead of waiting for pushed alerts.
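A minimal sketch of that approach, assuming Prometheus is reachable at
localhost:9090 (the `/api/v1/query` endpoint is Prometheus's standard
instant-query API; the function names and address here are illustrative):

```python
# Hypothetical sketch: the health manager polls Prometheus directly
# via the HTTP instant-query API instead of waiting for Alertmanager.
import json
import urllib.parse
import urllib.request

PROM_URL = "http://localhost:9090"  # assumed Prometheus address


def build_query_url(expr: str) -> str:
    """Build an instant-query URL for the given PromQL expression."""
    return PROM_URL + "/api/v1/query?" + urllib.parse.urlencode({"query": expr})


def down_instances() -> list:
    """Return the `instance` labels for which `up == 0` right now."""
    with urllib.request.urlopen(build_query_url("up == 0")) as resp:
        body = json.load(resp)
    # Each result carries the metric's label set; pull out the instance label.
    return [r["metric"].get("instance") for r in body["data"]["result"]]


if __name__ == "__main__":
    print(down_instances())
```

This gives the health manager data that is at most one scrape interval old,
with no alerting pipeline latency in between.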

On Monday, June 22, 2020 at 9:08:08 PM UTC+2, Sébastien Dionne wrote:
>
> I want to use Prometheus + Alertmanager as a health manager.  I want to 
> know the lowest value I can use for scraping metrics (I hope I can have 
> a config for particular rules), and to send an alert as soon as one 
> fires.  I need near-realtime.  Is that possible with Prometheus + 
> Alertmanager?
>
>
> I have a sample config that works now, but is it possible to use 1s, or 
> to have Prometheus send the alert as soon as the metric is read?
>
> serverFiles:
>   alerts:
>     groups:
>       - name: Instances
>         rules:
>           - alert: InstanceDown
>             expr: up == 0
>             for: 10s
>             labels:
>               severity: page
>             annotations:
>               description: '{{ $labels.instance }} of job {{ $labels.job 
> }} has been down for more than 10 seconds.'
>               summary: 'Instance {{ $labels.instance }} down'
>               
> alertmanagerFiles:
>   alertmanager.yml:
>     route:
>       receiver: default-receiver
>       group_wait: 5s
>       group_interval: 10s
>
>     receivers:
>       - name: default-receiver
>         webhook_configs:
>           - url: "https://webhook.site/815a0b0b-f40c-4fc2-984d-e29cb9606840"
>               
>
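On the scrape-interval part of the question: Prometheus lets you override
`scrape_interval` per scrape job, and a rule group can set its own
evaluation `interval`. A minimal sketch (not a complete prometheus.yml; the
job name and target are made up):

```yaml
# Sketch only; job/target names are illustrative.
global:
  scrape_interval: 15s        # default for all jobs
  evaluation_interval: 15s    # default cadence for rule evaluation

scrape_configs:
  - job_name: critical-app
    scrape_interval: 1s       # per-job override; accepted, but costly at scale
    static_configs:
      - targets: ['localhost:8080']
```

A rule group can likewise set `interval: 1s` to evaluate every second. Note
that end-to-end alert latency is roughly scrape interval + evaluation
interval + the rule's `for` duration + Alertmanager's `group_wait`, so
"realtime" alerting is bounded by all of these, not the scrape interval alone.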

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/cb8d61db-5094-4e5f-bdfc-47ed583489d3o%40googlegroups.com.
