Hello team,
    Under normal circumstances, if the EndsAt field of an incoming alert is 
zero, AlertManager assigns EndsAt from the current time plus the 
resolve_timeout value in its configuration, and then decides whether the alert 
is resolved based entirely on that EndsAt value. If Prometheus does not resend 
the alert within that window, AlertManager sends a resolved notification after 
resolve_timeout has elapsed.
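
To make the defaulting concrete, here is a minimal Go sketch of that step as I 
understand it; the Alert type and the handleAlert/resolved functions are 
simplified stand-ins I made up for illustration, not AlertManager's real code 
(5m is AlertManager's default resolve_timeout):

package main

import (
    "fmt"
    "time"
)

// Simplified stand-in for an incoming alert; the real model carries
// many more fields (labels, annotations, GeneratorURL, ...).
type Alert struct {
    StartsAt time.Time
    EndsAt   time.Time
}

// handleAlert mimics the defaulting described above: when EndsAt is
// zero it is filled in as now + resolve_timeout, and resolution is
// then judged purely from EndsAt.
func handleAlert(a *Alert, now time.Time, resolveTimeout time.Duration) {
    if a.EndsAt.IsZero() {
        a.EndsAt = now.Add(resolveTimeout)
    }
}

// resolved reports whether the alert's EndsAt has already passed.
func resolved(a *Alert, now time.Time) bool {
    return a.EndsAt.Before(now)
}

func main() {
    now := time.Now()
    a := &Alert{StartsAt: now}

    handleAlert(a, now, 5*time.Minute) // resolve_timeout: 5m
    fmt.Println("resolved now?            ", resolved(a, now))
    fmt.Println("resolved after 6 minutes?", resolved(a, now.Add(6*time.Minute)))
}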

    Under these circumstances, suppose I set a rule such as 
http_requests_total > 2 and it starts firing after a while. I then change the 
rule to http_requests_total > 20000000000 (a very large value) and send 
Prometheus a HUP signal; the new rule takes effect and the old one is 
replaced. The alert for the old rule is still in the firing state, but from 
this point Prometheus no longer sends alerts for it. Each time Prometheus 
sends an alert to AlertManager it sets a fresh EndsAt value, so 
resolve_timeout no longer applies, and AlertManager sends a resolved 
notification after a period of roughly the Prometheus 
rules.alert.resend-delay * 3 plus the group's evaluation_interval. All of this 
follows directly from reading the code, so the description itself should be 
accurate.
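
To show the timing I mean, here is a small Go sketch of that formula as I read 
it; resendDelay and evalInterval are just placeholder names, and the 1m values 
are the defaults for --rules.alert.resend-delay and evaluation_interval:

package main

import (
    "fmt"
    "time"
)

// expectedResolveDelay computes the gap I observe between removing a
// firing rule (via a HUP reload) and AlertManager sending resolved,
// using the formula described above:
// rules.alert.resend-delay * 3 + the group's evaluation_interval.
func expectedResolveDelay(resendDelay, evalInterval time.Duration) time.Duration {
    return 3*resendDelay + evalInterval
}

func main() {
    resendDelay := 1 * time.Minute  // --rules.alert.resend-delay default
    evalInterval := 1 * time.Minute // evaluation_interval default

    fmt.Println("resolved expected after roughly:",
        expectedResolveDelay(resendDelay, evalInterval)) // 4m0s
}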

    In other words, once a rule that is in the firing state is modified, the 
alert for the old rule is resolved automatically. I'm wondering whether this 
is an issue or whether it is intended behaviour, and if it is intended, what 
is the design intention?

Note: my Prometheus version is 2.16.0 and my AlertManager version is 0.20.0.

Thanks.
