Re: [prometheus-users] Re: better way to get notified about (true) single scrape failures?

2024-04-05 Thread Christoph Anton Mitterer
Hey Chris.

On Thursday, April 4, 2024 at 8:41:02 PM UTC+2 Chris Siebenmann wrote:

> - The evaluation interval is sufficiently less than the scrape 
> interval, so that it's guaranteed that none of the `up`-samples are 
> being missed. 


I assume you were referring to the above specific point?

Maybe there is a misunderstanding:

With the above I merely meant that my solution requires the alert 
rule evaluation interval to be small enough, so that when it looks at 
resets(up[20s] offset 60s) (which is the window from -70s to -50s PLUS an 
additional shift by 10s, so effectively -80s to -60s), the evaluations 
happen often enough that no sample can "jump over" that time window.

I.e. if the scrape interval was 10s, but the evaluation interval only 20s, 
it would surely miss some.
 

I don't believe this assumption about up{} is correct. My understanding 
is that up{} is not merely an indication that Prometheus has connected 
to the target exporter, but an indication that it has successfully 
scraped said exporter. Prometheus can only know this after all samples 
from the scrape target have been received and ingested and there are no 
unexpected errors, which means that just like other metrics from the 
scrape, up{} can only be visible after the scrape has finished (and 
Prometheus knows whether it succeeded or not). 


Yes, I'd have assumed so as well. Therefore I generally shifted both alerts 
by 10s, hoping that 10s is enough for all that.

 

How long scrapes take is variable and can be up to almost their timeout 
interval. You may wish to check 'scrape_duration_seconds'. Our metrics 
suggest that this can go right up to the timeout (possibly in the case 
of failed scrapes). 


Interesting. 

I see the same (I mean entries that go up to and even a bit above the 
timeout). Would be interesting to know whether these are ones that still 
made it "just in time" (despite actually being a bit longer than the 
timeout)... or whether these are only ones that timed out and were 
discarded.
Because the name scrape_duration_seconds would kind of imply that it's the 
former, I guess it's actually the latter.

So what do you think that means for me and my solution now? That I should 
shift all my checks even further? That is, by at least the scrape_timeout + 
some extra time for the data getting into the TSDB?
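
Just to make that concrete, the shift I have in mind would then roughly become 
the following (only a sketch; the 15s assumes a scrape_timeout of 10s plus ~5s 
for ingestion, and both numbers are just guesses on my side):

- alert: general_target-down
  expr: 'max_over_time(up[1m] offset 15s) == 0'
  for:  0s
- alert: general_target-down_single-scrapes
  expr: 'resets(up[20s] offset 65s) >= 1  unless  max_over_time(up[50s] offset 15s) == 0'
  for:  0s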


Thanks,
Chris.



Re: [prometheus-users] Re: better way to get notified about (true) single scrape failures?

2024-04-04 Thread Chris Siebenmann
> The assumptions I've made are basically three:
> - Prometheus does that "faking" of sample times, and thus these are
>   always on point with exactly the scrape interval between each.
>   This in turn should mean, that if I have e.g. a scrape interval of
>   10s, and I do up[20s], then regardless of when this is done, I get
>   at least 2 samples, and in some rare cases (when the evaluation
>   happens exactly on a scrape time), 3 samples.
>   Never more, never less.
>   Which for `up` I think should be true, as Prometheus itself
>   generates it, right, and not the exporter that is scraped.
> - The evaluation interval is sufficiently less than the scrape
>   interval, so that it's guaranteed that none of the `up`-samples are
>   being missed.

I don't believe this assumption about up{} is correct. My understanding
is that up{} is not merely an indication that Prometheus has connected
to the target exporter, but an indication that it has successfully
scraped said exporter. Prometheus can only know this after all samples
from the scrape target have been received and ingested and there are no
unexpected errors, which means that just like other metrics from the
scrape, up{} can only be visible after the scrape has finished (and
Prometheus knows whether it succeeded or not).

How long scrapes take is variable and can be up to almost their timeout
interval. You may wish to check 'scrape_duration_seconds'. Our metrics
suggest that this can go right up to the timeout (possibly in the case
of failed scrapes).

- cks



Re: [prometheus-users] Re: better way to get notified about (true) single scrape failures?

2024-04-04 Thread Christoph Anton Mitterer
Hey.

On Friday, March 22, 2024 at 9:20:45 AM UTC+1 Brian Candler wrote:

You want to "capture" single scrape failures?  Sure - it's already being 
captured.  Make yourself a dashboard.


Well as I've said before, the dashboard always has the problem that someone 
actually needs to look at it.
 

But do you really want to be *alerted* on every individual one-time scrape 
failure?  That goes against the whole philosophy of alerting, where alerts 
should be "urgent, important, actionable, and real".  A single scrape failure 
is none of those.


I guess in the end I'll see whether or not I'm annoyed by it. ;-)
 

How often do you get hosts where:
(1) occasional scrape failures occur; and
(2) there are enough of them to make you investigate further, but not 
enough to trigger any alerts?


So far I've seen two kinds of nodes, those where I never get scrape errors, 
and those where they happen regularly - and probably need investigation.


Anyway,... I think I might have found a solution, which - if some
assumptions I've made are correct - I'm somewhat confident that
it works, even in the strange cases.


The assumptions I've made are basically three:
- Prometheus does that "faking" of sample times, and thus these are
  always on point with exactly the scrape interval between each.
  This in turn should mean, that if I have e.g. a scrape interval of
  10s, and I do up[20s], then regardless of when this is done, I get
  at least 2 samples, and in some rare cases (when the evaluation
  happens exactly on a scrape time), 3 samples.
  Never more, never less.
  Which for `up` I think should be true, as Prometheus itself
  generates it, right, and not the exporter that is scraped.
- The evaluation interval is sufficiently less than the scrape
  interval, so that it's guaranteed that none of the `up`-samples are
  being missed.
- After some small time (e.g. 10s) it's guaranteed that all samples
  are in the TSDB and a query will return them.
  (basically, to counter the observation I've made in
  https://groups.google.com/g/prometheus-users/c/mXk3HPtqLsg )
- Both alerts run in the same alert group, and that means (I hope) that
  each query in them is evaluated with respect to the very same time.

With that, my final solution would be:
- alert: general_target-down   (TD below)
  expr: 'max_over_time(up[1m] offset 10s) == 0'
  for:  0s
- alert: general_target-down_single-scrapes   (TDSS below)
  expr: 'resets(up[20s] offset 60s) >= 1  unless  max_over_time(up[50s] offset 10s) == 0'
  for:  0s

And that seems to actually work for at least practical cases (of
course it's difficult to simulate the cases where the evaluation
happens right on time of a scrape).
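
(One way I could probably simulate such timings offline is promtool's rule unit 
tests; the following is only an untested sketch, assuming the two rules above 
live in a file rules.yml and the target carries the labels job="node" and 
instance="host1":)

rule_files:
  - rules.yml
evaluation_interval: 5s
tests:
  - interval: 10s
    input_series:
      # one failed scrape at t=30s, everything else fine
      - series: 'up{job="node", instance="host1"}'
        values: '1 1 1 0 1 1 1 1 1 1 1 1 1 1'
    alert_rule_test:
      # at t=90s the lone 0 sits inside the -80s..-60s resets()-window,
      # so TDSS should fire ...
      - eval_time: 1m30s
        alertname: general_target-down_single-scrapes
        exp_alerts:
          - exp_labels:
              job: node
              instance: host1
      # ... while TD must not
      - eval_time: 1m30s
        alertname: general_target-down
        exp_alerts: []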

For anyone who'd ever be interested in the details, and why I think that 
works in all cases,
I've attached the git logs where I describe the changes in my config git 
below.

Thanks to everyone for helping me with that :-)

Best wishes,
Chris.


(needs a mono-spaced font to work out nicely)
TL/DR:
-
commit f31f3c656cae4aeb79ce4bfd1782a624784c1c43
Author: Christoph Anton Mitterer 
Date:   Mon Mar 25 02:01:57 2024 +0100

alerts: overhauled the `general_target-down_single-scrapes`-alert

This is a major overhaul of the 
`general_target-down_single-scrapes`-alert,
which turned out to have been quite an effort that went over several 
months.

Before this branch was merged, the 
`general_target-down_single-scrapes`-alert
(from now on called “TDSS”) had various issues.
While the alert did stop firing when the `general_target-down`-alert (from now
on called “TD”) started to do so, it would still also fire for scrapes that
failed as part of what eventually turned out to be an actual TD.
For example, the first few (< ≈7) `0`s would have caused TDSS to fire, which would
seamlessly be replaced by a firing TD (unless any `1`s came in between).

Assumptions made below:
• The scraping interval is `10s`.
• If a (single) time series for the `up`-metric is given like `0 1 0 0 1`, the
  time goes from left (farther back in time) to right (less far back in time).

I) Goals

There should be two alerts:
• TD
  Is for general use and similar to Icinga’s concept of a host being `UP` or
  `DOWN` (with the minor difference that an unreachable Prometheus target does
  not necessarily mean that a host is `DOWN` in that sense).
  It should fire after scraping has failed for some time, for example one
  minute (which is assumed from now on).
• TDSS
  Since Prometheus is all about monitoring metrics, it’s of interest whether the
  scraping fails, even if it’s only every now and then for very short amounts of
  time, because in those cases samples are lost.
  TD will notice 

Re: [prometheus-users] Re: better way to get notified about (true) single scrape failures?

2024-03-22 Thread 'Brian Candler' via Prometheus Users
Personally I think you're looking at this wrong.

You want to "capture" single scrape failures?  Sure - it's already being 
captured.  Make yourself a dashboard.

But do you really want to be *alerted* on every individual one-time scrape 
failure?  That goes against the whole philosophy of alerting, where alerts 
should be "urgent, important, actionable, and real".  A single scrape failure 
is none of those.

If you want to do further investigation when a host has more than N 
single-scrape failures in 24 hours, sure. But firstly, is that urgent 
enough to warrant an alert? If it is, then you also say you *don't* want to 
be alerted on this when a more important alert has been sent for the same 
host in the same time period.  That's tricky to get right, which is what 
this whole thread is about. Like you say: alertmanager is probably not the 
right tool for that.

How often do you get hosts where:
(1) occasional scrape failures occur; and
(2) there are enough of them to make you investigate further, but not 
enough to trigger any alerts?

If it's "not often" then I wouldn't worry too much about it anyway (check a 
dashboard), but in any case you don't want to waste time trying to bend 
existing tooling to work in ways it wasn't intended for. That is: if you 
need suitable tooling, then write it.

It could be as simple as a script doing one query per day, using the same 
logic I just outlined above:
- identify hosts with scrape failures above a particular threshold over the 
last 24 hours
- identify hosts where one or more alerts have been generated over the last 
24 hours (there are metrics for this)
- subtract the second set from the first set
- if the remaining set is non-empty, then send a notification

You can do this in any language of your choice, or even a shell script with 
promtool/curl and jq.
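
For example, the first two steps could be sketched directly in PromQL (the >5 
threshold, the 24h window and the label matching are just examples to adapt):

  # instances with more than 5 failed scrapes over the last 24 hours
  (count_over_time(up[24h]) - sum_over_time(up[24h])) > 5

  # ... minus those where at least one alert was firing in the same window
  ((count_over_time(up[24h]) - sum_over_time(up[24h])) > 5)
    unless on (instance)
      count by (instance) (count_over_time(ALERTS{alertstate="firing"}[24h]))

If the remaining result set is non-empty, the script sends its notification.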

On Friday 22 March 2024 at 02:31:52 UTC Christoph Anton Mitterer wrote:

>
> I've been looking into possible alternatives, based on the ideas given 
> here.
>
> I) First one completely different approach might be:
> - alert: target-down
>   expr: 'max_over_time( up[1m0s] ) == 0'
>   for: 0s
> and: (
> - alert: single-scrape-failure
>   expr: 'min_over_time( up[2m0s] ) == 0'
>   for: 1m
> or
> - alert: single-scrape-failure
>   expr: 'resets( up[2m0s] ) > 0'
>   for: 1m
> or perhaps even
> - alert: single-scrape-failure
>   expr: 'changes( up[2m0s] ) >= 2'
>   for: 1m
> (which would however behave a bit different, I guess)
> )
>
> plus an inhibit rule, that silences single-scrape-failure when
> target-down fires.
> The for: 1m is needed, so that target-down has a chance to fire
> (and inhibit) before single-scrape-failure does.
>
> I'm not really sure, whether that works in all cases, though,
> especially since I look back much more (and the additional time
> span further back may undesirably trigger again.
>
>
> Using for: > 0 seems generally a bit fragile for my use-case (because I 
> want to capture even single scrape failures, but with for: > 0 I need t to 
> have at least two evaluations to actually trigger, so my evaluation period 
> must be small enough so that it's done >= 2 during the scrape interval.
>
> Also, I guess the scrape intervals and the evaluation intervals are not 
> synced, so when with for: 0s, when I look back e.g. [1m] and assume a 
> certain number of samples in that range, it may be that there are actually 
> more or less.
>
>
> If I forget about the above approach with inhibiting, then I need to 
> consider cases like:
> time>
> - 0 1 0 0 0 0 0 0
> first zero should be a single-scrape-failure, the last 6 however a
> target-down
> - 1 0 0 0 0 0 1 0 0 0 0 0 0
> same here, the first 5 should be a single-scrape-failure, the last 6
> however a target-down
> - 1 0 0 0 0 0 0 1 0 0 0 0 0 0
> here however, both should be target-down
> - 1 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0
> or
> 1 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0
> here, 2x target-down, 1x single-scrape-failure
>
>
>
>
> II) Using the original {min,max}_over_time approach:
> - min_over_time(up[1m]) == 0
> tells me, there was at least one missing scrape in the last 1m.
> but that alone would already be the case for the first zero:
> . . . . . 0
> so:
> - for: 1m
> was added (and the [1m] was enlarged)
> but this would still fire with
> 0 0 0 0 0 0 0
> which should however be a target-down
> so:
> - unless max_over_time(up[1m]) == 0
> was added to silence it then
> but that would still fail in e.g. the case when a previous
> target-down runs out:
> 0 0 0 0 0 0 -> target down
> the next is a 1
> 0 0 0 0 0 0 1 -> single-scrape-failure
> and some similar cases,
>
> Plus the usage of for: >0s is - in my special case - IMO fragile.
>
>
>
> III) So in my previous mail I came up with the idea of using:
> - alert: target-down expr: 'max_over_time( up[1m0s] ) == 0' for: 0s - 
> alert: single-scrape-failure expr: 'min_over_time(up[15s] offset 1m) == 0 
> unless max_over_time(up[1m0s]) 

Re: [prometheus-users] Re: better way to get notified about (true) single scrape failures?

2024-03-21 Thread Christoph Anton Mitterer

I've been looking into possible alternatives, based on the ideas given here.

I) First one completely different approach might be:
- alert: target-down
  expr: 'max_over_time( up[1m0s] ) == 0'
  for: 0s
and: (
- alert: single-scrape-failure
  expr: 'min_over_time( up[2m0s] ) == 0'
  for: 1m
or
- alert: single-scrape-failure
  expr: 'resets( up[2m0s] ) > 0'
  for: 1m
or perhaps even
- alert: single-scrape-failure
  expr: 'changes( up[2m0s] ) >= 2'
  for: 1m
(which would however behave a bit different, I guess)
)

plus an inhibit rule, that silences single-scrape-failure when
target-down fires.
The for: 1m is needed, so that target-down has a chance to fire
(and inhibit) before single-scrape-failure does.
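
The inhibit rule I have in mind would be roughly the following (untested, and 
the matcher syntax assumes a reasonably recent alertmanager):

inhibit_rules:
  - source_matchers:
      - alertname = "target-down"
    target_matchers:
      - alertname = "single-scrape-failure"
    equal: [instance, job]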

I'm not really sure whether that works in all cases, though,
especially since I look back much more (and the additional time
span further back may undesirably trigger again).


Using for: > 0 seems generally a bit fragile for my use-case (because I 
want to capture even single scrape failures, but with for: > 0 I need it to 
have at least two evaluations to actually trigger, so my evaluation period 
must be small enough so that it's done >= 2 times during the scrape interval).

Also, I guess the scrape intervals and the evaluation intervals are not 
synced, so even with for: 0s, when I look back e.g. [1m] and assume a 
certain number of samples in that range, it may be that there are actually 
more or fewer.


If I forget about the above approach with inhibiting, then I need to 
consider cases like:
time>
- 0 1 0 0 0 0 0 0
first zero should be a single-scrape-failure, the last 6 however a
target-down
- 1 0 0 0 0 0 1 0 0 0 0 0 0
same here, the first 5 should be a single-scrape-failure, the last 6
however a target-down
- 1 0 0 0 0 0 0 1 0 0 0 0 0 0
here however, both should be target-down
- 1 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0
or
1 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0
here, 2x target-down, 1x single-scrape-failure




II) Using the original {min,max}_over_time approach:
- min_over_time(up[1m]) == 0
tells me, there was at least one missing scrape in the last 1m.
but that alone would already be the case for the first zero:
. . . . . 0
so:
- for: 1m
was added (and the [1m] was enlarged)
but this would still fire with
0 0 0 0 0 0 0
which should however be a target-down
so:
- unless max_over_time(up[1m]) == 0
was added to silence it then
but that would still fail in e.g. the case when a previous
target-down runs out:
0 0 0 0 0 0 -> target down
the next is a 1
0 0 0 0 0 0 1 -> single-scrape-failure
and some similar cases,

Plus the usage of for: >0s is - in my special case - IMO fragile.



III) So in my previous mail I came up with the idea of using:
- alert: target-down
  expr: 'max_over_time( up[1m0s] ) == 0'
  for: 0s
- alert: single-scrape-failure
  expr: 'min_over_time(up[15s] offset 1m) == 0
         unless max_over_time(up[1m0s]) == 0
         unless max_over_time(up[1m0s] offset 1m10s) == 0
         unless max_over_time(up[1m0s] offset 1m) == 0
         unless max_over_time(up[1m0s] offset 50s) == 0
         unless max_over_time(up[1m0s] offset 40s) == 0
         unless max_over_time(up[1m0s] offset 30s) == 0
         unless max_over_time(up[1m0s] offset 20s) == 0
         unless max_over_time(up[1m0s] offset 10s) == 0'
  for: 0m
The idea was that, when I don't use for: >0s, the first time
window where one can be really sure (in all cases) whether
it's a single-scrape-failure or a target-down is a 0 in -70s to
-60s:
-130s -120s -110s -100s -90s -80s -70s -60s -50s -40s -30s -20s -10s 0s/now 
|   |   |   |   |   |   | 0 |   |   |   |   |   |   |
|   |   |   |   |   |   |   |   |   |   | 1 | 0 | 1 |  case 1
|   |   |   |   |   |   | 0 | 0 | 0 | 0 | 0 | 0 | 0 |  case 2
|   |   |   | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 |  case 3

In case 1 it would be already clear when the zero is between -20
and -10.
But if there's a sequence of zeros, it takes up to -70s to -60s,
when it becomes clear.

Now the zero in that time span could also be that of a target-down
sequence of zeros like in case 3.
For these cases, I had the shifted silencers that each looked over
1m.

Looked good at first, though there were some open questions.
At least one main problem, namely it would fail in e.g. that case:
-130s -120s -110s -100s -90s -80s -70s -60s -50s -40s -30s -20s -10s 0s/now 
| 1 | 1 | 1 | 1 | 1 | 1 | 0 1 | 0 | 0 | 0 | 0 | 0 | 0 | case 8a
The zero between -70s and -60s would be noticed, but still be
silenced, because the one would not.




Chris Siebenmann suggested to use resets()... and keep_firing_for:, which 
Ben Kochie suggested, too.

First I didn't quite understand how the latter would help me. Maybe I have 
the wrong mindset for it, so could you guys please explain what your idea 
was with keep_firing_for:?




IV) resets() sounded promising at first, but while I tried quite some
variations, I wasn't able to get anything working.
First, something like
resets(up[1m]) >= 1
alone (with or without a for: >0s) would already fire in case of:
time>
1 0
which still could become a target-down but also in case of:
1 0 0 0 0 0 0
which is a target down.
And I think even 

Re: [prometheus-users] Re: better way to get notified about (true) single scrape failures?

2024-03-18 Thread Ben Kochie
I usually recommend throwing out any "But this is how Icinga does it" 
thinking.

The way we do things in Prometheus for this kind of thing is to simply
think about "availability".

For any scrape failures:

avg_over_time(up[5m]) < 1

For more than one scrape failure (assuming 15s intervals)

avg_over_time(up[5m]) < 0.95

This is a much easier way to think about "uptime".
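
As alert rules that could look roughly like this (the names and thresholds are 
just placeholders; add labels/annotations as needed):

- alert: ScrapesFailing
  expr: avg_over_time(up[5m]) < 1
  for: 0m
- alert: AvailabilityBelowTarget
  expr: avg_over_time(up[5m]) < 0.95
  for: 0m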

Also, if you want, there is the new "keep_firing_for" alerting option.

On Mon, Mar 18, 2024 at 5:45 AM Christoph Anton Mitterer 
wrote:

> Hey Chris.
>
> On Sun, 2024-03-17 at 22:40 -0400, Chris Siebenmann wrote:
> >
> > One thing you can look into here for detecting and counting failed
> > scrapes is resets(). This works perfectly well when applied to a
> > gauge
>
> Though it is documented as to be only used with counters... :-/
>
>
> > that is 1 or 0, and in this case it will count the number of times
> > the
> > metric went from 1 to 0 in a particular time interval. You can
> > similarly
> > use changes() to count the total number of transitions (either 1->0
> > scrape failures or 0->1 scrapes starting to succeed after failures).
>
> The idea sounds promising... especially to also catch cases like that
> 8a, I've mentioned in my previous mail and where the
> {min,max}_over_time approach seems to fail.
>
>
> > It may also be useful to multiply the result of this by the current
> > value of the metric, so for example:
> >
> >   resets(up{..}[1m]) * up{..}
> >
> > will be non-zero if there have been some number of scrape failures
> > over
> > the past minute *but* the most recent scrape succeeded (if that
> > scrape
> > failed, you're multiplying resets() by zero and getting zero). You
> > can
> > then wrap this in an '(...) > 0' to get something you can maybe use
> > as
> > an alert rule for the 'scrapes failed' notification. You might need
> > to
> > make the range for resets() one step larger than you use for the
> > 'target-down' alert, since resets() will also be zero if up{...} was
> > zero all through its range.
> >
> > (At this point you may also want to look at the alert
> > 'keep_firing_for'
> > setting.)
>
> I will give that some more thinking and reply back if I should find
> some way to make an alert out of this.
>
> Well and probably also if I fail to ^^ ... at least at a first glance I
> wasn't able to use that to create and alert that would behave as
> desired. :/
>
>
> > However, my other suggestion here would be that this notification or
> > count of failed scrapes may be better handled as a dashboard or a
> > periodic report (from a script) instead of through an alert,
> > especially
> > a fast-firing alert.
>
> Well the problem with a dashboard would IMO be, that someone must
> actually look at it or otherwise it would be pointless. ;-)
>
> Not really sure how to do that with a script (which I guess would be
> conceptually similar to an alert... just that it's sent e.g. weekly).
>
> I guess I'm not so much interested in the exact times, when single
> scrapes fail (I cannot correct it retrospectively anyway) but just
> *that* it happens and that I have to look into it.
>
> My assumption kinda is, that normally scrapes aren't lost. So I would
> really only get an alert mail if something's wrong.
> And even if the alert is flaky, like in 1 0 1 0 1 0, I think it could
> still reduce mail but on the alertmanager level?
>
>
> > I think it will be relatively difficult to make an
> > alert give you an accurate count of how many times this happened; if
> > you
> > want such a count to make decisions, a dashboard (possibly
> > visualizing
> > the up/down blips) or a report could be better. A program is also in
> > the
> > position to extract the raw up{...} metrics (with timestamps) and
> > then
> > readily analyze them for things like how long the failed scrapes tend
> > to
> > last for, how frequently they happen, etc etc.
>
> Well that sounds to be quite some effort... and I already think that my
> current approaches required far too much of an effort (and still don't
> fully work ^^).
> As said... despite not really being comparable to Prometheus: in
> Incinga a failed sensor probe would be immediately noticeable.
>
>
> Thanks,
> Chris.
>


Re: [prometheus-users] Re: better way to get notified about (true) single scrape failures?

2024-03-17 Thread Christoph Anton Mitterer
Hey Chris.

On Sun, 2024-03-17 at 22:40 -0400, Chris Siebenmann wrote:
> 
> One thing you can look into here for detecting and counting failed
> scrapes is resets(). This works perfectly well when applied to a
> gauge

Though it is documented as to be only used with counters... :-/


> that is 1 or 0, and in this case it will count the number of times
> the
> metric went from 1 to 0 in a particular time interval. You can
> similarly
> use changes() to count the total number of transitions (either 1->0
> scrape failures or 0->1 scrapes starting to succeed after failures).

The idea sounds promising... especially to also catch cases like that
8a, I've mentioned in my previous mail and where the
{min,max}_over_time approach seems to fail.


> It may also be useful to multiply the result of this by the current
> value of the metric, so for example:
> 
>   resets(up{..}[1m]) * up{..}
> 
> will be non-zero if there have been some number of scrape failures
> over
> the past minute *but* the most recent scrape succeeded (if that
> scrape
> failed, you're multiplying resets() by zero and getting zero). You
> can
> then wrap this in an '(...) > 0' to get something you can maybe use
> as
> an alert rule for the 'scrapes failed' notification. You might need
> to
> make the range for resets() one step larger than you use for the
> 'target-down' alert, since resets() will also be zero if up{...} was
> zero all through its range.
> 
> (At this point you may also want to look at the alert
> 'keep_firing_for'
> setting.)

I will give that some more thinking and reply back if I should find
some way to make an alert out of this.

Well and probably also if I fail to ^^ ... at least at a first glance I
wasn't able to use that to create an alert that would behave as
desired. :/


> However, my other suggestion here would be that this notification or
> count of failed scrapes may be better handled as a dashboard or a
> periodic report (from a script) instead of through an alert,
> especially
> a fast-firing alert.

Well the problem with a dashboard would IMO be, that someone must
actually look at it or otherwise it would be pointless. ;-)

Not really sure how to do that with a script (which I guess would be
conceptually similar to an alert... just that it's sent e.g. weekly).

I guess I'm not so much interested in the exact times, when single
scrapes fail (I cannot correct it retrospectively anyway) but just
*that* it happens and that I have to look into it.

My assumption kinda is, that normally scrapes aren't lost. So I would
really only get an alert mail if something's wrong.
And even if the alert is flaky, like in 1 0 1 0 1 0, I think the mails
could still be reduced, at the alertmanager level?


> I think it will be relatively difficult to make an
> alert give you an accurate count of how many times this happened; if
> you
> want such a count to make decisions, a dashboard (possibly
> visualizing
> the up/down blips) or a report could be better. A program is also in
> the
> position to extract the raw up{...} metrics (with timestamps) and
> then
> readily analyze them for things like how long the failed scrapes tend
> to
> last for, how frequently they happen, etc etc.

Well that sounds to be quite some effort... and I already think that my
current approaches required far too much of an effort (and still don't
fully work ^^).
As said... despite not really being comparable to Prometheus: in
Icinga a failed sensor probe would be immediately noticeable.


Thanks,
Chris.



Re: [prometheus-users] Re: better way to get notified about (true) single scrape failures?

2024-03-17 Thread Chris Siebenmann
> As a reminder, my goal was:
> - if e.g. scrapes fail for 1m, a target-down alert shall fire (similar to
>   how Icinga would put the host into down state, after pings failed or a
>   number of seconds)
> - but even if a single scrape fails (which alone wouldn't trigger the above
>   alert) I'd like to get a notification (telling me, that something might be
>   fishy with the networking or so), that is UNLESS that single failed scrape
>   is part of a sequence of failed scrapes that also caused / will cause the
>   above target-down alert
>
> Assuming in the following, each number is a sample value with ~10s distance 
> for
> the `up` metric of a single host, with the most recent one being the 
> right-most:
> - 1 1 1 1 1 1 1 => should give nothing
> - 1 1 1 1 1 1 0 => should NOT YET give anything (might be just a single 
> failure,
>or develop into the target-down alert)
> - 1 1 1 1 1 0 0 => same as above, not clear yet
> ...
> - 1 0 0 0 0 0 0 => here it's clear, this is a target-down alert

One thing you can look into here for detecting and counting failed
scrapes is resets(). This works perfectly well when applied to a gauge
that is 1 or 0, and in this case it will count the number of times the
metric went from 1 to 0 in a particular time interval. You can similarly
use changes() to count the total number of transitions (either 1->0
scrape failures or 0->1 scrapes starting to succeed after failures).
It may also be useful to multiply the result of this by the current
value of the metric, so for example:

resets(up{..}[1m]) * up{..}

will be non-zero if there have been some number of scrape failures over
the past minute *but* the most recent scrape succeeded (if that scrape
failed, you're multiplying resets() by zero and getting zero). You can
then wrap this in an '(...) > 0' to get something you can maybe use as
an alert rule for the 'scrapes failed' notification. You might need to
make the range for resets() one step larger than you use for the
'target-down' alert, since resets() will also be zero if up{...} was
zero all through its range.

(At this point you may also want to look at the alert 'keep_firing_for'
setting.)
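
A rough, untested sketch of how those pieces might fit together, with the
range one step larger than a hypothetical 1m 'target-down' window and an
arbitrary keep_firing_for value:

- alert: scrapes-failed
  expr: (resets(up[1m10s]) * up) > 0
  keep_firing_for: 5m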

However, my other suggestion here would be that this notification or
count of failed scrapes may be better handled as a dashboard or a
periodic report (from a script) instead of through an alert, especially
a fast-firing alert. I think it will be relatively difficult to make an
alert give you an accurate count of how many times this happened; if you
want such a count to make decisions, a dashboard (possibly visualizing
the up/down blips) or a report could be better. A program is also in the
position to extract the raw up{...} metrics (with timestamps) and then
readily analyze them for things like how long the failed scrapes tend to
last for, how frequently they happen, etc etc.

- cks
PS: This is not my clever set of tricks, I got it from other people.



[prometheus-users] Re: better way to get notified about (true) single scrape failures?

2024-03-17 Thread Christoph Anton Mitterer
Hey there.

I eventually got back to this and I'm still fighting this problem.

As a reminder, my goal was:
- if e.g. scrapes fail for 1m, a target-down alert shall fire (similar to
  how Icinga would put the host into down state, after pings failed or a
  number of seconds)
- but even if a single scrape fails (which alone wouldn't trigger the above
  alert) I'd like to get a notification (telling me, that something might be
  fishy with the networking or so), that is UNLESS that single failed scrape
  is part of a sequence of failed scrapes that also caused / will cause the
  above target-down alert

Assuming in the following, each number is a sample value with ~10s distance 
for
the `up` metric of a single host, with the most recent one being the 
right-most:
- 1 1 1 1 1 1 1 => should give nothing
- 1 1 1 1 1 1 0 => should NOT YET give anything (might be just a single failure,
   or develop into the target-down alert)
- 1 1 1 1 1 0 0 => same as above, not clear yet
...
- 1 0 0 0 0 0 0 => here it's clear, this is a target-down alert

In the following:
- 1 1 1 1 1 0 1
- 1 1 1 1 0 0 1
- 1 1 1 0 0 0 1
...
should eventually (not necessarily after the right-most 1, though) all give a
"single-scrape-failure" (even though it's more than just one - it's not a
target-down), simply because there are 0s, but for a time span of less than 1m.

- 1 0 1 0 0 0 0 0 0
should give both, a single-scrape-failure alert (the left-most single 0) 
AND a
target-down alert (the 6 consecutive zeros)

- 1 0 1 0 1 0 0 0
should give at least 2x a single-scrape-failure alert, and for the leftmost
zeros, it's not yet clear what they'll become.
- 0 0 0 0 0 0 0 0 0 0 0 0  (= 2x six zeros)
should give only 1 target-down alert
- 0 0 0 0 0 0 1 0 0 0 0 0 0  (= 2x six zeros, separated by a 1)
should give 2 target-down alerts

Whether each such alert (e.g. in the 1 0 1 0 1 0 ... case) actually results
in a notification (mail) is of course a different matter, and depends on the
alertmanager configuration, but at least the alert should fire, and with the right
alertmanager config one should actually get a notification for each single failed
scrape.


Now, Brian has already given me some pretty good ideas how to do this; basically the
ideas were:
(assuming that 1m makes the target down, and a scrape interval of 10s)

For the target-down alert:
a) expr: 'up == 0'
   for:  1m
b) expr: 'max_over_time(up[1m]) == 0'
   for:  0s
=> here (b) was probably better, as it would use the same condition as is also used
   in the alert below, and there can be no weird timing effects depending on the
   for: and when these are actually evaluated.

For the single-scrape-failure alert:
A) expr: min_over_time(up[1m20s]) == 0 unless max_over_time(up[1m]) == 0
   for: 1m10s
   (numbers a bit modified from Brian's example, but I think the idea is the same)
B) expr: min_over_time(up[1m10s]) == 0 unless max_over_time(up[1m10s]) == 0
   for: 1m

=> I did test (B) quite a lot, but there was at least still one case where it failed,
   and that was when there were two consecutive but distinct target-down errors, that
   is:
   0 0 0 0 0 0 1 0 0 0 0 0 0  (= 2x six zeros, separated by a 1)
   which would eventually look like e.g.
   0 1 0 0 0 0 0 0   or   0 0 1 0 0 0 0 0
   in the above check, and thus trigger (via the left-most zeros) a false
   single-scrape-failure alert.

=> I'm not so sure whether I truly understand (A),... especially with respect to any
   niche cases, when there's jitter or so (plus, IIRC, it also failed in the case
   described for (B)).


One approach I tried in the meantime was to use sum_over_time... and then the idea was
simply to check how many ones there are for each case. But it turns out that even if
everything runs normally, the sum is not stable... sometimes, over [1m] I got only 5,
whereas most times it was 6.
Not really sure how that comes, because the printed timestamps for each sample seem to
be super accurate (all the time), but the sum wasn't.


So I tried a different approach now, based on the above from Brian,... 
which at least in
tests looks promising so far... but I'd like to hear what experts think 
about it.

- both alerts have to be in the same alert group (I assume this assures they're then
  evaluated in the same thread and at the "same time", that is, with respect to the same
  reference timestamp)
- in my example I assume a scrape time of 10s and evaluation interval of 7s 
(not really
  sure whether the latter matters or could be changed while the rules stay 
the same - and
  it would still work or not)
- for: is always 0s ... I think that's good, because at least to me it's 
unclear, how
  things are evaluated if the two alerts have different values for for:, 
especially in
  border cases.
- rules:
- alert: target-down
  expr: 'max_over_time( up[1m0s] )  ==  0'
  for:  0s
- alert: single-scrape-failure
  expr: 'min_over_time(up[15s] offset 1m) == 0 unless 

[prometheus-users] Re: better way to get notified about (true) single scrape failures?

2023-05-13 Thread Brian Candler
On Saturday, 13 May 2023 at 03:26:18 UTC+1 Christoph Anton Mitterer wrote:


  (If there is jitter in the sampling time, then occasionally it might look 
at 4 or 6 samples)


Jitter in the sense that the samples are taken at slightly different times?


Yes. Each sample is timestamped with the time the scrape took place.

Consider a 5 minute window which contains generally contains 5 samples at 1 
minute intervals:

   |...*..*..*..*..*|...*

Now consider what happens when one of those samples is right on the 
boundary of the window:

   |*..*..*..*..*...|*...

Depending on the exact timings that the scrape takes place, it's possible 
that the first sample could fall outside:

   *|..*..*..*..*...|*...

Or the next sample could fall inside:

   |*..*..*..*..*..*|...

 

Do you think that could affect the desired behaviour?


In my experience, the scraping regularity of Prometheus is very good (just 
try putting "up[5m]" into the PromQL browser and looking at the timestamps 
of the samples, they seem to increment in exact intervals). So it's 
unlikely to happen much, though it might when the system is under high load, I 
guess. Or it might never happen, if Prometheus writes the timestamps of 
the times it *wanted* to make the scrape, not when it actually occurred. 
Determining that would require looking in source code.
 

Another point I basically don't understand... how does all that relate to 
the scrape intervals?
The plain up == 0 simply looks at the most recent sample (going back up to 
5m as you've said in the other thread).

The series up[Ns] looks back N seconds, giving whichever samples are within 
there and now. AFAIU, there it doesn't go "automatically" back any further 
(like the 5m above), right?


That's correct.

So if you're trying to make mutual expressions which fire in case A but not 
B, and case B but not A, then you'd probably be better off writing then to 
both use up[5m].

min_over_time(up[5m]) == 0# use this instead of "up == 0  // for: 5m" 
for the main alert.

 


In order for the for: to work I need at least two samples


No, you just need two rule evaluations. The rule evaluation interval 
doesn't have to be the same as the scrape interval, and even if they are 
the same, they are not synchronized.


If what I've written above is correct (and it may well not be!), then

expr: up == 0
for: 5m

will fire if "up" is zero for 6 cycles, whereas


(*rule evaluation* cycles, if your rule evaluation interval is 1m)
 


As far as I understand you... 6 cycles of rule evaluation interval... with 
at least two samples within that interval, right?


No.  The expression "up" is evaluated at each rule evaluation time, and it 
gives the most recent value of "up", looking back up to 5 minutes.

So if you had a scrape interval of 2 minutes, with a rule evaluation 
interval of 1 minute it could be that two rule evaluations of "up" see the 
same scraped value.

(This can also happen in real life with a 1 minute scrape interval, if you 
have a failed scrape)

 

Once an alert fires (in prometheus), even if just for one evaluation 
interval cycle and there is no inhibition rule or so in alertmanager... 
is it expected that a notification is sent out for sure,... regardless of 
alertmanager's grouping settings?


There is group_wait. If the alert were to trigger and clear within the 
group_wait interval, I'd expect no alert to be sent. But I've not tested 
that.
 

Like when the alert fires for one short 15s evaluation interval and clears 
again afterwards,... but group_wait: is set to some 7d ... is it expected 
to send that single firing event after 7d, even if it has resolved already 
once the 7d are over and there was e.g. no further firing in between?


You'll need to test it, but my expectation would be that it wouldn't send 
*anything* for 7 days (while it waits for other similar alerts to appear), 
and if all alerts have disappeared within that period, that nothing would 
be sent.  However, I don't know if the 7 day clock resets as soon as all 
alerts go away, or it continues to tick.  If this matters to you, then test 
it.

Nobody in their right mind would use 7d for group_wait of course. 
Typically you might set it to around a minute, so that if a bunch of 
similar alerts fire within that 1 minute period, they are gathered together 
into a single notification rather than a slew of separate notifications.
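
e.g. roughly something like this (the receiver name and the other intervals 
are only illustrative):

route:
  receiver: team-X
  group_by: [alertname]
  group_wait: 1m
  group_interval: 5m
  repeat_interval: 4h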

HTH,

Brian.



[prometheus-users] Re: better way to get notified about (true) single scrape failures?

2023-05-12 Thread Christoph Anton Mitterer
Hey Brian

On Wednesday, May 10, 2023 at 9:03:36 AM UTC+2 Brian Candler wrote:

It depends on the exact semantics of "for". e.g. take a simple case of 1 
minute rule evaluation interval. If you apply "for: 1m" then I guess that 
means the alert must be firing for two successive evaluations (otherwise, 
"for: 1m" would have no effect).


Seems you're right.

I did quite some testing meanwhile with the following alertmanager route 
(note, that I didn't use 5m, but 1m... simply in order to not have to wait 
so long):
  routes:
  - match_re:
  alertname: 'td.*'
receiver:   admins_monitoring
group_by:   [alertname]
group_wait: 0s
group_interval: 1s

and the following rules:
groups:
  - name: alerts_general_single-scrapes
interval: 15s
rules:
- alert: td-fast
  expr: 'min_over_time(up[75s]) == 0 unless max_over_time(up[75s]) == 0'
  for:  1m
- alert: td
  expr: 'up == 0'
  for:  1m

My understanding is, correct me if wrong, that basically prometheus would 
run a thread for the scrape job (which in my case would have an interval of 
15s) and another one that evaluates the alert rules (above every 15s) which 
then sends the alert to the alertmanager (if firing).

It felt a bit brittle to have the rules evaluated with the same period as 
the scrapes, so I did all tests once with 15s for the rules interval, and 
once with 10s. But it seems as if this wouldn't change the behaviour.


But up[5m] only looks at samples wholly contained within a 5 minute window, 
and therefore will normally only look at 5 samples.


As you can see above,... I had already noticed that you were indeed right 
before, and if my for: is e.g. 4 * evaluation_interval(15s) = 1m ... I need 
to look back 5 * evaluation_interval(15s) = 75s

At least in my tests, that seemed to cause the desired behaviour, except 
for one case:
When my "slow" td fires (i.e. after 5 consecutive "0"s) and then there 
is... within (less than?) 1m, another sequence of "0"s that eventually 
cause a "slow" td. In that case, td-fast fires for a while, until it 
directly switches over to td firing.

Was your idea above with something like:
>expr: min_over_time(up[8m]) == 0 unless max_over_time(up[6m]) == 0
>for: 7m
intended to fix that issue?

Or could one perhaps use 
ALERTS{alertname="td",instance="lcg-lrz-ext.grid.lrz.de",job="node"}[??s] 
== 1 somehow, to check whether it did fire... and then silence the false 
positive.

 

  (If there is jitter in the sampling time, then occasionally it might look 
at 4 or 6 samples)


Jitter in the sense that the samples are taken at slightly different times?
Do you think that could affect the desired behaviour? I would intuitively 
expect that it rather only causes the "base duration" to not be exactly e.g. 
1m ... so e.g. instead of taking 1m for the "slow" td to fire, it would 
happen +/- 15s earlier (and conversely for td-slow).


Another point I basically don't understand... how does all that relate to 
the scrape intervals?
The plain up == 0 simply looks at the most recent sample (going back up to 
5m as you've said in the other thread).

The series up[Ns] looks back N seconds, giving whichever samples are within 
there and now. AFAIU, there it doesn't go "automatically" back any further 
(like the 5m above), right?

In order for the for: to work I need at least two samples... so doesn't 
that mean that as soon as any scrape interval is > for:-time(1m) / 2 = ~30s (in 
the above example), the above two alerts will never fire, even if it's down?

So if I had e.g. some jobs scraping only every 10m ... I'd need another 
pair of td/td-fast alerts, which then filter on the job 
(up{job="longRunning"}) and either only have td... (if that makes sense) 
... or a td-fast for when one of the every-10m scrapes fails, and an even longer 
"slow" td, like if that fails for 1h.


If what I've written above is correct (and it may well not be!), then

expr: up == 0
for: 5m

will fire if "up" is zero for 6 cycles, whereas


As far as I understand you... 6 cycles of rule evaluation interval... with 
at least two samples within that interval, right?
 

... unless max_over_time(up[5m])

will suppress an alert if "up" is zero for (usually) 5 cycles.



 Last but not least an (only) partially related question:

Once an alert fires (in prometheus), even if just for one evaluation 
interval cycle and there is no inhibition rule or so in alertmanager... 
is it expected that a notification is sent out for sure,... regardless of 
alertmanager's grouping settings?
Like when the alert fires for one short 15s evaluation interval and clears 
again afterwards,... but group_wait: is set to some 7d ... is it expected 
to send that single firing event after 7d, even if it has resolved already 
once the 7d are over and there was e.g. no further firing in between?


Thanks a lot :-)
Chris.


[prometheus-users] Re: better way to get notified about (true) single scrape failures?

2023-05-10 Thread Brian Candler
> Not sure if I'm right, but I think if one places both rules in the same 
group (and I think even the order shouldn't matter?), then the original:
> expr: min_over_time(up[5m]) == 0 unless max_over_time(up[5m]) == 0
> for: 5m
> with 5m being the "for:"-time of the long-alert should be guaranteed to 
work... in the sense that if the above doesn't fire... the long-alert > 
does.

It depends on the exact semantics of "for". e.g. take a simple case of 1 
minute rule evaluation interval. If you apply "for: 1m" then I guess that 
means the alert must be firing for two successive evaluations (otherwise, 
"for: 1m" would have no effect).

If so, then "for: 5m" means it must be firing for six successive 
evaluations.

But up[5m] only looks at samples wholly contained within a 5 minute window, 
and therefore will normally only look at 5 samples.  (If there is jitter in 
the sampling time, then occasionally it might look at 4 or 6 samples)

If what I've written above is correct (and it may well not be!), then

expr: up == 0
for: 5m

will fire if "up" is zero for 6 cycles, whereas

... unless max_over_time(up[5m])

will suppress an alert if "up" is zero for (usually) 5 cycles.

If you want to get to the bottom of this with certainty, you can write unit 
tests that try out these scenarios.
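
A skeleton for such a test could look roughly like this (untested; the rule 
file name, alert name and labels are placeholders for your own rules, here 
assuming an alert with expr "up == 0" and "for: 5m" in alerts.yml):

rule_files:
  - alerts.yml
evaluation_interval: 1m
tests:
  - interval: 1m
    input_series:
      - series: 'up{job="node", instance="host1"}'
        values: '1 0 0 0 0 0 0'
    alert_rule_test:
      # five cycles of 0 are not yet enough ...
      - eval_time: 5m
        alertname: your_target_down_alert
        exp_alerts: []
      # ... only the sixth evaluation with up == 0 makes it fire
      - eval_time: 6m
        alertname: your_target_down_alert
        exp_alerts:
          - exp_labels:
              job: node
              instance: host1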



[prometheus-users] Re: better way to get notified about (true) single scrape failures?

2023-05-09 Thread Christoph Anton Mitterer
Hey Brian.

On Tuesday, May 9, 2023 at 9:55:22 AM UTC+2 Brian Candler wrote:

That's tricky to get exactly right. You could try something like this 
(untested):

expr: min_over_time(up[5m]) == 0 unless max_over_time(up[5m]) == 0
for: 5m

- min_over_time will be 0 if any single scrape failed in the past 5 minutes
- max_over_time will be 0 if all scrapes failed (which means the 'standard' 
failure alert should have triggered)

Therefore, this should alert if any scrape failed over 5 minutes, unless 
all scrapes failed over 5 minutes.


Ah that seems a pretty smart idea.

And the for: is needed to make it actually "count", as the [5m] only looks 
back 5m, but there, max_over_time(up[5m]) would have likely been still 1 
while min_over_time(up[5m]) would already be 0, and if one had then e.g. 
for: 0s, it would fire immediately.
 

There is a boundary condition where if the scraping fails for approximately 
5 minutes you're not sure if the standard failure alert would have 
triggered.


You mean like the above one wouldn't fire cause it thinks it's the 
long-term alert, while that wouldn't fire either, because it has just 
resolved then?
 
 

Hence it might need a bit of tweaking for robustness. To start with, just 
make it over 6 minutes:

expr: min_over_time(up[6m]) == 0 unless max_over_time(up[6m]) == 0
for: 6m

That is, if max_over_time[6m] is zero, we're pretty sure that a standard 
alert will have been triggered by then.


That one I don't quite understand.
What if e.g. the following scenario happens (with each line giving the 
state 1m after the one before):

m    -5 -4 -3 -2 -1  0  for  min[6m]  max[6m]  result/short (for=6)  result/long (for=5)
up:   1  1  1  1  1  0   1      0        1     pending               pending
up:   1  1  1  1  0  0   2      0        1     pending               pending
up:   1  1  1  0  0  0   3      0        1     pending               pending
up:   1  1  0  0  0  0   4      0        1     pending               pending
up:   1  0  0  0  0  0   5      0        1     pending               fire
up:   0  0  0  0  0  1   6      0        1     fire                  clear

After 5m, the long-term alert would fire; after that the scraping would 
succeed again, but AFAIU the "special" alert for the short ones would still 
be true at that point and then start to fire, despite all the previous 5 
zeros having actually been reported as part of a long-down alert.


I'm still not quite convinced about the "for: 6m" and whether we might lose 
an alert if there were a single failed scrape. Maybe this would be more 
sensitive:

expr: min_over_time(up[8m]) == 0 unless max_over_time(up[6m]) == 0
for: 7m

but I think you might get some spurious alerts at the *end* of a period of 
downtime.


That also seems quite complex. And I guess it might have the same possible 
issue from above?

The same should be the case if one would do:
expr: min_over_time(up[6m]) == 0 unless max_over_time(up[5m]) == 0
for: 6m
It may be just 6m ago that there was a "0" (from a long alert) and the last 
5m there would have been "1"s. So the short-alert would fire, despite it being 
unclear whether the "0" 6m ago was really just a lonely one or the end of a 
long-alert period.

Actually, I think, any case where the min_over_time goes further back than 
the long-alert's for:-time should have that.


expr: min_over_time(up[5m]) == 0 unless max_over_time(up[6m]) == 0
for: 5m
would also be broken, IMO, cause if 6m ago there was a "1", only the 
min_over_time(up[5m]) == 0 would remain (and nothing would silence the 
alert if needed)... if there was a "0" 6m ago, it should effectively be the 
same as using [5m]?


Isn't the problem from the very above already solved by placing both alerts 
in the same rule group?

https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/ 
says:
"Recording and alerting rules exist in a rule group. Rules within a group 
are run sequentially at a regular interval, with the same evaluation time."
which I guess applies also to alert rules.

Not sure if I'm right, but I think if one places both rules in the same 
group (and I think even the order shouldn't matter?), then the original:
expr: min_over_time(up[5m]) == 0 unless max_over_time(up[5m]) == 0
for: 5m
with 5m being the "for:"-time of the long-alert should be guaranteed to 
work... in the sense that if the above doesn't fire... the long-alert does.

Unless of course the grouping settings at the alertmanager cause trouble... 
which I don't quite understand; especially, once an alert fires, even if 
just for short,... is it guaranteed that a notification is sent?
Cause as I wrote before, that didn't seem to be the case.

Last but not least, if my assumption is true and your 1st version would 
work if both alerts are in the same group... how would the interval then 
matter? Would it still need to be the smallest scrape time (I guess so)?


Thanks,
Chris.


[prometheus-users] Re: better way to get notified about (true) single scrape failures?

2023-05-09 Thread Brian Candler
That's tricky to get exactly right. You could try something like this 
(untested):

expr: min_over_time(up[5m]) == 0 unless max_over_time(up[5m]) == 0
for: 5m

- min_over_time will be 0 if any single scrape failed in the past 5 minutes
- max_over_time will be 0 if all scrapes failed (which means the 'standard' 
failure alert should have triggered)

Therefore, this should alert if any scrape failed over 5 minutes, unless 
all scrapes failed over 5 minutes.

There is a boundary condition where if the scraping fails for approximately 
5 minutes you're not sure if the standard failure alert would have 
triggered. Hence it might need a bit of tweaking for robustness. To start 
with, just make it over 6 minutes:

expr: min_over_time(up[6m]) == 0 unless max_over_time(up[6m]) == 0
for: 6m

That is, if max_over_time[6m] is zero, we're pretty sure that a standard 
alert will have been triggered by then.

I'm still not quite convinced about the "for: 6m" and whether we might lose 
an alert if there were a single failed scrape. Maybe this would be more 
sensitive:

expr: min_over_time(up[8m]) == 0 unless max_over_time(up[6m]) == 0
for: 7m

but I think you might get some spurious alerts at the *end* of a period of 
downtime.

On Tuesday, 9 May 2023 at 02:29:40 UTC+1 Christoph Anton Mitterer wrote:

> Hey.
>
> I have an alert rule like this:
>
> groups:
>   - name:   alerts_general
> rules:
> - alert: general_target-down
>   expr: 'up == 0'
>   for:  5m
>
> which is intended to notify about a target instance (respectively a 
> specific exporter on that) being down.
>
> There are also routes in alertmanager.yml which have some "higher" periods 
> for group_wait and group_interval and also distribute that resulting alerts 
> to the various receivers (e.g. depending on the instance that is affected).
>
>
> By chance I've noticed that some of our instances (or the networking) seem 
> to be a bit unstable and every now and so often, a single scrape or some 
> few fail.
>
> Since this does typically not mean that the exporter is down (in the above 
> sense) I wouldn't want that to cause a notification to be sent to people 
> responsible for the respective instances.
> But I would want to get one sent, even if only a single scrape fails, to 
> the local prometheus admin (me ^^), so that I can look further, what causes 
> the scrape failures.
>
>
>
> My (working) solution for that is:
> a) another alert rule like:
> groups:
>   - name: alerts_general_single-scrapes
> interval: 15s
> rules:
> - alert: general_target-down_single-scrapes  
>   expr: 
> 'up{instance!~"(?i)^.*\\.garching\\.physik\\.uni-muenchen\\.de$"} == 0'
>   for:  0s
>
> (With 15s being the smallest scrape time used by any jobs.)
>
> And a corresponding alertmanager route like:
>   - match:
>   alertname: general_target-down_single-scrapes
> receiver:   admins_monitoring_no-resolved
> group_by:   [alertname]
> group_wait: 0s
> group_interval: 1s
>
>
> The group_wait: 0s and group_interval: 1s seemed necessary, cause despite 
> of the for: 0s, it seems that alertmanager kind of checks again before 
> actually sending a notification... and when the alert is gone by then 
> (because there was e.g. only one single missing scrape) it wouldn't send 
> anything (despite the alert actually fired).
>
>
> That works so far... that is admins_monitoring_no-resolved get a 
> notification for every single failed scrape while all others only get them 
> when they fail for at least 5m.
>
> I even improved the above a bit, by clearing the alert for single failed 
> scrapes, when the one for long-term down starts firing via something like:
>   expr: '( up{instance!~"(?i)^.*\\.ignored\\.hosts\\.example\\.org$"} 
> == 0 )  unless on (instance,job)  ( ALERTS{alertname="general_target-down", 
> alertstate="firing"} == 1 )'
>
>
> I wondered wheter this can be done better?
>
> Ideally I'd like to get notification for 
> general_target-down_single-scrapes only sent, if there would be no one for 
> general_target-down.
>
> That is, I don't care if the notification comes in late (by the above ~ 
> 5m), it just *needs* to come, unless - of course - the target is "really" 
> down (that is when general_target-down fires), in which case no 
> notification should go out for general_target-down_single-scrapes.
>
>
> I couldn't think of an easy way to get that. Any ideas?
>
>
> Thanks,
> Chris.
>
