If the scrape job is removing duplicate targets, then try giving them
distinct labels as I originally suggested:
- labels:
    subinstance: "1"
  targets:
    - pfsense.oberndorf.ca:443
- labels:
    subinstance: "2"
  targets:
    - pfsense.oberndorf.ca:443
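For context, a hedged sketch of how that static_configs block might sit inside a full scrape job (the job name, scheme, and interval here are assumptions, not taken from the thread):

```yaml
scrape_configs:
  - job_name: pfsense          # hypothetical job name
    scheme: https
    scrape_interval: 60s       # matches the 60s interval mentioned in the thread
    static_configs:
      - labels:
          subinstance: "1"
        targets:
          - pfsense.oberndorf.ca:443
      - labels:
          subinstance: "2"
        targets:
          - pfsense.oberndorf.ca:443
```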
Hello,
it's only working partly, I think. If I add the same target several times to
the same job, then Prometheus treats targets with exactly the same naming as one.
This results in a single target in Prometheus' web UI target list, and tcpdump
confirms only one scrape per 60s:
- targets:
-
(Thinks: maybe it's *not* necessary to apply distinct labels? This feels
wrong somehow, but I can't pinpoint exactly why it would be bad)
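The distinct labels do matter: a minimal sketch of the deduplication behaviour described above (this models the idea, it is not Prometheus internals) shows that targets whose complete label sets are identical collapse into one, which is why a distinguishing label such as subinstance keeps both alive:

```python
# Sketch (not Prometheus internals): within one scrape job, targets whose
# complete label sets are identical are collapsed into a single target.
def dedupe(targets):
    seen = set()
    unique = []
    for labels in targets:
        key = tuple(sorted(labels.items()))
        if key not in seen:
            seen.add(key)
            unique.append(labels)
    return unique

# Same address listed twice with no distinguishing label: collapses to one.
same = [{"__address__": "pfsense.oberndorf.ca:443"},
        {"__address__": "pfsense.oberndorf.ca:443"}]

# Same address, but a distinct "subinstance" label keeps both targets apart.
distinct = [{"__address__": "pfsense.oberndorf.ca:443", "subinstance": "1"},
            {"__address__": "pfsense.oberndorf.ca:443", "subinstance": "2"}]

print(len(dedupe(same)))      # -> 1
print(len(dedupe(distinct)))  # -> 2
```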
On Tuesday 9 January 2024 at 14:43:51 UTC Brian Candler wrote:
Unfortunately, the timeout can't be longer than the scrape interval,
firstly because this would require overlapping scrapes, and secondly the
results could be returned out-of-order: e.g.
xx:yy:00 scrape 1: takes 25 seconds, gives result at xx:yy:25
xx:yy:15 scrape 2: takes 5 seconds, gives result at xx:yy:20, before scrape 1's result
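The arithmetic above can be checked with a tiny sketch (the durations are the hypothetical values from the example):

```python
# Sketch of the out-of-order problem above: with a 15s interval, a scrape
# allowed to run for 25s finishes after the next scrape's 5s run.
# Times are seconds after the xx:yy:00 baseline in the example.
scrapes = [("scrape 1", 0, 25),   # starts xx:yy:00, takes 25s
           ("scrape 2", 15, 5)]   # starts xx:yy:15, takes 5s

finish_order = sorted(scrapes, key=lambda s: s[1] + s[2])
for name, start, duration in finish_order:
    print(f"{name} finishes at xx:yy:{start + duration:02d}")
# scrape 2 finishes at xx:yy:20, scrape 1 at xx:yy:25 -> results out of order.
```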