The best way to do this is to use a recording rule.
Something like this:
- record: global:pduInputPowerConsumption:sum
  expr: |
    sum(pduInputPowerConsumption{job="apc_pdus"} * 10 or on () vector(0))
    +
    sum(pduInputPowerConsumption{job!="apc_pdus"} or on () vector(0))
This is also
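For reference, a minimal sketch of a complete rule file this could live in, assuming it is loaded via rule_files in prometheus.yml; the group name pdu_power is made up. The "or on () vector(0)" guards make each side fall back to 0 when it matches no series, so the total doesn't go missing:

  groups:
    - name: pdu_power
      rules:
        - record: global:pduInputPowerConsumption:sum
          expr: |
            sum(pduInputPowerConsumption{job="apc_pdus"} * 10 or on () vector(0))
            +
            sum(pduInputPowerConsumption{job!="apc_pdus"} or on () vector(0))

The recorded series can then be graphed or alerted on directly as global:pduInputPowerConsumption:sum.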
On Sat, 23 Oct 2021 at 00:44, Ben Cohee wrote:
> This has been bothering me for a while, and hopefully someone has a
> solution I am simply overlooking.
>
> I use snmp_exporter to pull power metrics from a bunch of different PDU
> vendors (APC, Raritan, Geist, ServerTech, etc). I have one
Without any more info, it looks like someone or something is abusing your
Prometheus server's API endpoint, or something has changed such that your
API endpoint is very slow.
I would look at the "prometheus_http_request_duration_seconds" histogram to
find out if there is a specific API endpoint that
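As a sketch (not from this thread), a couple of queries against that
histogram, assuming the standard "handler" label Prometheus puts on its own
HTTP metrics:

  # Top 5 API handlers by request rate over the last 5 minutes
  topk(5, sum by (handler) (rate(prometheus_http_request_duration_seconds_count[5m])))

  # Average request latency per handler
    sum by (handler) (rate(prometheus_http_request_duration_seconds_sum[5m]))
  /
    sum by (handler) (rate(prometheus_http_request_duration_seconds_count[5m]))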
Thanos uses the remote read API; those look like direct query_range
requests, i.e. graph queries.
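One way to confirm which it is from the same histogram, assuming the usual
handler values on Prometheus's self-metrics:

  # Remote read (what Thanos uses) vs. direct range queries (graphs)
  rate(prometheus_http_request_duration_seconds_count{handler="/api/v1/read"}[5m])
  rate(prometheus_http_request_duration_seconds_count{handler="/api/v1/query_range"}[5m])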
On Sat, Oct 23, 2021 at 11:41 AM weidong zou <5493924...@gmail.com> wrote:
>
> Thanks, really like this.
>
> [image: err2.PNG]
>
> I added the Thanos sidecar in Prometheus; I might have to check the
> version: prometheus:v2.28.1
Suddenly this problem occurred in my Prometheus, causing memory usage to
burst. Something like this:

  level=error ts=2021-10-23T08:31:39.022Z caller=api.go:1491 component=web msg="error writing response" bytesWritten=0 err="write tcp
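Not from this thread, but if heavy queries turn out to be the cause, one way
to cap their impact is via Prometheus's query flags; the values below are
just the defaults, shown for illustration:

  prometheus \
    --query.timeout=2m \
    --query.max-concurrency=20 \
    --query.max-samples=50000000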