You might also need to adjust the firewall at DigitalOcean to allow your
browser to connect to Prometheus.
No additional setup is needed to get the UI; Prometheus serves it over
HTTP by default.
Bryan
On Monday 19 February 2024 at 07:39:24 UTC Brian Candler wrote:
> localhost:9090 is what
Grafana can alert on 200 different data sources, most of which do not
support PromQL, so it needs to offer more features on the client side.
On Tuesday 6 February 2024 at 21:18:14 UTC Andrew Dedesko wrote:
> That's a good point, Ben and Brian! And thank you both for clarifying!
>
> On
You could look at the panels on the Prombench dashboard, e.g.
http://prombench.prometheus.io/grafana/d/7gmLoNDmz/prombench?orgId=1=12304=1705949079041=1705955763080
The four 'timings' panels show different parts of queries; the overall CPU
and memory metrics are also useful.
Bryan
On Friday
I would recommend you stop using irate().
With 4 samples per minute, irate(...[1m]) discards half your information.
This can lead to artefacts.
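As a rough sketch of the difference (the metric name and 15-second scrape interval here are assumptions, not from the thread):

```promql
# Assumed: http_requests_total scraped every 15s (4 samples per minute).

# irate() looks only at the LAST TWO samples in each window, so with a 1m
# window it discards half the data and can produce spiky artefacts:
irate(http_requests_total[1m])

# rate() uses every sample in the window, giving a smoother and more
# trustworthy per-second rate:
rate(http_requests_total[2m])
```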
There is probably some instability in the underlying samples, which is
worth investigating.
An *instant* query like
Assuming this is Alertmanager, per your other message, you can use
--storage.path to pick a different directory, one which your user does have
permission to write to.
It seems this is not mentioned in the documentation.
Bryan
On Wednesday 3 January 2024 at 20:43:34 UTC sobhma...@gmail.com wrote:
>
Yes, it is feasible.
One Prometheus in each AZ, scraping targets in that AZ, will minimize
inter-AZ traffic, but give you a single point of failure in each AZ.
Two Prometheus in each AZ, scraping targets in that AZ, will also minimize
inter-AZ traffic, and be resilient to one random*
I think what you are looking for is to do a "join", which in PromQL terms
is some binary expression.
See for instance this
article: https://www.robustperception.io/left-joins-in-promql/
You'd need to do a label_replace from "DBInstanceIdentifier" to "instance"
Also you can do `group by` to
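A sketch of that label_replace join (the metric names, job label, and regex below are illustrative assumptions, not from the thread):

```promql
# label_replace copies DBInstanceIdentifier into an "instance" label on
# the right-hand side so both vectors share a matching label:
aws_rds_cpuutilization_average
  * on (instance) group_left
    label_replace(
      up{job="rds"}, "instance", "$1", "DBInstanceIdentifier", "(.*)"
    )
```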
The trick is to do a "join", which in PromQL terms can be an arithmetic
expression.
Since the value of the first set of series is always 1, you can join by
multiplying the two together.
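A minimal sketch of the multiplication join, assuming kube-state-metrics-style metrics (names and the label value are illustrative):

```promql
# kube_namespace_labels is an "info" metric whose value is always 1, so
# multiplying filters the left-hand side down to the selected namespaces
# without changing its values:
sum by (namespace) (container_memory_usage_bytes)
  * on (namespace)
    kube_namespace_labels{label_team="payments"}
```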
On Monday, 4 December 2023 at 14:12:57 UTC chris...@gmail.com wrote:
> *Grab the namespaces we want to
after few HEAD
>>> truncation cycles but the memory now could be even higher than the case 1
>>> above.
>>>
>>> What confuses me here is that Prometheus memory does not return to the
>>> point before restarting the target even the time
> On 30 Nov 2023, at 07:20, Vu Nguyen wrote:
>
>
> Thank you very much for your support.
>
> > are you still deleting the WAL?
>
> No, I did not delete WAL at all. What I did was restarting a pod that has
> 500K time series exposed.
>
Ah, if there is any label different (eg the pod
> On Monday, November 6, 2023 at 10:45:32 PM UTC+7 Bryan Boreham wrote:
>
>> I think this issue is relevant:
>> https://github.com/prometheus/prometheus/issues/12286
>>
>> I didn't follow your description of the symptoms;
>>
modern replacements for the original
> Federation method.
>
> On Wed, Nov 22, 2023 at 1:01 PM Bryan Boreham wrote:
>
>> Federation is a bit of a neglected feature. The Thanos project is rather
>> more popular as a way to aggregate data from multiple Prometheus.
>>
>>
rate(container_cpu_usage_seconds_total) is in cpu-seconds per second, or
cpus.
container_spec_cpu_quota / container_spec_cpu_period is also in cpus.
Therefore the ratio is unitless, just a fraction.
To see it as a percentage, multiply by 100.
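Putting that together (the 5m rate window is an assumption; pick one suited to your scrape interval):

```promql
# Numerator and denominator are both in units of CPUs, so the ratio is a
# unitless fraction; multiply by 100 for a percentage of quota:
100 *
  rate(container_cpu_usage_seconds_total[5m])
/
  (container_spec_cpu_quota / container_spec_cpu_period)
```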
Bryan
On Wednesday, 22 November 2023 at 13:10:35
Federation is a bit of a neglected feature. The Thanos project is rather
more popular as a way to aggregate data from multiple Prometheus.
(Other projects also exist which can let you centralise metrics storage)
Bryan
On Thursday, 16 November 2023 at 15:51:12 UTC hannesst...@gmail.com wrote:
I believe remote-write in Prometheus will send samples if the outgoing
buffer is full or the deadline is reached.
So increasing both max_samples_per_send and batch_send_deadline in
queue_config should work.
You should be able to increase the number of parallel sends via min_shards,
so it
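A sketch of those queue_config settings in prometheus.yml (the URL and the specific values are placeholders, not tuned recommendations):

```yaml
remote_write:
  - url: https://remote-storage.example.com/api/v1/write
    queue_config:
      max_samples_per_send: 5000    # larger batches per outgoing request
      batch_send_deadline: 10s      # wait longer before flushing a partial batch
      min_shards: 4                 # more parallel senders
```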
Grafana Agent has a "cluster mode" which will shard targets across agents
so that each target has a single sender, client-side.
This is currently in beta.
https://grafana.com/docs/agent/latest/flow/concepts/clustering/
[disclosure: I work for Grafana Labs]
Bryan
On Friday, 4 November 2022 at 06:57:02 UTC navid.sh...@gmail.com
I think this issue is relevant:
https://github.com/prometheus/prometheus/issues/12286
I didn't follow your description of the symptoms;
> the memory goes up to 3.7Gi comparing to 2.5Gi
In your picture I see spikes at over 5Gi. The spikes are every 2 hours
which would tie in to head