Federation is just scraping: the central Prometheus server scrapes the
other's /federate endpoint.
Scraping only collects metrics.
Alerts are completely separate:
- alerting rules are evaluated periodically inside each Prometheus server
- if they are triggered, then alerts are *pushed* out to the configured
Alertmanager(s)
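For reference, a minimal federation scrape job on the central server could look like this (a sketch only; the job name, target address, and match[] selector are placeholders, not from this thread):

```yaml
# Sketch of a federation job on the central Prometheus server.
# 'downstream-prometheus:9090' and the match[] selector are placeholders.
scrape_configs:
  - job_name: 'federate'
    honor_labels: true          # keep the labels as set by the source server
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job=~".+"}'         # which series to pull; narrow this in practice
    static_configs:
      - targets:
          - 'downstream-prometheus:9090'
```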
Hi Brian,
Once again, thanks a lot for your assistance.
I went with using the metric_relabel_config you showed in your first post.
It worked nicely.
Cheers :)
Regards
Christian Oelsner
On Sunday, 20 November 2022 at 11:36:57 UTC+1, Brian Candler wrote:
> On Saturday, 19 November 2022 at
I'm trying to use the group_wait parameter in order to allow Alertmanager
to wait for all the alerts received from Prometheus, group them and send a
single notification.
I have the following configuration:
route:
  receiver: default-receiver
  group_by:
    - alertname
    - environment
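For reference, group_wait sits on the route itself; a sketch of how it could be added to a configuration like the one above (the 30s and 5m values are the Alertmanager defaults, shown purely for illustration):

```yaml
route:
  receiver: default-receiver
  group_by:
    - alertname
    - environment
  group_wait: 30s      # wait this long after the first alert of a new group,
                       # so later alerts in the same group are batched into
                       # one notification
  group_interval: 5m   # wait this long before notifying about new alerts
                       # added to an already-firing group
```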
I just figured it out. It was something in the firewall rules on my IPsec tunnel
between my two networks. I tried it from a local machine on that network and I
could hit the webpage!
From: prometheus-users@googlegroups.com On
Behalf Of Brian Candler
Sent: Monday, November 21, 2022 1:41 AM
Hi,
We are using JMX_exporter to export metrics to Prometheus.
Multiple connectors could be residing on workers on the same host, so it is not
possible to assign a port in a static config. We need the metrics for
different connectors clearly separated on different ports.
My question is whether it is
The UFW rule looks OK, as far as I can tell.
Status: active

     To                         Action      From
     --                         ------      ----
[ 1] 22/tcp                     ALLOW IN    Anywhere                   # SSH
[ 2] 9090/tcp                   ALLOW IN    Anywhere
Is it possible to get alerts from a federated Prometheus into the central
Prometheus, or only metrics?
On Tue, 22 Nov 2022, 12:41 Stuart Clark wrote:
> On 22/11/2022 05:53, Prashant Singh wrote:
> > Dear All,
> >
> > I am using a federated Prometheus, configured to a central Prometheus,
> > but
On 22 Nov 08:30, Nenad Rakonjac wrote:
> Hi everyone,
>
> Is it possible to use two receivers for one alert? I want to implement
> something like this:
>
> - match:
>     type: alert_alert
>   receiver: receiverOne
>
> - match:
>     type: alert_alert
>   receiver: receiverTwo
>
> receivers:
>
> -
Option 3: use child routes. This avoids duplicating the match conditions.
- match:
    type: alert_alert
  routes:
    - receiver: receiverOne
      continue: true
    - receiver: receiverTwo
The 'routes' branch is a separate subtree. Since each of the routes within
it has no 'match' conditions, they match every alert that enters the subtree.
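Putting the pieces together, a sketch of a full route tree using this option (the top-level 'default' receiver name is an assumption; receiverOne, receiverTwo, and the match condition follow the thread):

```yaml
route:
  receiver: default             # assumed top-level fallback receiver
  routes:
    - match:
        type: alert_alert
      routes:
        - receiver: receiverOne
          continue: true        # keep evaluating sibling routes after a match,
                                # so receiverTwo is also notified
        - receiver: receiverTwo
```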
Hi everyone,
Is it possible to use two receivers for one alert? I want to implement
something like this:
- match:
    type: alert_alert
  receiver: receiverOne
- match:
    type: alert_alert
  receiver: receiverTwo

receivers:
  - name: receiverOne
    email_configs:
      - to: 'email...@gmail.com'
        from:
Hi Stuart
Thank you so much! It was very useful! I got a lot of insights!
On Monday, November 21, 2022 at 5:25:52 AM UTC-3 Stuart Clark wrote:
> On 20/11/2022 22:53, Julio Leal wrote:
> > Hi everyone!
> > I'm doing a study about how much time we have in our prometheus
> instances.
> > First
Hi everyone
I'm trying to understand and plan for the end of life of my Prometheus instance.
I think my Prometheus will die as my number of time series increases and
I need more RAM.
How can I create a correlation between my time series growth and my RAM growth?
I have already tried to use:
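For what it's worth, one way to sketch such a correlation is with Prometheus's own self-monitoring metrics (hypothetical queries; prometheus_tsdb_head_series and process_resident_memory_bytes are standard metrics, but the job label and the 30-day window here are assumptions):

```promql
# Current number of in-memory (head) series:
prometheus_tsdb_head_series

# Resident memory of the Prometheus process itself:
process_resident_memory_bytes{job="prometheus"}

# Rough bytes-per-series ratio to extrapolate from:
process_resident_memory_bytes{job="prometheus"} / prometheus_tsdb_head_series

# Linear projection of memory use 30 days out, based on the last 30 days
# (2592000 seconds = 30 days):
predict_linear(process_resident_memory_bytes{job="prometheus"}[30d], 2592000)
```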