Another option: if you cannot reach Slack from the individual environments, but you can provision some other global endpoint that is reachable from them, you could have the local Alertmanagers send all of their alerts to a webhook receiver (https://prometheus.io/docs/alerting/latest/configuration/#webhook_config) that you implement on the other side, and which can then do anything. Alternatively, for the Slack receiver you can configure the "api_url" field, and perhaps point it at a reachable proxy address of yours that simply forwards the request on to Slack.
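[Editor's note: a rough alertmanager.yml sketch of both variants suggested above. The receiver names, the internal gateway host "alert-gw.internal", and the Slack URL path are placeholders, not anything from the thread.]

```yaml
# alertmanager.yml (sketch) -- assumes an internal gateway host
# "alert-gw.internal" that is reachable from every environment.
route:
  receiver: internal-gateway

receivers:
  # Variant 1: forward all alerts to a self-hosted webhook receiver.
  - name: internal-gateway
    webhook_configs:
      - url: http://alert-gw.internal:9095/alerts
        send_resolved: true

  # Variant 2: keep the Slack receiver, but point api_url at a proxy
  # of yours that relays the request to hooks.slack.com.
  - name: slack-via-proxy
    slack_configs:
      - api_url: https://alert-gw.internal/slack-proxy/T000/B000/XXXX
        channel: '#alerts'
        send_resolved: true
```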
On Sun, Jan 17, 2021 at 10:41 AM Julius Volz <[email protected]> wrote:

> s/each location's Alertmanager/each location's Prometheus/
>
> On Sun, Jan 17, 2021 at 10:40 AM Julius Volz <[email protected]> wrote:
>
>> Hi,
>>
>> Indeed, that's not ideal: annotations and the like will only be
>> available in the source Prometheus servers (and then pushed to the
>> Alertmanagers). I'm not sure what exactly your network constraints
>> are, but they seem to prohibit the Alertmanager in each location from
>> talking directly to Slack. Would they also prohibit each location's
>> Alertmanager additionally pushing to a global Alertmanager? If that's
>> possible, you could point each location's Prometheus to both a local
>> and a global Alertmanager; both would receive all alerting info and
>> would be able to route alerts differently.
>>
>> Julius
>>
>> On Fri, Jan 15, 2021 at 9:39 AM 'Dennis Murtic' via Prometheus Users
>> <[email protected]> wrote:
>>
>>> Hi all,
>>>
>>> We currently run Prometheus and Alertmanager in several different
>>> environments. These environments are protected by certain network
>>> restrictions, so we need to use PushProx. To collect data from all
>>> environments, we use a "global" Prometheus server that federates from
>>> each environment's Prometheus server.
>>> Alerts are emailed directly from the environments, which means the
>>> alerting rules are also configured per environment. Now we would like
>>> to send the alerts from all environments to one of our Slack channels
>>> as well. Because of the network restrictions, the environments cannot
>>> send their alerts directly to our Slack channel.
>>> One of our ideas is to configure an alert rule on the "global"
>>> Prometheus server that looks like this:
>>>
>>>     - alert: AlertName
>>>       expr: ALERTS{alertname!="AlertName",alertstate="firing",severity="critical"}
>>>       labels:
>>>         severity: critical
>>>       annotations:
>>>         alertname: '{{ $labels.alertname }}'
>>>         summary: Alert {{ $labels.alertname }} fired.
>>>
>>> This seems to work, but unfortunately it is not the best solution:
>>> alerts often flap between the firing and resolved states, and the
>>> original summary annotation is not available here.
>>>
>>> Does anyone have any ideas on how we can improve this?
>>> Thanks!
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "Prometheus Users" group.
>>> To unsubscribe from this group and stop receiving emails from it,
>>> send an email to [email protected].
>>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/prometheus-users/CAFERo3qk1zjB3krqNePeomzgvBXdVzTyTVd%3DkD9SbCJuM3WWhA%40mail.gmail.com

--
Julius Volz
PromLabs - promlabs.com

--
To view this discussion on the web visit
https://groups.google.com/d/msgid/prometheus-users/CAObpH5xQidiNmshqvkMF9aqf1weNwjhqP8t%3DfV2m79itpmcxaA%40mail.gmail.com
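[Editor's note: the "webhook receiver that can then do anything" from the top of the thread could be sketched roughly as below. The port, the Slack webhook URL, and the handler path are placeholders; the payload fields follow Alertmanager's documented webhook format.]

```python
# Hypothetical gateway: accepts Alertmanager's webhook JSON and relays
# a short summary of each alert to a Slack incoming webhook.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder


def alerts_to_slack_text(payload):
    """Render Alertmanager's webhook payload as one Slack message text."""
    lines = []
    for alert in payload.get("alerts", []):
        name = alert.get("labels", {}).get("alertname", "unknown")
        summary = alert.get("annotations", {}).get("summary", "")
        lines.append(f"[{alert.get('status', '?')}] {name}: {summary}")
    return "\n".join(lines)


class AlertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read Alertmanager's POST body and translate it for Slack.
        body = self.rfile.read(int(self.headers["Content-Length"]))
        text = alerts_to_slack_text(json.loads(body))
        req = Request(
            SLACK_WEBHOOK_URL,
            data=json.dumps({"text": text}).encode(),
            headers={"Content-Type": "application/json"},
        )
        urlopen(req)  # relay to Slack from the one network that can reach it
        self.send_response(200)
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("", 9095), AlertHandler).serve_forever()
```

Because the receiver sees the full webhook payload (labels, annotations, status), the flapping and missing-summary problems of the ALERTS-based meta-rule do not arise here.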

