Just remove the `job` selector from your query, and the rule will alert for all jobs.
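For example, a sketch of the rule from the question with the `job` matcher dropped — `avg by(instance)` already produces one series per node, so a single rule covers every scraped target. (The threshold `80` here is just a stand-in for the asker's placeholder "a".)

```yaml
groups:
  - name: cpu
    rules:
      - alert: cpu_utilization
        # No job="..." matcher: matches node_cpu_seconds_total from every job/node.
        expr: 100 - (avg by(instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
        labels:
          severity: critical
        annotations:
          summary: CPU utilization on {{ $labels.instance }} has crossed 80%
```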
On 17 April 2021 10:40:57 BST, akshay sharma <[email protected]> wrote:

> In my setup, Prometheus is monitoring multiple nodes simultaneously, say, x, y, z.
> I want to raise alerts once CPU utilization exceeds "a" for each of the nodes.
>
> Below is the alert rule:
>
> alert: cpu_utilization
> expr: 100 - (avg by(instance) (irate(node_cpu_seconds_total{job="--",mode="idle"}[5m])) * 100) > a
> labels:
>   severity: critical
> annotations:
>   summary: CPU utilization has crossed a%
>
> QUERY:
> 1) How can I use the same rule for multiple nodes/jobs? Is there any way to update job names dynamically? I want to avoid multiple alert rules for each job.
>
> Thanks,
>
> --
> You received this message because you are subscribed to the Google Groups "Prometheus Users" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
> To view this discussion on the web visit https://groups.google.com/d/msgid/prometheus-users/CAOrgXNK64LRNziLiOsYJtLxKuOzqzm%3DLKXv6LK1xE0UgbSqWTA%40mail.gmail.com.

