The best way, in my opinion, is to switch to meaningful instance labels
<https://www.robustperception.io/controlling-the-instance-label>. That is:
arrange at scrape time for your metrics to have {instance="foo"} instead of
{instance="172.16.17.100:9100"}. Then the label you want is already there.
Doing this requires a bit of relabelling in the scrape job. Here's the
config I use:
- job_name: node
  scrape_interval: 1m
  file_sd_configs:
    - files:
        - /etc/prometheus/targets.d/node_targets.yml
  metrics_path: /metrics
  relabel_configs:
    - source_labels: [__address__]
      regex: '([^ ]+)'    # name or address only
      target_label: instance
    - source_labels: [__address__]
      regex: '(.+) (.+)'  # name address
      target_label: instance
      replacement: '${1}'
    - source_labels: [__address__]
      regex: '(.+) (.+)'  # name address
      target_label: __address__
      replacement: '${2}'
    - source_labels: [__address__]
      target_label: __address__
      replacement: '${1}:9100'
Now in your targets file you can put a plain DNS name or IP address
(without the :9100 suffix), and this will become the instance label. Or:
you can put a name followed by a space and a DNS name or IP address, like
this:
- labels:
    netbox_type: device
  targets:
    - foo 172.16.17.100
    - bar 172.16.17.101
The target to be scraped will be "172.16.17.100:9100", but the instance
label will be "foo".
Other approaches are significantly more difficult. You can do a PromQL
many-to-one "join" between your alerting expression and node_uname_info,
matching on the "instance" label, to add other labels from node_uname_info
to your alert. But this means that every alerting expression becomes
significantly more complex. For the technique, see:
https://www.robustperception.io/how-to-have-labels-for-machine-roles
https://www.robustperception.io/exposing-the-software-version-to-prometheus
https://www.robustperception.io/left-joins-in-promql
https://prometheus.io/docs/prometheus/latest/querying/operators/#many-to-one-and-one-to-many-vector-matches
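As a rough, untested sketch of what that join looks like for the rule in your
message (assuming node_uname_info is scraped from the same targets, so the
instance labels match):

  (
    100 - (
      node_memory_MemAvailable_bytes{job="node-exporter"} * 100
        / node_memory_MemTotal_bytes{job="node-exporter"}
    ) > 50
  )
  * on(instance) group_left(nodename)
    node_uname_info{job="node-exporter"}

node_uname_info always has the value 1, so the multiplication leaves the
percentage unchanged; the join just copies the nodename label onto the
result, which you can then reference as {{ $labels.nodename }} in the
annotation.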
On Thursday, 20 January 2022 at 17:21:52 UTC [email protected] wrote:
> Hi All,
>
> Env:
> K8s: 1.24
> Helm 3.0
>
> I have a Prometheus alert, and I need the node name in the labels
>
> - alert: example-alert
>   annotations:
>     description: Memory on node currently at % is under pressure
>     summary: Memory usage is under pressure, system may become unstable.
>   expr: |
>     100 - ((node_memory_MemAvailable_bytes{job="node-exporter"} * 100) /
>     node_memory_MemTotal_bytes{job="node-exporter"}) > 50
>   for: 2m
>   labels:
>     nodeName:
>     severity: warning
>
> {endpoint="metrics",instance="172.16.17.100:9100",job="node-exporter",namespace="monitoring",pod="mypromoperator-prometheus-node-exporter-gg5nl",service="mypromoperator-prometheus-node-exporter"}
> 67.09431138997289
> {endpoint="metrics",instance="172.16.17.101:9100",job="node-exporter",namespace="monitoring",pod="mypromoperator-prometheus-node-exporter-9mfn2",service="mypromoperator-prometheus-node-exporter"}
> 52.7483247365166e
>
> but I want to see the node name in the query. How do I configure the alert
> so that I will get the node name?
>
>
> Code: https://github.com/rajendar38/prometheus
>
>
>