Hi All,

While working on some alert rules I noticed that the metric
kube_pod_container_info has wrong values in its node label. This label
is set by the scrape/relabel config below, which looks right. In
contrast, another metric from the same job, kube_pod_info, has the
correct node label value. The node label on kube_pod_container_info
contains the name of the node where Prometheus is running (it returns
only one node value for all the pods in the cluster).

  # Copy the discovered pod's node name from Kubernetes service
  # discovery into the "node" label of the scraped target.
  - source_labels: [__meta_kubernetes_pod_node_name]
    separator: ;
    regex: (.*)
    target_label: node
    replacement: $1
    action: replace
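
A quick way to see the difference (just a sketch; these are the
standard kube-state-metrics metric names, and the exact label values
will vary per cluster):

  # Expect roughly one series per node in the cluster; for
  # kube_pod_container_info this returns only a single node.
  count by (node) (kube_pod_container_info)

  # The same aggregation over kube_pod_info shows every node correctly.
  count by (node) (kube_pod_info)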

Another observation is that several metrics under the
kubernetes-service-endpoints job (such as
kube_pod_container_status_restarts_total) are also reporting only one
node value for all the pods in the cluster.
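
The same check on one of those metrics shows the same single-node
result (again just a sketch):

  count by (node) (kube_pod_container_status_restarts_total)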

This is deployed using the Helm chart, version 19.3.1, with the
default scrape config.

Any suggestions/recommendations?
-- 
Regards,
Murali Krishna Kanagala
