Sorry, I meant ServiceMonitor.
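For anyone finding this thread later, here is a rough sketch of where that relabeling ends up in a ServiceMonitor; the name, namespace, selector and port below are placeholders, not my actual config:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kube-state-metrics      # placeholder name
  namespace: monitoring         # placeholder namespace
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: kube-state-metrics
  endpoints:
  - port: http                  # placeholder port name
    relabelings:
    # drop the target labels that were causing the collision described below
    - action: labeldrop
      regex: (pod|service|endpoint|namespace)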
On Wednesday, August 26, 2020 at 9:35:04 PM UTC-5 Rodrigo Martinez wrote:
> I also just added
> relabelings:
> - action: labeldrop
>   regex: (pod|service|endpoint|namespace)
>
> since it looks like the up-to-date PrometheusRule uses this for
> kube-state-metrics.
>
>
>
> On Wednesday, August 26, 2020 at 9:24:38 PM UTC-5 Rodrigo Martinez wrote:
>
>> I have been updating old Prometheus rules and have noticed fewer errors.
>> However, some rules are taken from other components.
>>
>> Currently I have Ceph running, using the PrometheusRule from their
>> examples, and I see issues with this expression:
>>
>> kube_node_status_condition{condition="Ready",job="kube-state-metrics",status="true"}
>> * on (node) group_right()
>> max(label_replace(ceph_disk_occupation{job="rook-ceph-mgr"}, "node", "$1", "exported_instance", "(.*)")) by (node)
>>
>> On a cluster with no Thanos integration there are no issues, but with
>> Thanos I see a label collision.
>>
>> When looking at the left side of the expression I see that it returns the
>> metric twice:
>>
>> kube_node_status_condition{condition="Ready",instance="0.0.0.0:8080",job="kube-state-metrics",node="test-node",status="true"}
>>
>> kube_node_status_condition{condition="Ready",endpoint="http",instance="0.0.0.0:8080",job="kube-state-metrics",namespace="monitoring",node="test-node",pod="kube-state-metrics-567789848b-9d77w",service="kube-state-metrics",status="true"}
>>
>> I have to see what I'm doing wrong on my end, but if the cause is
>> noticeable from what is shown above, do let me know. This issue was not
>> seen until I enabled the Thanos sidecar.
>>
>> Thanks
>>
>> On Tuesday, August 25, 2020 at 9:49:36 AM UTC-5 [email protected] wrote:
>>
>>> - source_labels: [__meta_kubernetes_service_name]
>>>   separator: ;
>>>   regex: (.*)
>>>   target_label: job
>>>   replacement: ${1}
>>>   action: replace
>>> - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
>>>   separator: ;
>>>   regex: (.+)
>>>   target_label: job
>>>   replacement: ${1}
>>>   action: replace
>>>
>>>
>>> I'm not sure what you're attempting to do here, but it is risky to mess
>>> with the "job" label. This is the one Prometheus itself sets to identify
>>> the scrape job where the metric originated, and if you end up scraping the
>>> same target multiple times from different jobs, this label ensures that
>>> the time series have unique label sets. (See the sketch after the quoted
>>> text for a safer alternative.)
>>>
>>
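Following up on the note above about the "job" label: if the intent was just to make the service name queryable on the series, a safer variant would be to write it into its own label and leave "job" alone. This is only a sketch, and the target label name "service_name" is made up for illustration:

- source_labels: [__meta_kubernetes_service_name]
  separator: ;
  regex: (.*)
  target_label: service_name   # any label other than "job"
  replacement: ${1}
  action: replace

That keeps the job label as the unique identifier of the scrape job, so targets scraped by more than one job still end up with distinct label sets.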