Running Prometheus (prometheus:2.18.2) with the Thanos sidecar
(thanos:0.10.0).

I did notice that after adding the Thanos sidecar some metrics carry the
labels exported_namespace and exported_pod, so I am guessing those are
causing the duplicates.
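
For context, those exported_* prefixes are Prometheus's label-collision
handling, which honor_labels controls. A minimal sketch of the behavior
(job name and target are illustrative, not from my setup):

scrape_configs:
  - job_name: kube-state-metrics-example   # illustrative
    # honor_labels: false -> a scraped label that collides with a
    #   server-attached label (namespace, pod, ...) is renamed to
    #   exported_namespace, exported_pod, and so on.
    # honor_labels: true  -> the scraped label wins and no exported_*
    #   prefix is added.
    honor_labels: true
    static_configs:
      - targets: ['kube-state-metrics.monitoring.svc:8080']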

Currently I just have the Thanos sidecar running, enabled via the
Prometheus Operator. How do I verify the setup? I figured it might be a
configuration issue on my end, as I have not seen anyone else with this
problem.
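
For anyone checking the same thing, here is roughly what I believe the
wiring should look like (names, namespaces, and label values below are
illustrative assumptions, not copied from my cluster). The sidecar reads
from Prometheus, so prometheus.yml should not contain a remote_read entry
pointing back at Thanos, and external_labels should uniquely identify this
Prometheus:

# prometheus.yml (sketch)
global:
  external_labels:
    cluster: my-cluster                  # illustrative; must be unique
    prometheus_replica: prometheus-k8s-0 # per-replica label for dedup

# Sidecar container, roughly what the operator should generate (sketch):
- name: thanos-sidecar
  image: quay.io/thanos/thanos:v0.10.0
  args:
    - sidecar
    - --prometheus.url=http://localhost:9090  # read from local Prometheus
    - --tsdb.path=/prometheus                 # shared TSDB volume
    - --grpc-address=0.0.0.0:10901            # StoreAPI for Thanos Query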


For reference, this is the generated scrape configuration for the
kube-state-metrics job:

- job_name: monitoring/kube-state-metrics/0
  honor_labels: true
  honor_timestamps: true
  scrape_interval: 15s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - monitoring
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
    separator: ;
    regex: kube-state-metrics
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_endpoint_port_name]
    separator: ;
    regex: http
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
    separator: ;
    regex: Node;(.*)
    target_label: node
    replacement: ${1}
    action: replace
  - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
    separator: ;
    regex: Pod;(.*)
    target_label: pod
    replacement: ${1}
    action: replace
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: service
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: pod
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: job
    replacement: ${1}
    action: replace
  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
    separator: ;
    regex: (.+)
    target_label: job
    replacement: ${1}
    action: replace
  - separator: ;
    regex: (.*)
    target_label: endpoint
    replacement: http
    action: replace
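

The metric in the error, node_namespace_pod:kube_pod_info:, comes from the
kubernetes-mixin recording rules. As far as I can tell, newer versions of
the mixin wrap that rule in topk to tolerate exactly this kind of
duplicate series; a sketch of that shape (the exact expression differs by
mixin version):

groups:
  - name: node.rules
    rules:
      - record: 'node_namespace_pod:kube_pod_info:'
        # topk by (namespace, pod) (1, ...) keeps a single series per
        # (namespace, pod) pair even if duplicates appear on the input.
        expr: |
          topk by (namespace, pod) (1,
            max by (node, namespace, pod) (kube_pod_info{node!=""})
          )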




On Tue, Aug 25, 2020 at 6:46 AM Bartłomiej Płotka <[email protected]>
wrote:

> It's very rare that someone has Prometheus -> remote read -> Thanos; it's
> usually the Thanos sidecar connecting to the remote-read endpoint that
> Prometheus exposes.
>
> Rodrigo, can you confirm what situation you have?
>
> Kind Regards,
> Bartek Płotka (@bwplotka)
>
>
> On Tue, 25 Aug 2020 at 12:43, Julien Pivotto <[email protected]>
> wrote:
>
>>
>> Can we know which Prometheus version you are running?
>>
>>
>> On 25 Aug 13:42, Julien Pivotto wrote:
>> > On 25 Aug 12:12, Bartłomiej Płotka wrote:
>> > > Hey,
>> > >
>> > > Interesting. Can you double-check your rules: what are they asking
>> > > for? What queries are they making? It sounds like a typo in your rule
>> > > configuration, with duplicated matchers in the PromQL? Not sure how
>> > > this is related to Thanos if you see this in the Prometheus PromQL
>> > > log.
>> >
>> >
>> > If Thanos is configured as remote read, you can get the errors below
>> > because we see the metrics both with and without the external labels.
>> >
>> > >
>> > > Kind Regards,
>> > > Bartek Płotka (@bwplotka)
>> > >
>> > >
>> > > > On Mon, 24 Aug 2020 at 21:04, Rodrigo Martinez <[email protected]>
>> > > > wrote:
>> > >
>> > > >
>> > > > I have noticed that when enabling Thanos I now see this in the
>> > > > Prometheus logs:
>> > > >
>> > > > Error executing query: found duplicate series for the match group
>> > > > {namespace="monitoring", pod="kube-state-metrics-567789848b-9d77w"}
>> > > > on the right hand-side of the operation:
>> > > > [{__name__="node_namespace_pod:kube_pod_info:",
>> > > > namespace="monitoring", node="test01",
>> > > > pod="kube-state-metrics-567789848b-9d77w"},
>> > > > {__name__="node_namespace_pod:kube_pod_info:",
>> > > > namespace="monitoring", node="test01",
>> > > > pod="kube-state-metrics-567789848b-9d77w"}];many-to-many
>> > > > matching not allowed: matching labels must be unique on one side
>> > > >
>> > > > Primarily this comes from the kube-state-metrics job and other
>> > > > PrometheusRules for metrics that have honor_labels set to true.
>> > > >
>> > > > Just wondering, overall, how enabling Thanos now causes these
>> > > > errors.
>> > > >
>> > >
>> >
>> > --
>> > Julien Pivotto
>> > @roidelapluie
>>
>> --
>> Julien Pivotto
>> @roidelapluie
>>
>

