Hi,

I just asked the same question on IRC, but I don't know which is the best
place to get support, so I'll ask here as well :)

BTW, this is the IRC link:
https://matrix.to/#/!HaYTjhTxVqshXFkNfu:matrix.org/$16137341243277ijEwp:matrix.org?via=matrix.org

*The Question*

I'm seeing a behaviour that I'd very much like to understand; maybe you
can help me. We've got a K8s cluster with the Prometheus operator
(v0.35.1) installed. The Prometheus version is v2.11.0.

Istio is also installed in the cluster with the default "PERMISSIVE"
mode, meaning that every Envoy sidecar accepts plain HTTP traffic.
Everything is deployed in the default namespace, and every pod EXCEPT
prometheus/alertmanager/grafana is managed by Istio (i.e. the monitoring
stack is outside the mesh).
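
For reference, by "default PERMISSIVE mode" I mean the mesh-wide mTLS
setting; assuming a reasonably recent Istio (1.5+), it looks something
like this (a sketch, our actual resource may differ):

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    # PERMISSIVE: sidecars accept both mTLS and plain-text traffic
    mode: PERMISSIVE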

Prometheus can successfully scrape almost all of its targets (defined via
ServiceMonitors); everything works except for 3 or 4 targets that it
consistently fails to scrape.
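
For reference, here is roughly what the ServiceMonitor for one of the
failing targets (divolte, used in the examples below) looks like; I've
reconstructed it from the generated scrape config pasted further down,
so take the metadata as illustrative:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: divolte            # illustrative name
  namespace: default
spec:
  selector:
    matchLabels:
      app: divolte
      stack: livestreaming
  namespaceSelector:
    matchNames:
    - default
  endpoints:
  - port: http-metrics     # the endpoint port name kept by the relabel rules
    path: /metrics
    interval: 30s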

For example, in the Prometheus logs I can see:

level=debug ts=2021-02-19T11:15:55.595Z caller=scrape.go:927 component="scrape manager" scrape_pool=default/divolte/0 target=http://10.172.22.36:7070/metrics msg="Scrape failed" err="server returned HTTP status 503 Service Unavailable"


But if I log into the Prometheus pod, I can successfully reach the pod
that it is failing to scrape:

/prometheus $ wget -SqO /dev/null http://10.172.22.36:7070/metrics
  HTTP/1.1 200 OK
  date: Fri, 19 Feb 2021 11:27:57 GMT
  content-type: text/plain; version=0.0.4; charset=utf-8
  content-length: 75758
  x-envoy-upstream-service-time: 57
  server: istio-envoy
  connection: close
  x-envoy-decorator-operation: divolte-srv.default.svc.cluster.local:7070/*
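
I also double-checked which pods actually carry the istio-proxy sidecar
by listing each pod's containers (a generic kubectl one-liner, nothing
specific to our manifests):

# prints "<pod name> <tab> <container names>" for every pod in default
kubectl -n default get pods \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].name}{"\n"}{end}'

As expected, prometheus/alertmanager/grafana have no istio-proxy
container, while the divolte pod does.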


So plain HTTP from outside the mesh clearly reaches the target (note the
server: istio-envoy header above, i.e. the reply went through the pod's
sidecar). What am I missing? The scrape configuration is as follows:

- job_name: default/divolte/0
  honor_timestamps: true
  scrape_interval: 30s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - default
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_label_app]
    separator: ;
    regex: divolte
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_service_label_stack]
    separator: ;
    regex: livestreaming
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_endpoint_port_name]
    separator: ;
    regex: http-metrics
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
    separator: ;
    regex: Node;(.*)
    target_label: node
    replacement: ${1}
    action: replace
  - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
    separator: ;
    regex: Pod;(.*)
    target_label: pod
    replacement: ${1}
    action: replace
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: service
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: pod
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: job
    replacement: ${1}
    action: replace
  - separator: ;
    regex: (.*)
    target_label: endpoint
    replacement: http-metrics
    action: replace

Thank you!
