It generally takes some time for metrics to "show up", depending on the 
Prometheus instance configuration:

https://github.com/prometheus-operator/prometheus-operator/blob/787f54b055f797464d5832ace1c7f8318321c87a/Documentation/api.md

That may explain why restarting seems to fix the issue, as it forces a scrape 
cycle.

You may have to lower settings like scrapeInterval, evaluationInterval, etc. 
These questions are probably better asked of the Prometheus project, as this 
is already beyond my knowledge and the scope of the Camel K project.
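
For example (a sketch only; the 30s values are a guess and would need tuning 
for your cluster), lowering both intervals on your Prometheus resource would 
look like:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: cluster-monitoring
  namespace: cattle-prometheus
spec:
  # your current configuration sets both of these to 60s
  scrapeInterval: 30s
  evaluationInterval: 30s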

> On 29 Nov 2021, at 16:04, Roberto Camelk <betonetotbo.cam...@gmail.com> wrote:
> 
> Antonin, hi.
> 
> I'm having a problem with my Prometheus operator regarding PodMonitor
> discovery....
> 
> When I start a new integration, it's not discovered by the Prometheus
> operator! I need to restart the operator pod for my integration to be
> discovered and scraped.
> 
> This is my Prometheus object in k8s:
> 
> 
> apiVersion: monitoring.coreos.com/v1
> kind: Prometheus
> metadata:
>  annotations:
>    kubectl.kubernetes.io/last-applied-configuration: |
>      
> {"apiVersion":"monitoring.coreos.com/v1","kind":"Prometheus","metadata":{"annotations":{"project.cattle.io/namespaces":"[\"cattle-prometheus\",\"cattle-system\",\"kube-node-lease\",\"kube-public\",\"kube-sy$
>    project.cattle.io/namespaces: '["cattle-prometheus","cattle-system","kube-node-lease","kube-public","kube-system","security-scan"]'
>  generation: 23
>  labels:
>    app: prometheus
>    chart: prometheus-0.0.1
>    heritage: Tiller
>    io.cattle.field/appId: cluster-monitoring
>    release: cluster-monitoring
>  name: cluster-monitoring
>  namespace: cattle-prometheus
>  resourceVersion: "68969602"
> spec:
>  additionalAlertManagerConfigs:
>    key: additional-alertmanager-configs.yaml
>    name: prometheus-cluster-monitoring-additional-alertmanager-configs
>  additionalScrapeConfigs:
>    key: additional-scrape-configs.yaml
>    name: prometheus-cluster-monitoring-additional-scrape-configs
>  affinity:
>    podAntiAffinity:
>      preferredDuringSchedulingIgnoredDuringExecution:
>      - podAffinityTerm:
>          labelSelector:
>            matchLabels:
>              app: prometheus
>              prometheus: cluster-monitoring
>          topologyKey: kubernetes.io/hostname
>        weight: 100
>  arbitraryFSAccessThroughSMs: {}
>  baseImage: rancher/prom-prometheus
>  configMaps:
>  - prometheus-cluster-monitoring-nginx
>  containers:
>  - command:
>    - /bin/sh
>    - -c
>    - cp /nginx/run-sh.tmpl /var/cache/nginx/nginx-start.sh; chmod +x /var/cache/nginx/nginx-start.sh; /var/cache/nginx/nginx-start.sh
>    env:
>    - name: POD_IP
>      valueFrom:
>        fieldRef:
>          fieldPath: status.podIP
>    image: rancher/nginx:1.17.4-alpine
>    name: prometheus-proxy
>    ports:
>    - containerPort: 8080
>      name: nginx-http
>      protocol: TCP
>    resources:
>      limits:
>        cpu: 100m
>        memory: 100Mi
>      requests:
>        cpu: 50m
>        memory: 50Mi
>    securityContext:
>      runAsGroup: 101
>      runAsUser: 101
>    volumeMounts:
>    - mountPath: /nginx
>      name: configmap-prometheus-cluster-monitoring-nginx
>    - mountPath: /var/cache/nginx
>      name: nginx-home
>  - args:
>    - --proxy-url=http://127.0.0.1:9090
>    - --listen-address=$(POD_IP):9090
>    - --filter-reader-labels=prometheus
>    - --filter-reader-labels=prometheus_replica
>    command:
>    - prometheus-auth
>    env:
>    - name: POD_IP
>      valueFrom:
>        fieldRef:
>          fieldPath: status.podIP
>    image: rancher/prometheus-auth:v0.2.0
>    livenessProbe:
>      failureThreshold: 6
>      httpGet:
>        path: /-/healthy
>        port: web
>        scheme: HTTP
>      initialDelaySeconds: 300
>      periodSeconds: 10
>      successThreshold: 1
>      timeoutSeconds: 10
>    name: prometheus-agent
>    ports:
>    - containerPort: 9090
>      name: web
>      protocol: TCP
>    readinessProbe:
>      failureThreshold: 10
>      httpGet:
>        path: /-/ready
>        port: web
>        scheme: HTTP
>      initialDelaySeconds: 60
>      periodSeconds: 10
>      successThreshold: 1
>      timeoutSeconds: 10
>    resources:
>      limits:
>        cpu: 500m
>        memory: 200Mi
>      requests:
>        cpu: 100m
>        memory: 100Mi
>  evaluationInterval: 60s
>  externalLabels:
>    prometheus_from: arquitetura
>  listenLocal: true
>  logFormat: logfmt
>  logLevel: info
>  nodeSelector:
>    kubernetes.io/os: linux
>  podMetadata:
>    labels:
>      app: prometheus
>      chart: prometheus-0.0.1
>      release: cluster-monitoring
>  podMonitorNamespaceSelector: {}
>  podMonitorSelector:
>    matchLabels:
>      app: camel-k
>  replicas: 1
>  resources:
>    limits:
>      cpu: "1"
>      memory: 1000Mi
>    requests:
>      cpu: 750m
>      memory: 750Mi
>  retention: 12h
>  ruleNamespaceSelector:
>    matchExpressions:
>    - key: field.cattle.io/projectId
>      operator: In
>      values:
>      - p-gzj4v
>    - key: field.cattle.io/projectId
>      operator: In
>      values:
>      - p-gzj4v
>  ruleSelector:
>    matchExpressions:
>    - key: source
>      operator: In
>      values:
>      - rancher-alert
>      - rancher-monitoring
>  rules:
>    alert: {}
>  scrapeInterval: 60s
>  secrets:
>  - exporter-etcd-cert
>  securityContext:
>    fsGroup: 2000
>    runAsNonRoot: true
>    runAsUser: 1000
>  serviceAccountName: cluster-monitoring
>  serviceMonitorNamespaceSelector:
>    matchExpressions:
>    - key: field.cattle.io/projectId
>      operator: In
>      values:
>      - p-gzj4v
>    - key: field.cattle.io/projectId
>      operator: In
>      values:
>      - p-gzj4v
>  serviceMonitorSelector: {}
>  tolerations:
>  - effect: NoSchedule
>    key: cattle.io/os
>    operator: Equal
>    value: linux
>  version: v2.17.2
>  volumes:
>  - emptyDir: {}
>    name: nginx-home
> 
> On Fri, Nov 26, 2021 at 11:41 AM Roberto Camelk
> <betonetotbo.cam...@gmail.com> wrote:
>> 
>> Thanks again Antonin.
>> 
>> You saved my Black Friday!
>> 
>> On Fri, Nov 26, 2021 at 11:02 AM Antonin Stefanutti
>> <anto...@stefanutti.fr.invalid> wrote:
>>> 
>>> Great!
>>> 
>>> For the metrics, the following should be registered by the 
>>> MicroProfile Metrics Camel extension:
>>> 
>>> https://camel.apache.org/camel-quarkus/2.4.x/reference/extensions/microprofile-metrics.html#_usage
>>> 
>>> However, the final name is determined by the MicroProfile Metrics 
>>> specification, so for example the following metric:
>>> 
>>> camel.context.exchanges.completed.total
>>> 
>>> has its name translated to:
>>> 
>>> application_camel_context_exchanges_completed_total
>>> 
>>> To also check whether the metrics endpoint does have the metrics 
>>> registered, you can run:
>>> 
>>> $ kubectl exec deployment/<integration_name> -- curl -s 
>>> http://localhost:8080/q/metrics | grep 
>>> application_camel_route_exchanges_completed_total
>>> 
>>> # HELP application_camel_route_exchanges_completed_total The total number 
>>> of completed exchanges for a route or Camel Context
>>> # TYPE application_camel_route_exchanges_completed_total counter
>>> application_camel_route_exchanges_completed_total{camelContext="camel-1",routeId="route1"} 33.0
>>> 
>>>> On 26 Nov 2021, at 14:34, Roberto Camelk <betonetotbo.cam...@gmail.com> 
>>>> wrote:
>>>> 
>>>> Thanks a lot again Antonin. I owe you a beer or coffee!!!
>>>> 
>>>> This worked 100%.
>>>> 
>>>> To finish, another question: after this, Prometheus started scraping the
>>>> metrics from my running route pod, but...
>>>> 
>>>> Some metric IDs are not there, namely the metrics starting with
>>>> "org_apache_camel_" that I have seen in this Grafana dashboard sample:
>>>> https://github.com/weimeilin79/camel-k-example-prometheus/blob/ca347f0b8b702b84b7129dfcfcb3b84eff4e2f73/grafana/SampleCamelDashboard.json#L145
>>>> 
>>>> The other metrics (camel_k_* -
>>>> https://camel.apache.org/camel-k/next/observability/monitoring/operator.html#metrics)
>>>> are there, as are the JVM metrics
>>>> (https://github.com/eclipse/microprofile-metrics/blob/master/spec/src/main/asciidoc/required-metrics.adoc#required-metrics).
>>>> 
>>>> Is this correct? Why am I not getting those "org_apache_camel_" metrics?
>>>> 
>>>> On Fri, Nov 26, 2021 at 9:13 AM Antonin Stefanutti
>>>> <anto...@stefanutti.fr.invalid> wrote:
>>>>> 
>>>>> It's likely that you need to configure the Prometheus instance, by 
>>>>> editing the Prometheus resource, e.g.:
>>>>> 
>>>>> apiVersion: monitoring.coreos.com/v1
>>>>> kind: Prometheus
>>>>> metadata:
>>>>>   name: prometheus
>>>>> spec:
>>>>>   serviceAccountName: prometheus
>>>>>   podMonitorSelector:
>>>>>     matchLabels:
>>>>>       app: camel-k
>>>>>   podMonitorNamespaceSelector: {}
>>>>> 
>>>>> It's important to have podMonitorNamespaceSelector: {} to discover 
>>>>> PodMonitors in all the namespaces; otherwise only the Prometheus 
>>>>> resource's own namespace is searched.
>>>>> 
>>>>> You can also use the Prometheus trait to set the labels on the 
>>>>> Integration PodMonitor accordingly, e.g.:
>>>>> 
>>>>> $ kamel run -t prometheus.enabled=true -t 
>>>>> prometheus.pod-monitor-labels="app=camel-k"
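>>>>> 
>>>>> To double-check that the labels ended up on the Integration's 
>>>>> PodMonitor (a quick check, assuming you have kubectl access to the 
>>>>> cluster):
>>>>> 
>>>>> $ kubectl get podmonitors --all-namespaces --show-labels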
>>>>> 
>>>>> There are some examples in the Prometheus operator documentation:
>>>>> 
>>>>> https://github.com/prometheus-operator/prometheus-operator/tree/v0.52.1/example/user-guides/getting-started
>>>>> 
>>>>> And the troubleshooting guide at:
>>>>> 
>>>>> https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/troubleshooting.md
>>>>> 
>>>>> 
>>>>>> On 26 Nov 2021, at 12:32, Roberto Camelk <betonetotbo.cam...@gmail.com> 
>>>>>> wrote:
>>>>>> 
>>>>>> Antonin. Thanks !
>>>>>> 
>>>>>> But I continue stuck, please let me share some extra info about my
>>>>>> prometheus operator log:
>>>>>> 
>>>>>> level=debug ts=2021-11-26T11:12:05.837624346Z caller=operator.go:1840
>>>>>> component=prometheusoperator msg="filtering namespaces to select
>>>>>> PodMonitors from" namespaces=cattle-prometheus
>>>>>> namespace=cattle-prometheus prometheus=cluster-monitoring
>>>>>> 26/11/2021 08:12:05 level=debug ts=2021-11-26T11:12:05.837687534Z
>>>>>> caller=operator.go:1853 component=prometheusoperator msg="selected
>>>>>> PodMonitors" podmonitors= namespace=cattle-prometheus
>>>>>> prometheus=cluster-monitoring
>>>>>> 26/11/2021 08:12:05 level=debug ts=2021-11-26T11:12:05.942216811Z
>>>>>> caller=operator.go:1677 component=prometheusoperator msg="updating
>>>>>> Prometheus configuration secret skipped, no configuration change"
>>>>>> 26/11/2021 08:12:05 level=debug ts=2021-11-26T11:12:05.950980834Z
>>>>>> caller=operator.go:1776 component=prometheusoperator msg="filtering
>>>>>> namespaces to select ServiceMonitors from"
>>>>>> namespaces=cattle-prometheus,cattle-system,kube-node-lease,kube-public,security-scan,kube-system
>>>>>> namespace=cattle-prometheus prometheus=cluster-monitoring
>>>>>> 26/11/2021 08:12:05 level=debug ts=2021-11-26T11:12:05.951162973Z
>>>>>> caller=operator.go:1810 component=prometheusoperator msg="selected
>>>>>> ServiceMonitors"
>>>>>> servicemonitors=cattle-prometheus/grafana-cluster-monitoring,cattle-prometheus/exporter-kube-etcd-cluster-monitoring,cattle-prometheus/exporter-node-cluster-monitoring,cattle-prometheus/exporter-kube-controller-manager-cluster-monitoring,cattle-prometheus/exporter-kube-state-cluster-monitoring,cattle-prometheus/prometheus-cluster-monitoring,cattle-prometheus/prometheus-operator-monitoring-operator,cattle-prometheus/exporter-fluentd-cluster-monitoring,cattle-prometheus/exporter-kubelets-cluster-monitoring,cattle-prometheus/exporter-kube-scheduler-cluster-monitoring,cattle-prometheus/exporter-kubernetes-cluster-monitoring
>>>>>> namespace=cattle-prometheus prometheus=cluster-monitoring
>>>>>> 26/11/2021 08:12:05 level=debug ts=2021-11-26T11:12:05.977550133Z
>>>>>> caller=operator.go:1741 component=prometheusoperator msg="updated
>>>>>> tlsAssetsSecret" secretname=prometheus-cluster-monitoring-tls-assets
>>>>>> 26/11/2021 08:12:06 level=debug ts=2021-11-26T11:12:06.022196407Z
>>>>>> caller=operator.go:1169 component=prometheusoperator msg="new
>>>>>> statefulset generation inputs match current, skipping any actions"
>>>>>> 26/11/2021 08:12:26 level=debug ts=2021-11-26T11:12:26.321817491Z
>>>>>> caller=operator.go:734 component=prometheusoperator msg="PodMonitor
>>>>>> added"
>>>>>> 26/11/2021 08:12:27 level=debug ts=2021-11-26T11:12:27.755854021Z
>>>>>> caller=operator.go:748 component=prometheusoperator msg="PodMonitor
>>>>>> updated"
>>>>>> 26/11/2021 08:12:46 level=debug ts=2021-11-26T11:12:46.453112794Z
>>>>>> caller=operator.go:748 component=prometheusoperator msg="PodMonitor
>>>>>> updated"
>>>>>> 26/11/2021 08:17:35 level=debug ts=2021-11-26T11:17:35.194031009Z
>>>>>> caller=operator.go:759 component=prometheusoperator msg="PodMonitor
>>>>>> delete"
>>>>>> 
>>>>>> These last 4 lines are about my Camel K route, which I ran and stopped.
>>>>>> 
>>>>>> So I think there is a problem in the logs above, where the PodMonitor
>>>>>> selection reports an empty result: "selected PodMonitors" podmonitors=
>>>>>> namespace=cattle-prometheus prometheus=cluster-monitoring
>>>>>> 
>>>>>> My Camel K route is running in the namespace "platform" and has no label
>>>>>> like "prometheus=cluster-monitoring".
>>>>>> 
>>>>>> Do you know how I can fix this? Would adding additional scrape configs
>>>>>> to Prometheus solve it? Can you provide a code snippet?
>>>>>> 
>>>>>> On Thu, Nov 25, 2021 at 9:56 AM Antonin Stefanutti
>>>>>> <anto...@stefanutti.fr.invalid> wrote:
>>>>>>> 
>>>>>>> When you run an Integration with `kamel run -t prometheus.enabled=true`, a 
>>>>>>> PodMonitor resource is created for the Prometheus operator to reconcile 
>>>>>>> and configure Prometheus to scrape the Integration metrics endpoint.
>>>>>>> 
>>>>>>> The PodMonitor metadata must match what the Prometheus resource 
>>>>>>> selects, e.g. the namespace, the labels, ...
>>>>>>> 
>>>>>>> Some documentation is available at:
>>>>>>> 
>>>>>>> https://camel.apache.org/camel-k/1.7.x/observability/monitoring/integration.html#_discovery
>>>>>>> 
>>>>>>> That contains some links to the Prometheus operator documentation for 
>>>>>>> troubleshooting why the metrics endpoint is not discovered.
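>>>>>>> 
>>>>>>> As a first check (assuming you have kubectl access; the namespace 
>>>>>>> placeholder is yours to fill in), you can verify that the PodMonitor 
>>>>>>> has actually been created alongside the Integration:
>>>>>>> 
>>>>>>> $ kubectl get podmonitor -n <integration_namespace>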
>>>>>>> 
>>>>>>>> On 25 Nov 2021, at 12:25, Roberto Camelk 
>>>>>>>> <betonetotbo.cam...@gmail.com> wrote:
>>>>>>>> 
>>>>>>>> I have a Kubernetes cluster running Rancher 2.4.3. I have cluster
>>>>>>>> monitoring enabled in Rancher, so there is a Prometheus instance
>>>>>>>> running, as well as a Prometheus Operator.
>>>>>>>> 
>>>>>>>> Recently I deployed the Apache Camel K operator, and now I want to
>>>>>>>> enable the Prometheus integration to collect metrics about my Camel
>>>>>>>> routes.
>>>>>>>> 
>>>>>>>> So, my Camel K operator is running in the namespace camel-k and the
>>>>>>>> Rancher embedded Prometheus stack in the cattle-prometheus namespace.
>>>>>>>> 
>>>>>>>> I have just launched my route with the trait --trait
>>>>>>>> prometheus.enabled=true, but the Camel metrics aren't showing up in my
>>>>>>>> Prometheus.
>>>>>>>> 
>>>>>>>> Does anyone know why, or what I need to configure so that my Camel K
>>>>>>>> route exposes its metrics to the Rancher embedded Prometheus?
>>>>>>> 
>>>>> 
>>> 
