Thanks a lot again Antonin. I owe you a beer or coffee!!!

This worked 100%.

To end, one more question: after this, Prometheus started scraping the
metrics from my running route pod, but...

Some metric IDs are not there, namely the metrics starting with
"org_apache_camel_", as I have seen in this Grafana dashboard sample:
https://github.com/weimeilin79/camel-k-example-prometheus/blob/ca347f0b8b702b84b7129dfcfcb3b84eff4e2f73/grafana/SampleCamelDashboard.json#L145

The other metrics (camel_k_* -
https://camel.apache.org/camel-k/next/observability/monitoring/operator.html#metrics)
are there, as are the JVM metrics
(https://github.com/eclipse/microprofile-metrics/blob/master/spec/src/main/asciidoc/required-metrics.adoc#required-metrics).

Is this correct? Why am I not getting those "org_apache_camel_" metrics?
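
For context, here is roughly how I could check which metric names the
Integration pod actually exposes (the port 8080 and the /q/metrics path
are assumptions based on the default Quarkus runtime, so they may need
adjusting; <integration-pod> is a placeholder):

$ kubectl -n platform port-forward <integration-pod> 8080:8080
$ curl -s http://localhost:8080/q/metrics | grep -i camel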

On Fri, Nov 26, 2021 at 9:13 AM Antonin Stefanutti
<anto...@stefanutti.fr.invalid> wrote:
>
> It's likely that you need to configure the Prometheus instance, by editing 
> the Prometheus resource, e.g.:
>
> apiVersion: monitoring.coreos.com/v1
> kind: Prometheus
> metadata:
>   name: prometheus
> spec:
>   serviceAccountName: prometheus
>   podMonitorSelector:
>     matchLabels:
>       app: camel-k
>   podMonitorNamespaceSelector: {}
>
> It's important to have podMonitorNamespaceSelector: {} so that all 
> namespaces are discovered, otherwise only the Prometheus resource's own 
> namespace is considered.
>
> You can also use the Prometheus trait to set the labels on the Integration 
> PodMonitor accordingly, e.g.:
>
> $ kamel run -t prometheus.enabled=true -t 
> prometheus.pod-monitor-labels="app=camel-k"
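>
> As a rough sketch (the names and the endpoint port below are illustrative, 
> not taken from a real Integration), the PodMonitor created by the trait 
> would then carry the matching label so that the selector above picks it up:
>
> apiVersion: monitoring.coreos.com/v1
> kind: PodMonitor
> metadata:
>   name: my-integration
>   namespace: platform
>   labels:
>     app: camel-k
> spec:
>   selector:
>     matchLabels:
>       camel.apache.org/integration: my-integration
>   podMetricsEndpoints:
>     - port: prometheus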
>
> There are some examples in the Prometheus operator documentation:
>
> https://github.com/prometheus-operator/prometheus-operator/tree/v0.52.1/example/user-guides/getting-started
>
> And the troubleshooting guide at:
>
> https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/troubleshooting.md
>
>
> > On 26 Nov 2021, at 12:32, Roberto Camelk <betonetotbo.cam...@gmail.com> 
> > wrote:
> >
> > Antonin. Thanks !
> >
> > But I'm still stuck. Please let me share some extra info from my
> > Prometheus operator log:
> >
> > level=debug ts=2021-11-26T11:12:05.837624346Z caller=operator.go:1840
> > component=prometheusoperator msg="filtering namespaces to select
> > PodMonitors from" namespaces=cattle-prometheus
> > namespace=cattle-prometheus prometheus=cluster-monitoring
> > 26/11/2021 08:12:05 level=debug ts=2021-11-26T11:12:05.837687534Z
> > caller=operator.go:1853 component=prometheusoperator msg="selected
> > PodMonitors" podmonitors= namespace=cattle-prometheus
> > prometheus=cluster-monitoring
> > 26/11/2021 08:12:05 level=debug ts=2021-11-26T11:12:05.942216811Z
> > caller=operator.go:1677 component=prometheusoperator msg="updating
> > Prometheus configuration secret skipped, no configuration change"
> > 26/11/2021 08:12:05 level=debug ts=2021-11-26T11:12:05.950980834Z
> > caller=operator.go:1776 component=prometheusoperator msg="filtering
> > namespaces to select ServiceMonitors from"
> > namespaces=cattle-prometheus,cattle-system,kube-node-lease,kube-public,security-scan,kube-system
> > namespace=cattle-prometheus prometheus=cluster-monitoring
> > 26/11/2021 08:12:05 level=debug ts=2021-11-26T11:12:05.951162973Z
> > caller=operator.go:1810 component=prometheusoperator msg="selected
> > ServiceMonitors"
> > servicemonitors=cattle-prometheus/grafana-cluster-monitoring,cattle-prometheus/exporter-kube-etcd-cluster-monitoring,cattle-prometheus/exporter-node-cluster-monitoring,cattle-prometheus/exporter-kube-controller-manager-cluster-monitoring,cattle-prometheus/exporter-kube-state-cluster-monitoring,cattle-prometheus/prometheus-cluster-monitoring,cattle-prometheus/prometheus-operator-monitoring-operator,cattle-prometheus/exporter-fluentd-cluster-monitoring,cattle-prometheus/exporter-kubelets-cluster-monitoring,cattle-prometheus/exporter-kube-scheduler-cluster-monitoring,cattle-prometheus/exporter-kubernetes-cluster-monitoring
> > namespace=cattle-prometheus prometheus=cluster-monitoring
> > 26/11/2021 08:12:05 level=debug ts=2021-11-26T11:12:05.977550133Z
> > caller=operator.go:1741 component=prometheusoperator msg="updated
> > tlsAssetsSecret" secretname=prometheus-cluster-monitoring-tls-assets
> > 26/11/2021 08:12:06 level=debug ts=2021-11-26T11:12:06.022196407Z
> > caller=operator.go:1169 component=prometheusoperator msg="new
> > statefulset generation inputs match current, skipping any actions"
> > 26/11/2021 08:12:26 level=debug ts=2021-11-26T11:12:26.321817491Z
> > caller=operator.go:734 component=prometheusoperator msg="PodMonitor
> > added"
> > 26/11/2021 08:12:27 level=debug ts=2021-11-26T11:12:27.755854021Z
> > caller=operator.go:748 component=prometheusoperator msg="PodMonitor
> > updated"
> > 26/11/2021 08:12:46 level=debug ts=2021-11-26T11:12:46.453112794Z
> > caller=operator.go:748 component=prometheusoperator msg="PodMonitor
> > updated"
> > 26/11/2021 08:17:35 level=debug ts=2021-11-26T11:17:35.194031009Z
> > caller=operator.go:759 component=prometheusoperator msg="PodMonitor
> > delete"
> >
> > These last 4 lines are about my Camel K route, which I ran and stopped.
> >
> > So I think there is a problem in the logs above: the PodMonitor
> > selector reports an empty selection: "selected PodMonitors" podmonitors=
> > namespace=cattle-prometheus prometheus=cluster-monitoring
> >
> > My Camel K route is running in the "platform" namespace and has no
> > label like "prometheus=cluster-monitoring".
> >
> > Do you know how I can fix this? Would adding additional scrape configs
> > to Prometheus solve this? Can you provide a code snippet?
> >
> > On Thu, Nov 25, 2021 at 9:56 AM Antonin Stefanutti
> > <anto...@stefanutti.fr.invalid> wrote:
> >>
> >> When you run an Integration with `kamel run -t prometheus.enabled=true`, a 
> >> PodMonitor resource is created for the Prometheus operator to reconcile 
> >> and configure Prometheus to scrape the Integration metrics endpoint.
> >>
> >> The PodMonitor metadata must match what the Prometheus operator expects, 
> >> e.g. the namespace, the labels, etc.
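> >>
> >> One way to inspect that metadata (a sketch; <integration-namespace> and 
> >> <integration-name> are placeholders for your own values) is to dump the 
> >> generated PodMonitor and compare its namespace and labels against the 
> >> Prometheus resource selectors:
> >>
> >> $ kubectl get podmonitors -n <integration-namespace>
> >> $ kubectl get podmonitor <integration-name> -o yaml -n <integration-namespace>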
> >>
> >> Some documentation is available at:
> >>
> >> https://camel.apache.org/camel-k/1.7.x/observability/monitoring/integration.html#_discovery
> >>
> >> That contains some links to the Prometheus operator documentation for 
> >> troubleshooting why the metrics endpoint is not discovered.
> >>
> >>> On 25 Nov 2021, at 12:25, Roberto Camelk <betonetotbo.cam...@gmail.com> 
> >>> wrote:
> >>>
> >>> I have a Kubernetes cluster running Rancher 2.4.3. I have cluster
> >>> monitoring enabled in Rancher, so there is a Prometheus instance
> >>> running, as well as a Prometheus Operator.
> >>>
> >>> Recently I deployed the Apache Camel K operator, and now I want to
> >>> enable the Prometheus integration to collect metrics about my Camel
> >>> routes.
> >>>
> >>> So, my Camel K operator is running in the camel-k namespace and the
> >>> Rancher embedded Prometheus stack in the cattle-prometheus namespace.
> >>>
> >>> I have just launched my route with the trait --trait
> >>> prometheus.enabled=true, but the Camel metrics aren't showing up in my
> >>> Prometheus.
> >>>
> >>> Does anyone know why, or what I need to configure so that my Camel K
> >>> route exposes its metrics to the Rancher embedded Prometheus?
> >>
>
